Minimizing Your Trust Footprint

I originally published the following last year as an article in The Recompiler, a magazine focused on technology through the lens of feminism. I present it here with as few modifications from its original publication as possible.

For everyone who chooses to engage with the Internet, it poses a conflict between convenience and control over our identities and our data. However trivially we interact with online services—playing games, finding movies or music, connecting to others on social media—we leave identifying information behind, intentionally or not. In addition, we relinquish some or even all rights to our own creations when we offer them up to share with others, such as whenever we write on Medium.

Most of us give this incongruity some cursory thought—even if we don’t frame it as a conflict—such as when we set our privacy settings on Facebook. With major data breaches (of identifying, health, financial, or personal info) and revelations of widespread, indiscriminate government surveillance in the news over the last few years, probably more of us are thinking about it these days. In some way or another, we all must face up to the issue.

At one extreme, it’s possible to embrace convenience completely. Doing so means handing over information about ourselves without regard for how it will be used or by whom. At the other extreme, there’s a Unabomber-like strategy of complete disconnection. This form of non-participation comes along with considerable economic and social disenfranchisement.

The rest of us walk a line between the two, maybe hewing nearer to one extreme or the other as our circumstances allow. This includes me—as time passes, I try to exert more control over my online life, but I still trade some away for convenience or access. I use an idea I call my trust footprint to make this decision on a case-by-case basis.

For example, I realized I had begun to distrust Google because the core of their business model is based on advertising. I wrote a short post on my personal website about my motives and process, but to sum up, I didn’t want to be beholden to a collection of services that made no promises about my privacy or about their future functionality or availability. I felt powerless using Google, and I knew this wouldn’t change, because they have built their empire on advertising, a business model which puts customers’ privacy and autonomy at odds with the company’s success.

Before I began to distrust Google, I didn’t give my online privacy or autonomy as much thought as I do today. When I began getting rid of my Google account and trying to find ways to replace its functionality, I had to examine my motives, in order to clarify the intangible problem Google posed for me.

I concluded that companies which derive their income from advertising necessarily pit themselves against their customers in a zero-sum game over control of those customers’ personal information. So I try to avoid companies whose success is based on selling the customer instead of a product.

Facebook, as another example, needs to learn more about their users and the connections between them in order to charge advertisers more and, in turn, increase revenue. To do so, they encourage users to stay in their ecosystem with games and attempt to increase connections among users with suggestions and groups. As noted in this story about Facebook by The Consumerist last year:

Targeted ads are about being able to charge a premium to advertisers who want to know exactly who they’re reaching. Unfortunately, in order to do so, Facebook has to compromise the privacy of its hundreds of millions of users.

Most social networks, like Twitter, engage in similar practices.

Consequently, my first consideration when gauging my trust footprint is to ask who benefits from my business: What motivates them to engage with users, and what will motivate them in the future? This includes thinking about the business model under which online services I choose operate—to the extent this information is available and accurate, of course.

Of course, this information often isn’t clear, up front, available, or permanent, so it’s really a lot of guessing. The “trust” part is quite literal—I don’t actually know what’s going to happen or if my information will eventually be leaked, abused, or sold. Some reading and research can inform my guesses, but they remain guesses. I don’t trust blindly, but it is still something of an act of faith.

It’s for that reason that my goal isn’t to avoid online services completely or to use only those that are fully and radically transparent. I only want to minimize the risk I take with my information, to reduce the scale of the information I provide, and to limit my exposure to events I can’t control.

The second consideration I make in keeping my trust footprint in check is to question whether a decision I make actually enlarges it. For instance, when I needed a new calendaring service after leaving Google, I realized that I could use iCloud to house and sync my information because I had already exposed personal information to iCloud. I didn’t have to sign up for a new account anywhere, so my trust footprint wasn’t affected.

The tricky part about that last consideration is that online services have tendrils that themselves creep into yet more services. In the case of Dropbox, which provides file storage and synchronization, they essentially resell Amazon’s Simple Storage Service (AWS S3), so if you don’t trust Amazon or otherwise wish to boycott them, then avoiding Dropbox comes along in the bargain. The same goes for a raft of other services, like Netflix and Reddit, which all use Amazon Web Services to drive their technology.

That means it’s not just home users who are storing their backups and music on servers they don’t control. Whether you call it software-as-a-service or just the “cloud,” services have become interconnected in increasingly technological and political ways.

It doesn’t end with outsourcing the services themselves. All these online activities generate vast amounts of data which must be refined into information—information that holds copious value, even for things as innocuous as who’s watching what on TV. Nielsen’s business model of asking customers what they are watching has already become outdated. Nowadays, the media companies know what you watch; the box you used to get the content has dutifully reported it back, and in turn, they’ve handed that data over to another company altogether to mine it for useful information. This sort of media analytics has become an industry in its own right.

As time passes, it will become harder to avoid interacting with unknown services. Economies of scale have caused tech stacks to trend more and more toward centralization. It makes sense for companies because, if Amazon controls all their storage, as an example, then storage becomes wholly Amazon’s problem, and they can offer it even more cheaply than companies which go out and build their own reliable storage.

Centralization doesn’t have to be bad, of course. It’s enabled companies to spring up which may not have been viable in the past. For example, Simple is an online bank which started from the realization that to get started with an entirely new online bank, “pretty much all you need is a license from the Fed and a few computers.”

The upshot is that keeping your online life entirely within your control becomes increasingly fraught as centralization proceeds. When you back up to “the cloud,” try to imagine whether your information is sitting on a hard disk drive in northern Virginia, or maybe a high-density tape in the Oregon countryside.

It’s not even necessary to go online yourself to interact with these business-to-business services. Small businesses have always relied upon vendors for components of their business they simply can’t provide on their own, and those vendors have learned they can resell other bulk services in turn. The next time you see the doctor, ask yourself, into which CRM system did your doctor just input your health information? Where did the CRM store that information? Maybe in some cosmic coincidence, it’s sitting alongside your backups on the same disk somewhere in a warehouse. Probably not, but it could happen.

My trust footprint, just like my carbon footprint, is a fuzzy but useful idea for me, which acknowledges that participation in the online world carries inevitable risk—or at least an inevitable cost. It helps me gauge whether I’m closer to or further from my ideal privacy goals. And just the same way that we can’t all become carbon neutral overnight without destroying the global economy, it’s not practical to run around telling everyone to unplug or boycott all online services.

Next time you’re filling out yet another form online, opening yet another service, trying out one more new thing, remember that you’re also relinquishing a little control over what you create and even a small part of who you are. And if this thought at all gives you pause, see if there’s anything you can do to reduce your trust footprint a little. Maybe you can look into hosting your own blog for your writing, getting network-attached storage for your home instead of using a cloud service, limiting what you disclose on social media, or investing in technology that takes privacy seriously.

Beginning with Regular Expressions

I originally published the following last year as an article in The Recompiler, a magazine focused on technology through the lens of feminism. It began as a primer on picking up regular expressions for a friend who was learning to program at the time. I regarded it as an exercise in making a complex topic as accessible as possible.

It assumes an audience familiar with general computer concepts (such as editing text), but it does not necessarily assume a programming background. I present it here with as few modifications from its original publication as possible.

Regular expressions are short pieces of text (often I’ll call a single piece of text a “string,” interchangeably) which describe patterns in text. These patterns can be used to identify parts of a larger text which conform to them. When this happens, the identified part is said to match the pattern. In this way, unknown text can be scanned for patterns, ranging from very simple (a letter or number) to quite complex (URLs, e-mail addresses, phone numbers, and so on).

The patterns shine in situations where you’re not precisely sure what you’re looking for or where to find it. For this reason, regular expressions are a feature common to many technical programs which work heavily with text. Most programming languages also incorporate them as a feature.

One common application of regular expressions is to move through a body of text to the first part which matches a pattern—in other words, to find something. It’s then possible to build on this search capability to replace a pattern automatically. Another use is to validate text, determining whether it conforms to a pattern and acting accordingly. Finally, you (or your program) may only care about text which matches a pattern, and all other text is irrelevant noise. With regular expressions, you can cull a large text down to something easier to use, more meaningful, or suitable for further manipulation.

A Simple First Regular Expression

A regular expression, like I said, is itself a short piece of text. Often, it’s written in a special way to set it apart as a regular expression as opposed to normal text, usually by surrounding it with slashes. Whenever I write a regular expression in this post, I will also surround it with slashes on both sides. For example, /a/ is a valid regular expression which matches the string a. That particular expression could be used to find the first occurrence of the letter a in a longer string of text such as Where is the cat?. If the pattern /a/ were applied against that sentence, it would match the a in the middle of cat.
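To make this concrete, here’s a minimal sketch in Python—my choice of language is arbitrary, since the article isn’t tied to any one implementation—using the standard re module:

    import re

    # The pattern /a/ applied to a longer string. search() scans
    # left to right and returns the first match, or None if none exists.
    match = re.search(r"a", "Where is the cat?")
    print(match.group())   # 'a'
    print(match.start())   # 14 -- the position of the a in "cat"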

There’s a clear benefit to using regular expressions to do pattern matching in text. They let you ask for what you want rather than specifying how to find it. To be technical, we’d say that regular expressions are a kind of declarative syntax; contrast that with an imperative method of asking for the same thing. In this case, to do this in an imperative way, you’d have to write instructions to loop through each letter in the text, comparing it to the letter a. In the case of regular expressions, the how isn’t our problem. We’re left simply stating the pattern and letting the computer figure it out.
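Here’s a sketch of that same search done both ways, to make the contrast concrete (again in Python, purely as illustration):

    import re

    text = "Where is the cat?"

    # Imperative: spell out *how* to find the first letter a.
    index = None
    for i, ch in enumerate(text):
        if ch == "a":
            index = i
            break

    # Declarative: state *what* we want and let the engine do the work.
    match = re.search(r"a", text)

    assert match.start() == index  # both approaches find the same a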

Regular expressions are rather rigid and will only do what you say, sometimes with surprising results. For example, /a/ only matches a single occurrence of a, never A, nor à, and will only match the first one. If it were applied to the phrase “At the car wash”, it would match against the first a in car. It would skip over the A at the beginning, and it would stop looking before even seeing the word wash.

As rigid as regular expressions are, they have an elaborate syntax which can describe vast varieties of patterns. It’s possible to create patterns which can look for entire words, multiple occurrences of words, words which only happen in certain places, optional words, and so on. It’s a question of learning the syntax.

While I intend to touch on the various features which allow flexible and useful patterns, I won’t exhaust all the options here, and I recommend consulting a syntax reference once the idea feels solid. (Before getting into some of the common features of regular expression syntax, it’s important to note that regular expressions vary from implementation to implementation. The idea has been around a long time and has been incorporated into countless programs, each in slightly different ways, and there have been multiple attempts to standardize them. Despite the confusion, though, there is a lot of middle ground. I’m going to try to stay firmly on this middle ground.)


Let’s elaborate a bit on our first pattern. Suppose we’re not sure what we’re looking for, only that we know it begins with a c and ends with a t. Let’s think about what kinds of words we might want to match, so we can talk intelligently about what patterns exist in those words. We know that /a/ matches cat. What if we want to match cut instead? We could just use /u/, but we know this also matches unrelated strings, like bun or ambiguous.

Now, /cat/ is a perfectly reasonable pattern, and so is /cut/, but we’d probably have an easier go if we create a single pattern that says we expect the letter c, some other letter we don’t care about, and then the letter t. Regular expressions let us use metacharacters to describe the kinds of letters, numbers, or other symbols we might expect to find without naming them directly. (“Character” is a useful word to encompass letters, numbers, spaces, punctuation, and other symbols—anything that makes up part of a string—so “metacharacter” is a character describing other characters.) In this case, we’ll use a .—a simple dot. In regular expression patterns, a dot metacharacter matches any individual character whatsoever. Our regular expression now looks like /c.t/ and matches cat, cut, and cot, among other things.
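A quick sketch of the dot metacharacter at work, assuming Python’s re module once more:

    import re

    pattern = re.compile(r"c.t")  # c, any single character, then t

    for word in ["cat", "cut", "cot", "ct", "coat"]:
        print(word, bool(pattern.search(word)))
    # cat True, cut True, cot True, ct False, coat False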

In fact, we might describe metacharacters as any characters which do not carry their literal meanings, and so regular expressions may contain both characters and metacharacters. Occasionally, it can be confusing to know which is which, and it’s usually necessary to consult a reference for whichever regular expression implementation suits your situation. Sometimes, even more confusingly, we want to use a metacharacter as a character, or vice versa. In that situation, we need to escape the character.


We can see in the above example that a dot has a special meaning in a regular expression. Sometimes, though, we might wish to describe a literal dot in a pattern. For this reason, we need a way to describe literal characters which don’t carry their ordinary meaning, as well as to employ ordinary characters for new meanings. In a regular expression pattern (as in many programming languages), a backslash (\) does this job. Specifically, it means that the character directly after it should not be interpreted as usual.

Most often, it can be used to define a pattern containing a special character as an ordinary one. In this context, the backslash is said to be an escape character, which lets us write a character while escaping its usual meaning.

For example, suppose we cared about situations where a sentence ends in the letter t. The easiest pattern to describe that situation might be the letter, followed by a period and a space, but we can’t type a literal dot for that period, or else we’d match words like to. Therefore, our pattern must escape the dot. The pattern we want is written as /t\. /.
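To illustrate the difference escaping makes, here’s a small Python sketch comparing the unescaped and escaped versions:

    import re

    sentence = "I went to the market. It was closed."

    # Unescaped: the dot matches any character, so /t. / wrongly
    # matches the word "to " (t, any character, space).
    print(re.search(r"t. ", sentence).group())   # 'to '

    # Escaped: \. matches only a literal period, so we find the
    # sentence that actually ends in t.
    print(re.search(r"t\. ", sentence).group())  # 't. '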


Metacharacters may do more than stand in for another kind of character. They may modify the meaning of characters after them (as we’ve already seen with the escape metacharacter) or those before them. They may also stand in for more abstract concepts, such as word boundaries.

Let’s first consider a new situation, using a metacharacter to modify the preceding character. Think back to earlier, when we said we know we want something that begins with a c and ends with a t. Using the pattern /c.t/, we already know that we can match words like cut and cat.

We need a few more special metacharacters, though, before our expression meets our requirements. /c.t/ won’t match, for example, carrot, but it will match concatenate and subcutaneous.

First of all, we need to be able to describe a pattern that leaves the number of characters in the middle flexible. Quantifiers allow us to describe how many occurrences of the preceding character we may match. We can say whether we expect zero or more, one or more, or even a very particular count of a character or larger expression.

Such patterns become far more versatile in practice. Take, for example, the quantifier +. It lets us specify that the character just before it may occur one or more times, but it doesn’t name an upper limit.

Remember the pattern we wrote to match sentences ending in t? What if we wanted to make sure we matched all the spaces which may come after the sentence? Some writers like to space twice between sentences, after all. In that case, our pattern could look like /t\. +/. This pattern describes a situation in which the letter t is followed by a literal dot and then any number of spaces.

Quantifiers may also modify metacharacters, which makes them truly powerful and very useful. Using the + again, let’s insert it into our /c.t/ pattern to modify the dot metacharacter, giving us /c.+t/. Now we can match “carrot”! In fact, this pattern matches a c followed by any number of any character at all, as long as a t occurs sometime later on.
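Here’s a sketch of both quantified patterns (Python again):

    import re

    # One or more spaces after a sentence-ending period: /t\. +/
    m = re.search(r"t\. +", "It was hot.  Two spaces follow.")
    print(repr(m.group()))  # 't.  ' -- the + consumed both spaces

    # /c.+t/: c, then one or more of anything, then t.
    for word in ["carrot", "cat", "ct"]:
        print(word, bool(re.search(r"c.+t", word)))
    # carrot True, cat True, ct False (nothing between c and t)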

There are a few other quantifiers needed to cover all the bases. The following three quantifiers cover the vast majority of circumstances, in which you’re not particularly sure what number of characters you intend to match:

  • * matches zero or more times
  • + matches one or more times
  • ? matches exactly once or zero times

On the other hand, you may have a better idea about the minimum or maximum number of times you need to match, and the following expressions can be used as quantifiers as well (a short sketch after the list shows one in action).

  • {n} matches exactly n times
  • {n,} matches at least n or more times
  • {n,m} matches at least n but not more than m times
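Here’s a small sketch of a counted quantifier, using Python’s re module; the words are made up just for illustration:

    import re

    pattern = re.compile(r"go{2,4}t")  # g, two to four o's, then t

    for word in ["got", "goot", "goooot", "gooooot"]:
        # fullmatch() requires the entire string to fit the pattern.
        print(word, bool(pattern.fullmatch(word)))
    # got False (one o), goot True, goooot True, gooooot False (five o's)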


We still have “concatenate” and “subcutaneous” to deal with, though. /c.+t/ matches those because it doesn’t care about what comes before or after the match. One strategy we can use is to anchor the beginning or end of the pattern to stipulate we want the text to begin or end there. This is a case where a metacharacter matches a more abstract concept.

Anchors, in this case, let us match the concept of the beginning or the end of a string. (Anchors really refer to the beginnings and ends of lines, most of the time, but it comes to the same thing in this case. See a reference guide for more information on this point.) The ^ anchor, which may only begin a pattern, matches the beginning of a string. Likewise, a $ at the end means the text being matched must end there. Using both of these, our pattern becomes /^c.+t$/.

To break this pattern down, we’re matching a string which begins with a c, followed by some indeterminate number of characters, and finally ends with a t. As ^ and $ represent the very beginning and end of the string, we know that we won’t match any string containing anything at all on the line other than the pattern.
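A sketch of the anchored pattern against our earlier problem words (Python):

    import re

    pattern = re.compile(r"^c.+t$")

    for text in ["carrot", "concatenate", "subcutaneous", "the carrot"]:
        print(text, bool(pattern.search(text)))
    # carrot True; the others False, since the anchors demand that
    # each string begin with c and end with t.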

Character Classes

Using anchors, though, may not be the best solution. It assumes the string we’re searching within contains only the pattern we’re looking for, and often this is not the case.

The dot is a very powerful metacharacter. Its biggest flaw is that it is too flexible. For example, /^c.+t$/ would match a string such as cat butt, because the dot happily matches spaces, and patterns try to match as much as possible. Some regular expression implementations allow you to specify a non-greedy pattern (which I won’t cover here—see a reference), but a better approach is to revisit our requirements and reword them slightly to be more explicit.

We want to match a single word (some combination of letters, unbroken by anything that’s not a letter) which begins with c and ends with t. Considering this in terms of the kinds of characters which may come before, during, and after the match, we want something with non-alphabetical characters before it, followed by the letter c, then some other alphabetical letters, then the letter t, and then something else that’s not alphabetical.

In the /^c.+t$/ pattern, we need to replace both of the anchors and the middle metacharacter .. Assuming words come surrounded by spaces, we can replace each anchor with just a space. Our pattern now looks like / c.+t /.

Now, as for the dot, we can use a character class instead. Character classes begin and end with a bracket. Anything between is treated as a list of possibilities for the character it may match. For example, /[abc]/ matches a single character which may be either a, b, or c. Ranges are also acceptable. /[0-9]/ matches any single-digit number.

We can use a range which captures the whole alphabet, and luckily, a character class is considered a single character in the context of a pattern, so the quantifier after refers to any character in the class. Putting all this together, we end up with the pattern / c[a-z]+t /.

If we want to mix up upper- and lower-case letters, character classes help in this situation, too: / [Cc][a-z]+t /. Now we can match on names like Curt.
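Here’s the space-delimited, mixed-case pattern in action (a Python sketch):

    import re

    pattern = re.compile(r" [Cc][a-z]+t ")

    for text in ["meet Curt today", "a cat sat here", "concatenate it"]:
        m = pattern.search(text)
        print(text, "->", m.group() if m else None)
    # 'meet Curt today' -> ' Curt '
    # 'a cat sat here'  -> ' cat '
    # 'concatenate it'  -> None (no space directly before the c)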

Our assumption that words will be surrounded by spaces is a fragile one. It falls apart if the word we want to match is at the very beginning or end, or if it’s surrounded by quotation marks or other punctuation. Luckily, character classes may also list what they do not include by beginning the list with a ^. When ^ comes within brackets instead of at the beginning of a pattern, it no longer serves as an anchor; instead, it inverts the meaning of the character class.

If we consider a word to be a grouping of alphanumeric characters, then whatever surrounds the word would be anything that’s not alphanumeric. Let’s adjust our pattern accordingly: /[^A-Za-z0-9][Cc][a-z]+t[^A-Za-z0-9]/. We’re using the same pattern as before, but the beginning and ending space have each become [^A-Za-z0-9].
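A sketch comparing the space-only version with the negated-class version, to show the payoff (Python again):

    import re

    space_version = re.compile(r" [Cc][a-z]+t ")
    negated_version = re.compile(r"[^A-Za-z0-9][Cc][a-z]+t[^A-Za-z0-9]")

    text = '"Cut!" she said.'
    print(space_version.search(text))            # None -- quotes aren't spaces
    print(negated_version.search(text).group())  # '"Cut!'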

Escape Sequences

If our pattern is starting to look cumbersome and odd to you, you’re not alone in thinking that. There’s absolutely nothing wrong with the pattern we just wrote, but it has gotten a bit long-winded. This makes it difficult to read, write, and later update.

In fact, many character classes get used so often (and can otherwise be so annoying to write repeatedly) that they’re usually also available as backslashed sequences, such as \b or \w. (This is escaping, again, as I mentioned before, but instead of escaping a special character’s meaning, we’re escaping these letters’ literal meaning. In other words, we’re imbuing them with a new meaning.)

The availability and specific meaning of these escape sequences vary a bit from situation to situation, so it’s important to consult a reference. That said, in our case, we only need a couple which tend to be very common to find.

One of the most common such escape sequences is \w, which stands in for any “word” character. For our purposes, it matches any alphanumeric character. This is good enough for the inside of a word, so we can revisit our pattern and turn it into /[^\w][Cc]\w+t[^\w]/. Our pattern reads a little more logically now: We’re searching for one not-word character (like punctuation or whitespace) followed by an upper- or lower-case c, some indefinite count of word characters, the letter t, and then finally one not-word character.

Notice how I used the escape sequence inside the character classes at the beginning and end of the word. This is perfectly valid and sometimes desirable. For example, it would allow us to combine several escape sequences when no single one suits our needs.

It also lets us invert their meaning, as you saw in the most recent example. Many escape sequences can be inverted more directly, though, by capitalizing them, such as \W. As a mnemonic to remember this trick, think of it as shifting the escape sequence (using shift to type it). In cases where a character class may be inverted in meaning, a capitalized counterpart often exists.

Using \W, we can now pare the pattern back down to something a little more readable: /\W[Cc]\w+t\W/.

More Reading

For today, I’m satisfied with our pattern. In a string like I would like some carrot cake., it matches carrot with no trouble, but it doesn’t match subcutaneous tissue.
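For the record, here’s that final pattern run against both of those strings (one last Python sketch):

    import re

    pattern = re.compile(r"\W[Cc]\w+t\W")

    m = pattern.search("I would like some carrot cake.")
    print(m.group())  # ' carrot '

    print(pattern.search("subcutaneous tissue"))  # None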

There are many more ways to improve it, though. We’ve only laid the groundwork for understanding more of the advanced concepts of regular expressions, many of which could help us make our expression even more powerful and readable, such as pattern qualifiers and zero-width assertions.

Concepts like grouping allow you to break up and manipulate matches in fine-grained ways. Backtracking and extended patterns allow patterns to make decisions based on what they’ve already seen or will see. Some programmers have even written entire programs based on regular expressions, only using patterns!

In short, regular expressions are a deep and powerful topic, and very few programmers completely master every corner of them. Don’t be afraid to keep a reference close at hand—hopefully it will empower rather than daunt you, now that you have a grasp of how to get started composing patterns.

Ripples Crossing the Crescent Moon

The waxing crescent moon, as seen on the evening of 9 May 2016. Video taken using a Sony α6300 camera attached to a Celestron 11-inch Schmidt-Cassegrain telescope.

Jupiter on 29 March

Jupiter as seen on the evening of 29 March. Video taken using a Sony α6300 camera attached to a Celestron 11-inch Schmidt-Cassegrain telescope, viewed through a 25 mm eyepiece.


Filmed at dusk at 25% speed, a hummingbird finishes its meal and flits away.

Beginning Astrophotography: Jupiter Ascending

Soon after I got my first telescope, I discovered its abilities and limits, and I knew I wanted more. In fact, I wanted to be able to show other people what I saw, even if they couldn’t be there themselves. That meant learning how to do astrophotography.

I learned from reading online—and from using it myself—that my first telescope wasn’t suitable for astrophotography, for a number of reasons. It was too light (being a large, mostly empty tube), it shifted too easily, it wasn’t mechanized in any fashion, its Dobsonian mount was too simple, and I lacked any adapters to connect my camera to it. Taking a photo through it would have meant getting an adapter which would overweight the end, so I’d have to constantly hold the whole thing still to counterbalance the weight, manually find and track objects, and somehow manually follow the motion of things in the sky—with an altazimuth mount not designed for tracking, which had no measurements, markings, or indices. To be honest, I was having trouble even finding objects in the first place, sometimes needing several minutes to home in on naked-eye objects (which would then flit out of view in seconds).

So I had to get a new telescope. I wanted something slightly more compact so I could carry it out to sites more easily, so I chose a Schmidt-Cassegrain telescope, which combines lenses and mirrors into a relatively compact body. I also wanted to increase my aperture even more, so I looked for eleven-inch options and settled on a set from Celestron which combines the telescope I wanted with a computerized mount that can automatically find and track objects.

Celestron 11-inch Schmidt-Cassegrain telescope on an Advanced VX computerized mount

Putting all this together and learning how to use it has been a trial, but I’m getting better. It’s been cloudy here for weeks, so I’ve been messing around with it indoors learning how to align it and get it ready. A few nights ago, it finally cleared up enough to try it out.

Outside my back door, between the house and a nearby fence, there’s a sliver of sky through which the ecliptic passes, meaning I can watch planets rise and pass overhead. Recently, Jupiter has been rising early at night, right around the time the moon rises.

I took the pieces outside to the back walkway—tripod, mount, eyepieces, tube—and set up, switched on the mount, and did a quick alignment. Jupiter was low on the horizon to the east, climbing, visible as an unmistakably bright point. Once the tube was lined up close enough to see through the finderscope sitting on top of the main tube, Jupiter was obvious, a brilliant point surrounded by four smaller points.

I started with my largest (and least magnifying) eyepiece, the forty-millimeter one (giving me seventy-times magnification). It’s the one visible in the photo of the telescope above. Jupiter was clearly in view but out of focus, so I saw it as a large, diffuse disc with a hole in the middle—the way light looks through a telescope which happens to have a large central obstruction. In this case, the obstruction is built into the telescope: the secondary mirror mounted in the center of the corrector plate. I began focusing, and Jupiter came into view, no longer a point but a disc which was obviously wider than tall, with four points of light scattered in a line along the bulge. The shapes shifted and scintillated slightly as the air moved around.
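(For anyone curious where these magnification figures come from: magnification is the telescope’s focal length divided by the eyepiece’s. Assuming this tube’s nominal 2,800 mm focal length, the 40 mm eyepiece gives 2,800 ÷ 40 = 70 times, and the 25 mm eyepiece mentioned below gives 2,800 ÷ 25 = 112 times.)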

This was my first time using computerized tracking, and I was so happy that the planet stayed in view rather than drifting away in a matter of moments. I found that I had to make adjustments after fifteen minutes or so—likely because I hadn’t done a proper alignment—but these were pretty minor, and I could make them easily as fine adjustments with the handheld controller, so it was much less exhausting than my last experience. This automatic help made it easier to work on getting the best view I could.

Focusing in on a tiny object like a planet is a little frustrating because it’s plain to see there’s more detail there, but when you try to focus on it, you find yourself hitting a point beyond which it only gets blurrier again. Some of this is attributable to the limits of my optics, but most of it is due to the turbulence of the atmosphere, called seeing. From the ground, unless the circumstances are exceptional, atmospheric seeing limits the detail available to telescopes, meaning that more magnification usually doesn’t help.

Still, I tried. I brought out my twenty-five millimeter eyepiece, a shorter eyepiece with higher magnification (around 112 times). To my surprise, it gave me more detail this time, which suggested that the tube itself was of higher quality than my previous telescope—or maybe I just had better seeing than last time. With careful focusing, I could make out the cloud bands with my eye, and I thought I could even glimpse the Great Red Spot if I squinted. What I saw was a lot like the video I posted at the top of this entry (only brighter, with more obvious moons).

I’d never tried photographing a planet before, so I decided it was time to try. I had found an adapter kit for my camera (a Sony NEX-6), so I took out the eyepiece and used the prime focus attachment, which essentially uses the telescope tube itself as a massive telephoto lens, without any additional eyepieces. This meant no magnification beyond the tube’s own field of view, so all the light the mirror was gathering was pulled into a small disc which shone too brightly to show any detail. Below is a picture showing roughly what this looks like.

Jupiter and moons

After recording some pictures and video in this setup, I tried using an adapter that lets me shoot through an eyepiece, getting me a lot closer to what I had been seeing. I used the twenty-five millimeter eyepiece and the adapter to attach my camera to the telescope, and after some careful focusing, I got several shots like the following one.

Jupiter at 112-times magnification

To get this shot, I had to fool around with the camera a bit, increasing its shutter speed to cut down on the light that was obscuring details. This one was exposed for only 1/250th of a second. The moons are no longer bright enough to show up except as pale specks visible on zooming in. I applied some minor processing to improve the clarity, but that’s all. Otherwise, this is just a single snapshot of exactly what I saw that night.

What’s next? My current Jupiter shots impressed me more than I’d hoped for, but because of seeing, it’ll take some tricks to get more detail and more impressive photos. I’m looking into image stacking software which will let me combine many individual pictures into a single, more detailed shot. I’ll need it if I want to photograph deep-sky objects like nebulae or galaxies. And I’ll need to improve at aligning my telescope so it can track more accurately. It might take until summer, but I’ll update here when I try again.

Who to Call When Someone is Having a Mental Health Crisis in Portland

Last week, I read a piece aimed at San Franciscans by a tech blogger who was so oblivious and insensitive that I felt vicarious shame before the end. Soon after, I read another article that restored my hope—about what San Franciscans can do when they encounter homeless individuals having a crisis.

Portland’s in the middle of its own crisis—one of dwindling housing and skyrocketing rates of homelessness, which has led to a state of emergency. People sleeping outside in Portland now number in the thousands.

Despite Portland’s efforts in de-escalation training and its dedicated Behavioral Health Unit, the police may still not be the best option to call when someone is experiencing a crisis. The article I mentioned earlier does a great job of explaining why calling the police is not always the right answer.

Below I’m compiling resources to use in Portland if someone you know, yourself, or someone on the street is experiencing a crisis and needs intervention right away. I intend this post to be a living document—I may update it as I learn about more resources or make corrections. (The most recent update was on 22 Feb 16 at 16:47 PST).

Right now, the best resource I know of in Portland is the Multnomah County Mental Health Crisis Intervention service. They offer

  • crisis counseling by phone, with translation;
  • mobile crisis outreach for in-person assessment;
  • referrals to low-cost and sliding-scale services;
  • information on community resources; and
  • a no-cost urgent walk-in clinic at 4212 SE Division St, operated by OHSU and Cascadia Health, open daily from 7 a.m. to 10:30 p.m.

Their number is (503) 988-4888, available twenty-four hours a day, seven days a week. (Their toll-free number is (800) 716-9769, and as of the time I write this, they can be texted at (503) 201-1351, a number which is monitored once a day.) Their page includes information on nearby counties as well.

Also accessible through the above service is the Multnomah County Crisis Assessment and Treatment Center (CATC), reachable directly at (503) 232-1099, which provides a temporary facility (operated by Central City Concern) for people needing to stabilize from symptoms of mental illness.

Cascadia Project Respond is a crisis service provided by Cascadia Behavioral Healthcare, also available through the Multnomah Call Center (same number as the first resource, (503) 988-4888). Project Respond also works with the Portland Police Behavioral Health Unit I mentioned before, pairing mental health professionals with officers when 911 dispatches them to incidents involving mental health. (I must note, from my experience, that officers may not always be accompanied by mental health professionals when intervening.)

Rose City Resource offers a smattering of resources—hotline numbers and explanations of rights—targeted at homeless people, and it prints these resources as a portable booklet. It’s published by Street Roots, which provides jobs for homeless and indigent individuals through the local newspaper and media it produces.

If you can’t look up or remember the above resources, Oregonians always have 211 at their disposal to find resources on the fly. Call it from any phone!

Decentering Self

In school, we learn two things about Aristotle. First, we learn that he was profoundly influential for millennia, and probably smarter than you. Second, we learn that he was mostly wrong about everything having to do with the real world.

It’s not hard to figure out why. He used a rationalistic methodology rather than an empirical one: instead of going out to examine and measure firsthand, he mostly explored the universe as a mental exercise. Unfortunately, this meant he didn’t have enough information available to challenge the biases in his worldview. He even believed that women had fewer teeth than men, which is demonstrably false.

There are numerous examples, from Aristotle himself and from those later directed by his influence, in which his methodology’s flaws manifest as theories that contradict the available evidence. For example, Ptolemy’s Almagest protected the Aristotelian geocentric worldview, even in the face of evidence to the contrary, by resorting to epicycles to explain planetary motions which didn’t make sense otherwise.

I see the same willful ignorance play out today in discussions regarding equality, empathy, and justice. Without giving specific examples or links, I have observed two major problems with arguments I hear from racists, men’s rights activists, and others.

First, their arguments only make sense if they completely deny the evidence they hear which conflicts with their point of view. Men’s rights activists thrive on stories of false rape accusations. Racists need racism to be “over.”

Second, relatedly, they believe they don’t even need to hear others’ points of view to inform their worldview. In other words, they feel no need to listen to people of color to learn about racism. They don’t need to listen to women before invalidating their discomfort or fears. They don’t have to listen to disabled people. It all makes sense to them, without the inconvenience of going out into the world.

I can forgive Aristotle for drawing the wrong conclusions about nature, but I have trouble with those who apply his rationalistic methods to the people around them. It’s leaping to conclusions. It’s judging a book by its cover. It’s hubris. Aristotle thought the world was the center of the universe because it just seemed that way. Do you feel like the center of your own universe?

We all need to take time to stop and listen. We need to make room for others’ feelings in our world. We need to decenter ourselves sometimes. Not always, but very often, kindness flows from doing so.

First Stargazing

Telescope detail revealing eyepiece assembly

I’d wanted a telescope for a really long time.

I guess I should say, I’d wanted a real telescope for a very long time. I had one as a kid, one of those small telescopes that leap to mind when you hear the word “telescope”: a long tube on a tripod tapering to an eyepiece on one end. I tried to use it, but I had no guidance. I don’t know much about that telescope’s provenance—at that time, I was content to know it came from Santa Claus—but it wasn’t the highest quality. The experience never worked out for me. All I saw through it were bright, blurry dots streaking briefly in and out of view.

Not long ago, I got really curious about what would be possible if I bought a new telescope today. It turns out, this is a huge hobby with a lot of writing about it online, and I had to spend weeks reading up before I knew what I wanted. I finally got something called an “Orion SkyQuest XT8 Classic Dobsonian Telescope.” All my reading had led me to the conclusion that I wanted to set aside all other considerations in favor of the most power for the dollar. In terms of telescopes, that came down to things like focal length and aperture, so I didn’t get cool things like a tracking computer.

I thought it was going to take several days to arrive, but it came the morning after I ordered it, and I wasn’t prepared for its ridiculous size. All that I said about aperture and focal length? That’s all size, and this thing is a bit silly in that regard. The telescope tube came whole, looking like a bathroom trashcan that grew up to stand nearly as tall as a person, with a large makeup mirror in the bottom. I spent maybe an hour putting the base together, on which I mounted the tube like a cannon.

I was pretty jazzed about using it right away, but I had to find a place and wait till nightfall. I asked around and looked online, and several sources mentioned a place called Stub Stewart State Park. My friend Shawna ended up coming with me, and it was just as well no one else joined me that night because the telescope tube alone occupied my entire backseat.

We headed out of town around eleven at night on the very same day I got it. I’d never been to this park, and even though it’s only about forty minutes out of town, the roads out that way dwindle quickly in size, and the darkness made it feel very remote and a touch creepy. So I was surprised when I ended up at a large parking area filled with cars in the darkness. It was actually a bit crowded, though so dark and moonless that I never did end up seeing another human. The spot was popular enough for astronomers that the bathrooms were lit with red bulbs, making the only visible edifice seem hellish.

Shawna helped me drag my telescope out to an area that seemed clear enough. I had done some research to figure out what I might want to look at, but it turned out all those things had set below the horizon by midnight, so I had no idea what to do from that point. I only had one eyepiece with me, a twenty-five millimeter eyepiece which gave me forty-eight-times magnification. Regardless of what all that magnification may have been suited for, that’s all I had to work with.

The sky, even without any aid, was striking. Without a moon or any light for miles, the Milky Way could be seen clearly spreading across the entire sky. Once our eyes adjusted, the sky was full, and it would’ve been worth the trip for that view alone.

I was really anxious to try out the telescope because I didn’t even know if it’d work or not—my childhood telescope had been a complete disappointment. I took out my phone and used an app to see what was around, and pretty soon I saw Saturn sitting some degrees above the horizon. Taking my phone down, I saw some fuzzy stars in roughly the same direction and had to figure out which one of these dots might’ve been Saturn. I made a guess and worked on aiming the telescope that way.

My aim was off at first, so I slid my telescope around till a bright yellowish blur was in view. While unfocused, it was like a fat, bright dot, but I noticed it had a bit of an oval shape, and that oval became more pronounced as I focused. When it finally became crisp, I noticed the oval had gaps in it. I was actually seeing rings, around Saturn.

It was an unimpressive speck and a dazzling sight at the same time. What had first been a tiny dot as anonymous as the rest was now familiar and improbable at once, like spotting a celebrity. The magnification rendered it quite small, little more than a bulge with a ring-like shape around it, but it was hard to look away. I let Shawna look, to share it but also to confirm that the thing I was seeing was actually Saturn: I had trouble believing I’d found it.

If I’d been alone and had thought to bring a chair, I probably would’ve just sat there and looked at it for a while, but we were getting cold and uncomfortable, and I wanted to see if I could find anything else. I instantly thought of the Andromeda Galaxy, so I pulled out the Sky Guide app and found it high up in the sky in the other direction. When I put the app away and looked up, I could see stars, but I couldn’t see Andromeda (which wasn’t surprising).

I didn’t really have a choice, so I pointed my telescope at the neighborhood where it was supposed to be and just started scanning around. This took considerably longer without a clear dot to aim for, but eventually a very large oval smear came into view. I tried focusing on it, but it didn’t improve much. I didn’t figure it out at the time, but here was another situation where my eyepiece was inappropriate, this time because it magnified too much. I was seeing only the middle portion, and the finer details had been dimmed by the magnification.

So Andromeda was even less impressive a sight than Saturn, and somehow even more so. Featureless as it seemed, it filled the field of view. Seeing another galaxy was more meaningful to me than seeing a planet or a star. Coming from so far away, Andromeda’s light is not just ancient but primordial. We on Earth can visit Saturn with probes, but we’ll never touch Andromeda. I thought of Edwin Hubble, spotting a Cepheid variable star there and knowing for the first time what an immense chasm of time and space lay between that “island universe” and us. Andromeda taught us just how large the universe could be, and I remembered this as I looked at it.

Before we left, we took a last look at Saturn—I couldn’t resist. Then Shawna and I started on the trip home, by this time very early Sunday morning. We shared an exhilaration from the experience. I know I have to do this again soon, and I don’t doubt Shawna will be willing to join me.

In the Back of the House

I got my first job at fifteen, going on sixteen. I worked for my hometown newspaper as an inserter, and as time passed, I began filling in occasionally as a “pressman.” The inserters were a bunch of old ladies (and me) who made spare money assembling the newspaper sections and stuffing in the ad inserts. When I got to help with the actual printing, it took the form of developing, treating, and bending the lithographic plates in preparation for printing. More often, I caught the papers as they rolled off the press to bundle them up for distribution. I also cleaned up, sweeping, taking out the trash, and the like, but I wasn’t good at it. I liked to take breaks to play my guitar at the back of the shop, so I think the editor-in-chief who ran things was probably annoyed as piss at me half the time.

There was no question I worked in the bowels of the operation. The real fun (and to the extent a small, rural paper could afford it, the real money) happened at the front of the building where the editor-in-chief and reporters worked. I passed through to gather up trash a few times a week. As I went, I admired the editor-in-chief’s ancient typewriter collection in his office. I enjoyed talking to the lead reporter, who loved Star Trek. The layout team’s work fascinated me, especially as they transitioned to digital layout from cutting and splicing pieces of paper together.

After my tour, I returned to the back, and I only heard from the front when it was time to go to press or when we had to stop the presses. We weren’t a separate world by any means, but we had a job to do, and that job was entirely a pragmatic one: keeping the machinery running and enabling the actual enterprise which paid us. Inasmuch as I felt like an important part of the whole, it came from a sense of responsibility toward the final product.

About a decade later, I stumbled into my current career in programming. Now I find myself at the back of the house again. The work echoes my first job sometimes—working on the machinery, keeping things running, alongside other programmers and operations folks. This time, though, the job comes with a dose of values dissonance for me. It feels like a wildly inverted amount of prestige goes to us, the people running the machines, instead of to the others who are closer to the actual creation (and the customers using it).

I’m not sure our perceived value is unwarranted—programming is hard. I’m more concerned about the relationship between the front and back of the house. It could be that we, as programmers and tech people, undervalue the people making the content and interacting with the customers. I see the skewed relationship when I look at inflated tech salaries. It makes itself evident in startups made up of all or mostly engineers. I felt it most acutely when I considered becoming a tech writer, only to be reminded it could derail my career and cost me monetarily.

I don’t think my observation comes with a cogent point. Maybe only that tech can’t be just about the engineering, any more than a newspaper can be only a printing press.
