Beginning Astrophotography: The Deep Sky

I’ve spent so long looking at Jupiter in my backyard that I finally decided I wanted to see if I could spot anything outside of our solar system. Light pollution sorely inhibited my efforts, but I managed to capture a few things! I’ll keep this post short and just share two representative photos I took.

Each photo has a small bit of blur in the direction of about eleven o’clock. This is due to a slight jostle that happened as I lifted my finger from the camera shutter. I’m still quite new at this—these are the first extended exposures I’ve taken through a telescope—and I didn’t know how much this would show up. Next time, I’ll use a remote shutter or a timer.

Ring Nebula

The Ring Nebula photographed on the night of 20 May 2017 at 23:10 PDT

One of the best things I saw last night was the Ring Nebula. It was one of only two nebulae that I was able to get any sort of decent view of, given the light pollution. It’s a planetary nebula, and it subtends a disc roughly the same apparent size as a planet like Jupiter. This photo, like all the rest in this post, was taken with my usual setup, with my telescope brought to f/6 by a focal reducer (which makes everything appear smaller and brighter). No physical filters were applied (meaning, nothing to block out light pollution). It’s been edited lightly to remove the light pollution haze and bring out the color and contrast.
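
If you’re curious what the reducer buys, here’s the arithmetic as a quick Python sketch. The numbers are assumptions for a typical 8-inch SCT with a common 0.63× reducer, not my exact equipment:

    # Rough arithmetic (assumed numbers) for what a focal reducer does:
    # it shortens the effective focal length, lowering the f-ratio, so
    # the image comes out smaller but brighter per unit area.
    aperture_mm = 203          # assumed: an 8-inch telescope
    native_focal_mm = 2032     # assumed: a typical SCT, f/10
    reducer = 0.63             # a common reducer factor

    reduced_focal_mm = native_focal_mm * reducer
    print(f"f/{native_focal_mm / aperture_mm:.0f} -> "
          f"f/{reduced_focal_mm / aperture_mm:.1f}")
    # f/10 -> f/6.3: the same gathered light lands on a smaller image,
    # which is why faint, extended objects look brighter this way.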

Seen with my actual eye, it looked largely like this photo, but the color was more difficult to make out. It looked ghostly and pale, like a puff of vapor. Color was a little easier to see if I looked just off to the side of it.

Hercules Globular Cluster

Hercules Globular Cluster photographed on the night of 20 May 2017 at 23:01 PDT

I didn’t expect a globular cluster to be at all interesting to look at. Most of the targets of opportunity from my backyard were globular clusters, though, and I looked at a few. I looked at the Hercules Globular Cluster (Messier 13) first. It was like a diffuse scattering of dew drops spread on the petals of a flower too dark to see. The individual stars were a bit difficult to make out. But it photographed decently well.

I saw, and photographed, a couple of others, but their photos were not quite as impressive, and I failed to note which was which, so I couldn’t properly identify them for this post.

Filters

Future photos I plan to take will use either a narrowband O-III filter or a broadband UHC/LPR filter. The former passes only a narrow band of light around the oxygen emission lines that many nebulae glow in, while the latter tries to block the wavelengths typical of artificial lighting. Either should help both with photography and viewing. So hopefully the next few photos will be improved! I’ve learned a lot already.

Our Shitposts Only Feed the Beast

We’re drowning each other out with shitposts, and Twitter profits from it.


I literally have no idea what’s going on in my friends’ lives anymore because there are so many posts to wade through. Twitter gave up on the firehose approach of showing us everything, and now it tries to curate for us, but its algorithms have narrowed my age down to somewhere between thirteen and fifty-four, and it thinks I’m interested in—not kidding—dads.

To get back my firehose, I use Tweetbot. I just took a quick estimate of my extensive mute list (which I personally curate), and it weighs in at over eight hundred mutes at this point. The vast proliferation of image posts, a workaround for the strict character limit Twitter imposes, has made these mutes almost worthless, so I’ve had to mute entire people. Occasionally Tweetbot freezes when I mute a person who’s particularly prolific.

What am I muting?

  • Laborious, overwrought, played-out jokes. (But usually these are spread in images, so I have to mute people. Sometimes they are blessedly hashtags.)
  • Conferences I’m not attending due to health reasons. (But often the conference has no official hashtag, and—in the case of Google I/O this year—I muted something like five hashtags, three people, and Google itself.)
  • People who repeatedly retraumatize me with what amounts to little more than virtue signaling and activism theater—streaming violence, threats, horrific news and images, voluminous threads, and psyche-eroding reminders of hatred (much of it aimed at me) through my timeline each and every day in order to rail against it publicly.
    • There is little I can do but mute these people entirely. Though they often need dozens or hundreds of tweets to spread their message, and though Twitter is itself a centralized and proprietary platform, they do not use any long-form, self-owned medium to promulgate their message. Why?
  • One-off news stories or other events.
    • In 2016, each celebrity death garnered a mute. I shared in the psychic pain each caused, but every person’s reaction flared it anew—and it wasn’t one reaction apiece, since some brought it up repeatedly for days.
    • In 2017, each news story echoes for hours over dozens or hundreds of tweets, despite every mute on the subject matter I can put up. Much of it is speculation or jokes.
    • Movie releases, sports events, galas and parties, press events, and a million other things I am literally not healthy enough to properly participate in, enjoy, or motivate myself to find interest in.
  • Downright awful, hateful stuff that my friends ought to know better than to share but just don’t.
    • “Drumpf” jokes or fat jokes about Trump.
    • Transphobic shit and people who are on my shit list for it: Erika Moen, Margaret Cho, RuPaul’s Drag Race, Tyra Banks, etc.
  • Shit I just can’t handle, like horrific prison conditions, or other specific situations and people: “triggers”. Nothing anyone can do about this. I mute it the best I can.

I recognize that this problem can be read as mine rather than Twitter’s. My thought is, fewer tweets altogether comes out to higher value for each tweet. So I try to restrain myself a bit, though I’m not always successful.

But here’s the whole damnable hitch: the more restraint I show, the more likely the few tweets I do emit get lost in the noise. Or, alternatively: the more I value a tweet, the fewer people will probably see it. And it’s cyclical. Someone who only tweets when it really counts might attract fewer followers in the first place and will be more likely to have their tweets drown in the ocean.


I recently wrote something close to seven thousand words about my astrophotography hobby. Then I shared it in a tweet, since a blog post is a rather dormant thing on its own. It got a decent amount of engagement, but I discovered something strange afterwards: followers who saw me mention space later on would be surprised.

Haven’t they seen any of my tweets over the last year about it? Any of the photos? Any of the posts I’d written on this site and then shared? No. None of that, they would say. They didn’t even know I had a site.

If this had happened just once or twice, I might’ve dismissed it. But this has happened repeatedly. These tweets just get lost somehow. If it’s not a tedious thread, or tweeted at the precise right moment, or retweeted by the right person, or some other magical thing I haven’t found, then it seems not to exist, I guess. I’m not sure what’s going on.


Or, maybe I do. Twitter turned off their firehose last year. Facebook did years ago. This game went pay-to-play. Either you’re already a person who drives a lot of engagement, who gets visibility, or you pay for the same.

Except, it would be super clumsy to literally have people pay to get their tweets seen. That’s just an ad, and it’s going to look like an ad, and nobody wants to click an ad, right?

But, like, right now, some people are living as ads. They drive particular kinds of traffic, specific kinds of engagement. They don’t look like ads. They target niches with surgical precision. They do this by churning out bulk quantities of, more or less, “pulp” tweets. Each one drives more engagement and works synergistically with the others.

It doesn’t particularly matter what they post. Could be they shitpost a very specific thing that a very specific set of weird Twitter just really likes. Could be @dril is an ad. Are you more likely to see a @dril retweet or one of mine? Which one profits Twitter more?


If this whole profit-motive part of my post seems vague, it’s because I’m speculating on the mechanism, and I’m sure there are experts who have already figured all this out. You should find and read what they’ve written. From my point of view, it’s plain that Twitter has likely already learned to capitalize on making some people more visible than others. This fact is beside my point.

More to the point, our intemperate shitposting has abetted this imbalance of visibility and allowed it to become profitable. It justifies the algorithmic curation, and the rest—pay-for-views, filter bubbles, propaganda, outright abuse—follows from there.

I see no easy way to turn it back. It is what it is.

Beginning Astrophotography: Journey to Jupiter

An enhanced photo of Jupiter, as viewed from the earth from an amateur telescope

Jupiter on the night of 3 May 2017, just before 9 p.m.

It’s been over a year since I wrote my first post in this series, Beginning Astrophotography: Jupiter Ascending. I’ve learned a great deal about what’s possible with the equipment I have on hand and what it takes to acquire a photograph like the one I took of Jupiter this May, with which I’ve begun this post. It represents both a rare night of luck and a couple of years of practice and reading.

This post is going to be a long one, with lots of sections, each describing a piece of my journey toward grabbing that photo. In my previous posts, I’ve withheld a lot of detail in order to focus on my personal story. My audience has consisted of my friends with whom I want to share my enthusiasm, whether or not they care about the practicalities.

Now I want to circle back and fill in those gaps. In this post, along with the story, I’m intentionally targeting an audience interested in the marrow of astrophotography, with its attendant detail.

I am an amateur, pursuing astronomy as a hobby in my free time, as I have done for less than two years now. What I describe below, I hope, lies within the reach of motivated hobbyists who may be fortunate enough to find themselves with the time, money, and circumstances to support the pursuit for themselves.

I have also written a separate post answering questions I frequently get when I share photos like these. If you’ve become curious about this hobby for yourself, it may help you set expectations about what amateur astrophotography involves.

The Outlay

In my earlier post, I discussed equipment choice a bit. Now I want to talk more about why I have the equipment I have, what its capabilities are, and what its limits are.

When I think of hobbies, I think of, say, knitting, drawing, fishing, hiking, or building things out of matchsticks. Each of these hobbies lets you start off with a handful of dollars, a few odds and ends lying around the house, or a castoff from a friend. What you get out of each depends a great deal on the effort and practice you put in up front. If you want to spend hundreds or thousands later on, that’s fine, but your results won’t commensurately improve without that effort first.

Then, I’ve found there’s a whole world of hobbies that are rather pay-to-play—photography, for example. You save up for that first camera, and maybe it comes with a lens, but gosh, the result leaves something to be desired. You need another lens. But this one won’t zoom in! Before you know it, you’re a handful of lenses deep and realize that you need a camera bag. Now you’re realizing your new camera takes photos faster than the SD card can save them, so you need a new one of those, and you might as well have a spare. And so on.

Astronomy as a hobby can go this way. Once you’ve got an entry-level telescope, you might be set, but then you might begin to see its shortcomings. Last year, I found myself at this point, considering my first upgrades. I feel extremely lucky that, at this point in my life, I can indulge in one of these pay-to-play hobbies.

Combining photography with astronomy just multiplies the effect. I began with a really modest budget, and then I leapt in with both feet.

First Telescope

The first budget I set for myself was about $300, but I ended up stretching to about $400. I chose a budget small enough that if I had a bad experience, I could eat the cost without too much pain. If I had it to do again, I might have set a budget closer to $200, and I would have come out of the experience just as informed and enriched.

I had no intention of doing any photography yet because I had literally no idea it was possible, what equipment was necessary, or how hard it would be. I figured it was out of reach, so I ignored it as a consideration.

With astrophotography out of the picture, I only considered what would give me the best view for my dollar. I started out researching how magnification worked, until I learned that magnification is practically limited by other factors, like eyepiece choice, focal length, and aperture. In fact, the more I read, the more aperture stood out as the single most salient attribute of a telescope’s viewing ability.
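
In hindsight, the relationship I was trying to find is simple enough to write down. Here’s a little Python sketch, with assumed numbers rather than any particular telescope, showing how the eyepiece sets magnification while aperture caps how much of it is useful:

    # Magnification = telescope focal length / eyepiece focal length.
    # A common rule of thumb caps useful magnification at about 2x
    # the aperture in millimeters. All numbers here are assumptions.
    focal_length_mm = 1200   # assumed telescope focal length
    eyepiece_mm = 10         # assumed eyepiece focal length
    aperture_mm = 203        # an 8-inch aperture

    magnification = focal_length_mm / eyepiece_mm
    max_useful = 2 * aperture_mm
    print(f"{magnification:.0f}x with this eyepiece; "
          f"~{max_useful}x is about the practical ceiling")
    # 120x with this eyepiece; ~406x is about the practical ceiling.
    # Past that, the view only grows dimmer and blurrier, which is why
    # aperture matters more than any advertised "power".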

I also explored a maze of other features, like fancy, computerized controls and such, but I found those dug significantly into the price. When telescopes in my price range included fancy features, they also invariably had smaller apertures.

So I had to trade off between fancy features and sheer viewing power. I decided to prioritize aperture. I didn’t know what I’d be looking at, so I thought having as much aperture as I could afford would accommodate the most situations. And I thought the fancy features would be intimidating and would hinder me from learning the mechanics of using a telescope.

I ended up buying an eight-inch reflector. It cost me $380. Reflectors use an extremely simple design—I was paying for little more than a metal tube and a couple of mirrors. If I had known I’d be primarily looking at bright targets (moon and planets), I might have made a different choice and not prioritized aperture as much. In fact, the telescope I got was right at the edge of what I could carry in my car or by hand.

When it arrived, I took it out that very night and saw Saturn.

Second Telescope

I had tried to take a picture of Saturn that first night, but I didn’t get anything recognizable. It didn’t take long for me to decide both that, yes, I definitely wanted to pursue this hobby further, and I definitely wanted to share it with others who couldn’t be there with me.

As I’ve mentioned, I feel I’ve had a lot of personal luck in being able to set a much larger budget for my second telescope. I believe that I budgeted around $3,000, but in the end, I’ve probably invested, all told, $4,500 in it and accessories. Not all of that has been spent at once, though. In fact, again, some of it was possibly overspent since I didn’t know exactly what I needed.

I felt comfortable with a larger budget because I had decided I was investing for the longer term—I do not intend to buy another telescope for a very long time, if ever. So I thought of this as my “lifetime” telescope.

In buying the second telescope, I wanted a more compact tube (in length), mistakenly thinking it would mean a lighter overall telescope. I was dreadfully wrong—the current telescope altogether weighs something like a hundred pounds assembled. I also thought it would be more portable, but again, I was wrong—a more complicated setup has led to many more (heavy) pieces to set up and break down each time I want to use it.

I continued to focus on aperture (forgive the pun), but I also wanted computerized tracking, a hard requirement for more serious astrophotography. Computerized tracking lets the telescope follow an object in the sky as it moves—as the Earth moves—so that the object doesn’t slide out of view or move around.

In my budget, my requirements meant buying a Schmidt-Cassegrain telescope kit, including a computerized mount. A Schmidt-Cassegrain telescope (SCT) is a kind of compact reflector telescope combined with a special lens, called a corrector plate.

I was daunted by the prospect of learning to put it together and break it back down—each time I wanted to use it. I was daunted by the prospect of figuring out how to align it to the sky—each time I so much as moved it a few inches. I’ve gotten better at these things over time, and they’re not so bad, but if I had begun with this telescope, I might have literally cried and given up at some point. Learning to use it has been, in itself, a journey for another time.

I got a few accessories to go with this telescope, too, including a camera adapter. (I’ll mention other accessories as they’re relevant.)

Camera and Adapter

I already owned a camera for taking photos, and I needed to figure out how to connect this thing, somehow, to the telescope. It turns out that adapters exist that lock onto the camera body like a lens would, while the other end is shaped like an eyepiece that goes into the telescope. They do nothing more special than hold the camera’s sensor at a fixed position and distance from the telescope’s back opening (or an eyepiece, if one’s in there). From there, you focus the telescope’s light onto the sensor, and the entire telescope functions as one giant lens for your camera.

These adapters are usually relatively inexpensive. One I’ve used recently is on Amazon, and an earlier adapter kit costs about the same.

As I mentioned in my FAQ, it’s even possible with some practice simply to hold any camera up (with a lens) to the eyepiece of a telescope, focus, and take a photo. This works, even with a smartphone. There exist adapters to help with this.

My camera is a Sony α6300 with an APS-C CMOS sensor. It’s a mirrorless camera, making it like a smaller version of a DSLR camera. I chose it for more general photography, but it works decently for astrophotography because it’s light and takes 4K-quality video.

Focus and Seeing

Once my telescope was assembled and ready to use for the first time, Jupiter provided the first target of opportunity from my house. The first challenge I had was focusing on Jupiter properly.

I live in the Pacific Northwest, where conditions usually aren’t conducive to astronomical observation in the first place. Even when the sky clears, that isn’t the end of the story. For planetary viewing, astronomical seeing plays a huge role. Without good seeing, Jupiter’s disc appears to smear and soften randomly, no matter what I do or how hard I try to focus. Magnifying further doesn’t help.

Below, I’ve added a small video clip of what Jupiter looks like under relatively poor seeing. It wobbles, shimmers, and smears.

Seeing changes from moment to moment, so maybe if you’re patient, the seeing will clear for a moment on a given night, and you can take good photos or video. The problem is, without good conditions to start with, it’s tough to know if you’ve focused properly in the first place.

Another problem is that observing Jupiter actually requires some study and practice, to become accustomed to its appearance through the telescope: how it should look when it’s perfectly in focus, what distortions come from bad seeing, and what distortions come from bad focus.

Last year, I used a lot of trial and error. I found that each night I got a little better, saw a little more detail. Where first I saw a mottled disc, I wondered later, were those cloud bands? Was that the spot? Is that how it really looks, pale and pink, instead of blood red like I’ve seen on TV?

I learned to use the moons, which appear much smaller and nearly point-like, to improve my focus. I also tried using a device called a Bahtinov mask, which is a simple piece of plastic with slots that goes over the end of the telescope. Its job is to distort point sources of light in a specific way such that, when something’s slightly out of focus, it’s more obvious.

See the two examples below. The first is slightly out of focus, while the second is perfectly in focus.

 

Both photos look almost identical, but look closely. The diffraction spikes (the lines of light) don’t quite meet in the center in the first image. In the second one, they do. The smaller star off to the left looks a bit softer in the first photo, while it looks sharper in the second. The difference is subtle, but it makes a world of difference—literally.

Since the whole sky is at the same focal distance, I can use the Bahtinov mask to improve my focus on a small point source of light, and then I can home in on Jupiter. Since I know it’s precisely focused at that point, I know any additional distortion is due to other factors, such as the atmosphere.

Seeing Further

Individual frame from a video of Jupiter

Now, assuming that I have a night of clear conditions and decent seeing, I’m still limited in the detail I can observe in any instant. At right, I’ve added an image of an individual frame from a video of Jupiter I took the night of 3 May 2017. It has not been altered in any way, except that it’s been cropped and rotated. The exposure length was (if I recall correctly) one eightieth of a second.

It was chosen from among thousands as representing one of the very best possible frames I took. The Great Red Spot is clearly visible in the lower left quadrant. There are distinguishable cloud bands, but their finer details are not present; they appear to be even, smeared stripes across the surface.

This is as far as I’ve ever gotten with an individual photo. I have literally hundreds of similar photos, all taken under slightly different circumstances and with slightly different methods, but they all end up looking roughly like that one. More detail eludes me, at least on a sensor.

(With the bare eye, a little more detail is to be found. The eye can see things the sensor can’t, and I can use nice eyepieces that aren’t compatible with my camera.)

I know I can buy yet more stuff and get more detail. It’s out there. I’m only a couple of years into my hobby here, and I haven’t explored CCD sensors or apochromatic refractors, and I’ve barely begun to learn to get all the detail I can from the photos I have taken. But this is the place I’m stuck at now.

So, if computers didn’t exist? The story would end here. But again, I count myself lucky.

Lucky Imaging

Computers have brought lucky imaging within the reach of amateurs like me. Specifically, I’ve been practicing a technique called image stacking. The idea is that, with some software I can find online, I can take lots of individual photos and combine them into a single better photo. That’s how I created the photo of Jupiter at the top of this post, along with the one below.
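
To make the idea concrete, here’s a minimal sketch of the heart of lucky imaging in Python, using OpenCV. This is not what PIPP or RegiStax actually do internally—real tools also align every frame and sharpen afterwards—and the filename is made up. It only shows “score the frames, keep the sharpest, average them”:

    # A minimal "lucky imaging" sketch: score every frame for
    # sharpness, then average only the best ones. No alignment is
    # done, so it assumes tracking held the planet still.
    import cv2
    import numpy as np

    VIDEO = "jupiter.mp4"  # hypothetical filename

    # Pass 1: score each frame. Variance of the Laplacian is a common
    # sharpness proxy; moments of good seeing score higher.
    cap = cv2.VideoCapture(VIDEO)
    scores = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        scores.append(cv2.Laplacian(gray, cv2.CV_64F).var())
    cap.release()

    # Pass 2: average the best 20% of frames. Stacking N frames cuts
    # random sensor noise by roughly the square root of N.
    keep = set(int(i) for i in np.argsort(scores)[-max(1, len(scores) // 5):])
    cap = cv2.VideoCapture(VIDEO)
    total, count = None, 0
    for i in range(len(scores)):
        ok, frame = cap.read()
        if not ok:
            break
        if i in keep:
            frame = frame.astype(np.float64)
            total = frame if total is None else total + frame
            count += 1
    cap.release()
    cv2.imwrite("stacked.png", (total / count).astype(np.uint8))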

Instead of taking hundreds and hundreds of individual photos, my feeling is that it’s easier to take a video over several minutes. Here’s where the benefit of 4K video really comes into play. Taking a video over several minutes also increases the odds of encountering a few moments of exceptional seeing. I can even fool around with the focus during the video, sacrificing some frames as “first pancakes” while I get things right. The software can later identify the best frames and use those.

With Jupiter, though, I can’t record for too long. Jupiter makes a full rotation inside of ten hours. This means its features will move across its disc and blur an exposure over the course of some minutes, even as seen from Earth! To play it safe, I try not to use frames spanning more than about a minute or two. (A lot of software comes with a “de-rotation” feature for this reason, but it’s better to avoid the problem in the first place.)
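
Here’s the back-of-the-envelope arithmetic behind that limit, using textbook figures for Jupiter and an assumed two-minute capture window:

    # How far a feature on Jupiter's equator rotates during a capture,
    # as a fraction of the visible disc.
    import math

    rotation_period_s = 9.925 * 3600  # Jupiter's day: just under 10 h
    diameter_km = 142_984             # Jupiter's equatorial diameter
    window_s = 120                    # assumed two-minute capture

    drift_km = math.pi * diameter_km * window_s / rotation_period_s
    print(f"~{drift_km:.0f} km of drift, "
          f"{drift_km / diameter_km:.1%} of the disc width")
    # ~1,509 km of drift, 1.1% of the disc width: already enough to
    # smear fine detail when the planet spans a few hundred pixels.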

The software I’ve found online so far is pretty daunting, confusing, and flaky. Most of it only works in Windows. I’ll describe here what I do, but I strongly encourage you to find what works for you because I am pretty sure I am doing something wrong or sub-optimally. I only hit upon this workflow after trying many, many different things over several nights and weekends, until the end result was somewhat presentable.

Preprocessing

The first thing I do is take the video file I’ve imported off my camera after observing and load it into a piece of software I found called PIPP. Its job is to take the video, crop it down, rotate it, find the best frames, extract those, put them in order, and output them.

Screenshot of PIPP

It took a lot of trial and error to get output that worked, and I’m still not sure I’ve done it right. The problem is that with a video of any size, it takes the better part of an hour to do its job, so I usually make my best guess and then check whether the output looks reasonable.

From a video of several thousand frames, I usually keep about 1,200 of the best (as PIPP ranks them).

Stacking

Once those 1,200 frames are sitting in a folder, I’ve been using a piece of software called RegiStax to turn them into a single detailed image.

 

I’ve added some screenshots above of RegiStax, as I’ve used it to prepare an image of Jupiter from frames similar to the one I included above. My experience of using this software is that it’s extremely confusing and took many hours of practice to get to work. Making things worse is the fact that any misstep would cause the software to misbehave or outright crash, so I became accustomed to simply closing and reopening RegiStax—and starting from scratch—anytime I did something wrong.

Finally, compounding the whole unpleasantness, I couldn’t see whether my result would turn out worthwhile until the very end when I began applying wavelet filters. I found myself flying blind, from beginning to end, until a planet popped out, usually wasting an hour each time.

As near as I can tell, here’s roughly the process in RegiStax.

  1. Hit “Select” in the upper left and open up all the images to stack at once.
  2. At this point, you’re looking at the “Align” tab, and you’re expected to align the images. (Nothing tells you this. You’re expected to have read it on the site.)
  3. First, hit the “Set alignpoints” button; this happens quickly and automatically. (I found that I had to tweak the alignpoint parameters to allow a few more alignpoints. It took me hours to figure this out.)
  4. Then click “Align”. This takes a moment.
  5. Finally, hit “Limit”. I found through trial and error that a smaller limit was better in my case, likely because my photos were somewhat less detailed. I ended up limiting down to something like 20% to 40% of frames.
  6. At this point, the software automatically moves you over to the “Stack” tab. I mostly left what I saw alone and hit the “Stack” button. This takes a moment. The image looks strangely blurry after this.
  7. Finally, I found myself at the “Wavelet” tab. I had no idea whatsoever what to do here, so I searched online for things to try. I’ll relate what worked for me (specifically, what I changed from the default), and after this list I’ve sketched what the wavelet step conceptually does.
    • I used the dyadic instead of linear scheme.
    • I used the Gaussian instead of the default filter.
    • I believe I linked the wavelets, but I only dimly recall.
    • The first wavelet filter I used aggressively, with denoise set to 0.11 and sharpen set to 0.125 or so. These values can be tweaked. Then I moved the slider to the left, and this is when I finally saw some detail emerge.
    • The second wavelet filter I slid without changing any values.
    • I tried adjusting the sixth filter very slightly, but its changes were extremely aggressive.
    • The wavelet filters add some aggressive artifacts, which I compensated for by clicking the button on the right called “Denoise/Deringing” and nudging some of its sliders until the ring artifacts softened.
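
For the curious, here’s a conceptual sketch of what the wavelet step amounts to—my own approximation in Python with OpenCV, not RegiStax’s actual algorithm. It splits the image into detail layers at different blur scales and boosts the finer layers:

    # Multi-scale sharpening in the spirit of RegiStax's wavelets:
    # each "layer" is the difference between two Gaussian blurs. Small
    # sigmas hold fine detail (cloud bands); large sigmas hold coarse
    # shape. Boosting a layer's gain above 1.0 sharpens at that scale.
    import cv2
    import numpy as np

    img = cv2.imread("stacked.png").astype(np.float64)

    sigmas = [1, 2, 4, 8]          # blur scales, fine to coarse
    gains = [2.0, 1.5, 1.0, 1.0]   # boost the finest layers the most

    blurred = [img] + [cv2.GaussianBlur(img, (0, 0), s) for s in sigmas]
    result = blurred[-1]           # start from the coarsest version
    for i in range(len(sigmas) - 1, -1, -1):
        layer = blurred[i] - blurred[i + 1]
        result = result + gains[i] * layer

    # With all gains at 1.0, this reconstructs the original exactly.
    cv2.imwrite("sharpened.png", np.clip(result, 0, 255).astype(np.uint8))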

Once all that was done, I saved the resulting image, the one I began this post with. I also tried this with a second video and had similar (but slightly less impressive) results. The original video was slightly differently taken, and some of the processing I used was also slightly different.

 

These photos represent the very best I’ve ever managed to take of any celestial object so far. Finer details are visible, such as some finer cloud bands, and a hint of the small white clouds between the Great Red Spot and its adjacent cloud band.

Lucky Stars (and Asterisks)

I’ve learned a lot along the way, and having done so, I can usually process a video of Jupiter into something clearer in about an hour. There’s a ton of room for improvement. RegiStax is literally just the first piece of software I managed to figure out well enough to get any kind of result. There are probably better processes and better pieces of software. And there are definitely better pieces of hardware and better photographic and noise-reduction techniques.

I’ll update this post with clarifications and additional information as needed. Feel free to contact me (especially on Twitter) to let me know what I can improve. Thanks so much for reading about what has been a labor of love for me.

Beginning Astrophotography: Frequently Asked Questions

Wide-field view of Andromeda Galaxy, individual photo taken without a telescope

When I mentioned I was writing another post about astrophotography, I also asked if there were any questions I should make sure I answer.

I’m very grateful especially to Julia Evans for asking several very good ones! A lot of these questions have come up more than once, so I thought they deserved their own post. I’ll answer them below to the best of my ability.

My answers are all based on my own experiences and limited by my own knowledge, of course. Many answers will vary based on the experience and equipment of the observer, too. I will try to address this in each answer.

Can I do astrophotography in my city?

Yes!

There are certain kinds of astrophotography which are relatively easy to do within a city, and some other kinds are rather difficult. It depends a great deal on the subject you choose, the equipment you have, and where your city is located.

The biggest challenge to pursuing astrophotography in a city is light pollution. (Aside from this, cities also obstruct the sky with their buildings and other structures.) But it isn’t hopeless! Bright objects are still visible. Think about how Venus remains visible at dusk, when the sky isn’t even fully dark yet!

Screenshot of DarkSiteFinder.com Light Pollution Map near Portland, OR

A map of the light pollution around Portland, Oregon

Depending on where you are, various objects may or may not be visible. A good way to get a sense of what’s visible with the naked eye or with a telescope is by using the Bortle scale. From there, you can try to identify which zone you’re in using a map such as the light pollution map at the DarkSiteFinder. See the screenshot of what the area of northwestern Oregon and southwestern Washington looks like.

The planets are so bright, it won’t matter where you are: light pollution won’t drown those out. Likewise for the moon. Through a telescope, bright star clusters and nebulae remain visible as well, and you can sometimes squint and see those even without one. I’ve also seen decent photos of some very bright nebulae (like the one in Orion) taken from inside cities.

Other subjects, like wide-field views of the stars, details of dusty or dark nebulae, and faint galaxies will be very challenging to photograph. The kinds of exposures needed to capture these will also capture lots of incidental light.

A telescope’s primary job is not magnification but light-gathering. The bigger the telescope, the more light it gathers. It’s the same way a magnifying glass can turn sunlight into a spot hot enough to start a fire. A telescope will make any light in the sky much brighter. In the city, it unfortunately can capture a lot of light you don’t want to see, and details will look pale and indistinct.
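
As a quick illustration of the scale involved (with assumed round numbers), here’s how much more light a modest telescope gathers than the naked eye:

    # Light grasp scales with the square of the aperture's diameter.
    pupil_mm = 7     # assumed: a fully dark-adapted pupil
    scope_mm = 203   # assumed: an 8-inch telescope

    print(f"~{(scope_mm / pupil_mm) ** 2:.0f}x the naked eye's light")
    # ~841x: faint targets become visible at all, but city skyglow
    # gets amplified by exactly the same factor.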

How fancy does my equipment need to be?

Just like with any photography, there’s a huge range in fanciness, costing anywhere from a couple hundred dollars to many, many thousands.

A beginner’s telescope of any significance might start around two or three hundred dollars, in the United States. There are less expensive ones, to be sure, and any telescope is better than no telescope. In fact, there is a huge market for used telescopes—find it if you’re on a budget! But here, I define “any significance” as a telescope flexible enough for looking at many categories of things and able to be accessorized.

Photo of a detail of the moon taken by an iPhone through the eyepiece of a reflector telescope

My very first astrophoto taken of the moon through the eyepiece of my first telescope with an iPhone

I began by using one of those and putting my iPhone up to the eyepiece of my first telescope. This was the very first astrophoto I ever took, and you can see it on the right. There’s a lot of light leaking in, and the photo is really indistinct. All of this could have been easily fixed with a twenty-dollar accessory, which would have held the phone still, at the right distance, and blocked out the extra light. This is a perfectly fine way to get started.

Photo of the moon taken through the eyepiece of a reflector

Another photo of the moon taken through the same telescope ten days later

Later on, I took some better moon photos doing basically the same thing with another camera. The only improvement I made was holding it a little differently and focusing manually (the manual focus is the reason I switched cameras). I never subjected these photos to any real editing besides some light touching up in Apple Photos.

Nevertheless, these were pretty challenging to take because I was literally just holding up the camera to the eyepiece of the telescope, making sure they were lined up perfectly, and moving the telescope to track the moon at the same time. I wish I’d gotten that smartphone accessory, but I decided instead to upgrade everything.

My post Beginning Astrophotography: Jupiter Ascending describes the first upgrade I made. In Beginning Astrophotography: Journey to Jupiter, I go into much greater detail about my equipment choices, budget, and capabilities—but remember that my choices don’t have to be your choices.

What kind of camera do you use? Does it have to be a fancy camera?

I currently happen to use a Sony α6300 E-mount camera with an APS-C CMOS sensor. This is a mirrorless camera, meaning it’s significantly smaller and lighter than a full DSLR camera but gives me a lot more control than either a smartphone camera or most point-and-shoot cameras. I chose this one with astrophotography in mind because of its extremely rapid autofocus (when using a lens, good for wide-field) and its ability to take 4K video, but I use it for general photography too.

A lot of people taking pictures of the sky at large seem to use DSLRs because of the quality and size of the sensor and because of the fine control it gives them (allowing them to expose for a long time, for example). When it comes to through-the-telescope astrophotography, I almost always see people using a purpose-made CCD (charge-coupled device) camera. These cameras are almost like purpose-built webcams that strap onto the back of the telescope and are specially made for gathering space photos. They can run a few hundred dollars and up.

All that said, the difference in sensor really becomes relevant once you start using techniques that exaggerate the noise it gathers. A smartphone camera is a fine place to start, and remains part of my repertoire because it’s just so damn easy. These phones’ cameras are becoming indistinguishable from mid-range consumer point-and-shoot cameras.

What kind of stuff can I see through a telescope with my eyes?

Oh, all sorts of things! But they may not look as you expect. Telescopes do funny things that defy all our expectations.

  • Everything you see is turned upside down. (There are a few telescopes which don’t do this, but they have drawbacks and are seldom used.) This isn’t a big deal for looking at the sky, usually, but makes it hard to orient yourself.
  • Everything is much brighter. Depending on the aperture (width) of your telescope, a sky which appears pitch black will have a soft blue glow. The moon will become bright enough to leave spots on your vision and even be painful to look at for long. Planets will glow like headlights in the distance.
  • Stars will be much brighter and much more numerous. There’s nowhere you can point your telescope that some stars won’t be visible, especially in a dark area. A star will never appear to be more than a very bright point at most, though, unless the star is actually something else (two stars, a small nebula, a planet, or whatever). No telescope on earth can zoom in enough to see a star as more than a point.
  • Star clusters will look like bright scatterings of jewels, like little private Milky Ways only you can see sometimes, or like indistinct smudges at other times. The Pleiades will have bits of dust around them.
  • Bright nebulae will look indistinct to the naked eye and will vary a lot by light pollution. The Orion Nebula will be dark and dusty and look as if glowing pearls were strewn across a sea floor among sand. It may be difficult to make out any color if you’re standing in a city or using a smaller telescope.
  • How much a telescope can magnify depends on the eyepiece you use, its focal length, and its aperture. You change out the amount of magnification by using different eyepieces. The more you magnify, the dimmer the image becomes, and the more distorted it gets. Beyond a certain point (which differs by a telescope’s length and aperture), there’s no point in trying to magnify further. Distortions come from both the air moving around constantly and the bending of light itself. (Imagine using a magnifying glass—it magnifies slightly more as you hold it away from the subject, but up to a point, and beyond that, it just distorts.)
  • Planets and the moon will appear to shimmer, as if viewed distantly on a very hot day. In particular, most of the time, focusing on a planet will seem challenging, as if just when it’s about to come into focus, it goes right back out. There are times when you’ll have better luck than other times—when the seeing is good.
  • Unless you’re using a telescope with automatic tracking, everything is going to move—fast. To be sure, this is the Earth’s motion, but you’ll be surprised just how quickly things move out of view. When I first began looking at planets, I had perhaps twenty seconds to look at them before they were totally out of view. The less magnified they are, though, the less the motion is magnified.

It’s one thing to know all these things. It’s another to put your eye to the eyepiece and look and make sense of what you’re seeing. It literally takes practice, over minutes, hours, and several occasions, to get better at actually seeing what you see, because these sights are so outside our experience.

You’ve never really seen anything like, say, the Orion Nebula—you’ve seen evenly exposed, two-dimensional halftone prints, or you’ve seen pixellated digital images constrained within the sRGB color gamut. The actual celestial body—the scintillating, shimmering indistinct dust cloud, stars littered within it, fanned out between the poles of dimness and brilliance, filled with colors and forms you’ve never seen before—is indescribable. Our brains are not designed for this sight, and our lives have not prepared us for the experience.

What objects are there? Is it only Earth’s moon and some planets, or are there other things you can look at?

I’ve named several already, but lemme categorize!

  • Yes, there’s the moon.
  • There are planets! The classical ones (out to Saturn), and some people like to look at the ones further out.
  • Then, there are the planets’ moons! I’ve seen photos and even animations of the Galilean moons transiting Jupiter.
  • When there are comets around, people with telescopes can spot these well before the rest of us.
  • Further out, there are
    • double stars;
    • variable stars (ones which slowly get brighter and dimmer);
    • star clusters, like the Pleiades;
    • bright nebulae like the Orion Nebula or the Crab Nebula; and
    • galaxies like the Andromeda Galaxy, Triangulum, etc.

A guy looking for comets in the eighteenth century, Charles Messier, got really annoyed by all the smudges he saw which were not comets, and he began listing them as Messier objects. The current list is a decent smattering of deep sky objects, all of which are decently easy to observe with a telescope.

Can you see satellites?

I originally got this question in a longer form, with several questions clustered together that I wanted to answer at once.

can you see GPS satellites? (or are those in geosynchronous orbit so no? what even is geosyncronous orbit?) can you see the international space station?

In general, low-earth-orbit satellites are often visible. Their motion usually makes them obvious: they’ll be a brightish dot drifting along. You won’t generally be able to see much more detail than that. It’s possible to distinguish a satellite from an airplane because an airplane will often have blinking colored lights and will appear to approach from the horizon, grow faster and brighter, and then dim and appear to slow toward the opposite horizon. Satellites are usually singular, steady lights which move at a fixed speed.

One phenomenon to know about is the “satellite flare,” the best-known examples being “Iridium flares,” where a satellite suddenly grows bright in the sky for a moment and then dims again. This happens when a satellite catches the light from the sun on its surfaces (such as its solar panels) and reflects it back down to us.

The International Space Station is the brightest satellite of all and is very often visible. It moves very quickly because it circles the entire earth in about ninety minutes, which makes photographing it challenging. That doesn’t mean people haven’t tried—and succeeded! I’m not an expert on this, so check out this rad article on spotting and photographing satellites, including the ISS, which includes some ISS photos!

GPS satellites are not in geosynchronous orbits. They’re in medium earth orbits, about twelve thousand miles above the earth, which is fifty times further away than the ISS and most other satellites. They take about twelve hours to go around the earth. I don’t really know precisely how big GPS satellites are, but if they’re about the size of a bus, they subtend roughly a tenth of an arcsecond at that distance (based on some quick back-of-the-envelope math, spelled out below). This article says the limiting resolution of the atmosphere is usually about two or three arcseconds (rarely less, and never a tenth). And all this assumes the satellite puts off enough light to be visible at such a small size and that you know precisely where to find such a minuscule thing. It’s probably out of the question that you could see a GPS satellite, even under the best conditions and with the best telescope.
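
Here’s that arithmetic, spelled out (the bus-sized figure is an assumption):

    # Angular size of a bus-sized satellite at GPS orbital distance.
    import math

    size_m = 12              # assumed: "about the size of a bus"
    distance_m = 20_200_000  # roughly 12,550 miles up

    arcsec = math.degrees(size_m / distance_m) * 3600
    print(f"~{arcsec:.2f} arcseconds")
    # ~0.12 arcseconds, versus the ~2-3 arcseconds the atmosphere
    # typically permits: hopelessly below the resolution limit.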

Geosynchronous orbits are twice as far up still as GPS satellites, perhaps twenty-five thousand miles up or so. This far up, a satellite moves along slowly enough that it’s always above the same longitude of the earth, and if it were visible (though it wouldn’t be unless it were huge), it’d appear not to move eastward or westward. If the satellite is above the equator, it’s called geostationary, and the position of the satellite would appear to be fixed like a star.

These orbits are useful for communications and weather satellites, but this is getting off the subject of astrophotography, so I won’t get into the mechanics of this.

How do you figure out when the stuff you want to see is going to be in the sky? Do you use an app or something?

I use an app called SkySafari. It has a feature that can prepare viewing lists for a given evening. I can also choose a time and see what the sky looks like at that time. It helps me determine what magnitude a given object will have, sort by magnitude or category, and other features.

I also use a website called the Clear Sky Chart to know whether conditions nearby will be amenable to observation. Here’s the Clear Sky Chart for Portland. And here’s the chart for a state park where I can more easily access dark sky conditions.

Can I see cool stuff on the moon?

Depends on what you mean by cool!

The moon, viewed through a telescope, is indescribably vivid, immense, and gorgeous. Even looking at it through binoculars can give you a sense of what I mean. Where it’s always been a perfect sphere with indistinct features on it, it becomes a landscape with mountains casting shadows, basalt plains, and craters that look as fresh as when they were made.

If you mean man-made things, unfortunately, no. Man-made artifacts on the moon are much too tiny. The moon is huge.

Why do you love having a telescope?

The first thing I remember loving, when I was old enough to love something, was space. I began with early-1960s Childcraft books and encyclopedias my grandma had when I was very young. The information they had was dated, sketchy, and incomplete, making everything seem mysterious, dim, and distant. Most things were still shown as illustrations, or at best, blurry photos. For example, at that point they didn’t even contain any clear photos of Mars, making the idea of canals seem reasonable. (There were no such things as space telescopes, and lucky imaging was just beginning to become feasible.)

Later, I got newer books from the library, with clearer photos and more precise facts, and I filled in the gaps in my knowledge. But the more I learned, the more complicated everything became. The new information didn’t just bring more answers—it brought more questions, more mystery. And the mystery drove me on.

It’s like this. I grew up in an isolated and insular place. My upbringing didn’t give me the opportunity to travel. I’d never seen mountains, deserts, big cities, foreign countries. Books describing space and all the things in it gave my imagination worlds to seize upon, and I imagined adventures to inhabit them. The subjects they described, the planets and galaxies and nebulae, loomed large in my heart and in my mind’s eye. Actual stars—and not celebrities—were my stars.

This is why I said, in my First Stargazing post, that seeing Saturn for the first time was like seeing a celebrity for me. It was literally difficult to accept the reality once it finally presented itself. I had always wanted to see these things with my own eyes, through no one’s filter. I had assumed they were out of reach until I looked into it and realized I was simply wrong. With a bit of equipment and a lot of practice, I could get started and see for myself the things I’d been reading about all my life.

When I first saw another galaxy, I saw across a timeless gulf that no person could ever cross. I’ve seen a primordial storm on Jupiter older than anyone alive today. I’ve felt the Earth wheel under my feet so fast I could barely keep up. I’ve gotten lost among stars without names. Seen without the intercessor of someone else’s words, or someone else’s photos, new parts of the sky opened to me which had previously been ineffable and therefore lost.

Almost Human

Not that anyone asked, but here’s what skeeves me out about Mark Zuckerberg’s recent attempts to tour the nation and pretend to be a normal person to everyone he meets.

He hasn’t announced a single thing of the sort, but no breathing human can doubt he’s considering running for president of the US. His ambitions are as naked as they are clumsy. This comes from a man who has zero experience in the political arena and who, when he inevitably announces, will only reveal the extent of his sense of entitlement to the absolute apex of political accomplishment.

This ham-fisted tour shows his lack of agility or circumspection. So does running a company which fostered an attitude that speed trumped care, craft, or empathy. So does making statements about the death of privacy as a “social norm” and then walking it back.

The bottom line for me is, the whole thing bespeaks a man who simply feels he gets to run if he wants. Trump opened up a kind of permission effect: qualifications are now off the table. Now only volume matters. 2020 will see a field of clowns campaign, each jockeying for attention. And Zuckerberg is entitled to his captive nation.

Speculating on Applications of the Hypothetical Interstellar Origin of the Zodiacal Dust Cloud

I had read part of Brian May’s thesis, A Survey of Radial Velocities in the Zodiacal Dust Cloud, a week or so ago, and one section of it had stuck in my mind. Here I quote it below (ellipses mine).

It is remarkable that so few earlier records are available, since…the mysterious Light of the Zodiac ought to have been very conspicuous in the dark skies of earlier civilisations, comparable in dimensions with the Milky Way, and at its brightest, more luminous. … Could it be that our view of the Zodiacal Light is highly variable? Cassini was convinced that it disappeared completely between 1665 and 1681, and this man was certainly no casual observer. I have no solution to the puzzle. In the light of Cassini’s reports, along with Jones’s observations detailed below, and the curious dearth of references pre-Cassini, I am convinced that, despite the lack of any recent confirmation, we must admit the possibility that the Zodiacal Light has not always been what it is today.

The passage in full is worth reading, but the salient detail is that the zodiacal light may be a relatively recent phenomenon.

Later in the same chapter, another note got my attention, here quoted.

One of the great ‘surprises’ in space observations, as noted by Sykes et al (2004), was that the detectors on board the space vehicle Ulysses, once past the orbit of Jupiter, began to register particle impacts from the opposite direction to that expected from interplanetary dust particles, and at high velocities, clearly indicating an interstellar component to the dust cloud, which predominates at these distances from the Sun, but has been now detected, by the Hiten detector, even at 1 AU (Grün et al 1993).

Here we see that some of the dust particles are interstellar as well, meaning they come from outside the solar system.

My mind returned to these two details today, and it occurred to me that the latter easily accounts for the former if I posit that the zodiacal dust cloud originates from interstellar dust through which the solar system has moved during its galactic orbit—encountering it in much the same way the Earth encounters cometary dust during its own orbit, giving rise to meteor showers.

In fact, the zodiacal dust cloud’s interplanetary dust particles (IDPs) are located asymmetrically with respect to both the ecliptic and to the circumstellar disc, and I speculate it is for this very reason. (That is to say, in reading Brian May’s summarizations of other surveys’ findings of the locations of the IDPs in our solar system, they tend to be located in particular areas, not in a smooth ring.)

When I turned to chapter four of his thesis, I found that Brian May’s interpretations of his results already include hypotheses about IDPs flowing in from the interstellar medium, both as the solar system moves through it and as the IDPs flow inward. These have been vindicated by more recent observations.

The novelty of my speculations, which I’m sharing here, lies in the recency and asymmetrical nature of the zodiacal dust cloud, and the implications these have for understanding the nature of the galaxy.

By this, I mean that although the interstellar dust throughout the galaxy cannot be observed directly, we can extrapolate from our knowledge of the zodiacal dust cloud—our knowledge of the velocities and positions of the IDPs within our solar system—to use our sun’s past as a kind of lantern to shine a light on the dark, dusty path through which we have passed over the past centuries or millennia. In other words, if the spatial distributions of unseen interstellar dust in our galaxy are uneven, we can gain insight into these distributions.

If this is possible, I foresee implications in dark matter physics because it would help refine our knowledge of the mass of the galaxy, which would in turn help narrow the parameters of dark matter.

I’ve also wondered if this has climate science implications. I’ve read about hypotheses involving climate cycles tied to phenomena from the interstellar medium, and clouds of dust may be one such phenomenon.

The Helicopter Analogy of Mental Health

I have decided that helicopter controls make the perfect analogy for mental health.

  • When descending (settling) under power, a thing called a vortex ring state can occur where you’re basically sucking yourself down along with all the air around you. If you don’t escape, you crash and die. Paradoxically, struggling against it by applying more power, which should theoretically pull you upward, just makes the problem worse, and you fall faster! The only way out is a lateral move.
    • This feels a lot like spiraling out of control! In such times, all you can do is step out of your own downwash, distract yourself, or seek help. If you’ve got coping mechanisms lined up ahead of time, these can help a lot!
  • If you go up too fast, the rotors over you—which are designed to flex!—will actually bend down enough to strike your own tail, which will of course cause you to crash and die. To counteract this, you can ascend more slowly or move laterally while you ascend to direct the acceleration in multiple directions.
    • It’s not only okay but recommended to make gradual and measured progress toward a goal. This also means that taking an indirect path there may also be the safest!
  • Hovering is the hardest part of learning to fly a helicopter. Aerodynamic forces are constantly moving the helicopter in every axis, and moving any one control affects the other axes, which entails touching the other controls too.
    • Our mental health tends not to remain in a steady state either, I’ve found. We naturally fluctuate between highs and lows. But as with flight, we do learn to maintain some control over time and not to veer too far into either extreme, as these can be dangerous or lead to overcorrection.

I have probably stretched the analogy too far already, so I’m not even going to mention the rad one-step-forward-one-step-back analogy I have for retreating blade stall.

Minimizing Your Trust Footprint

I originally published the following last year as an article in The Recompiler, a magazine focused on technology through the lens of feminism. I present it here with as few modifications from its original publication as possible.


For everyone who chooses to engage with the Internet, it poses a conflict between convenience and control of our identities and of our data. However trivially we interact with online services—playing games, finding movies or music, connecting to others on social media—we leave identifying information behind, intentionally or not. In addition, we relinquish some or even all rights to our own creations when we offer our content to share with others, such as whenever we write on Medium.

Most of us give this incongruity some cursory thought—even if we don’t frame it as a conflict—such as when we set our privacy settings on Facebook. With major data breaches (of identifying, health, financial, or personal info) and revelations of widespread, indiscriminate government surveillance in the news over the last few years, probably more of us are thinking about it these days. In some way or another, we all must face up to the issue.

At one extreme, it’s possible to embrace convenience completely. Doing so means handing over information about ourselves without regard for how it will be used or by whom. At the other extreme, there’s a Unabomber-like strategy of complete disconnection. This form of non-participation comes along with considerable economic and social disenfranchisement.1

The rest of us walk a line between the two, maybe hewing nearer to one extreme or the other as our circumstances allow. This includes me—as time passes, I usually try to exert more control over my online life, but I still trade off for convenience or access. I use an idea I call my trust footprint to make this decision on a case-by-case basis.

For example, I realized I began to distrust Google because the core of their business model is based on advertising. I wrote a short post on my personal website about my motives and process, but to sum up, I didn’t want to be beholden to a collection of services that made no promises about my privacy or their functionality or availability in the future. I felt powerless using Google, and I knew this wouldn’t change because they have built their empire on advertising, a business model which puts the customers’ privacy and autonomy at odds with their success.

Before I began to distrust Google, I didn’t give my online privacy or autonomy as much thought as I do today. When I began getting rid of my Google account and trying to find ways to replace its functionality, I had to examine my motives, in order to clarify the intangible problem Google posed for me.

I concluded that companies which derive their income from advertising necessarily pit themselves adversarially against their customers in a zero-sum game to control those customers’ personal information. So I try to avoid companies whose success is based on selling the customer instead of a product.

Facebook, as another example, needs to learn more about their users and the connections between them in order to charge advertisers more and, in turn, increase revenue. To do so, they encourage staying in their ecosystem with games and attempt to increase connections among users with suggestions and groups. As noted in this story about Facebook by The Consumerist last year:

Targeted ads are about being able to charge a premium to advertisers who want to know exactly who they’re reaching. Unfortunately, in order to do so, Facebook has to compromise the privacy of its hundreds of millions of users.

Most social networks, Twitter included, engage in similar practices.

Consequently, my first consideration when gauging my trust footprint is to ask who benefits from my business: What motivates them to engage with users, and what will motivate them in the future? This includes thinking about the business model under which online services I choose operate—to the extent this information is available and accurate, of course.

Of course, this information often isn’t clear, up front, available, or permanent, so it’s really a lot of guessing. The “trust” part is quite literal—I don’t actually know what’s going to happen or if my information will eventually be leaked, abused, or sold. Some reading and research can inform my guesses, but they remain guesses. I don’t trust blindly, but it is still something of an act of faith.

It’s for that reason my goal isn’t to completely avoid online services or only use those who are fully and radically transparent. I only want to minimize the risk I take with my information, to reduce the scale of the information I provide, and to limit my exposure to events I can’t control.

The second consideration I make in keeping my trust footprint in check is to question whether a decision I make actually enlarges it. For instance, when I needed a new calendaring service after leaving Google, I realized that I could use iCloud to house and sync my information because I had already exposed personal information to iCloud. I didn’t have to sign up for a new account anywhere, so my trust footprint wasn’t affected.

The tricky part about that last consideration is that online services have tendrils that themselves creep into yet more services. In the case of Dropbox, which provides file storage and synchronization, they essentially resell Amazon’s Simple Storage Service (AWS S3), so if you don’t trust Amazon or otherwise wish to boycott them, then avoiding Dropbox comes along in the bargain. The same goes for a raft of other services, like Netflix and Reddit, who all use Amazon Web Services to drive their technology.

That means it’s not just home users who are storing their backups and music on servers they don’t control. Whether you call it software-as-a-service or just the “cloud,” services have become interconnected in increasingly technological and political ways.

It doesn’t end with only outsourcing the services themselves. All these online activities generate vast amounts of data which must be refined into information—for which there is copious value, even for things as innocuous as who’s watching what on TV. Nielsen’s business model of asking what customers are watching has already become outdated. Nowadays, the media companies know what you watch; the box you used to get the content has dutifully reported it back, and in turn, they’ve handed that data over to another company altogether to mine it for useful information. This sort of media analytics has become an industry in its own right.

As time passes, it will become harder to avoid interacting with unknown services. Economies of scale have caused tech stacks to trend more and more toward centralization. It makes sense for companies because, if Amazon controls all their storage, as an example, then storage becomes wholly Amazon’s problem, and they can offer it even more cheaply than companies which go out and build their own reliable storage.

Centralization doesn’t have to be bad, of course. It’s enabled companies to spring up which may not have been viable in the past. For example, Simple2 is an online bank which started from the realization that to get started with an entirely new online bank, “pretty much all you need is a license from the Fed and a few computers.”

The upshot is that keeping your online life entirely within your control becomes increasingly fraught as centralization proceeds. When you back up to “the cloud,” try to imagine whether your information is sitting on a hard disk drive in northern Virginia, or maybe a high-density tape in the Oregon countryside.3

It’s not even necessary to go online yourself to interact with these business-to-business services. Small businesses have always relied upon vendors for components of their business they simply can’t provide on their own, and those vendors have learned they can resell other bulk services in turn. The next time you see the doctor, ask yourself, into which CRM system did your doctor just input your health information? Where did the CRM store that information? Maybe in some cosmic coincidence, it’s sitting alongside your backups on the same disk somewhere in a warehouse. Probably not, but it could happen.

My trust footprint, just like my carbon footprint, is a fuzzy but useful idea for me, which acknowledges that participation in the online world carries inevitable risk—or at least an inevitable cost. It helps me gauge whether I’m closer or further away from my ideal privacy goals. And just the same way that we can’t all become carbon neutral overnight without destroying the global economy, it’s not practical to run around telling everyone to unplug or boycott all online services.

Next time you’re filling out yet another form online, opening yet another service, trying out one more new thing, remember that you’re also relinquishing a little control over what you create and even a small part of who you are. And if this thought at all gives you pause, see if there’s anything you can do to reduce your trust footprint a little. Maybe you can look into hosting your own blog for your writing, getting network-attached storage for your home instead of using a cloud service, limiting what you disclose on social media, or investing in technology that takes privacy seriously.

Beginning with Regular Expressions

I originally published the following last year as an article in The Recompiler, a magazine focused on technology through the lens of feminism. It began as a primer on picking up regular expressions for a friend who was learning to program at the time. I regarded it as an exercise in making a complex topic as accessible as possible.

It assumes an audience familiar with general computer concepts (such as editing text), but it does not necessarily assume a programming background. I present it here with as few modifications from its original publication as possible.


Regular expressions are short pieces of text (often I’ll call a single piece of text a “string,” interchangeably) which describe patterns in text. These patterns can be used to identify parts of a larger text which conform to them. When this happens, the identified part is said to match the pattern. In this way, unknown text can be scanned for patterns, ranging from very simple (a letter or number) to quite complex (URLs, e-mail addresses, phone numbers, and so on).

The patterns shine in situations where you’re not precisely sure what you’re looking for or where to find it. For this reason, regular expressions are a feature common to many technical programs which work heavily with text. Most programming languages also incorporate them as a feature.

One common application of regular expressions is to move through a body of text to the first part which matches a pattern—in other words, to find something. It’s then possible to build on this search capability to replace matches automatically. Another use is to validate text, determining whether it conforms to a pattern and acting accordingly. Finally, you (or your program) may only care about text which matches a pattern, and all other text is irrelevant noise. With regular expressions, you can cull a large text down to something easier to use, more meaningful, or suitable for further manipulation.
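To make those uses concrete, here is a minimal sketch using Python’s re module—my own choice for illustration, since the article itself is language-agnostic—with a plain literal pattern:

```python
import re

text = "the cat sat on the mat"

# Find: locate the first match of a literal pattern.
print(re.search(r"cat", text).start())          # 4

# Replace: substitute every match automatically.
print(re.sub(r"cat", "dog", text))              # the dog sat on the mat

# Validate: check whether a whole string conforms to a pattern.
print(re.fullmatch(r"cat", "cat") is not None)  # True

# Cull: keep only the matching parts, discarding the noise.
print(re.findall(r"at", text))                  # ['at', 'at', 'at']
```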

A Simple First Regular Expression

A regular expression, like I said, is itself a short piece of text. Often, it’s written in a special way to set it apart as a regular expression as opposed to normal text, usually by surrounding it with slashes. Whenever I write a regular expression in this post, I will also surround it with slashes on both sides. For example, /a/ is a valid regular expression which matches the string a. That particular expression could be used to find the first occurrence of the letter a in a longer string of text, such as Where is the cat?. If the pattern /a/ were applied against that sentence, it would match the a in the middle of cat.
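In Python, for instance, that lookup is a single call (a sketch; the slashes are a notational convention, so they’re left off inside the string):

```python
import re

match = re.search(r"a", "Where is the cat?")
print(match.group(), "at index", match.start())  # a at index 14 (inside 'cat')
```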

There’s a clear benefit to using regular expressions to do pattern matching in text. They let you ask for what you want rather than specifying how to find it. To be technical, we’d say that regular expressions are a kind of declarative syntax; contrast that with an imperative method of asking for the same thing. In this case, to do this in an imperative way, you’d have to write instructions to loop through each letter in the text, comparing it to the letter a. In the case of regular expressions, the how isn’t our problem. We’re left simply stating the pattern and letting the computer figure it out.
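Here’s a rough sketch of the two approaches side by side, again in Python, to show what the regular expression saves us from writing:

```python
import re

text = "Where is the cat?"

# Imperative: spell out *how* to find the first 'a', step by step.
index = None
for i, ch in enumerate(text):
    if ch == "a":
        index = i
        break

# Declarative: state *what* we want; the engine figures out the how.
match = re.search(r"a", text)

# Both approaches find the same 'a'.
assert match is not None and match.start() == index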

Regular expressions are rather rigid and will only do what you say, sometimes with surprising results. For example, /a/ only matches a single occurrence of a, never A, nor à, and will only match the first one. If it were applied to the phrase “At the car wash”, it would match against the first a in car. It would skip over the A at the beginning, and it would stop looking before even seeing the word wash.

As rigid as regular expressions are, they have an elaborate syntax which can describe vast varieties of patterns. It’s possible to create patterns which can look for entire words, multiple occurrences of words, words which only happen in certain places, optional words, and so on. It’s a question of learning the syntax.

While I intend to touch on the various features which allow flexible and useful patterns, I won’t exhaust all the options here, and I recommend consulting a syntax reference once the idea feels solid. (Before getting into some of the common features of regular expression syntax, it’s important to note that regular expressions vary from implementation to implementation. The idea has been around a long time and has been incorporated into countless programs, each in slightly different ways, and there have been multiple attempts to standardize them. Despite the confusion, though, there is a lot of middle ground. I’m going to try to stay firmly on this middle ground.)

Metacharacters

Let’s elaborate a bit on our first pattern. Suppose we’re not sure what we’re looking for, only that we know it begins with a c and ends with a t. Let’s think about what kinds of words we might want to match, so we can talk intelligently about what patterns exist in those words. We know that /a/ matches cat. What if we want to match cut instead? We could just use /u/, but we know this also matches unrelated strings, like bun or ambiguous.

Now, /cat/ is a perfectly reasonable pattern, and so is /cut/, but we’d probably have an easier go if we create a single pattern that says we expect the letter c, some other letter we don’t care about, and then the letter t. Regular expressions let us use metacharacters to describe the kinds of letters, numbers, or other symbols we might expect to find without naming them directly. (“Character” is a useful word to encompass letters, numbers, spaces, punctuation, and other symbols—anything that makes up part of a string—so “metacharacter” is a character describing other characters.) In this case, we’ll use a .—a simple dot. In regular expression patterns, a dot metacharacter matches any individual character whatsoever. Our regular expression now looks like /c.t/ and matches cat, cut, and cot, among other things.
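A quick check of that pattern in Python (a sketch—note that the dot insists on exactly one character between the c and the t):

```python
import re

for word in ["cat", "cut", "cot", "coat", "ct"]:
    print(word, bool(re.search(r"c.t", word)))
# cat, cut, and cot match; coat (two letters between) and ct (none) do not
```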

In fact, we might describe metacharacters as any character which does not carry its literal meaning, and so regular expressions may contain both characters and metacharacters. Occasionally, it can be confusing to know which is which, and usually it’s necessary to consult a reference for the regular expression implementation you’re using. Sometimes, even more confusingly, we want to use a metacharacter as a character, or vice versa. In that situation, we need to escape the character.

Escaping

We can see in the above example that a dot has a special meaning in a regular expression. Sometimes, though, we might wish to describe a literal dot in a pattern. For this reason, we need a way to describe literal characters which don’t carry their ordinary meaning, as well as employ ordinary characters for new meanings. In a regular expression pattern (as in many other programming languages), a backslash (\) does this job. Specifically, it means that the character directly after it should not be interpreted as usual.

Most often, it can be used to define a pattern containing a special character as an ordinary one. In this context, the backslash is said to be an escape character, which lets us write a character while escaping its usual meaning.

For example, suppose we cared about situations where a sentence ends in the letter t. The easiest pattern to describe that situation might be the letter, followed by a period and a space, but we can’t type a literal dot for that period, or else we’d match words like to. Therefore, our pattern must escape the dot. The pattern we want is written as /t\. /.
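To see the difference escaping makes, compare the two patterns in a quick Python sketch:

```python
import re

text = "Go to the market. Buy a carrot."

# Unescaped, the dot happily matches the 'o' in 'to'.
print(re.search(r"t. ", text).group())   # 'to '

# Escaped, only a literal period after a 't' will do.
print(re.search(r"t\. ", text).group())  # 't. ' (after 'market')
```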

Quantifiers

Metacharacters may do more than stand in for another kind of character. They may modify the meaning of the character after them (as we’ve already seen with the escape metacharacter) or the one before them. They may also stand in for more abstract concepts, such as word boundaries.

Let’s first consider a new situation, using a metacharacter to modify the preceding character. Think back to earlier, when we said we know we want something that begins with a c and ends with a t. Using the pattern /c.t/, we already know that we can match words like cut and cat.

We need a few more special metacharacters, though, before our expression meets our requirements. /c.t/ won’t match, for example, carrot, but it will match concatenate and subcutaneous.

First of all, we need to be able to describe a pattern that basically leaves the number of characters in the middle flexible. Quantifiers allow us to describe how many occurrences of the preceding character we may match. We can say if we expect zero or more, one or more, or even a very particular count of a character or larger expression.

Quantifiers make patterns far more versatile in practice. Take, for example, the quantifier +. It lets us specify that the character just before it may occur one or more times, with no upper limit.

Remember the pattern we wrote to match sentences ending in t? What if we wanted to make sure we matched all the spaces which may come after the sentence? Some writers like to space twice between sentences, after all. In that case, our pattern could look like /t\. +/. This pattern describes a situation in which the letter t is followed by a literal dot and then any number of spaces.
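A quick sketch of that pattern against a double-spaced sentence:

```python
import re

# The + swallows however many spaces follow the period.
match = re.search(r"t\. +", "It was hot.  Very hot indeed.")
print(repr(match.group()))  # 't.  ' (both spaces included)
```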

Quantifiers may also modify metacharacters, which makes them truly powerful and very useful. Using the + again, let’s insert it into our /c.t/ pattern to modify the dot metacharacter, giving us /c.+t/. Now we can match “carrot”! In fact, this pattern matches a c followed by any number of any character at all, as long as a t occurs sometime later on.
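Here’s the quantified dot at work (a sketch—note that + requires at least one character between the c and the t):

```python
import re

print(re.search(r"c.+t", "carrot").group())  # 'carrot'
print(re.search(r"c.+t", "ct"))              # None: nothing between c and t
```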

There are a few other quantifiers needed to cover all the bases. The following three quantifiers cover the vast majority of circumstances, in which you’re not particularly sure what number of characters you intend to match:

  • * matches zero or more times
  • + matches one or more times
  • ? matches exactly once or zero times

On the other hand, you may have a better idea about the minimum or maximum number of times you need to match, and the following expressions can be used as quantifiers as well.

  • {n} matches exactly n times
  • {n,} matches at least n or more times
  • {n,m} matches at least n but not more than m times
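Here’s a quick tour of all the quantifiers above in Python, one example apiece (a sketch using fullmatch so the whole word must fit the pattern):

```python
import re

for pattern, word in [
    (r"go*al",     "gal"),     # *     : zero 'o's is fine
    (r"go+al",     "gooal"),   # +     : one or more 'o's
    (r"go?al",     "goal"),    # ?     : zero or one 'o'
    (r"go{2}al",   "gooal"),   # {2}   : exactly two 'o's
    (r"go{2,}al",  "goooal"),  # {2,}  : two or more 'o's
    (r"go{1,3}al", "goal"),    # {1,3} : one to three 'o's
]:
    print(pattern, word, bool(re.fullmatch(pattern, word)))  # all True
```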

Anchors

We still have “concatenate” and “subcutaneous” to deal with, though. /c.+t/ matches those because it doesn’t care about what comes before or after the match. One strategy we can use is to anchor the beginning or end of the pattern to stipulate we want the text to begin or end there. This is a case where a metacharacter matches a more abstract concept.

Anchors, in this case, let us match the concept of the beginning or the end of a string. (Anchors really refer to the beginnings and ends of lines, most of the time, but it comes to the same thing in this case. See a reference guide for more information on this point.) The ^ anchor, which may only begin a pattern, matches the beginning of a string. Likewise, a $ at the end means the text being matched must end there. Using both of these, our pattern becomes /^c.+t$/.

To break this pattern down, we’re matching a string which begins with a c, followed by some indeterminate number of characters, and finally ends with a t. As ^ and $ represent the very beginning and end of the string, we know that we won’t match any string containing anything at all on the line other than the pattern.
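Checking the anchored pattern against our problem words (a sketch):

```python
import re

for word in ["carrot", "concatenate", "subcutaneous"]:
    print(word, bool(re.search(r"^c.+t$", word)))
# carrot True; concatenate False (doesn't end in t);
# subcutaneous False (doesn't begin with c)
```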

Character Classes

Using anchors, though, may not be the best solution. It assumes the string we’re searching within contains only the pattern we’re looking for, and often this is not the case.

The dot is a very powerful metacharacter. Its biggest flaw is that it is too flexible. For example, /^c.+t$/ would match a string such as cat butt. Patterns try to match as much as possible—they’re greedy. Some regular expression implementations allow you to specify a non-greedy pattern (which I won’t cover here—see a reference), but a better approach is to revisit our requirements and reword them slightly to be more explicit.

We want to match a single word (some combination of letters, unbroken by anything that’s not a letter) which begins with c and ends with t. Considering this in terms of the kinds of characters which may come before, during, and after the match, we want to match something which contains not-alphabetical characters before it, followed by the letter c, then some other alphabetical letters, then the letter t, and then something else that’s not alphabetical.

In the /^c.+t$/ pattern, we need to replace both of the anchors and the middle metacharacter .. Assuming words come surrounded by spaces, we can replace each anchor with just a space. Our pattern now looks like / c.+t /.

Now, as for the dot, we can use a character class instead. Character classes begin and end with a bracket. Anything between is treated as a list of possibilities for the character it may match. For example, /[abc]/ matches a single character which may be either a, b, or c. Ranges are also acceptable. /[0-9]/ matches any single-digit number.

We can use a range which captures the whole alphabet, and luckily, a character class is treated as a single character in the context of a pattern, so a quantifier placed after it applies to the whole class. Putting all this together, we end up with the pattern / c[a-z]+t /.

If we want to mix up upper- and lower-case letters, character classes help in this situation, too: / [Cc][a-z]+t /. Now we can match on names like Curt.
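Here’s a sketch contrasting the bare dot with the space-delimited character class. It also hints at the fragility discussed next: the word at the very end of the string gets missed.

```python
import re

text = "we can concatenate a carrot and a cat"

# The dot is greedy and ranges freely across word boundaries...
print(re.search(r"c.+t", text).group())
# -> 'can concatenate a carrot and a cat'

# ...while spaces plus a character class keep us within one word.
print(re.findall(r" [Cc][a-z]+t ", text))
# -> [' carrot ']  (the final 'cat' has no trailing space, so it's missed)
```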

Our assumption that words will be surrounded by spaces is a fragile one. It falls apart if the word we want to match is at the very beginning or end, or if it’s surrounded by quotation marks or other punctuation. Luckily, character classes may also list what they do not include by beginning the list with a ^. When ^ comes within brackets rather than at the beginning of a pattern, it no longer serves as an anchor; instead, it inverts the meaning of the character class.

If we consider a word to be a grouping of alphabetical characters, then anything that’s around the word would be anything that’s not alphabetical. Let’s adjust our pattern accordingly: /[^A-Za-z0-9][Cc][a-z]+t[^A-Za-z0-9]/. We’re using the same pattern as before, but the beginning and ending space have become [^A-Za-z0-9].
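A sketch with punctuation in play, where the space-delimited version would have failed:

```python
import re

pattern = r"[^A-Za-z0-9][Cc][a-z]+t[^A-Za-z0-9]"

# The quotation mark and comma now count as boundaries.
print(re.search(pattern, 'He said "cat," twice.').group())  # '"cat,'
```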

Escape Sequences

If our pattern is starting to look cumbersome and odd to you, you’re not alone in thinking that. There’s absolutely nothing wrong with the pattern we just wrote, but it has gotten a bit long-winded. This makes it difficult to read, write, and later update.

In fact, many character classes get used so often (and can otherwise be so annoying to write repeatedly) that they’re usually also available as backslashed sequences, such as \b or \w. (This is escaping, again, as I mentioned before, but instead of escaping a special character’s meaning, we’re escaping these letters’ literal meaning. In other words, we’re imbuing them with a new meaning.)

The availability and specific meaning of these escape sequences vary a bit from situation to situation, so it’s important to consult a reference. That said, in our case, we only need a couple which tend to be very common to find.

One of the most common escape sequences is \w, which stands in for any “word” character. For our purposes, it matches any alphanumeric character. This is good enough for the inside of a word, so we can revisit our pattern and turn it into /[^\w][Cc]\w+t[^\w]/. Our pattern reads a little more logically now: We’re searching for one not-word character (like punctuation or whitespace) followed by an upper- or lower-case c, some indefinite count of word characters, the letter t, and then finally one not-word character.

Notice how I used the escape sequence inside the character classes at the beginning and end of the word. This is perfectly valid and sometimes desirable. For example, it allows us to combine several escape sequences when no single one covers what we need.

Putting an escape sequence inside an inverted character class also lets us negate its meaning, as you saw in the most recent example, but many escape sequences can be negated more directly by capitalizing them, such as \W. As a mnemonic to remember this trick, think of it as shifting the escape sequence (using shift to type it). In cases where a character class may be inverted in meaning, often a capitalized counterpart exists.

Using \W, we can now pare the pattern back down to something a little more readable: /\W[Cc]\w+t\W/.

More Reading

For today, I’m satisfied with our pattern. In a string like I would like some carrot cake., it matches carrot with no trouble, but it doesn’t match cake or even subcutaneous tissue.
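A final sketch verifying those claims:

```python
import re

pattern = r"\W[Cc]\w+t\W"

m = re.search(pattern, "I would like some carrot cake.")
print(repr(m.group()))  # ' carrot ' (the surrounding non-word characters match \W)

print(re.search(pattern, "subcutaneous tissue"))  # None
```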

There are many more ways to improve it, though. We’ve only laid the groundwork for understanding more of the advanced concepts of regular expressions, many of which could help us make our expression even more powerful and readable, such as pattern qualifiers and zero-width assertions.

Concepts like grouping allow you to break up and manipulate matches in fine-grained ways. Backtracking and extended patterns allow patterns to make decisions based on what they’ve already seen or will see. Some programmers have even written entire programs based on regular expressions, only using patterns!

In short, regular expressions are a deep and powerful topic, and very few programmers completely master every corner of them. Don’t be afraid to keep a reference close at hand—now that you have a grasp of how to start composing patterns, hopefully it will empower rather than daunt you.

Ripples Crossing the Crescent Moon


The waxing crescent moon, as seen on the evening of 9 May 2016. Video taken using a Sony α6300 camera attached to a Celestron 11-inch Schmidt-Cassegrain telescope.


