Gravitational-wave detection began only a few short years ago, and not until 2017 did scientists start getting the first really interesting observations. These most recent events have enabled us to understand fundamental physics better by ruling out or circumscribing a few grand theories of everything, and they have allowed us to see the universe in ways we never could before.
Yet the events themselves are mere “inspiral chirps” which happen in less than a second and usually at a remove of a billion parsecs or more. How can something so brief and so distant be so meaningful? That’s what today’s explainer is all about.
To get why gravitational waves are such a big deal, let’s review what we know about gravity.
What Are Gravitational Waves?
Gravity happens because mass warps space and time. All mass does this, and the warping effect stretches out across the universe infinitely. Even you, as you sit reading this, cause a very tiny change in the space in your vicinity and in the passage of time, and that extends beyond you out into space forever, however faintly.
Two things are important to note, though. First, the degree to which any object warps space around it weakens very sharply as you move away from it. This is because the farther you get from it, the amount of space there is to warp grows with the square of the distance, so the warpage falls off in turn. (This is a kind of conservation principle, known as the inverse-square law.) Second, that warpage—that gravity—doesn’t travel instantly. It’s bound by the same speed limit as everything else, just as special relativity requires.
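In symbols (an aside the prose doesn’t strictly need): the surface area of a sphere grows as the square of its radius, so a field spread over that sphere thins out by the same factor. Newton’s form of the law, with $G$ the gravitational constant and $M$ the mass, is

$$A(r) = 4\pi r^2, \qquad g(r) = \frac{GM}{r^2}.$$

Double the distance $r$, and the pull $g$ falls to a quarter.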
So given these facts, we know that gravity spreads out in much the same way light does. How does it form waves? Waves result whenever a cyclical phenomenon traverses a distance. Think of a cork bobbing in a pond. The cork, stationary, merely moves up and down, but the ripples move out in waves. Waves made of gravity can therefore ripple outward anytime a source of mass changes in a repeating way.
This is what gravitational waves are—cyclical changes in gravity. The ones we can detect result from very large masses spiraling in toward one another extremely rapidly and colliding. Their circling motion causes the gravity from them to ripple out in a pattern of repetitive change—in waves—as the masses revolve around each other. This spiraling pattern causes the masses to alternate positions quickly, sometimes lining up and sometimes sitting side-by-side (from our point of view). As they revolve, they also draw closer and closer to one another. Finally, when the masses collide, the wave source stops in a sudden “inspiral chirp”—so called because the gravitational wave is so rapid and stops so suddenly that it sounds like a chirp when played as audio.
How Do We Detect Them?
We detect gravitational waves with lasers, of course. Actually, it’s a bit more complicated. There are multiple lasers. And we bounce them down tunnels (called “arms”) four kilometers long—long enough that the Earth curves downward by about a meter over their length—and back.
When a gravitational wave ripples through a LIGO facility, it literally causes those arms to change shape and size, just as general relativity predicts. The facility is a vast instrument called an interferometer, which causes the lasers to interfere with one another in a very specific and measurable way.
The idea is that laser beams are fired over a very great distance (through the arms) and bounced back to the detector, which is a specialized kind of digital camera. When they bounce back, they’re meant to interfere with each other in a very precise way because of how they overlap when they hit the detector. However, minute vibrations upset the delicate interference pattern, and the detector can see that.
The arms have to be so long because the instrument is directly measuring very tiny warpages in the shape of spacetime itself. Gravitational waves ripple out from violent but brief, distant events, and so these instruments must be extraordinarily sensitive. LIGO reports that at its most sensitive, it can detect a change in distance one ten-thousandth the width of a single proton. The facility in Hanford, Washington, is so sensitive that it can pick up ocean waves crashing on the beach several hundred miles away.
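To get a feel for those numbers, here’s a back-of-the-envelope check (my own arithmetic, not LIGO’s official figures):

```python
# Back-of-envelope: how small is 1/10,000 of a proton relative to a 4 km arm?
proton_diameter_m = 1.7e-15         # approximate diameter of a proton
arm_length_m = 4.0e3                # length of a LIGO arm

delta_L = proton_diameter_m / 1e4   # the displacement LIGO says it can sense
strain = delta_L / arm_length_m     # dimensionless strain, h = dL / L

print(f"displacement: {delta_L:.1e} m")   # ~1.7e-19 m
print(f"strain h:     {strain:.1e}")      # ~4e-23
```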
Using multiple facilities at different locations, it’s possible to detect gravitational waves very quickly using advanced, purpose-made software (used to separate the signal from the noise) and roughly locate the source in the sky.
What Do We Do With This Information?
Gravitational-wave detection is one of the newest and most profound breakthroughs in recent observational cosmology. Even merely detecting a gravitational wave is a feat whose difficulty can hardly be overstated—it signifies that we have directly measured a ripple in the fabric of spacetime itself and further cemented the theory of general relativity. It took nearly a century after their first theoretical prediction to achieve a direct detection.
Gravitational-wave astronomy gives us our first look at the universe beyond electromagnetic radiation (light, infrared, X-rays, and so on). We are finally able to see the ripples of the pond in which we all live, not just the specks of light. Gravity behaves differently than EM radiation in several important ways, so it promises new insights into massive phenomena like neutron stars, supermassive black holes, and the like—all at incredible distances difficult to observe otherwise. Revelations about the formation of galaxies, exotic phenomena, dark matter, and even the creation of the universe may all await.
Already, though, we’ve seen the birth of a new form of astronomy altogether, called multi-messenger astronomy, which combines gravitational-wave observations with traditional radio or optical telescopic observations of the same event. Until now, humanity has only ever been able to see the light from the stars and make educated guesses about distance, mass, and so on. What’s more, we still have more questions than answers about how EM radiation and gravity relate to one another. The most fundamental explanations of all of creation, from the subatomic level to the cosmic level, depend on answers to these questions.
The first event observed via both gravity and light was called GW170817. Gravitational waves from this event were detected by three detector facilities in real time, and a corresponding gamma-ray burst (the most violent kind of explosion in the universe) was found at the same location in the sky by dozens of observatories. This event, thought to be the collision of two neutron stars, has already taught us new things and begun to constrain models of fundamental physics.
For example, since it was observed via both light and gravity, we can compare the time each took to reach us and see what differences may exist. Some grand unified theories of everything predicted that gravity would take longer to cross the distance to us because it had to travel differently (through hidden, “compactified” spacetime dimensions, for example). Since that didn’t happen, those theoretical physicists will have to go back to the drawing board.
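The numbers here are striking. A rough version of the comparison (using approximate published figures, not a formal analysis):

```python
# Rough bound on how much gravity's speed can differ from light's, using
# GW170817: ~130 million light-years away, gamma rays ~1.7 s behind the wave.
seconds_per_year = 3.156e7
distance_ly = 1.3e8
travel_time_s = distance_ly * seconds_per_year   # ~4e15 seconds in transit
arrival_gap_s = 1.7

fractional_difference = arrival_gap_s / travel_time_s
print(f"|v_gravity - c| / c is at most ~{fractional_difference:.0e}")  # ~4e-16
```

Two signals racing for 130 million years and arriving within two seconds of each other: their speeds match to better than one part in a quadrillion.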
Gravity travels unattenuated by dust and unscattered over vast distances. Signals like GW170817’s cross those distances affected only by other masses, allowing us to “see” the universe in a different and maybe clearer way. Some scientists hope that we may even find primordial gravitational waves left over from the earliest epochs of the universe, before even light could emerge because matter was too dense. Gravitational waves may let us pierce the wall of creation’s primordial fire and look beyond into nearly the very earliest moments of the universe itself.
Gravitational-wave astronomy and multi-messenger astronomy are extraordinarily young sciences. The data from the events we’ve observed are still being pored over by scientists as they attempt to make or break new theories and find new signals in the noise.
In the future, we may be able to put extraordinarily large interferometers into space which extend over massive distances and which would not be subject to earthly vibrations such as trucks, oceans, footfalls, or earthquakes. One such planned project is called LISA. We would be able to observe many more sources of waves with such a detector, even ones within our own galaxy. Perhaps we will even find sources of gravitational waves we never expected. We’re standing at the verge of a whole new universe.
An old story relates that Newton figured out gravity when an apple fell on his head. Newton himself doesn’t mention the apple falling on his head—this appears to be a later embellishment—but he does mention the apple anecdote a couple of times in his dotage. John Conduitt remembered,
In the year 1666 [Newton] retired again from Cambridge to his mother in Lincolnshire. Whilst he was pensively meandering in a garden it came into his thought that the power of gravity (which brought an apple from a tree to the ground) was not limited to a certain distance from Earth, but that this power must extend much further than was usually thought.
Why not as high as the Moon said he to himself & if so, that must influence her motion & perhaps retain her orbit, whereupon he fell a calculating what would be the effect of that supposition.
This anecdote describes a key quality of gravity as understood then: its nature as an occult force—something working mysteriously and unseen across space.
Before Newton, it was known that the planets moved according to well-known laws (Kepler’s laws) which made their motions predictable. It was not understood, however, why they should move in that way. Kepler’s laws were merely generalizations drawn from many observations.
Philosophers at the time were troubled that the planets appeared to have no reason to move as they did. Aristotelian thought required that something must drive the planets in their motions. If concentric spheres of quintessence did not, what could this be? For a while, we believed space might be full of a kind of fluid whose vortices propelled the planets like clockwork. This explanation was unexpectedly successful for decades precisely because it did not require belief in occult forces—which is to say, it didn’t require something invisible to reach magically over distances and cause a thing to happen without touching it. It pushed instead of pulled.
Newton had looked at the apple and realized nothing had pushed it to the ground. It seemed to have fallen of its own accord. Newton then extrapolated this idea out beyond the garden into the stars. Once he did, a compact set of laws allowed him to explain all the motions of the heavens very tidily. His explanation, published in the book eventually known as the Principia, laid the groundwork for fundamental physics for centuries to come. It was a feat on par with Euclid’s Elements, and it completed the Scientific Revolution which Galileo had inaugurated.
From Hypotheses to Theories
In the second edition of the Principia, Newton tacked on some notes by popular demand. In this General Scholium, he admitted that he was in no position to say what gravity could tangibly be. Famously, he wrote, “Hypotheses non fingo” (“I do not feign hypotheses [of what gravity could be]”). He described nature as he found it, and the description worked. That’s how the matter lay for centuries.
One problem was that, over time, we observed that Newton’s explanations were not perfect after all. There were subtle but galling errors which cropped up in very rare circumstances (such as predicting where Mercury would be over time). Another problem was more metaphysical—Newton’s laws only explained how gravity worked, not what it was.
Einstein solved both problems in a single stroke with general relativity. His general theory followed in the decade after special relativity and grew out of it, extending the special theory to more situations and providing a more fundamental explanation of universal phenomena, particularly gravity.
Equivalence All the Way Down
If you’ve made it this far, you’ve read how energy is an impetus to change over time. Motion can be a form of energy because it can impart motion to another object, accelerating it. Energy is also equivalent to mass, and mass to energy—even at rest. Finally, you’ve seen how motion itself changes energy, space, and time relative to someone observing the motion.
Now we add a new equivalence—one so incredible in its implications that Einstein called it his “happiest thought.” It’s now simply known as the equivalence principle, special enough to stand alone by that name. It states that it’s impossible to distinguish between acceleration and gravity in any real, physical way.
That is to say, if you were trapped in some enclosed box and unable to see outside, you could not devise any instrument which would be able to tell you whether that box were accelerating steadily in some direction (and therefore drawing you toward the floor) or sitting still in a gravitational field (which would accomplish the same effect). Therefore, experiencing acceleration is equivalent to experiencing a gravitational field.
Einstein realized this in November 1907. From that point, he understood that energy, mass, space, time, and gravity were all inseparably linked, and he spent the next several years feverishly working toward a general theory of relativity to explain how it all works. The explanation he arrived at in 1915 works so well that its predictive power overturned Newton’s and has held up to this day.
Motion in a Bottle
As a result of special relativity, we saw that motion warps space and time. We also know that motion relative to an observer represents kinetic energy, which is equivalent to any other form of energy. Finally, we know that energy is equivalent to mass and vice versa. The final piece of the puzzle to put into place here is that, since motion—and therefore energy—warps time and space, so does mass.
Think of mass as bottled motion. Mass–energy equivalence lets us treat mass as energy which has congealed, more or less, into one place. As I said in the last essay, it’s not enough to think of mass and energy as distinct things sharing some properties—they are a single substance. Therefore, all the same properties and consequences which apply to one form also apply to the other. That means that all the warping effects which apply to energy—to motion—also apply to mass.
So mass warps time and space, but what does this mean in reality? The result is gravity! Gravity is an emergent consequence of how mass warps time and space, exactly the same way motion warps time and space due to special relativity. Gravity is in fact not a force reaching mysteriously across distances but instead a bending of space and time which changes the paths of objects traveling through that space and time, leading them inexorably closer to one another.
The Conservative Appeal of Gravity
Let’s dispense with the tired bowling-ball-on-a-rubber-sheet imagery and talk about what that last paragraph actually means. We can begin with the classic assumptions about how objects behave. Newton’s laws state that objects in motion tend to stay in motion, or at rest, unless acted on. They also state that there’s always an equal and opposite reaction for every action.
These are, at their heart, conservation laws. For things to behave otherwise would mean creating or destroying energy. An action must impart an opposite and equal reaction, or energy would go missing. An object at rest must stay at rest, or energy would spontaneously appear. An object in motion must stay in motion, or energy would vanish.
In flat space, therefore, moving objects tend to stay the course in order to conserve energy. You can trace the line of how the object moves geometrically as a straight line. Now if we introduce a mass nearby, space and time contract and stretch, respectively, in the vicinity of that mass. The object’s path still needs to conserve energy, and in order to do so, the line we trace now curves closer to the mass. It appears as if the object “falls” inwards toward the mass—exactly as you’d expect from a gravitational field.
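For the mathematically curious (a formal aside, not something the essay depends on), general relativity makes “stay the course” precise with the geodesic equation, which says a free-falling object’s path through spacetime has no acceleration of its own:

$$\frac{d^2 x^\mu}{d\tau^2} + \Gamma^\mu_{\alpha\beta}\,\frac{dx^\alpha}{d\tau}\,\frac{dx^\beta}{d\tau} = 0.$$

The $\Gamma$ terms encode the warping caused by nearby mass; in flat space they vanish, and the path is a straight line.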
Occult Forces and Fictitious Forces
We no longer need an “occult force” to explain the mechanism of gravity. General relativity—which geometrically describes space and time as it bends under the influence of mass and energy—provides the complete picture.
As it turns out, gravity is not a force at all in the ordinary sense. It only appears to exert a force in the way that a merry-go-round in motion appears to make a ball curve through the air when you throw it from one side to the other. Gravity plays a similar trick on us: we’re constantly on a path through time and space which, were it not for the gigantic rock beneath us, would cause us to curve inexorably toward the center of the Earth. Since the Earth itself interrupts our course, we press against it, and it against us, which imparts the force we’re familiar with.
By uniting conservation laws and a handful of postulates, we can fully explain the substance and behavior of gravity. When we combine this knowledge with the speed limit of the universe, we see that even gravity takes time to travel, which means that changes in gravity take time to travel. This allows gravity to ripple across space and time. We’ll now be prepared to look at these waves in the next explainer.
Of my previous wide-field photos of the night sky, none has been more than a single long exposure of thirty seconds or less. Recently I’ve taken my first steps into experimenting with stacking these non-planetary photos. Below, I show the process and results from my first attempts to stack both an in-telescope photo and a wide-field photo.
Stacking is, as I’ve mentioned in the past, a way of combining separate photos into a single, longer exposure. With highly detailed, small objects like planets, stacking can be used to get more detail and clarity through lucky imaging and the shift-and-add technique. With a wider-field photo, the goal changes a bit. Certainly, more detail and clarity result, but you also gather more light and reduce camera sensor noise.
Noise, Noise, Noise!
I have been limited by camera sensor noise in all the individual astronomical photos I have ever made. To make a relatively decent exposure of the night sky, it’s necessary to boost the ISO to at least 1600, which increases the sensor gain. On its own, this usually isn’t a grave concern, but it limits how much I can subsequently push the photo to bring out its details.
Within a single photo, there’s no real way to overcome this noise without manipulating the photo aggressively, such as using a powerful noise reduction algorithm. I typically avoid doing so because it’s difficult for such an algorithm to distinguish noise from fainter stars, and even the brighter details lose much of their finer qualities (dust lanes in the Milky Way core, for example).
Instead of eliminating the noise, I usually just leave it in. I limit the amount I push a photo so that the noise remains relatively unapparent when seen in context, and generally the noise does not mask the most important parts of the photo.
Yet, that noise limits my light. I can’t turn up the light without turning up the noise—both in the camera (I must keep the ISO low) and in the computer (I must avoid pushing the photo too far). What can I do? Stacking! Taking many photos and averaging them together means not only do I combine the light from them to make that light brighter, but the noise (which is largely random) gets canceled out because it varies between each photo.
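A toy simulation (not my actual pipeline, just an illustration of the statistics) shows why averaging works: random noise shrinks roughly as the square root of the number of frames.

```python
import numpy as np

# Toy demonstration: averaging N noisy frames cuts random noise by ~sqrt(N).
rng = np.random.default_rng(42)
true_signal = 100.0                       # "brightness" of a patch of sky
frames = true_signal + rng.normal(0, 10, size=(16, 256, 256))

single = frames[0]
stacked = frames.mean(axis=0)             # average 16 frames together

print(f"single-frame noise: {single.std():.2f}")   # ~10
print(f"stacked noise:      {stacked.std():.2f}")  # ~10 / sqrt(16) = 2.5
```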
New Techniques, New Tools
Stacking deep-sky and wide-field photos is a different process than stacking planetary photos. The exposures are much longer (several seconds instead of small fractions of a second), and often you have fewer of them.
In many ways, it is a more advanced technique. I have not yet tapped a lot of the tools available to me, and I won’t be discussing them today. I have proceeded by taking tiny steps, seeing what happens, and observing the result. Each time, I figure out what changed, what limitations I’ve hit, and what new techniques I can draw on. I will mention a few avenues of improvement I’ve passed up, though.
For example, for stacking photos of dim subjects (the Milky Way, nebulae, and so on), it is common for astrophotographers to prepare ahead of time a series of preliminary photos used to calibrate the process. These are known as darks, flats, and bias frames. These aren’t pictures of the sky but instead of (essentially) nothingness, allowing you to photograph your camera’s inherent sensor variations. For example, dark frames are photos taken with the lens cap on.
All digital cameras have inherent variations in the sensor. When you stack photos taken with your camera, you’re also stacking up these variations and exaggerating them as well. By taking these extra frames ahead of time and incorporating them into the process, it’s possible to subtract the sensor variations and come out with a smoother photo which can be more freely manipulated.
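In outline, the calibration math is simple subtraction and division. Here is a hedged sketch (a simplified version of the standard procedure, which, as I note below, I did not actually perform):

```python
import numpy as np

# Standard frame calibration, simplified: master frames are typically the
# median of many darks/flats/biases taken ahead of time.
def calibrate(light, master_dark, master_flat, master_bias):
    corrected = light - master_dark       # dark frames also carry the bias level
    flat = master_flat - master_bias      # bias-correct the flat field
    flat = flat / flat.mean()             # normalize so division preserves flux
    return corrected / flat               # undo vignetting and pixel variation
```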
I did not, of course, prepare any darks, flats, or biases. All I had were lights, which is to say, photos of the actual subject. This is because I was only experimenting and hadn’t planned ahead. I had never done this before, and I was using photos from months or even a year ago.
I also knew I needed to use a new tool. The stacking programs (like AutoStakkert!3) I had been using were designed more for planetary objects or the Moon. These existing processes and tools might have worked okay, but they are quite rigid, and I wanted something more advanced.
For example, in wider-field photos, aligning different sections of the sky means actually conforming the photos somewhat to a single projection. This is necessary because the sky is a large, three-dimensional dome, and each photo projects a piece of that dome onto a two-dimensional image. Any movement in the camera causes that projection to change somewhat, so alignment of the photos together requires a projectional transformation—which looks like a slight warping. (This sort of warping may be easier to imagine if you consider what would happen if you photographed the entire sky all at once and then attempted to stitch it together into a panorama. The panorama would show the horizon on all sides, and the sky would be a circle in the middle. Each photo would have to be bent to complete parts of this circle.)
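To make that warping concrete, here is a hedged sketch using scikit-image (not the tool I used; the corner correspondences below are invented for illustration):

```python
import numpy as np
from skimage import transform

# Estimate a projective (perspective) transform from matched points. In a
# real pipeline the matches come from star positions; these are made up.
src = np.array([[0, 0], [0, 512], [512, 512], [512, 0]], dtype=float)
dst = np.array([[8, 4], [2, 516], [520, 508], [506, 2]], dtype=float)

tform = transform.ProjectiveTransform()
tform.estimate(src, dst)

# Applying the transform slightly warps the image onto the reference grid:
# aligned = transform.warp(image, tform.inverse)
```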
Instead, I used a much more advanced tool called PixInsight. It is not free software in any sense of the word, unfortunately, but it’s extraordinarily powerful and flexible. This is the only tool I used (aside from Apple Photos), and it’s what I’ll discuss below.
Totally an accidental thing—I had been aimlessly roaming with my tracking motor and just happened to see a blob. I couldn’t quite make it out with my eye, so I used the camera to photograph it more clearly. I decided I’d use the photos to identify it later, which I did. It took a lot of work to get it to show up nicely in an image.
A couple of days ago, on the anniversary of the eclipse, I decided to revisit those photos. I figured, well, I had maybe eight photos of the thing, so maybe I could do something with that. I read some wickedly complicated PixInsight tutorials (including this one), skipped around in them, and sort of cobbled together a workflow. It’s not perfect, but I’ll share it.
My PixInsight Process for the Omega Nebula
With PixInsight open, first, I went to the “Process” menu, and under “ColorSpaces,” I chose the “Debayer” process. This is a little hard to explain, but essentially it’s a way to undo a limitation of the camera sensor. The images I began with were the RAW images (dumps of the raw sensor data from when I photographed). The sensor’s pixels each have no ability to differentiate color, only light intensity, so a color filter array is placed over the pixel sensors to allow each to see one of red, green, or blue. That then must be debayered, or demosaiced, to construct the color image accurately. To find out which mosaic pattern applies to my camera, I searched the Internet, and it seemed like “RGGB” was the way to go.
I added images to the “Debayer” process and let it run. It output a series of debayered files, renamed with a “_d” suffix, in PixInsight’s own file format, XISF.
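For the curious, a rough equivalent outside PixInsight might look like this sketch using the rawpy package (LibRaw under the hood), which demosaics a camera RAW file into an RGB array; the filename is hypothetical:

```python
import rawpy

# Demosaic (debayer) a camera RAW file into a 16-bit RGB array.
with rawpy.imread("DSC00001.ARW") as raw:        # hypothetical Sony RAW file
    rgb = raw.postprocess(no_auto_bright=True,   # keep linear-ish brightness
                          output_bps=16)         # 16 bits per channel

print(rgb.shape)   # (height, width, 3)
```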
The next step was to align the images. PixInsight calls this “registration,” and it has multiple forms. Under the “Process” menu, I went to “ImageRegistration” and found “StarAlignment.”
In it, I chose one of the images from my set as a “reference,” meaning it would be the image against which all the others would be aligned. For this, I would use the output of the debayering, so I used the XISF files output from the last step. I also told it to output “drizzle” data, which is used to reconstruct undersampled images. It can add resolution that’s missing using interpolation. It’s possible to configure the star matching and star detection parameters, but I found I did not need to do so.
The output from this step was similar to the previous one, but the resulting files now ended in “_d_r.xisf”. These images had been moved around and warped such that when laid over top of one another, they would match perfectly. Not all the photos could be aligned, and only six survived the process. I proceeded with these.
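Star-based registration exists outside PixInsight too. Here’s a hedged sketch with the astroalign package (not what PixInsight uses internally, but the same idea: match star patterns, then warp one frame onto the reference’s pixel grid; filenames are hypothetical):

```python
import numpy as np
import astroalign as aa

# Align one frame to a reference by matching patterns of detected stars.
reference = np.load("frame_ref.npy")    # hypothetical debayered frame
frame = np.load("frame_2.npy")

registered, footprint = aa.register(frame, reference)
# `registered` is the frame warped onto the reference's grid;
# `footprint` marks pixels with no valid data after the warp.
```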
There was one more step I did before the final stacking, and that was normalization. Under “Process” I went to “ImageCalibration” and then “LocalNormalization.” This allowed me to create new files (not image files but metadata files) containing normalization data. These data allow reducing noise and cleaning up the signal even further. I learned about it from this extensive tutorial, which explains it better than I can (and which is the source I used to piece together much of this workflow).
After it ran, I finally had all the data I needed for the final stack. PixInsight calls this “ImageIntegration,” which is under the “Process” menu and “ImageIntegration” submenu.
I chose the six images which I had debayered, registered (aligned), normalized, and drizzled. I added them to the process, along with the normalization files and the drizzle files which had been output. I chose the average combination algorithm, which is the default, and switched normalization to “Local normalization,” but I left other parameters alone. Then I ran it.
The result was three views, two of which contained rejected pixels and one of which contained the integration itself. (A view, in PixInsight, is like an unsaved file—something you can see but which doesn’t necessarily exist on disk yet.)
It still appeared dim and indistinct, but I knew this was a raw product, ready to be manipulated. The rejection views were blank in this case, so I discarded them.
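The integration step itself is conceptually an average with outlier rejection. A hedged sketch of the idea using astropy (a stand-in for PixInsight’s ImageIntegration, with a hypothetical input array):

```python
import numpy as np
from astropy.stats import sigma_clip

# Average-combine registered frames, rejecting outlier pixels (satellites,
# cosmic rays, hot pixels) that stray more than 3 sigma from the stack mean.
stack = np.load("registered_frames.npy")       # hypothetical (6, H, W) array

clipped = sigma_clip(stack, sigma=3, axis=0)   # masked array of survivors
integration = clipped.mean(axis=0)             # the final stacked image
```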
I figured that I would use PixInsight to stretch the image, and so under “Process” and “IntensityTransformations,” I first tried “AdaptiveStretch,” but I found this to be too aggressive. With its default parameters, the image was washed out by noise, and I couldn’t tame its parameters enough for a more natural result.
It’s possible in that screenshot to see the artifacts of the alignment process as well (the neat lines where the noise increases near the bottom and right). This is because the images didn’t cover precisely the same area, so after stacking, the places where they don’t overlap are visible. The intense green color probably comes either from my camera’s noise or from skyglow I picked up. In either case, it’s not what I want. I threw it away.
I then hit upon trying an “AutoHistogram” in the same submenu, and this was much gentler and more helpful. I bumped up its parameters a bit.
Now this truly got me somewhere.
A lot of additional color and structure leapt out. Notice how, down on the bottom and the right where the alignment didn’t quite overlap, there’s some color distortion? This is an interesting outcome of the process—a kind of color correction.
This result definitely seemed much closer to what I wanted, but it’s still quite washed out. I could continue in PixInsight, but I really wanted it only for the stacking part. I’m a little more used to editing photos in Apple Photos, as crude as it can be, so I decided to save this file and move it over (as a 32-bit TIFF).
Finishing Omega Nebula in Apple Photos
I first flipped the photo vertically (to undo the flip introduced by the telescope) and cropped away the parts of the nebula which didn’t align fully.
Then I maxed out the saturation so that I could easily see any tint and color temperature adjustments I would need to make. I changed the photo’s warmth to 4800K and did my utmost with the tint to reduce any green cast. After that, I bumped the saturation way back down.
My next goal was to reduce the washed-out appearance of the background sky without losing details of the nebula, so I used a curves adjustment. Apple Photos allows using a targeting tool to set points on the curve based on points on the photo, so I tend to do that. (It also allows setting a black point, but I usually find that too aggressive for astrophotography.) A gentle S-shaped curve of all channels often helps, as sketched below. I try not to be too aggressive with the curves adjustment because I can also use a levels adjustment to even out the histogram even more.
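In case the shape of that adjustment is unclear, here’s a hedged sketch of a gentle S-curve in code (an illustration of the math, not what Apple Photos does internally):

```python
import numpy as np

# A gentle S-curve for pixel values in [0, 1]: darkens shadows slightly and
# brightens highlights slightly; the endpoints and midpoint stay fixed.
def s_curve(x, strength=0.15):
    smooth = x * x * (3.0 - 2.0 * x)          # smoothstep, a classic S shape
    return (1.0 - strength) * x + strength * smooth

values = np.linspace(0.0, 1.0, 5)             # stand-in for pixel values
print(s_curve(values))                        # ends fixed, tones nudged apart
```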
Using the “Selective Color” adjustment, I can pick out the color of the nebula and raise its luminance, which will boost the visibility of some of its dimmer portions.
After this, I make some coarser adjustments, using black level, contrast, highlights, shadows, and exposure.
The focus is very, very soft, but I usually don’t apply any sharpening or added definition because it will more often than not exaggerate distortions and noise without adding any new information. The soft-looking focus comes down to a few causes. First, I didn’t have perfect tracking on the telescope when I made these photos because I didn’t expect to photograph a nebula. Second, the exposures were long enough that the seeing (the ordinary twinkling of the sky) allowed the objects (like stars and other fine points) to smear into larger discs. Third, I hadn’t spent any time getting the focus tack-sharp because I was in a hurry. Fourth, this is a zoomed-in section of a combination of several photos, which already tends to blend together some finer details (despite the drizzle data).
The Omega Nebula After Stacking
For what it’s worth, I think it turned out fine for a completely unexpected outing with just a few photos taken over a few minutes. After the entire process of stacking, which took a couple of hours, I came up with this.
Here are the before and after photos side-by-side so you can compare.
The latter image has more structure, more detail, more color, and all with less noise. All this, even with imperfect, brief photos and with an imperfect, incomplete process.
The Milky Way
I decided to see if I could apply the same process to some of the Milky Way photos I had from earlier in July. I had taken several toward the core, including ones which used my portrait lens. I thought the results were middling, and I was frustrated by all the noise in them.
I’m not going to step through the entire process of the stacking because it’s largely the same as the one I applied for the Omega Nebula. I have tried different kinds of parameters here and there (such as comparing average versus median image integration), but in the end, I used largely the same method.
One interesting wrinkle was that my Milky Way photos included trees along the bottom. Because the stars moved slightly between each shot, the registration process left the trees shifting slightly from frame to frame. This caused a severe glitch after the PixInsight processing.
It’s likely I could have used a rejection algorithm, a mask, or tweaked the combination algorithm not to cause this, but I haven’t learned how to do that yet, so I let PixInsight do what it did.
Before I did any further processing, I needed to hide the glitch, and I decided cropping would be awkward. So I took the trees from another photo and laid them over top as best I could. It looks sort of crude when you understand what happened, but unless you look closely, it works well enough.
It covers a lot of the photo, unfortunately, and it looks really weird when you look closely at it, but hopefully the attention is drawn to the sky.
The Milky Way doesn’t look all that much improved over versions I’ve shown in the past, but it took a lot less work to get it there, and the noise and fine details are significantly improved.
The photo above shows a similar section of the sky as the noisy patch I showed earlier. (They’re not exactly the same section but very close; the same bright star is seen in both.) Here, there’s much less noise, and it’s possible to see indistinct tendrils of dust among the glowing sections of the Milky Way. The stars are easier to distinguish from the background. Below, I’ll place the two side by side for comparison.
That’s the difference—the photo has more underlying signal, so I can eke more detail from it. The overall photo ends up looking better defined as a result, even if it doesn’t appear, superficially, all that much improved.
I need those calibration shots, for sure: the darks, flats, and biases. I can do those without a night sky, though. I just need to get around to it.
I also have a better idea of what kinds of photos align and stack better than others, so I should leave the glitchy trees at home next time. When I’m using the telescope, I should re-examine my focus; use consistent exposure settings; take many, many photos so that I have some to discard; and track as well as I can manage.
After that, I can elaborate on my process and show better photos than ever before.
On the night of the 14th, I got to take my camera out to a friend’s farm—the same one I visited last year—and try more photos of the Milky Way. None of them came out particularly special, but I thought I’d share a few here in one place.
My favorite of the evening might’ve been while I was waiting for dusk, watching the last rays of the sun over the countryside.
I ended up using my Zeiss Touit lens more than usual this time. It has considerable aberrations and some vignetting, as I’ve pointed out in the past, but its longer focal length let me frame the core of the Milky Way more tightly. It’s a 32mm lens, meaning that on my camera’s APS-C sensor, it is the equivalent of a 48mm lens on a full frame sensor. It’s ideal for things like portraiture, not really for landscapes or astrophotography, but I wanted to give it a try.
I took several photos dead into the Milky Way core with it. I haven’t yet reached the point where I’m taking longer exposures to combine them for more detail. I’ve been instead experimenting with seeing how much detail I can get from individual photos using different settings.
The photo I pushed the most used an ISO of 3200.
A lot of the brightness comes from aggressive processing after the fact, though. With another photo from the set, taken with identical settings and nearly identical framing, I used more subdued processing.
I also turned the camera up to the zenith to catch Vega, Lyra, some of Cygnus, and a bit of the North American Nebula.
By the time I got out the lens I normally use for night sky wide-field photos, the Rokinon, a few clouds had drifted into view and began to spoil the shots in the direction of the core. So I got nothing so wonderful as last year, but still some nice and expansive shots. My friend suggested portrait aspect, and I definitely got the most out of that.
I took photos facing both toward and away from the center of the galaxy, though the latter required some additional processing to reduce the distorted colors from light pollution. There’s a glimpse of the Andromeda Galaxy as a small blur in the lower right, but not much definition is there—I’d need a zoom lens and many exposures to get more.
I’m thrilled about finally getting it edited and published. I put a lot of care into it, the same way I’ve put a lot of care into improving my astrophotography over the years. The thrust of the article is to contrast reality with perception, signal with noise—to show how photo manipulation can sometimes, paradoxically, get us a little closer to the truth rather than take us farther away.
The best part about image stacking is how the very randomness of the sky’s turbulence provides the key to seeing through its own distortions, a kind of mathematical judo. Read through if you want to find out how.
I had clear skies again last night, and I remembered to look for the Moon while it was slightly higher in the sky. I set my telescope up on the front porch shortly after sunset. The Moon presented an incandescent, imperceptibly fuller crescent facing the failing twilight.
Because it was higher, I had a better perspective, I had more time to take photos, I had more time to check my settings, and my photos had less atmosphere through which to photograph (meaning less distortion). And because the crescent was fuller, I captured more detail in my photos.
I always spell out acquisition details in my astrophotography posts, but I’ve found people most often ask instead what equipment I use. I usually don’t list this in detail, both because I’ve usually already mentioned my equipment in earlier posts and also because the exact equipment I used on a given night is partly convenience and whim, not meriting any particular recommendation or endorsement. My photos are within reach of all sorts of equipment of various kinds and prices, given practice and technique, and the last thing I want to do is give someone the impression they need to spend over a thousand dollars to do what a two-hundred-dollar telescope and a smartphone can do.
However, I’m going to try to make an effort to name what equipment I use now and in the future just because it’s so commonly asked. Maybe I’ll need to reference it myself in the future, too. So last night, I used
Those are the only four pieces of hardware I used last night.
I aligned the telescope on the Moon, which let it track roughly. This meant it needed periodic corrections to keep it from drifting out of view (once every several minutes). I concentrated on keeping the extents of the arc within the viewfinder.
Once it was centered and roughly focused, I used a feature on my camera called the “Focus Magnifier” to fine-tune the focus. I’ve found this to be indispensable. Using this feature, I zoom in to a close up view of some section of what the camera sensor is seeing. This way, I can make fine adjustments to the telescope’s focus until I get the best possible clarity available. I can also get a good idea what kind of seeing I’ll encounter that night—whether the sky will shimmer a lot or remain still. I was lucky last night to find good focus and good seeing.
Once focus is good, it can be left alone. I ensure that the adapter is locked tightly in place so that nothing moves or settles, keeping the focal point cleanly locked on infinity.
Then I turned the ISO up—doubled it. The Moon is a bright object, so I was not keen to use something I would use for a dark site, but I settled on ISO 1600. My goal was to reach a shutter speed of 1/100 seconds, which I did, without losing the picture to noise or dimness. A higher ISO works great at a dark site, but the Moon is quite dynamic, so I felt like I had less headroom. In any case, I used 1/100 seconds’ exposure and ISO 1600 for all my photos.
I recorded a short 4K video before I began so I could capture the seeing conditions that night. I recommend viewing it fullscreen, or it will look like a still photo—the sky was placid as a pond last night.
After taking the video, I realigned the telescope slightly and, using my remote controller so that I could quickly actuate it without shaking the telescope, I took 319 photos, occasionally realigning to correct for drift.
Unfortunately, Venus and Mercury had already sunk too low to get a glimpse, so I packed it up and went inside.
I moved all the photos, in RAW format, to my computer from the camera. Then I converted them all to TIFF format. These two steps took probably something like an hour and resulted in seven and a half gigabytes of data.
Because the Moon drifted, due to the rough tracking, the photos needed to be pre-aligned. I used a piece of software called PIPP for that. Without this pre-alignment step, the tracking and alignment built into my stacking software struggled mightily with the photos and created a mess.
Its output was another series of TIFF photos. I found afterwards that two of the photos were significantly overexposed, leaving many details blown out, so I excluded them from the rest of the process, leaving me with 317 photos.
I opened these 317 photos in AutoStakkert!3 beta. After initial quality analysis, I used the program to align and stack the best 50% of the images (by its determination). This took a bit less than ten minutes and left me with a single TIFF photo as output.
Image stacking leaves behind an intermediate product when it’s complete, which is what this TIFF photo is. It’s blurry, containing an average of the 157 photos which were composited into it. However, the blurs in this photo can be mathematically refined using special filters. I used a program called Astra Image to apply this further processing. In particular, I used a feature it calls “wavelet sharpening” (which can be found in other programs) to reduce the blurring. I also applied an unsharp mask and de-noising.
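As an illustration of the simplest of those filters, here’s a hedged sketch of an unsharp mask (Astra Image’s wavelet sharpening is more sophisticated than this):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Unsharp mask: subtract a blurred copy to isolate fine detail, then add
# that detail back, amplified.
def unsharp_mask(image, sigma=2.0, amount=1.5):
    blurred = gaussian_filter(image, sigma)
    return image + amount * (image - blurred)

# stacked = ...  # the intermediate TIFF, loaded as a float array in [0, 1]
# sharpened = np.clip(unsharp_mask(stacked), 0.0, 1.0)
```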
Finally, I used Apple Photos to flip the resulting photo vertically (to undo the inversion which the telescope causes) and tweak the contrast and colors.
Click to view the photo in fullscreen if you can. There’s a lot of detail. The terminator of the lunar surface stops just short of the Mare Crisium (the Sea of Crises), the round, smooth basalt surface right about the middle of the crescent.
I can’t help but compare this one to the photo from the night before: what a difference a day makes. I had more time to work, more photos to take, and the benefit of yesterday’s experience to help improve.
Now it’s clouded over here again—Portland weather—and I can’t practice anymore for a while.
Before the waxing crescent moon set tonight, I caught its Cheshire grin among the firs in the west for a few minutes. Then it was gone.
I had to take my telescope (a smaller model, a Celestron NexStar 5 SE) down the sidewalk a little ways to get a view between the branches. I took as many photos as I could before it set too low in the sky, using my Sony α6300 camera connected to the telescope using an adapter without an eyepiece (the “prime focus” technique). They were all photographed at ISO 800 and exposed for 1/25 seconds. The photo above was stacked from the 50% best examples of the seventy-eight photos I took before the Moon subsided among the trees.
I had promised myself I wouldn’t bother with photography during the 2017 eclipse. I had figured everyone else would take such far better photos that I shouldn’t bother. But I knew I wouldn’t miss seeing totality for the world, and as the time approached, I found myself bringing all of my equipment, “just in case.”
I kept having this debate with myself about how I would spend my precious minute and eight seconds (the duration of totality allotted to me where I ended up). Do I passively observe? Or do I try to capture the experience?
Actually, people kept expecting me to take photos. They were excited for them in advance, and each time I tried to let them down gently—”I might just let the experts take the photos and sit back and enjoy the show”—I felt more and more like I was kidding myself. In the end I decided all the hours of solitude at the telescope over the last two years, all the practice, all the writing I’ve done here—they’ve engendered in me the confidence to photograph the eclipse up close, and I’d be disappointed in myself if I didn’t try.
The Night Before
I drove to a friend’s farm for the eclipse, in the area of Molalla, Oregon, in the Willamette Valley (the same place where I photographed the Milky Way the month before). I had been invited to come the day before so that I could stay and watch the event the next day, and my host had also invited possibly a hundred people to come for a pig roast that Sunday. It was a kind of impromptu country fair, and I met a lot of people that day.
As night fell, I set up the telescope and aimed it at Saturn so I could make sure the motors and optics were still in working order. There was a panicked moment when I thought I had lost the control cable for the declination motor! But after some fooling around with collimation and other setup, I got it aimed at Saturn and invited everyone to form a line to see. Nothing impresses quite like it!
People began to turn in, and I stayed up a bit later to look at other parts of the Milky Way’s core. Quite randomly, as I shifted the telescope about the core, I happened upon a smudge I didn’t recognize but which was rather bright. I couldn’t make out through the eyepiece quite what it was, so I found my camera and began photographing it for later identification.
Later, after the whole thing was over and I got home, I turned to a program called solve-field from Astrometry.net. It used the star field in the background to determine the area of the sky the photo covered. It identified the smudge as the Omega Nebula.
It’s one of my favorite photos of the weekend, and it was entirely happenstance!
The Morning of the Eclipse
I was up early, having barely slept—new place, lots of people coming and going. There were dozens of people encamped where I was. I arose by seven and gradually made my way out. I determined where the sun would finally be and moved the telescope out to a prime spot (with the help of some sturdy new acquaintances—thanks, friends!).
Next was putting on a filter. I had a couple of twelve-by-twelve pieces of solar filter sheet from Thousand Oaks Optical. Another couple of new friends lent me gaffer’s tape to secure it in place and cover any small gaps left over. I wish I had a photo of the result, but believe me when I say it looked crude and took a couple of attempts to get right.
I looked through it at the sun in its fullness to see what it looked like.
I had succeeded. I was ready. The telescope’s motor was tracking the sun. Now all I had to do was wait.
Shortly after 9 a.m., we knew it was real. The limb of the moon touched the sun. We could see something we had never seen before.
Things progressed surprisingly quickly from there.
I have photos during several phases of partiality, but I mostly kept the camera away from the eyepiece of the telescope so that people could look through it. I found that as things advanced, the dozens of people in attendance began to line up, look through, and take smartphone pictures through the eyepiece. I tried to interrupt this as little as I could. The closer we got, the more popular the telescope was.
I got to see other signs of approaching totality, like the growing coolness of the air and the light gradually fading. Someone also brought a colander so that we could see projections of the crescent through its holes.
About ten minutes before, I began to take over the telescope for myself so I wouldn’t miss the chance to photograph the parts I really wanted to.
The sun itself became dimmer and dimmer—the same settings I had on the camera captured less and less light. I’ve had to play with these after the fact to make them look brighter. Toward totality, the sun began to look very slender.
From this point, everything happened so quickly that the sky and earth changed from breath to breath. I watched the crescent thin so quickly it was almost perceptible, each photo different from the last.
Just before totality, the entire grassy field was covered in shadow bands, which I remember clearly—we could see we were all at the bottom of a vast ocean of air, now that the light from the sun had grown point-like and highly collimated. Muted ripples of white crossed the pale grass quickly, as if we were sitting on the bottom of a shallow pool.
I kept photographing as the eclipse continued, until I could get the barest crescent detectable through the filter.
In that slight crescent, there are some places at the sides where the light seems almost mottled. It doesn’t form clean points. I can’t say that either the atmosphere or my focus cooperated perfectly in that moment, but I suspect some of the irregularities (evident in other photos as well) are from the surface of the moon itself—its mountains and valleys interacting with the surface of the sun. Here I believe I captured the profile of the lunar geography along the edges of the crescent.
Finally, the view in the camera went pitch black, and I looked up from the viewfinder with my bare eyes. The sun appeared to be an emptiness on fire. There is an ineffable quality to the experience, and I did my best to linger, knowing my time was so short with it.
I was surprised how much color and dynamism I saw—a kind of unnatural fierce fire fringe lay just inside the corona of blue-white which feathered out, all of which circumscribed an inner full blackness. The sky beyond was deep blue-black.
Outside of that, I saw Venus to the right. I looked for other planets, but I could not see Mars or Mercury (too close to the corona or sun, I suppose). I did not see Regulus, either. I saw other stars in the distance. It was not a full, pitch-black night around us, but it was a swirling night. I felt it palpably begin to get dewy, so quickly did the temperature plunge.
In a moment, I ripped off the filter from my telescope. Once off, the camera could see again, and it saw spectacularly.
I took as many photos as I could in the time allotted—about a minute. I didn’t dare mess with the settings I had. I simply set them as if I were photographing the moon (which I had practiced some weeks before) and took as many as I could in burst mode. I figured later I’d just try to process what I could and see if anything turned out okay.
Incredibly, they did, though even these could not capture what the eye saw. I was amazed to see the solar prominences show up in my photos as well as they did. I found that if I processed some of the photos a particular way, I could get an even clearer view of these prominences and of the fierce orange I recalled.
As totality ended, the light began to overwhelm my sensor again. If I had had more practice, I would have backed off the exposure length or ISO to capture a diamond ring effect, but it happened so quickly that I did not adjust in the moment. Instead, the returning light flooded the sensor, revealing the sun in all its power as dramatic distortions.
I liked the drama of it, even if I missed the special diamond ring effect. The color was really interesting (that’s more or less how it came out of the camera).
Within seconds after totality ended, I had to race to slam my lens cap back onto my telescope before I damaged my camera or optics.
How I Spent the Eclipse
Now I have hindsight to think about how I spent the eclipse: about whether I should have put all the equipment away and let the experts do the photography so that I could enjoy the spectacle itself, or if I was right to join in by photographing it myself.
I think if I had had less practice, I might have come away frustrated, with poorer photos to show, and I might have missed actually looking down to see shadow bands (I yelled out, “shadow bands!” to call them out to others) or missed out on looking up. I might have ruined the moment.
But all the time I had spent with the stars and moon had prepared me, and I came away with photos that didn’t disappoint me, nor did they detract from the experience in the moment.
In fact, having the telescope set up at all was the best part, and it is the reason I do not regret the attempt. Dozens of people, including lots of children, came and went, looking through it to see what they could and using their smartphones to take away their own photos. If I had not bothered, they would not have gotten to see that. I’m glad I could provide a close-up view that only a minority got.
I’m not sure if “beginning astrophotography” fits me, still, but I’m keeping it. I’ve come a long way in the last two years, but I know I have so much to learn. I spent so much time wondering if I should “let the experts” handle the photography of the eclipse, only to learn I had somehow become one of the experts at some point. This eclipse marked for me an incredible turning point as an amateur astronomer, and I hope I keep learning and growing.
If I had one regret, actually, the journey home might be it. It took a couple of hours to get home, and I found myself stuck still in a line of cars like this.
“You know, ‘galaxy’ means ‘milky,'” I said, still looking up.
“What? No way,” my friend, who was stargazing with me with her own camera, said.
“Totally. ‘Milky Way’ is directly from Latin, ‘via lactea.'”
“So it’s not from the candy bar?”
I was taking photos with a new friend at her farm south of Portland. I remain extremely grateful to her for allowing me to do so, because that visit allowed me to take my first photos of the core of the galaxy unaffected by light pollution.
The photo above was processed somewhat delicately to improve the white balance and the colors and brighten things up a bit, but that’s more or less how it came out of the camera. Taking photos of the sky at large is a very different activity than taking photos of individual objects through a telescope.
Chiefly, there is no telescope. None of this post will discuss using a telescope. I took all these photos with my same mirrorless camera, the Sony α6300, and a tripod. To adapt this camera to wide-field night-sky images of the Milky Way, there are two big differences from ordinary photography: using a long exposure and a high ISO, and using a suitable lens.
When I started last year, I was practicing blind, experimenting in wintry months, guessing at settings, and using a 32 mm lens with significant shortcomings for night-sky photography. To make improvements, I’m grateful for information I got from Lonely Speck, which I adapted to suit me.
First, most of the job of collecting a night-sky image is accomplished by exposing with a high ISO and a long exposure period. This means trucking out to a dark site—this activity is absolutely impossible anywhere near a city and impractical in a suburb. You also have to have a camera capable of manual control over its ISO and exposure length, among other things.
For my early wide-field attempts, I was afraid to raise the ISO higher than about 1600. I took some experimental shots with the ISO as high as I could go, but few were in the middle ground. I assumed these photos would be unusably noisy. Therefore, the photos which turned out best were at ISO 800, but to bring out any detail, I had to push them dramatically, such that they looked artificial.
The most important thing I read was an article on Lonely Speck about finding the best ISO, which explained that ISO doesn’t increase sensitivity so much as it amplifies the underlying signal. ISO can be thought of as a gain control for the sensor signal. Quoting,
It’s a (very) common misconception that increasing ISO increases the sensitivity of a camera sensor. ISO doesn’t change sensitivity. Increasing ISO simply increases the brightness of a photo by amplifying the sensor signal. In the electronics world, amplification is sometimes called “gain.” …[W]e can “gain” brightness if we increase our ISO. … Higher ISOs won’t increase the visible noise in a photo. …A higher ISO will decrease the total dynamic range of the image…And, in many cases (like astrophotography), a higher ISO will actually decrease the visible noise[.]
I was amazed to learn this. The article goes on to explain the conditions under which this occurs and how. This meant that I was free to amp up the ISO on my photos considerably.
The other consideration was exposure length. Mostly, the goal is to expose as long as possible before stars stop being points of light and start being streaks. How long that takes is mostly a function of the focal length of the lens—that is, the wider the field of view, the smaller the points of light are, so the less noticeable it becomes when stars seem to “move” across the field of view.
The lens I had used before was a bit longer than typically used for Milky Way photography. It’s only able to capture about the size of a constellation. That meant that stars would appear to move if I exposed longer than about fifteen seconds.
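A common rule of thumb (the “500 rule,” which the article doesn’t use by name, so treat these numbers as my own rough estimates) makes this concrete: divide 500 by the full-frame-equivalent focal length.

```python
# "500 rule": max exposure in seconds before stars visibly streak is roughly
# 500 / (focal length in mm * crop factor). My camera is APS-C (1.5x crop).
crop_factor = 1.5

for focal_length_mm in (32, 12):
    max_exposure_s = 500 / (focal_length_mm * crop_factor)
    print(f"{focal_length_mm} mm lens: ~{max_exposure_s:.0f} s")

# 32 mm lens: ~10 s  (in the ballpark of the ~15 s limit I found in practice)
# 12 mm lens: ~28 s  (room for the 20-25 s exposures described below)
```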
Add these limitations together, and I was taking in far less light than my camera was capable of. On top of that, my lens was not designed for astrophotography, so it introduced significant distortions, called aberrations, around the edges of each photo.
Choosing a Lens
I had noticed weird, comet-looking distortions around the edges of my photos from the very first images I took, but I didn't know why. All the bright stars there ended up looking this way.
I figured I might be able to avoid these distortions by stopping down the lens somewhat (and I would have been right, as I later learned), but that would have meant blocking even more light.
Luckily, there was another post on Lonely Speck that explained all about these aberrations. The shapes I was seeing were a combination of coma (which smeared the star's light inward, toward the center of the photo) and tangential astigmatism (which butterflied the distortion apart along the radius running from the center to the star).
These distortions were built into the lens. It's not that I had a bad lens; this was a Zeiss Touit f/1.8, an extremely good portrait lens. It just wasn't designed for work where spots of light in the periphery are meant to be precise dots.
I found out that Samyang (whose lenses are also sold under the Rokinon name, among others) builds classes of lenses designed to minimize these aberrations, with extremely short focal lengths (meaning really wide fields of view). For my birthday in June, I treated myself to a Rokinon Cine CV12M-E 12mm T2.2 fixed lens. This is the lens I've used for all the photos of the Milky Way since then.
The First Batch: Learning What’s Possible
I’ve taken two batches of photos of the Milky Way since getting the lens and figuring out the right direction for settings.
For the first batch, I went to Stub Stewart State Park and waited till about eleven at night. In summer, that's when astronomical dusk arrives and you can look up and see the Milky Way (visible from that site, though a bit washed out). Summer also means the core of the galaxy is up in the south, something I'd wanted to photograph for a long time.
I followed the instructions from Lonely Speck rather closely with respect to ISO and exposure: in this case, twenty-five seconds at ISO 3200. The results exceeded my expectations.
As I processed them later, I found I had captured a lot of light pollution from the city (off in the distance to the southeast), which made it difficult to process the photos without bringing out splotches of unnatural color.
I now consider my attempts from that night middling, and my ability to process them has evolved considerably as well.
The Second Batch: Finding What Worked
I was extremely lucky to have a helpful and generous friend who let me come to her farm to do more night-time photography. Because her farm is south of Portland, the core of the galaxy faced away from all the light pollution. The photos at the top of the post come from this attempt.
At the farm, I decided to shorten the exposure (down to twenty seconds) and lower the ISO (down to 2000). The earlier settings, I had found, seemed almost too aggressive for the conditions, though I may revisit them if I'm at a darker site. Twenty seconds at ISO 2000 turned out to be perfect: the photos looked gorgeous right off the camera, almost without any editing at all. They had delicate bands of dust and light in them that were considerably easier to work with as I processed them on my computer.
I took enough photos that night that I've been able to try lots of different ways of processing each one and experiment with what I like. For some, I've tried wild color combinations and gradients; others I've processed delicately or pushed as far as they'll go. I've learned to duplicate a photo many times over so I can take it in different directions and compare the results.
This post has been about changes I’ve introduced to the photography process, and in a future post, I’d like to talk about processing a bit more (basically editing the RAW photos to make them pop). I’d like to get better at that first, though.
On the evening of the Fourth of July, I was cringing every few seconds as volleys of illegal fireworks shot into the air a few houses over on my block. I was outside, poking halfway out of my backyard garage with the telescope, looking at the moon to pass the time until Saturn rose over the treetops.
Conditions didn’t allow me any good Saturn photos, but the moon turned out to make a rewarding enough target. I took a minute and a half of video and fifty-eight photos. It probably seems silly, but I’ve wanted to stack the photos from the moon for a long while. The moon is an easy enough thing to see in plenty of detail, but it’s difficult to show it as a vivid, three-dimensional object—the way it looks through a telescope—in a photo. So much gets lost in the translation from eye to sensor, and much of this experience gets swallowed into the seeing disc at the moment of capture, maddeningly blurred at the final moment.
For comparison, here's an individual photo of the moon that's been converted from RAW and cropped but otherwise not tampered with at all (ISO 400, f/6, shutter speed 1/800 s). You might have to click through to the larger size to get a sense of the difference I mean. You'll see the same details as in the image above, but they'll be indistinct. In particular, look at the edge of the basalt plain along the top limb, where the terminator crosses it, or at the craters along the lower part of the terminator. I look at that and think: oh, yep, that's the moon, no news there.
Last night, I tried stacking the frames of the video to get more detail, but the results were only so-so. Dissatisfied, and expecting more, I kept pushing the image until distortions appeared in some of its higher-contrast parts. I threw every filter I had at it (deconvolution of all sorts, wavelet sharpening, unsharp masking, custom convolution kernels, various contrast and denoising adjustments), but I just made things worse.
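Of those filters, unsharp masking is the easiest to show: blur a copy of the image, then add a scaled version of what the blur removed back onto the original, which exaggerates edges. A minimal sketch of the idea (mine, not AstraImage's implementation), assuming a grayscale image stored as a float array with values between 0 and 1:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def unsharp_mask(img, radius=2.0, amount=0.5):
        """Sharpen by adding back the detail a Gaussian blur removes."""
        blurred = gaussian_filter(img, sigma=radius)
        return np.clip(img + amount * (img - blurred), 0.0, 1.0)

Push the radius or the amount too far and you get exactly the halos and distortions I was fighting in the high-contrast parts of the image.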
I'm not sure why stacking from a video gave me a poorer result. The same problem probably limits my planetary photos as well, so it's worth figuring out. It might be some aspect of the sensor; it might be that I used more frames in the final stack than I needed; or maybe I didn't align the frames properly.
In any case, I had taken all the RAW photos as a backup, so today I turned back to those and stacked them instead. All the photos were taken with the same settings: ISO 400, f/6, shutter speed 1/800 s.
I’ve discussed this process before, but to run it down again,
I converted all the RAW images to TIFF (a scripted sketch of this step follows the list);
I used AutoStakkert!3 (a beta version of the program) to load them up as individual frames, then stacked all forty-seven of them; and
I loaded the resulting TIFF from that stack in AstraImage and, after much experimentation,
first applied as much wavelet sharpening as I could before distortions became apparent, and
then applied a very small amount of unsharp mask.
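For anyone who would rather script that first step, here's a rough equivalent of the RAW-to-TIFF conversion (a sketch, not the converter I actually used), assuming the third-party rawpy and tifffile packages and a hypothetical folder of the camera's Sony .ARW files:

    from pathlib import Path

    import rawpy
    import tifffile

    for raw_path in sorted(Path("moon_raws").glob("*.ARW")):  # hypothetical folder
        with rawpy.imread(str(raw_path)) as raw:
            rgb = raw.postprocess(output_bps=16)              # 16-bit RGB array
        tifffile.imwrite(raw_path.with_suffix(".tif"), rgb)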
I’ve experimented a little with stacking, changing parameters here and there to see how the result changes, but mostly I’ve been trusting that it’s doing the job properly and concentrating on seeing how much I can get out of AstraImage, since that’s quicker. I’ll load up the stacked TIFF, make a change, and save a version. Each change, I’ll save, and when I’ve gone down a path too far, I’ll back up to a version that I want and start down a new path. With them all in the same directory, I can then open them all at once and shift between them quickly, as if I were using a blink comparator, to see which changes helped and which hurt.
After I was done with all of that, I took the photo over to Apple Photos to tweak the colors, levels, and contrast a bit, and to share it.