Beginning Astrophotography: The Deeper Sky


Screenshot of PixInsight during ImageIntegration processing of six Omega Nebula exposures

None of my previous wide-field photos of the night sky has been more than a single long exposure of thirty seconds or less. Recently I’ve taken my first steps into experimenting with stacking these non-planetary photos. Below, I show the process and results from my first attempts to stack both an in-telescope photo and a wide-field photo.

Stacking is, as I’ve mentioned in the past, a way of combining separate photos into a single, longer exposure. With highly detailed, small objects like planets, stacking can be used to get more detail and clarity through lucky imaging and the shift-and-add technique.1 With a wider-field photo, the goal changes a bit. Certainly, more detail and clarity result, but you also gather more light and reduce camera sensor noise.

Noise, Noise, Noise!

I have been limited by camera sensor noise in all the individual astronomical photos I have ever made. To make a relatively decent exposure of the night sky, it’s necessary to boost the ISO to at least 1600, which increases the sensor gain. On its own, this usually isn’t a grave concern, but it limits how much I can subsequently push the photo to bring out its details.

Small section of a Milky Way photo from 14 July 2018 showing abundant chrominance and luminance noise

Inside of a single photo, there’s no real way to overcome this noise without manipulating the photo aggressively, such as using a powerful noise reduction algorithm. I typically avoid doing so because it’s difficult for such an algorithm to distinguish noise from fainter stars, and even the brighter details lose much of their finer qualities (dust lanes in the Milky Way core, for example).

Instead of eliminating the noise, I usually just leave it in. I limit the amount I push a photo so that the noise remains relatively unapparent when seen in context, and generally the noise does not mask the most important parts of the photo.

Yet, that noise limits my light. I can’t turn up the light without turning up the noise—both in the camera (I must keep the ISO low) and in the computer (I must avoid pushing the photo too far). What can I do? Stacking! Taking many photos and averaging them together means not only do I combine the light from them to make that light brighter, but the noise (which is largely random) gets canceled out because it varies between each photo.
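Here’s a quick numerical sketch (in Python, with made-up numbers, not my actual workflow) of why averaging works; the real frames are photos, but the statistics are the same:

```python
# A toy demonstration: averaging N frames of the same (simulated) sky
# shrinks random noise by roughly sqrt(N) while the signal stays put.
import numpy as np

rng = np.random.default_rng(42)

signal = np.full((100, 100), 50.0)   # a flat patch of "sky" at brightness 50
frames = [signal + rng.normal(0, 10, signal.shape) for _ in range(16)]

single = frames[0]
stacked = np.mean(frames, axis=0)    # average all 16 frames together

print(f"noise in a single frame: {np.std(single - signal):.2f}")   # ~10.0
print(f"noise after stacking:    {np.std(stacked - signal):.2f}")  # ~2.5
# The random noise partially cancels: 10 / sqrt(16) = 2.5.
```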

New Techniques, New Tools

Stacking deep-sky and wide-field photos is a different process than stacking planetary photos. The exposures are much longer (several seconds instead of small fractions of a second), and often you have fewer of them.

In many ways, it is a more advanced technique. I have not yet tapped a lot of the tools available to me, and I won’t be discussing them today. I have proceeded by taking tiny steps and observing the results. Each time, I figure out what changed, what limitations I’ve hit, and what new techniques I can draw on. I will mention a few avenues of improvement I’ve passed up, though.

For example, for stacking photos of dim subjects (the Milky Way, nebulae, and so on), it is common for astrophotographers to prepare ahead of time a series of preliminary photos used to calibrate the process. These are known as darks, flats, and bias frames. These aren’t pictures of the sky but instead of (essentially) nothingness, allowing you to photograph your camera’s inherent sensor variations. For example, dark frames are photos taken with the lens cap on.

All digital cameras have inherent variations in the sensor. When you stack photos taken with your camera, you’re also stacking up these variations and exaggerating them as well. By taking these extra frames ahead of time and incorporating them into the process, it’s possible to subtract the sensor variations and come out with a smoother photo which can be more freely manipulated.
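The arithmetic behind calibration is simple enough to sketch. Here’s a minimal Python version, assuming the master frames have already been built by averaging many darks, flats, and biases (real tools also handle exposure matching and other subtleties I’m glossing over):

```python
import numpy as np

def calibrate(light, master_dark, master_flat, master_bias):
    """A simplified dark/flat/bias calibration of one light frame."""
    light_d = light - master_dark      # subtract thermal/fixed-pattern signal
    flat = master_flat - master_bias   # flat field without the readout offset
    flat = flat / flat.mean()          # normalize the flat to average ~1.0
    return light_d / flat              # divide out vignetting and dust shadows
```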

I did not, of course, prepare any darks, flats, or biases. All I had were lights, which is to say, photos of the actual subject. This is because I was only experimenting and hadn’t planned ahead. I had never done this before, and I was using photos from either months or a year ago.

I also knew I needed to use a new tool. The stacking programs (like AutoStakkert!3) I had been using were designed more for planetary objects or the Moon. These existing processes and tools might have worked okay, but they are quite rigid, and I wanted something more advanced.

For example, in wider-field photos, aligning different sections of the sky means actually conforming the photos somewhat to a single projection. This is necessary because the sky is a large, three-dimensional dome, and each photo projects a piece of that dome onto a two-dimensional image. Any movement in the camera causes that projection to change somewhat, so aligning the photos requires a projective transformation—which looks like a slight warping. (This sort of warping may be easier to imagine if you consider what would happen if you photographed the entire sky all at once and then attempted to stitch it together into a panorama. The panorama would show the horizon on all sides, and the sky would be a circle in the middle. Each photo would have to be bent to complete parts of this circle.)
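To make that concrete, here’s a rough sketch of such a projective alignment using OpenCV (which is not what I used; PixInsight handles this internally). The star coordinates and filename are invented for illustration:

```python
import cv2
import numpy as np

# Hypothetical pixel positions of the same four stars in two overlapping photos.
stars_ref = np.float32([[100, 120], [850, 140], [820, 700], [130, 680]])
stars_new = np.float32([[112, 131], [861, 158], [825, 715], [138, 690]])

# A 3x3 homography: the projective transformation (slight warp) described above.
H, _ = cv2.findHomography(stars_new, stars_ref)

img = cv2.imread("photo_new.jpg")  # hypothetical file
aligned = cv2.warpPerspective(img, H, (img.shape[1], img.shape[0]))
```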

Instead, I used a much more advanced tool called PixInsight. It is not free software in any sense of the word, unfortunately, but it’s extraordinarily powerful and flexible. This is the only tool I used (aside from Apple Photos), and it’s what I’ll discuss below.

Omega Nebula

Last year, the night before the 2017 eclipse, I took some photos of the Omega Nebula. I got eight or so that night, trying different settings. None of them were great, but they showed the nebula for what it was—some glowing gas in the sky.

Before stacking: The Omega Nebula, taken as a single exposure via a Celestron eleven-inch telescope and Sony α6300 camera

It was a totally accidental thing—I had been aimlessly roaming with my tracking motor and just happened to see a blob. I couldn’t quite make it out with my eye, so I used the camera to photograph it more clearly. I decided I’d use the photos to identify it later, which I did. It took a lot of work to get it to show up nicely in an image.

A couple of days ago, on the anniversary of the eclipse, I decided to revisit those photos. I figured, well, I had maybe eight photos of the thing, so maybe I could do something with that. I read some wickedly complicated PixInsight tutorials (including this one), skipped around in them, and cobbled together a workflow. It’s not perfect, but I’ll share it.

My PixInsight Process for the Omega Nebula

With PixInsight open, first, I went to the “Process” menu, and under “ColorSpaces,” I chose the “Debayer” process. This is a little hard to explain, but essentially it’s a way to undo a limitation of the camera sensor. The images I began with were the RAW images (dumps of the raw sensor data from when I photographed). The sensor’s pixels have no ability to differentiate color, only light intensity, so a color filter array is placed over the sensor to allow each pixel to see only red, green, or blue. The result must then be debayered (or demosaiced) to reconstruct the full-color image. To find out which mosaic pattern applied to my camera, I searched the Internet, and it seemed like “RGGB” was the way to go.
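To illustrate what the RGGB mosaic means, here’s a toy “superpixel” debayer in Python; it just collapses each 2×2 cell into one RGB pixel at half resolution, whereas PixInsight’s Debayer process interpolates to keep the full resolution:

```python
import numpy as np

def debayer_rggb_superpixel(raw):
    """Collapse each 2x2 RGGB cell of a raw mosaic into one RGB pixel."""
    r  = raw[0::2, 0::2]        # top-left of each cell sees red
    g1 = raw[0::2, 1::2]        # two green-filtered pixels per cell...
    g2 = raw[1::2, 0::2]
    b  = raw[1::2, 1::2]        # bottom-right sees blue
    g = (g1 + g2) / 2.0         # ...averaged into one green channel
    return np.dstack([r, g, b])

raw = np.random.rand(8, 8)              # stand-in for real sensor data
rgb = debayer_rggb_superpixel(raw)      # shape (4, 4, 3)
```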

Screenshot of the PixInsight Debayering process dialog, set to the RGGB mosaic pattern and ready to receive pictures to debayer

I added images to the “Debayer” process and let it run. It output a series of debayered files, renamed with a “_d” at the end and saved in PixInsight’s own file format, XISF.

The next step was to align the images. PixInsight calls this “registration,” and it has multiple forms. Under the “Process” menu, I went to “ImageRegistration” and found “StarAlignment.”

Screenshot of PixInsight’s StarAlignment process, primed with a reference and the images to align which have already been debayered

In it, I chose one of the images from my set as a “reference,” meaning it would be the image against which all the others would be aligned. For this, I used the XISF files output from the debayering step. I also told it to output “drizzle” data, which can be used to reconstruct undersampled images and recover some of the resolution they’re missing. It’s possible to configure the star matching and star detection parameters, but I found I did not need to do so.
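As I understand it, registration begins by detecting stars in each frame and then matching them between frames to fit the transformation. Here’s a toy sketch of just the detection half (the matching and fitting are far more involved); the neighborhood size and threshold are invented:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def detect_stars(img, threshold=0.5):
    """Return (row, col) coordinates of bright local peaks."""
    is_peak = img == maximum_filter(img, size=9)  # brightest in its neighborhood
    return np.argwhere(is_peak & (img > threshold))
```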

The output from this step was similar to the previous one, but the resulting files now ended in “_d_r.xisf”. These images had been moved around and warped such that when laid over top of one another, they would match perfectly. Not all the photos could be aligned, and only six survived the process. I proceeded with these.

There was one more step I did before the final stacking, and that was normalization. Under “Process” I went to “ImageCalibration” and then “LocalNormalization.” This allowed me to create new files (not image files but metadata files) containing normalization data. These data allow the integration step to reduce noise and clean up the signal even further. I learned about it from this extensive tutorial, which explains it better than I can and which is the source I used to piece together much of this workflow.
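My loose mental model of what local normalization computes, sketched in Python: estimate each frame’s smoothly varying background and contrast, then map them onto the reference’s. The real algorithm is more sophisticated, and the sigma here is arbitrary:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def local_normalize(frame, reference, sigma=64):
    """Match a frame's local background and scale to a reference frame."""
    bg_f = gaussian_filter(frame, sigma)          # smooth local background
    bg_r = gaussian_filter(reference, sigma)
    sc_f = gaussian_filter(np.abs(frame - bg_f), sigma) + 1e-6   # local contrast
    sc_r = gaussian_filter(np.abs(reference - bg_r), sigma) + 1e-6
    return (frame - bg_f) * (sc_r / sc_f) + bg_r
```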

Screenshot of PixInsight’s LocalNormalization process, showing it primed with a reference photo with the other debayered and registered photos ready to normalize

After it ran, I finally had all the data I needed for the final stack. PixInsight calls this “ImageIntegration,” which is under the “Process” menu and “ImageIntegration” submenu.

Screenshot of the PixInsight ImageIntegration process, showing six images ready for integration using the median combination algorithm

I chose the six images which I had debayered, registered (aligned), normalized, and drizzled, and I added them to the process, along with the normalization and drizzle files output by the earlier steps. I chose the average combination algorithm, which is the default, and switched normalization to “Local normalization,” but I left the other parameters alone. Then I ran it.
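At its core, this integration step is a per-pixel combination of the aligned frames, optionally rejecting outliers first. A bare-bones sketch with a simple sigma clip (PixInsight offers several fancier rejection algorithms):

```python
import numpy as np

def integrate(frames, kappa=3.0):
    """Average aligned frames, ignoring pixels that stray far from the median."""
    stack = np.stack(frames)                    # shape (N, height, width)
    med = np.median(stack, axis=0)
    std = np.std(stack, axis=0) + 1e-6
    keep = np.abs(stack - med) < kappa * std    # reject outlier pixels
    return np.sum(stack * keep, axis=0) / np.maximum(keep.sum(axis=0), 1)
```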

The result was three views, two of which contained rejected pixels and one of which contained the integration itself. (A view, in PixInsight, is like an unsaved file—something you can see but which doesn’t necessarily exist on disk yet.)

Screenshot of the result of the six-image median integration of the Omega Nebula images, before any further processing

It still appeared dim and indistinct, but I knew this was a raw product, ready to be manipulated. The rejection views were blank in this case, so I discarded them.

I figured that I would use PixInsight to stretch the image, and so under “Process” and “IntensityTransformations,” I first tried “AdaptiveStretch,” but I found this to be too aggressive. With its default parameters, the image was washed out by noise, and I couldn’t tame its parameters enough for a more natural result.

Screenshot of a preview of an aggressive AdaptiveStretch in PixInsight, showing noise as a green glow, dithering, and vignetting which masks the nebula almost entirely

It’s possible in that screenshot to see the artifacts of the alignment process as well (the neat lines where the noise increases near the bottom and right). This is because the images didn’t cover precisely the same area, so after stacking, the places where they don’t overlap are visible. The intense green color probably comes either from my camera’s noise or from skyglow I picked up. In either case, it’s not what I want. I threw it away.

I then hit upon trying an “AutoHistogram” in the same submenu, and this was much gentler and more helpful. I bumped up its parameters a bit.

Screenshot of PixInsight’s AutoHistogram process dialog, showing a stretch method of “Rational Interpolation (MTF)” and a parameterized value of 0.35 for all channels
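As I understand it, stretches like this are built on PixInsight’s midtones transfer function (MTF): a midtones balance m remaps each pixel value x in [0, 1] so that x = m lands exactly at 0.5. In Python:

```python
def mtf(m, x):
    """Midtones transfer function: maps x in [0, 1] so that mtf(m, m) == 0.5."""
    if x == 0.0 or x == 1.0:
        return x
    return ((m - 1.0) * x) / ((2.0 * m - 1.0) * x - m)

print(mtf(0.25, 0.25))  # 0.5 -- a midtones balance below 0.5 brightens
```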

Now this truly got me somewhere.

Screenshot of the Omega Nebula median integration after applying the AutoHistogram process, revealing more color and structure

A lot of additional color and structure leapt out. Notice how, down on the bottom and the right, in the places where the photos didn’t quite overlap, there’s some color distortion. This is an interesting outcome of the process—a kind of color correction.

This result definitely seemed much closer to what I wanted, but it was still quite washed out. I could have continued in PixInsight, but I really wanted it only for the stacking part. I’m a little more used to editing photos in Apple Photos, as crude as it can be, so I decided to save this file and move it over (as a 32-bit TIFF).

Finishing Omega Nebula in Apple Photos

I first flipped the photo vertically (to undo the flip introduced by the telescope) and cropped away the parts of the image which didn’t align fully.

Then I maxed out the saturation so that I could easily see any tint and color temperature adjustments I would need to make. I changed the photo’s warmth to 4800K and did my utmost with the tint to reduce any green cast. After that, I bumped the saturation way back down.

My next goal was to reduce the washed out appearance of the background sky without losing details of the nebula, so I used a curves adjustment. Apple Photos allows using a targeting tool to set points on the curve based on points on the photo, so I tend to do that. (It also allows setting a black point, but I usually find that too aggressive for astrophotography.) A gentle S-shaped curve of all channels often helps. I try not to be too aggressive with the curves adjustment because I can also use a levels adjustment to even out the histogram even more.
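For the curious, the kind of S-curve I mean can be sketched in a few lines of Python; the control points here are invented for illustration, not pulled from my actual edit:

```python
import numpy as np
from scipy.interpolate import PchipInterpolator

points_in  = [0.0, 0.25, 0.75, 1.0]
points_out = [0.0, 0.20, 0.80, 1.0]   # deepen shadows, lift highlights a bit
curve = PchipInterpolator(points_in, points_out)  # smooth, monotone curve

img = np.random.rand(4, 4, 3)                  # stand-in for the photo
adjusted = np.clip(curve(img), 0.0, 1.0)       # apply to every channel
```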

Screenshot of the Omega Nebula integration in Apple Photos after white balance, curves, and levels adjustments

Using the “Selective Color” adjustment, I can pick out the color of the nebula and raise its luminance, which will boost the visibility of some of its dimmer portions.

After this, I make some coarser adjustments, using black level, contrast, highlights, shadows, and exposure.

The focus is very, very soft, but I usually don’t apply any sharpening or added definition because it will more often than not exaggerate distortions and noise without adding any new information. The soft focus comes down to a few things. First, I didn’t have perfect tracking on the telescope when I made these photos because I didn’t expect to photograph a nebula. Second, the exposures were long enough that the seeing (the ordinary twinkling of the sky) smeared fine points like stars into larger discs. Third, I hadn’t spent any time getting the focus tack-sharp because I was in a hurry. Fourth, this is a zoomed-in section of a combination of several photos, which already tends to blend together some finer details (despite the drizzle data).

The Omega Nebula After Stacking

For what it’s worth, I think it turned out fine for a completely unexpected outing with just a few photos taken over a few minutes. After the entire process of stacking, which took a couple of hours, I came up with this.

The Omega Nebula, composited from six individual exposures taken via a Celestron eleven-inch telescope and Sony α6300 camera on the night of 20 August 2017

Here are the before and after photos side-by-side so you can compare.

The latter image has more structure, more detail, more color, and all with less noise. All this, even with imperfect, brief photos and with an imperfect, incomplete process.

The Milky Way

I decided to see if I could apply the same process to some of the Milky Way photos I had from earlier in July. I had taken several toward the core, including ones which used my portrait lens. I thought the results were middling, and I was frustrated by all the noise in them.

I’m not going to step through the entire process of the stacking because it’s largely the same as the one I applied for the Omega Nebula. I have tried different kinds of parameters here and there (such as comparing average versus median image integration), but in the end, I used largely the same method.

One interesting wrinkle was that my Milky Way photos included trees along the bottom. Because the stars moved slightly between each shot, the registration process (which aligns on the stars) left the trees shifting slightly from frame to frame. This caused a severe glitch after the PixInsight processing.

Photo of the core of the Milky Way composited from twelve individual exposures, showing a glitched tree-covered horizon at the bottom

It’s likely I could have used a rejection algorithm or a mask, or tweaked the combination algorithm, to avoid this, but I haven’t learned how to do that yet, so I let PixInsight do what it did.

Before I did any further processing, I needed to hide the glitch, and I decided cropping would be awkward. So I took the trees from another photo and laid them over top as best as I could. It looks sort of crude when you understand what happened, but unless you squint, it works well enough.

Photo of the core of the Milky Way composited from twelve individual exposures, with the glitches at the bottom covered with a crudely pasted in tree line

It covers a lot of the photo, unfortunately, and it looks really weird when you look closely at it, but hopefully the attention is drawn to the sky.

The Milky Way doesn’t look all that much improved over versions I’ve shown in the past, but it took a lot less work to get it there, and the noise and fine details are significantly improved.

Small section of the composited Milky Way photo from 14 July 2018 showing reduced noise and finer details

The photo above shows a similar section of the sky as the noisy patch I showed earlier. (They’re not exactly the same section but very close; the same bright star is seen in both.) Here, there’s much less noise, and it’s possible to see indistinct tendrils of dust among the glowing sections of the Milky Way. The stars are easier to distinguish from the background. Below, I’ll place the two side by side for comparison.

That’s the difference—the photo has more underlying signal, so I can eke more detail from it. The overall photo ends up looking better defined as a result, even if, superficially, it doesn’t appear all that much improved.

Next

What’s missing?

I need those calibration shots, for sure: the darks, flats, and biases. I can do those without a night sky, though. I just need to get around to it.

I also have a better idea of what kinds of photos align and stack better than others, so I should leave the glitchy trees at home next time. When I’m using the telescope, I should re-examine my focus; use consistent exposure settings; take many, many photos so that I have some to discard; and track as well as I can manage.

After that, I can elaborate on my process and show better photos than ever before.