You’ll hear astrophotographers talk a LOT about their “workflow.” I don’t know for sure if there’s an official definition for this, but for me it includes everything after carrying out the polar alignment all the way through to the finished image. I’ve already covered polar alignment and guiding in my walkthrough, so I’ll assume you’ve already read that and want to have a look at what else goes on. If not, then here is the link to that walkthrough.
So you’ve done your polar alignment, you’ve selected your target (let’s say Messier 33, the Triangulum Galaxy, in this case), and slewed and plate-solved to it. There’s any number of software packages that allow you to image, and in this case I’m going to assume you’re using APT (AstroPhotography Tool). A number of factors will determine the length of each frame, not least the accuracy of your guiding and tracking. In my case I usually go for 5 minute frames, or “subs.”
One thing to bear in mind with the sub length is the potential loss of data. For example, let’s say you shoot for an hour. Using 5 minute subs, that’s 12 frames. If two of those, for whatever reason, go wrong, then that’s 10 minutes of imaging data you’ve lost. If you shoot 10 minute subs and two go wrong, that’s 20 minutes lost. If you shoot 1 minute subs and two go wrong, that’s only 2 minutes lost. At the moment I’m pushing 10 minute subs, but not with a reasonable enough degree of consistency, so on the serious sessions I restrict myself to 5 minutes. Potentially, the longer the sub length, the more data you can acquire in a single frame, although there does come a point of diminishing returns. I’m not going to go into that in this article, other than to say that 5 minutes is reasonable for me under the Class 5 skies I have.
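The arithmetic above is easy to sketch out. This is just an illustrative snippet (not part of any capture software), using the same numbers as the example:

```python
# Imaging time lost when frames go bad, for a 60 minute session
session_minutes = 60
bad_frames = 2

for sub_length in (1, 5, 10):  # sub length in minutes
    frames = session_minutes // sub_length
    lost = sub_length * bad_frames
    print(f"{sub_length} min subs: {frames} frames, "
          f"{bad_frames} bad frames = {lost} minutes lost")
```

The trade-off is clear: shorter subs mean each ruined frame costs you less.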
You might ask, then, why not just shoot 1 minute subs and minimise that data loss? Because an hour of 60 second subs ends up noisier than an hour of 5 minute subs: read noise is added with every frame, so more, shorter subs means more read noise in the final stack. Again, I won’t explain it in depth here, but Dylan O’Donnell gives a great example on his YouTube channel “Star Stuff.” The link for the video is here. I can highly recommend subscribing to the channel as well.
The above image is a single frame of M33 in APT. Looking at the different sections, you can see all the information you need, including the guide graph at the bottom, which saves switching between APT and PHD2. There’s also a numerical display over on the left giving the “APT State”; the lower that number, the better. The graph helps keep track of the trend in guiding accuracy. Any BIG peaks will most likely mean that the frame will be garbage; the peaks in this one are fine.
You can also see over on the top left the number of images taken, in this case 3 of 48. In APT you can define a plan, whereby you set the sub length, the number of images and the camera gain, in this case 400, which is pretty much something called “unity gain” for my particular camera. The plan I’m using here is 4 hours of 5 minute subs at 400 gain, which is my usual “go to” for DSO imaging these days.
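For completeness, the plan’s totals work out like this. This is just illustrative arithmetic, not an APT feature:

```python
# My usual go-to plan: 5 minute subs, 48 frames, gain 400
sub_length_min = 5
frame_count = 48

total_minutes = sub_length_min * frame_count
print(total_minutes, "minutes =", total_minutes / 60, "hours of integration")
```

So 48 subs of 5 minutes gives the 4 hours of integration mentioned above.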
So you have your images from the session, 4 hours’ worth of subs, plus calibration frames (darks, flats and bias). What I usually do at this point is load the light frames (the subs) into a stacking program, such as Deep Sky Stacker (DSS) or Sequator. For DSOs I usually use Astro Pixel Processor (APP) to stack and do the initial clean-up (crop, light pollution removal and star colour calibration). However, I retain DSS simply as a tool to run through the captured frames and ditch any bad ones, because DSS is so lightweight and easy to use. Once I’ve loaded them all into DSS, I work through each frame and erase the bad ones. You can hover the cursor over the frame itself and inspect the roundness of the stars. You want them nice and tight and round, without any “trailing.”
Loading into Astro Pixel Processor (APP)
When you first load up APP, if you’ve just come from using DSS, it can look insanely complicated. Thanks to Stacey over at AstroStace and her YouTube tutorials, I’ve learned not to be so afraid of it and have now fully incorporated it into my workflow. It’s a lot more powerful than DSS, but not as much as PixInsight (PI). It makes for a very good intermediate tool without the complexity of PI, though, and I would highly recommend taking the time to learn its capabilities.
Once you’ve confirmed your working directory (I usually use the same root directory my frames are in), the first thing you’ll notice are the numbered tabs over on the left. As a side note, while I’m working through the initial data I leave it in the APT directory, but once that’s done I transfer the remaining good frames across to their own directory, broken down by date, for example C:/Astrophotography/M33/Date/Lights, changing the “Lights” part of that structure to whatever the frame type is (darks, flats, etc.). Ultimately it’s whatever works for you, but I find a good directory structure helps me keep my data organised, especially as it’s often shot across different sessions.
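That layout can be knocked up with a few lines of Python. This is just a sketch of the structure described above; the target name and date below are placeholders, so substitute your own sessions:

```python
from pathlib import Path

# Example layout: C:/Astrophotography/<target>/<date>/<frame type>
# Target and date here are placeholders for illustration.
root = Path("C:/Astrophotography")
target = "M33"
session_date = "2021-10-01"

for frame_type in ("Lights", "Darks", "Flats", "Bias"):
    (root / target / session_date / frame_type).mkdir(parents=True, exist_ok=True)
```

Keeping each frame type in its own dated folder makes it painless to feed multi-session data into the stacker later.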
I won’t go into how to use APP, as that’s worthy of a post in its own right, but once the stacking is done I’ll give it all an initial basic stretch and make use of the “Remove Light Pollution” and “Calibrate Star Colour” tools, which have saved my data on more than one occasion.
Stretching The Data
Personally, at this point I import the saved image directly into Lightroom, partly for cataloguing purposes and partly to do an initial stretch and crop. I’ll bring up the exposure first and knock down the blacks to get a first impression of the data, and so that I can better see the crop point. I don’t pay much attention to the histogram at this stage, as it’s almost pointless until I’ve cropped and I know what data there is to work with. Then I crop.
I then reset the exposure and black levels to default and play with them again to start getting M33 showing. Once done, I right click the image, select “Edit in Photoshop CC” and choose the option “Edit a Copy with Lightroom Adjustments.”
Once I’ve loaded it into Photoshop, I edit it on a per-channel basis. Essentially this is where you break the original RGB image down into the separate red, green and blue channels and edit each one individually. This is relatively new for me, as I always used to just edit the full RGB. There’s a great tutorial on AstroBackyard’s YouTube channel, the specific video for which is here. I’ve followed Trevor for a while now and his videos are always informative and interesting, so I can definitely recommend subbing to his channel.
Because the video explains this next process in full I’ll skip past it on the assumption that you’ve watched it.
I’ll also do a levels and curves adjustment, again on a per channel basis, and repeat this process several times, before flattening the image. Remember that less is more when it comes to processing the data, and that the idea is to tease the image out, not go in all guns blazing and bang all the sliders as far over as they’ll go.
In addition to all of that, I’ll also run the astronomy tools action set for things like minimising star halos, making small stars smaller, deep space noise reduction, despeckling and DSO enhancement. The great thing about these is that you can run them as many times as you feel you need to.
Once I’ve saved the final Photoshop-processed image, I’ll return to Lightroom for some final small global adjustments and to confirm the final crop.
Admittedly this still isn’t a great image to my mind, but I think at this point it’s probably more to do with the camera I was using, the Altair GPCam2 290c. Don’t get me wrong, it’s a fab little camera, but it’s not a ZWO1600. One day…
From here I’ll export and then share publicly and await constructive feedback, which I always welcome.
And that’s it, that’s my workflow. If you’re still reading at this point then thank you, and please feel free to leave any insights on how you would go about processing your own astro images.
For now, clear skies all!