This past week we have been bombarded with some really stellar imagery from space; specifically, the internet is abuzz with the photos taken by the New Horizons spacecraft as it passed Pluto. This got us thinking: what is the process for capturing, ingesting, and processing photo data from spacecraft millions of miles away? What does NASA do with the images, if anything, before the public sees them?


Robert Hurt, an astronomer and visualization scientist at Caltech’s Infrared Processing and Analysis Center, uses Photoshop at work on April 10, 2015. Credit: Robert Hurt.

We spoke with Robert Hurt, who is a member of the Infrared Processing and Analysis Center (IPAC) at the California Institute of Technology. Among a number of other NASA projects he handles in his role at Caltech, he devotes much of his time to processing infrared images from missions including the Wide-field Infrared Survey Explorer (WISE) and the Spitzer Space Telescope. Since 2006, Hurt has hosted a video podcast called The Hidden Universe, and he often speaks on the subject of using new media to communicate science and astronomy. He holds a Ph.D. in physics from the University of California, Los Angeles.

Listen to the whole conversation on this NASA Special of the ReWrap Podcast:

 

Robert has worked on countless images and spent hundreds of hours in post-production in Photoshop and in custom-built applications at Caltech and NASA. He has an intimate understanding not only of how images from space are taken, but of how they are sent back and what happens to them once they get here.

 

The Helix Nebula

The Pinwheel Galaxy

The Seven Sisters (Pleiades)

The Sombrero Galaxy

 

Robert worked on each of the images above and counts them among his favorites. But getting images to that point is a far more involved, complicated process than we ever realized… and we already thought it was going to be complicated.

To start to understand how this kind of astrophotography works, we began with the basics: what kind of camera takes the photos we’re seeing?

A lot of what astronomy is today is made possible by really, really good software written by very clever people.

“The main difference between normal astronomical data collection and earthbound photography is that astronomy is intrinsically monochromatic,” Robert told us. “That image may be obtained through a filter, a detector, but we’re only getting back one channel of information at a time.”

For example, Spitzer sees multiple channels in the infrared spectrum. “It would basically do a bunch of mosaics using these cameras. They are stacked up, added together and built through software, many times hands off. The sensors themselves are not nearly as high a pixel count as what we are used to. We have to have electronics that survive launches, space and radiation exposure,” Robert explained. “Pixels are pricey, and we try to be as efficient as we can with the pixels we have. Spitzer has 256×256 pixels, not even a megapixel in a single exposure. But what we lack in a single frame, we make up for in mosaicking. Sometimes you’re looking at thousands of individual images that are meshed together into a single coherent image while still maintaining astronomical coordinates and proportions.”
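To make that mosaicking step concrete, here is a toy sketch in Python with NumPy. It illustrates the general idea only, not the actual pipeline software used at IPAC: each small exposure is dropped into a larger canvas at the pixel offset implied by its sky coordinates, and overlapping frames are averaged.

```python
import numpy as np

def build_mosaic(tiles, offsets, mosaic_shape):
    """Toy mosaic builder: 'tiles' are small monochrome exposures (2D arrays),
    'offsets' are their (row, col) positions in the final canvas -- in a real
    pipeline these come from each frame's astronomical (WCS) coordinates."""
    mosaic = np.zeros(mosaic_shape)      # summed flux
    coverage = np.zeros(mosaic_shape)    # how many exposures hit each pixel
    for tile, (r, c) in zip(tiles, offsets):
        h, w = tile.shape
        mosaic[r:r + h, c:c + w] += tile
        coverage[r:r + h, c:c + w] += 1
    # Average where exposures overlap; leave uncovered pixels at zero.
    return np.divide(mosaic, coverage, out=np.zeros_like(mosaic), where=coverage > 0)

# Hypothetical example: four 256x256 "Spitzer-sized" frames stitched into a 512x512 field.
frames = [np.random.rand(256, 256) for _ in range(4)]
positions = [(0, 0), (0, 256), (256, 0), (256, 256)]
field = build_mosaic(frames, positions, (512, 512))
```

A real pipeline would reproject each frame onto a common sky grid and weight the overlaps by exposure quality, but the bookkeeping is essentially this.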

“A lot of what astronomy is today is made possible by really, really good software written by very clever people.”

But this is just the start. Let’s take a look at two different photos, at opposite ends of the “how this photo was made” spectrum: the photo of Pluto we have been seeing this week, and a gigantic, sweeping image of the Andromeda galaxy.

“The simple one is the Pluto one, which is a single exposure from a single instrument. Just our ability to receive data from New Horizons is very bandwidth limited. I believe the current baud rate communicating between the Deep Space Network and New Horizons is something like 3,000 bits a second, which is akin to pretty early AOL dialup speeds.”

 

The Dwarf Planet Pluto, Taken by New Horizons

 

“The reason we aren’t seeing a lot of pictures right now is that it takes time. The spacecraft, when it’s communicating with Earth, has to dedicate itself to downloading data back to Earth, which means it can’t be doing observations. And since New Horizons is just whizzing past Pluto super fast and it’s not going to slow down, the way missions work is it squirted a few little early images back, just so there is something to be saved just in case something terrible happens, and then it stopped transmitting to us and focused on image, image, image, observation, data collection. It beeped a little ‘hey, I’m still alive’ signal back, and then it just passed. And now it will spend the next 16 months downloading all the data that was collected over the last few days.”

So for the next year and a half, it will be sending radio transmissions back to us, and as the New Horizons spacecraft gets farther and farther away, the time it takes for that data to reach us keeps growing. At the edge of the solar system, it takes about four and a half hours for a signal traveling at light speed to reach Earth, and that delay will only increase.

“The reason we can’t just get everything off the satellite super fast is that there are only three deep space communication dishes on Earth, and those dishes are sharing time with other missions, other satellites. Each mission has to share, and wait its turn. They receive all the data from Mars, from space telescopes, so we have to actually time-share all of the missions. Getting data back from Saturn, from Mars, from Pluto, from Spitzer. It’s all a big scheduling nightmare that someone has to deal with. Data flow logistics. It’s part of why it will take 16 months. It could go faster, but we all have to share.”
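For a rough sense of scale, here is a back-of-the-envelope sketch in Python. The distance, stored data volume, downlink rate, and share of antenna time are assumed round numbers for illustration, not figures from the interview.

```python
# Rough, assumed numbers for illustration only -- not figures from the interview.
AU_M = 1.495978707e11          # meters in one astronomical unit
C = 299_792_458                # speed of light, m/s

pluto_distance_au = 33         # approximate Earth-Pluto distance at the 2015 flyby
light_time_hours = pluto_distance_au * AU_M / C / 3600
print(f"One-way light time: {light_time_hours:.1f} hours")     # about 4.6 hours

stored_data_bits = 50e9        # assumed ~50 gigabits collected during the flyby
downlink_bps = 3000            # assumed downlink rate while a dish is listening
dish_hours_per_day = 8         # assumed share of Deep Space Network time

days = stored_data_bits / downlink_bps / 3600 / dish_hours_per_day
print(f"Estimated playback time: {days / 30:.0f} months")      # roughly 19 months
```

Even with generous assumptions, the playback stretches to well over a year, which is why the 16-month figure is not as strange as it first sounds.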

Compare that process with the one required to produce this photo of the Andromeda galaxy:

 

The Andromeda Galaxy (M31), as imaged by the Hubble Space Telescope

 

“This is a huge, incredibly lovely image. The cameras on Hubble are definitely bigger than the ones on Spitzer, but they don’t cover a huge area. They still have to take a ton of exposures. Hubble has this incredible sharpness, but it doesn’t look at as much of the sky as, say, Spitzer. To get something as big as Andromeda in its entirety, just to image that one galaxy would eat into the science of other projects.” This photo was made up of many, many exposures taken over a long period of time, and those images were combined with software here on Earth during final processing.

One thing worth emphasizing is something Robert mentioned earlier: “The main difference between normal astronomical data collection and earthbound photography is that astronomy is intrinsically monochromatic. That image may be obtained through a filter, a detector, but we’re only getting back one channel of information at a time.”

Just take a look at the images taken by the Mars Exploration Rover Opportunity here:

 


An original composite of ‘Marathon Valley’ taken by the Mars Exploration Rover Opportunity on March 3-4, 2015. Credit: NASA/JPL-Caltech.


A 3D-glasses-required, enhanced view of ‘Marathon Valley’ taken by the Mars Exploration Rover Opportunity on March 3-4, 2015. Credit: NASA/JPL-Caltech.

 

If you’ve seen color in a photo from Mars, that color was added later by someone like Robert, based on the specific wavelengths each filter actually recorded. To help understand how this happens, let’s look at this photo of the Orion Nebula:

 

The Great Cosmic Hearth: The Orion Nebula

 

To understand where this photo started, let’s take a look at the different channels of this image that were captured by the WISE (NASA’s Wide-field Infrared Survey Explorer) satellite:

 

OrionM42_WISE

Here, the top four panels are the different datasets. The lower four show the image at varying stages of the production and clean-up process, and the single color image beneath those is a nearly final stage, before the diffraction spikes (red lines) were cleaned up.

 

Each of those infrared channels is assigned a color that we as humans can perceive. What do I mean? Well, color is just how our brains process wavelengths of light. Since everything is captured in monochrome, scientists have to reconstruct color, based on those wavelengths, for the images released to the public here on Earth.

And this, right here, is where Robert’s expertise and understanding of both physics and photography come into play.

“Red, green and blue is just a purely biological limitation. A happenstance of how human eyesight works. There is nothing fundamental about red, green and blue for the rest of the universe.”

When you have to build color from scratch, you learn a lot. 

“What we have done for a lot of the infrared data sets that come through is that we will preserve the relative ordering of light. For example, red light has a longer wavelength than blue light. Adjust things properly and you have a color image. This is really where understanding how color works, how the eye interacts with color, and how human perception of color works helps you make decisions.”
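Here is a minimal sketch in Python of that chromatic-ordering idea, assuming three already-aligned monochrome channels stored as NumPy arrays; the percentile stretch and the example wavelengths are placeholders rather than Robert’s actual settings.

```python
import numpy as np

def stretch(channel, low_pct=0.5, high_pct=99.5):
    """Rescale one monochrome channel to 0..1 using percentile clipping --
    a crude stand-in for the per-channel levels and curves work done by hand."""
    lo, hi = np.percentile(channel, [low_pct, high_pct])
    return np.clip((channel - lo) / (hi - lo + 1e-12), 0, 1)

def compose_rgb(bands):
    """'bands' maps wavelength (in microns) to an aligned 2D image array.
    The longest wavelength becomes red and the shortest blue, preserving the
    relative ordering of the light even though none of it is visible to the eye."""
    ordered = [img for wl, img in sorted(bands.items(), reverse=True)]  # long -> short
    r, g, b = (stretch(img) for img in ordered[:3])
    return np.dstack([r, g, b])  # an H x W x 3 array ready to save as a picture

# Hypothetical example with three WISE-like infrared bands (wavelengths in microns):
h, w = 256, 256
bands = {22.0: np.random.rand(h, w), 12.0: np.random.rand(h, w), 4.6: np.random.rand(h, w)}
rgb = compose_rgb(bands)
```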

 

The Orion Nebula (M42), as seen by WISE

 

“If you have two channels of information that are very similar in the physics that they sample, for example, light from stars, it makes sense to make those into two colors that are relatively close to each other. When different channels don’t line up, they stand out, and that will tell you about one process happening in one place, and something else in another.”

So basically, the process is this: an image is sent down from a spacecraft and handed, in raw monochrome form, to Robert. Robert then works on the image until he believes it is as accurate as he can make it, and hands it off to someone who puts it out for public distribution.

You want to bring out the colors that are intrinsic to the data, but not end up creating colors that aren’t really there.

Curious what his Photoshop workspace looks like? Check it out:

Robert Hurt’s Photoshop workspace, here with the Orion Nebula open

“For a really carefully done piece, it can easily take me between one and several days. There is a process of going from the raw astronomical data to the grayscale representation,” Robert explained. “Then sometimes you have to align them, because sometimes they don’t come pre-aligned. Especially if you are combining things from different telescopes, they often don’t line up. So I have to use distortion and warp tools, in Photoshop for example, to get things to line up properly, channel by channel.

“Then you have to apply colors, and find the right application of colors, and adjust each channel’s levels and curves, because you want to bring out the colors that are intrinsic to the data, but not end up creating colors that aren’t really there, colors that you might be able to bring out by ‘over-processing.’”
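As a rough illustration of the alignment step he describes, here is a phase-correlation sketch in Python. It is a stand-in for the warp and distortion work done by hand, and real multi-telescope data usually needs full coordinate-based reprojection rather than a simple integer shift.

```python
import numpy as np

def find_shift(reference, channel):
    """Estimate the integer (row, col) shift of `channel` relative to `reference`
    using phase correlation -- a crude stand-in for the channel-by-channel
    alignment done by hand with warp and distortion tools."""
    cross = np.fft.fft2(channel) * np.conj(np.fft.fft2(reference))
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks in the far half of the array back to negative offsets.
    return tuple(int(p) - s if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))

def align(channel, shift):
    """Undo the measured shift (a circular roll keeps the toy example short)."""
    return np.roll(channel, tuple(-s for s in shift), axis=(0, 1))

# Hypothetical use: register a second infrared channel to the first before coloring.
reference = np.random.rand(256, 256)
misaligned = np.roll(reference, (3, -5), axis=(0, 1))
shift = find_shift(reference, misaligned)   # -> (3, -5)
aligned = align(misaligned, shift)          # matches `reference` again
```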

And here, we touch on a very interesting point: scientists in Robert’s position find their work constantly under public scrutiny. Just take a look at how worked up some people get over proving whether an image is fake, or at the level of detail with which some examine publicly released images. So it’s up to Robert and those like him to do their utmost to clean an image, removing artifacts like the reflections and diffraction introduced by the optics, while always remembering what is and what isn’t actually in a particular image.

 

 

“People actually look at these things in incredible detail when they get out there,” Robert told us. “If you leave something in an image that is a known artifact in a detector, or a bad pixel, you open up the possibility that someone will think, ‘Oh, look at that! There is a giant black planet sitting in the middle of this nebula!’ When in fact, it’s a known bad pixel on the array.”

Now, you can’t stop people from drawing their own conclusions about what actually is out there (such as in that Pluto alien space base video above), but at least it’s not something that a scientist altered by mistake, or forgot to remove as a known quirk of the equipment.

“This is where I wear my astronomer hat as I go through and do this. I can’t just go in willy nilly and take out anything that looks like a nasty little ick on a picture. I have to know the difference between a glint, and a bad pixel, and a diffraction spike, and a readout error. This is just the kind of thing you know over time,” he explained.

“If I’m not actually sure about something, I have to go to the original research and ask, ‘Is this real?’ Or ‘is this an artifact?'”
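A toy example of that kind of disciplined cleanup, assuming the instrument team has supplied a mask of known bad pixels (an illustration only, not IPAC’s actual tooling): only pixels documented as defective get repaired, and everything else is left alone so real features survive.

```python
import numpy as np

def repair_known_bad_pixels(image, bad_pixel_mask):
    """Replace pixels flagged in a known bad-pixel mask with the median of their
    3x3 neighborhood. Only documented detector defects are touched; anything
    not in the mask is left untouched so nothing real gets scrubbed away."""
    repaired = image.copy()
    rows, cols = np.where(bad_pixel_mask)
    for r, c in zip(rows, cols):
        r0, r1 = max(r - 1, 0), min(r + 2, image.shape[0])
        c0, c1 = max(c - 1, 0), min(c + 2, image.shape[1])
        neighborhood = image[r0:r1, c0:c1]
        good = neighborhood[~bad_pixel_mask[r0:r1, c0:c1]]
        if good.size:
            repaired[r, c] = np.median(good)
    return repaired

# Hypothetical example: one dead pixel documented by the detector team.
frame = np.random.rand(256, 256)
mask = np.zeros_like(frame, dtype=bool)
mask[100, 100] = True               # known bad pixel, not a giant black planet
clean = repair_known_bad_pixels(frame, mask)
```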

One of the main questions I always hear, and one that I personally had, was why the cameras on these spacecraft aren’t better. Why does it seem like some consumer-grade cameras deliver better image quality than what we are getting from space?

“The way that technology filters into astronomy, particularly space-based astronomy, the time frame for developing these missions can easily be longer than a decade. A lot of decisions have to be made well in advance of when the mission launches.” For example, a mission that is set to go up in 2018 has hardware that has long since been built. “By the time it’s in space, it’s already decade-old tech.”

Needless to say, it takes a monumental amount of knowledge to even begin processing an image from space, and there is even more pressure after that to make sure what you release is as accurate as possible. The truth is of huge importance to Robert because, as a scientist, that is all he is ever seeking.

“Is that color really showing something that’s real about the universe, or have I basically done the astronomy equivalent of adding an Instagram filter to a photo?”

For Robert, he always wants to be sure he’s done the former, and never the latter.

Special thanks to Adobe for introducing Dr. Robert Hurt to us and facilitating our interview. In particular, thank you, Colleen Kuhn!
