Multiband mapping for astrophotography based on PCA


Color mapping is always challenging in astrophotography, especially when we have observations of the same astronomical object in multiple spectral bands. Here, I propose a method to approach this problem based on a well-known machine learning algorithm borrowed from classical statistics, Principal Component Analysis (PCA), to compress the information from the multiband input data into RGB channels while minimizing information loss. This technique can also be applied to narrowband-only "SHO" data to find an encoding that deals with the typically overwhelming H-alpha signal. 

A Python implementation, as a command-line script, is provided via GitHub.

The technique described here was awarded the Annie Maunder Prize for Image Innovation in the 2021 edition of the Astronomy Photographer of the Year competition, with the image Another Cloudy Day on Jupiter: A Surface Map Based on 8 Filters Data [1]. Also shortlisted in that edition of the contest was the image Multiband Whirlpool Galaxy: From Infrared to X-Rays, produced in the same way. Both images appear in the "Examples" section below.  

The problem and existing solutions

What can we do when we have observations of an object in multiple bands and want to construct a color image based on them?

In amateur astrophotography, we usually work with LRGB data and we have workflows to blend them and produce a final RGB image. Sometimes we also capture narrowband data and manually try some mappings to the RGB channels to generate a narrowband-only image, or to enhance nebulosity details on the LRGB data. 

Data from professional observatories usually include observations in more spectral bands. As an example, the Outer Planet Atmospheres Legacy (OPAL) program includes observations in 8 bands [2]. A more extreme case would be the data cubes from Integral Field Spectroscopy (IFS) observations, where each spatial pixel may have several thousand measurements in different wavelengths, sampling its associated spectrum. In these cases, to create a color image for outreach purposes, image processing specialists tend to select a few of the most interesting bands and map/combine them to RGB. 

Observations of Jupiter (2019) made in 8 bands by NASA OPAL Program

Proposed approach

The observations in different bands of the same object may be seen as a data cube, as in integral field spectroscopy: each pixel corresponds to a specific spatial coordinate on the sky and has N observations associated with it. Therefore, these input pixels are N-dimensional. But our final color image has only 3-dimensional pixels, corresponding to the R, G, and B channels. From this perspective, what we have to do is find a dimensionality reduction transformation that effectively compresses those N-dimensional input pixels into 3-dimensional output pixels, ideally with minimal loss. The usual approaches do this by discarding some bands and combining others. Can we do this another way?

There are several dimensionality reduction algorithms in data science, where it is common to face datasets with thousands of variables or features. Perhaps the most basic algorithm, borrowed from classical statistics, is Principal Component Analysis (PCA). This algorithm finds a linear transformation from the input N-dimensional space into an output M-dimensional space that minimizes the loss of variance in the statistical sense [3], or the reconstruction loss (mean squared error of the inverse transformation) if we see it as a compression problem [4], or the (Euclidean) distance between samples and their projections onto the lower-dimensional space [3]. In other words, the PCA transform reduces dimensionality while trying to minimize the loss of information according to a mathematical criterion.
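As a sketch of this projection step, the following snippet uses sklearn (as the provided script does), with hypothetical random data standing in for a real co-registered 8-band cube:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical stand-in for a co-registered 8-band cube: (height, width, bands).
rng = np.random.default_rng(0)
cube = rng.random((64, 64, 8))

# Flatten the spatial dimensions: each pixel becomes one 8-dimensional sample.
pixels = cube.reshape(-1, cube.shape[-1])

# Project the 8-dimensional pixels down to 3 principal components.
pca = PCA(n_components=3)
components = pca.fit_transform(pixels).reshape(cube.shape[0], cube.shape[1], 3)

print(components.shape)                # (64, 64, 3)
print(pca.explained_variance_ratio_)   # fraction of variance per component
```

On real data, `explained_variance_ratio_` reports how much of the original variance each component retains, which quantifies the information loss of the compression.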

The PCA output components have some interesting properties [3]:

  • They are ordered by the amount of information (or more precisely, explained variance, in statistical terms) they carry, so the first component carries more information than the second, and the second more than the third, etc.;
  • In many cases, only a few components suffice to convey most of the original information;
  • They are linearly uncorrelated, i.e., in some sense, the information carried by one component is not "repeated" in other components;
  • The first component is a weighted average (a linear combination with positive weights whose squares sum to 1) of the input dimensions.

Although this is a technical detail, we should bear in mind that for PCA only the direction of each axis of the lower-dimensional space is relevant [5]. This means that in some cases a component may be generated with its sign flipped. 
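A simple way to handle this sign ambiguity, sketched below on hypothetical data, is to flip any component whose loading vector sums to a negative value (the threshold and variable names here are illustrative, not taken from the script):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
pixels = rng.random((1000, 8))  # hypothetical 8-band pixel samples

pca = PCA(n_components=3)
scores = pca.fit_transform(pixels)

# The sign of each principal axis is arbitrary; flip any component whose
# loadings sum to a negative value, adjusting its scores consistently.
for i in range(pca.n_components_):
    if pca.components_[i].sum() < 0:
        pca.components_[i] *= -1
        scores[:, i] *= -1

assert pca.components_[0].sum() > 0
```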

The input data should be normalized for PCA to perform best. Also, in the case of astronomical images, PCA works better on stretched, i.e., delinearized, input images. 
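As an illustration of such a stretch, an asinh delinearization could look like the function below; the function name and the `factor` default are assumptions for this sketch, and the script's own quick-preview stretch may differ:

```python
import numpy as np

def simple_stretch(band, factor=500.0):
    """Hypothetical asinh stretch to delinearize a band before PCA.

    `factor` controls how aggressively faint signal is lifted.
    """
    band = band - band.min()
    peak = band.max()
    if peak > 0:
        band = band / peak  # normalize to [0, 1]
    return np.arcsinh(factor * band) / np.arcsinh(factor)
```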

Applied to astronomical images, the first component usually carries more than 80% of the variation in the input data. Being a weighted average, this first component can be easily interpreted as an overall representation of the luminosity of the object. 

From left to right: first, second and third principal components extracted by PCA from the OPAL Jupiter input bands. They account for ~97% of the total variance of the input data, with more than 80% explained by the first component alone. 

In data visualization, perceptual psychology should be taken into account to convey information efficiently: there are rules we should follow to encode information effectively in a graphic [6] [7]. In general, it is accepted that color can be tricky, so shapes (lengths, areas, contrasts, textures, etc.) are preferred when encoding information graphically. Therefore, we will use this first component, the one holding most of the variation of the original data, as the luminance of our final image. We encode most of the original information as shapes.

To produce a final color image and, at the same time, incorporate more information coming from the original data, we will use additional principal components. This will encode information not contained in the first component into chrominance. How many components? Just two of them will suffice to cover all possible hues and saturation values. 

In other words, we are performing a PCA to project the input data onto three components, interpreting them as coordinates in a luminance-chrominance color space. We have used PCA as a tool to separate luminance and chrominance, so that they are (linearly) uncorrelated. As our final goal is to get an RGB color image, this luminance-chrominance color space serves as an intermediate space in our data visualization process. 

One possible option for the intermediate color space is CIELAB. Other color spaces, such as YCrCb or YDbDr, were explored with more or less similar results. For the CIELAB case, the first principal component maps to the L channel, while the second and third can be arbitrarily mapped to the a and b channels. 

The components should be centered and scaled to fit the range in which the color coordinates are expected to be for this color space. In this step, the centering/scaling for the L channel may be a simple minimum-maximum mapping. But for the a and b channels, a robust centering/scaling is preferred, so that we map the estimated background value (the median) to 0, effectively neutralizing the background color. In this step, it is critical to check for possible clipped values. 
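The two scalings can be sketched as follows; the function names, the target half-range of 40, and the 3×MAD spread estimate are illustrative assumptions, not values prescribed by the script:

```python
import numpy as np

def minmax_to_L(component):
    """Min-max map the first principal component to the CIELAB L range [0, 100]."""
    lo, hi = component.min(), component.max()
    return 100.0 * (component - lo) / (hi - lo)

def robust_center_scale(component, target_halfrange=40.0):
    """Map the background estimate (median) to 0 and scale by a robust
    spread so the bulk of values fits within +/- target_halfrange."""
    med = np.median(component)
    mad = np.median(np.abs(component - med))  # median absolute deviation
    scale = target_halfrange / (3.0 * mad) if mad > 0 else 1.0
    return (component - med) * scale
```

Because the median of the second and third components maps to 0, the sky background ends up with neutral chrominance.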

At this stage, we can explore different chrominance encodings by adding additional transformations to the a and b channels. In particular, we have experimented with simply rotating these components and mirroring (flipping the sign of) one of them (mirroring both plus adding a 180º rotation is equivalent to no rotation or flipping). The result is a color palette selection chart that we can use to visually choose the appearance of the colors in the final image. This is similar to what hue shift tools do in photo editing software, but here we are applying the process to the principal components. 
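This rotation/mirroring of the chroma plane is a plain 2-D linear transform; a minimal sketch (the function name and defaults are assumptions):

```python
import numpy as np

def rotate_flip_ab(a, b, angle_deg=0.0, flip_b=False):
    """Rotate the (a, b) chroma plane by angle_deg, optionally mirroring b first."""
    if flip_b:
        b = -b
    theta = np.deg2rad(angle_deg)
    # Standard 2-D rotation applied elementwise to the chroma planes.
    a_rot = np.cos(theta) * a - np.sin(theta) * b
    b_rot = np.sin(theta) * a + np.cos(theta) * b
    return a_rot, b_rot
```

Sweeping `angle_deg` over, say, 45º steps with and without `flip_b` produces the palette selection mosaic described above.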

As a last step, the final image is generated by simply converting from the intermediate color space to RGB. In this step, it is also important to check for out-of-gamut clipping: in that case, the scale factors for a and/or b should be reduced. 
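With skimage, which the provided script uses, this final conversion and a basic clipping check could be sketched as below; the random planes stand in for real centered and scaled components:

```python
import numpy as np
from skimage.color import lab2rgb

# Hypothetical L, a, b planes, already centered and scaled.
rng = np.random.default_rng(2)
L = rng.uniform(0.0, 100.0, (32, 32))
a = rng.uniform(-40.0, 40.0, (32, 32))
b = rng.uniform(-40.0, 40.0, (32, 32))

lab = np.dstack([L, a, b])
rgb = lab2rgb(lab)  # out-of-gamut values are clipped into [0, 1]

# Fraction of channel values sitting on the gamut boundary; if this is
# high, the a/b scale factors should be reduced and the conversion redone.
clipped = np.mean((rgb <= 0.0) | (rgb >= 1.0))
print(f"clipped fraction: {clipped:.3f}")
```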

Note the importance of the interpretative process of the astrophotographer here, combining truthfulness to the original data with an aesthetic, purely artistic, personal touch [8].

This also brings a useful insight into deep-sky astrophotography: we are indeed constructing a data visualization from the original observations of the astronomical object, but we are doing so in such a way that the result has the appearance of a photograph [9]. 


Implementation

The provided implementation is written as a Python command-line script, using the skimage and sklearn packages. 

It supports 16-bit unsigned TIFF files, but 32/64-bit float XISF or numpy NPZ files are preferred. As stated before, these files should contain previously delinearized data (a simple stretch is implemented in the script for quick previews), and of course, they should already be co-registered (aligned). 

Two working modes are supported:

  • An exploratory mode that generates a mosaic with different color palettes (see Applicability to narrowband images below);
  • A production mode that generates the final image.

The available command line parameters are described in the script help output. 

See the Download section at the bottom of the page. 


Examples

Joint-winning image in the Annie Maunder Image Innovation category of the APY13 competition, including a quote (that totally made me happy) from the Senior Manager of Public Astronomy at RMG, astrophysicist Dr. Emily Drabek-Maunder.
Credit: S. Díaz/NASA OPAL. 

"Multiband Whirlpool Galaxy", shortlisted image in APY13 competition, using four optical bands from HST, one infrared band from Spitzer and one X-rays band from Chandra.
Credit: S. Díaz/NASA.

Image of NGC3627 based on data from the PHANGS survey using ESO's MUSE integral field spectrograph. The described PCA approach is applied here to the 64 strongest bands, among the 3700+ bands available in the data cube. Credit: S. Díaz/ESO PHANGS.

Are these "false color" images?

Oh, they certainly are. But aren't colors a construct of our minds, just the way our eyes encode wavelength information for our brains? What should we do if we want to visualize wavelengths that our eyes are not sensitive to? We can only map them to colors we can see. But then they would overlap with the contribution of colors that truly lie in the visual range. To maintain separation and make efficient use of the visual color range, the only solution is to find a good mapping that compresses the information contained in the original bands with minimal loss. This efficient use of the visual range means that we should be open to using all color hues; otherwise, we may be compressing the original information even more. The color hues carry information about differences in the contributions of the original bands, usually related to different chemical compositions. Astrophotography is not meant to allow for scientific measurements, but it can and should convey qualitative information about the object, and that is exactly what we are doing here.

Maybe the real underlying question here is whether these images are fake. And the answer is a resounding 'No.' The color encoding that maps input bands to output channels results from a chain of linear transformations (and a final nonlinear transformation to convert between color spaces). Everything in the image is as real as it can be. No information is made up in the process. Think of this encoding as a pair of glasses that allows you to see what is invisible, like infrared goggles that 'translate' light you cannot see to your visual perception. 

As discussed, deep-sky astrophotography is mostly a data visualization process: there is no point in stigmatizing its colors as "false" or, even worse, "fake". You are visualizing something that your eyes just cannot see [8]. Color hues serve as a way to visually cluster different chemical compositions as derived from the input data. 

For us astrophotographers: we are learning this lesson the hard way. We should make an effort to describe our work properly, avoiding connotations that may mislead the public. 

Applicability to narrowband images

We amateur astrophotographers frequently use H-alpha, [O III], and [S II] narrowband filters when imaging nebulae (the brackets are needed because these are forbidden emission lines [10]). If we compose the color image using the typical Hubble SHO palette, that is, [S II] for R, H-alpha for G, and [O III] for B ("sorting" by wavelength), the green channel will generally dominate the resulting image, making it very difficult to discern the contributions of the others. The usual solutions are as follows:

  • Manually defining a custom mapping from the filter data to the RGB channels, usually based on linear combinations of them;
  • Applying some kind of regularization of the green contribution where it becomes dominant in nebulae, while protecting the stars (the SCNR process in PixInsight);
  • Applying a non-linear curve as a transfer function for color hues, stretching the input hues away from the deep greens;
  • A combination of the above.

The PCA approach can also be used without dimensionality reduction. We can feed it 3-dimensional data and get 3-dimensional data out. In this case, it performs only a three-dimensional translation and rotation, without any projection, as no dimensionality reduction is needed. The properties of the transformation are the same: the principal components will not be linearly correlated and will effectively separate shape from color. As there is no dimensionality reduction, all the variance is conserved, so all the original information is kept (as long as we check that there is no gamut clipping). 
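This full-rank case can be sketched in a few lines, with random data standing in for real H-alpha/[O III]/[S II] stacks; the assertions make the conservation and decorrelation properties explicit:

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical SHO stack: (height, width, 3) with H-alpha, [O III], [S II].
rng = np.random.default_rng(3)
sho = rng.random((64, 64, 3))

pixels = sho.reshape(-1, 3)
pca = PCA(n_components=3)          # 3 -> 3: no dimensionality reduction
scores = pca.fit_transform(pixels)

# A full-rank PCA is just a centering plus a rotation, so the total
# variance (and hence the information) is preserved.
assert np.isclose(pixels.var(axis=0).sum(), scores.var(axis=0).sum())

# The output components are linearly uncorrelated.
corr = np.corrcoef(scores.T)
assert np.allclose(corr - np.diag(np.diag(corr)), 0.0, atol=1e-6)
```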

By further rotating and/or flipping the chroma components, we can visually choose a color palette for the image. Numerically, this has no impact on the information conveyed by the chroma components, but as the intermediate color space used, CIELAB, is not perfectly perceptually uniform, some color palettes may work better than others at visually separating the different chemical compositions of the object. A truly perceptual color space would be desirable here, i.e., a space where distances are proportional to differences in visual stimuli. In that case, rotations and flips would be a matter of aesthetics only. 

Narrowband color palette mosaic generated with dimred script

Example of narrowband color mapping using PCA transformation: Sh2-132 nebula complex
with H-alpha, [O III] and [S II] filters. Captured and processed by the author [+ Info] 

Another example of the technique, applied to the iconic Orion's sword (M42, De Mairan's, and Running Man nebulae), using H-alpha, [O III] and [S II] filters. Captured and processed by the author [+Info]

Conclusions and future work

The described technique is intended to be another tool in our astrophotography toolbox, as in data visualization there is no single best solution. It is not meant to replace manual encodings, which may result in more powerful images, since they can include, e.g., non-linear transformations better suited to some cases. Its optimization is based on minimizing the mean squared error, which may or may not be the best criterion for this visualization process. It can be helpful in the case of very high-dimensional input data, where manual band selection and combination would be especially challenging. It can also help when looking for a narrowband SHO mapping. 

One line of future work would be to experiment with non-linear dimensionality reduction algorithms. Some preliminary tests have been done with UMAP, and with NMF and other algorithms available in the sklearn package. The problem here would be interpreting the result in such a way that a color, photo-like image could be derived. PCA makes this easy, as the first component is a good candidate for luminance. 

Exploring perceptual color spaces as intermediate color spaces would be interesting, but their shapes, defined as the in-gamut values when converted to RGB, may be complex and should somehow be taken into account in the PCA transformation; i.e., we would need a hypothetical constrained version of the PCA algorithm. Or maybe we could develop a custom transformation model that trains on the input data, trying to maximize coverage of the intermediate perceptual color space while constrained to somehow maintain the structure of the data. 

Something we tend to forget in astrophotography is the use of palettes adapted to color-blind vision. To be fair, many astrophotographs are already reasonably adequate for the typical color-blind variants, as reds/blues are common in LRGB or HOO palettes. But, for example, the subtle red-to-golden variations found in SHO palettes can be challenging. In the case of the proposed approach, which compresses multiple bands by potentially using all possible hues, the risk of generating images inadequate for color-blind vision is higher. So, in addition to exploring perceptual color spaces, it may be very interesting to explore color-blind perceptual color spaces. 


[1] Díaz-Ruiz, S. Shortlisted images in the Astronomy Photographer of the Year 2021 competition.  

[2] Simon et al. Outer Planet Atmospheres Legacy (OPAL) Program. NASA/ESA/Hubble Space Telescope. 

[3] James, G. et al. An Introduction to Statistical Learning. 8th edition, 2017.

[4] Goodfellow, I. et al. Deep Learning.

[5] Bruce, P. et al. Practical Statistics for Data Scientists. 2nd edition, 2020.

[6] Viégas, F. et al. Visualization for Machine Learning. Google Brain. NeurIPS, 2018. 

[7] Illinsky, N. et al. Designing Data Visualizations. O'Reilly, 2011. 

[8] Peris, V. Fotografiar lo invisible. 2020. 

[9] Díaz-Ruiz, S. What is astrophotography. 2021. 

[10] López-Sánchez, A. Understanding the colors of nebulae. 2020.

