After playing around with some Junocam preprocessed images ("map projected", as they appear on the image processing website), I wanted to try starting from the raw images instead.
Junocam raw images are 8-bit monochrome PNGs composed of stripes or "framelets", vertically stacked in groups of three corresponding to the blue, green, and red filters. To assemble the puzzle, the knowledgeable people project the framelets three-dimensionally onto a spheroid representing Jupiter and then take a snapshot of this map-projected spheroid. They even correct the framelets for the optical distortion induced by the camera lens. They use tools like ISIS and SPICE kernels, which are out of my reach, at least for now; see "My frustrating walkthrough to processing Junocam raw images" by cosmas-heiss (github) and "Processing JunoCam Images" by Kevin M. Gill (github).
Here I tried to approach the image assembly by simply overlapping the framelets (separately for each color channel) by the "right" amount, probably the first thing that comes to mind when you see the raw file. A sort of planar projection (framelet translations), if you like. I wrote a Python script for that, using the packages mentioned below.
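To make the planar assembly concrete, here is a minimal sketch of the split-and-stack step, assuming the documented JunoCam framelet height of 128 pixels and the blue/green/red triplet order described above; the file name and the overlap value are placeholders, not taken from my script.

```python
import numpy as np
from skimage import io

FRAMELET_H = 128  # each JunoCam framelet is 128 pixels tall

raw = io.imread("junocam_raw.png")  # 8-bit monochrome raw image (placeholder name)
n_framelets = raw.shape[0] // FRAMELET_H

# Raw framelets are stacked in triplets: blue, green, red
channels = {"blue": [], "green": [], "red": []}
for i in range(n_framelets):
    framelet = raw[i * FRAMELET_H:(i + 1) * FRAMELET_H].astype(np.float32)
    channels[("blue", "green", "red")[i % 3]].append(framelet)

def assemble(framelets, overlap):
    """Stack same-channel framelets, overlapping consecutive ones by `overlap` rows."""
    stride = FRAMELET_H - overlap
    height = stride * (len(framelets) - 1) + FRAMELET_H
    canvas = np.zeros((height, framelets[0].shape[1]), dtype=np.float32)
    for i, f in enumerate(framelets):
        canvas[i * stride:i * stride + FRAMELET_H] = f  # later framelets win at the seams
    return canvas

green = assemble(channels["green"], overlap=26)  # illustrative overlap value
```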
The framelets come with hot pixels and dust shadows, so the script first creates a defect map by looking for outliers in a local median map, computed from the framelets with little or no dark background. The defect map is then used to apply a cosmetic correction to each framelet, imputing the defects with the local median value.
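A sketch of how such a defect map and cosmetic correction could look; the 5×5 median window, the k-sigma threshold, and the voting fraction are illustrative choices, not necessarily those of my script.

```python
import numpy as np
from scipy.ndimage import median_filter

def build_defect_map(framelets, k=6.0, size=5):
    """Flag pixels that are outliers vs. a local median in most framelets.

    `framelets` should be the ones with little or no dark background.
    """
    votes = np.zeros(framelets[0].shape, dtype=int)
    for f in framelets:
        med = median_filter(f, size=size)
        resid = f - med
        # robust sigma estimate from the median absolute deviation
        sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
        votes += np.abs(resid) > k * sigma
    # hot pixels and dust shadows repeat at fixed positions across framelets
    return votes > 0.5 * len(framelets)

def cosmetic_correction(framelet, defect_map, size=5):
    """Impute defective pixels with the local median value."""
    out = framelet.copy()
    med = median_filter(framelet, size=size)
    out[defect_map] = med[defect_map]
    return out
```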
To find the optimal overlap value, I ended up evaluating (with skimage) the cross-correlation error over the regions where consecutive framelets overlap, for different overlap values, and selecting the value that minimizes that error. No single overlap value fits all raw images: I assume that the orbital parameters, the orientation of the camera, and the rotation of the planet all influence the assembly. Indeed, after solving the puzzle this way, the limb of the planet does not align nicely, for the same reasons. I also tried to find an optimal horizontal scaling, but the cross-correlation error was always minimized by applying no scaling at all. For better alignment of the areas near the limb, the 3D projection method would be needed. What I do here instead is compute a limb mask that excludes the limb areas not fully covered with image pixels, by fitting an ellipse to the limb (using scipy.optimize and skimage.draw), and then apply the mask to the image.
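One plausible reading of the overlap search, sketched with skimage's normalized RMSE as the mismatch measure between the strips that consecutive same-channel framelets share; the candidate range is a guess, and `channels`/`assemble` come from the sketch above.

```python
import numpy as np
from skimage.metrics import normalized_root_mse

def overlap_error(framelets, overlap):
    """Mean mismatch over the strips shared by consecutive framelets."""
    errors = []
    for upper, lower in zip(framelets[:-1], framelets[1:]):
        # bottom strip of the earlier framelet vs. top strip of the next one
        errors.append(normalized_root_mse(upper[-overlap:], lower[:overlap]))
    return np.mean(errors)

# pick the overlap (in rows) that minimizes the mean seam error
candidates = range(8, 64)  # illustrative search range
best_overlap = min(candidates, key=lambda ov: overlap_error(channels["green"], ov))
green = assemble(channels["green"], best_overlap)
```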
The color channels still do not align perfectly with each other, partly because of the optical distortion of the lens, but I guess also partly because of the dynamic nature of the planet's atmosphere. So, as the final step, I used a non-rigid registration tool based on spline warping, elastix, developed mainly for MRI scans and available from Python thanks to the pyelastix package. I also tried another registration method (skimage.feature.ORB) without success.
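A minimal sketch of the channel registration via pyelastix, following the package's documented register() call; the B-spline parameter tweaks and the choice of green as the fixed reference are my assumptions here, and the elastix binaries must be installed and discoverable for this to run.

```python
import numpy as np
import pyelastix

def register_channel(moving, fixed):
    """Warp `moving` onto `fixed` with a non-rigid B-spline transform."""
    params = pyelastix.get_default_params(type='BSPLINE')
    params.MaximumNumberOfIterations = 200  # illustrative setting
    # pyelastix expects C-contiguous float32 arrays
    moving = np.ascontiguousarray(moving, dtype=np.float32)
    fixed = np.ascontiguousarray(fixed, dtype=np.float32)
    warped, deformation_field = pyelastix.register(moving, fixed, params, verbose=0)
    return warped

red_aligned = register_channel(red, green)    # `red`, `blue`, `green` are the
blue_aligned = register_channel(blue, green)  # assembled channel images
```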
Post-processing was done in PixInsight: mainly histogram and color adjustments, plus wavelet transformations to reduce the noise and enhance the structures. Photoshop was used to manually remove the remaining pixel defects that the script could not catch.
This particular image was inspired by the iconic one produced by Gerald Eichstädt and Seán Doran: an impressive panorama of the intricate cloud structures of the North North Temperate Belt, captured during Perijove 16, where you can almost feel the volume of the atmospheric masses. What seems to be the top of the enormous structure right of center is crowned by eye-catching clouds that look very much like ours here on Earth. This framing is slightly wider than Eichstädt & Doran's version.
Image credit: NASA / JPL-Caltech / SwRI / MSSS / Sergio Díaz