Application June 8, 2021

Techniques to overcome atmospheric blurring in solar astronomy


Ground-based solar telescopes have given us amazing insight into much of our sun's activity, but atmospheric turbulence, caused by temperature and density differences in our atmosphere, distorts and blurs images as they're received on Earth. A number of solutions to this have been implemented, with varying degrees of success.

Adaptive optics

Adaptive optics (AO) is a real-time correction technique that uses a wavefront sensor to measure the distortions of incoming wavefronts, and deformable mirrors or a liquid crystal device to compensate for and correct these distortions before the image is recorded. We've supplied frame grabbers for one of these projects at the USA's National Solar Observatory; you can read more about that here. However, the cost of the components required for such a system can be prohibitive for research projects, and residual image degradation remains a factor in some cases.
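The wavefront-sensing half of an AO system is easy to sketch numerically. In a Shack–Hartmann sensor, a lenslet array forms a grid of spots, and each spot's displacement from its subaperture centre is proportional to the local wavefront tilt. The function below is a minimal illustration of that centroid measurement (the function name and grid layout are my own, not from any real AO software):

```python
import numpy as np

def subaperture_slopes(frame, grid=4):
    """Estimate local wavefront slopes from spot centroids.

    Splits the sensor frame into a grid x grid array of subapertures
    and computes each spot's centroid offset from the subaperture
    centre; in a Shack-Hartmann sensor these offsets are proportional
    to the local wavefront tilt fed to the deformable mirror.
    """
    h, w = frame.shape
    sh, sw = h // grid, w // grid
    slopes = np.zeros((grid, grid, 2))
    for i in range(grid):
        for j in range(grid):
            sub = frame[i * sh:(i + 1) * sh, j * sw:(j + 1) * sw]
            total = sub.sum()
            if total == 0:
                continue  # no light in this subaperture
            ys, xs = np.mgrid[0:sh, 0:sw]
            # Intensity-weighted centroid, relative to the subaperture centre.
            cy = (ys * sub).sum() / total - (sh - 1) / 2
            cx = (xs * sub).sum() / total - (sw - 1) / 2
            slopes[i, j] = (cy, cx)
    return slopes
```

A real AO loop would convert these slopes to mirror actuator commands at kilohertz rates; this sketch only shows the measurement principle.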

Fig.1: Example of the beneficial effect of the number of frames on the quality of the restored image

Lucky imaging

Post-processing can be carried out on blurred image data to recover a high-resolution image. Speckle imaging is a family of techniques based on taking numerous short-exposure pictures that "freeze" the atmosphere. We've looked at lucky imaging, a particular method of speckle imaging, in a previous article. Images are taken with a high-speed camera using exposure times short enough (100 ms or less) that changes in the Earth's atmosphere during the exposure are negligible. If thousands of images are taken, a number of frames are likely to show the object in sharp focus, simply because there is a reasonable probability of encountering less atmospheric turbulence during the short exposure of a "lucky" frame. By selecting only the very best images (for example, the sharpest 1%) and combining them into a single image by shifting and adding the individual short exposures, lucky imaging can approach the best resolution the telescope allows. Originally, objects captured at these exposure times needed to be very bright, but the adoption of low-noise CCD sensors has made the technique more widely applicable.

Fig. 2: Frame selection, “Lucky imaging”
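The select-and-stack step described above can be sketched in a few lines. This is a simplified illustration, not production pipeline code: it ranks frames by image variance as a sharpness proxy, registers the keepers to the sharpest frame by FFT cross-correlation at whole-pixel precision, and averages them (the function name and the choice of metric are my own):

```python
import numpy as np

def lucky_stack(frames, keep_frac=0.01):
    """Select the sharpest frames and shift-and-add them.

    Sharpness proxy: image variance (sharper short exposures show
    higher contrast). Kept frames are registered against the single
    sharpest frame by circular cross-correlation, shifted by whole
    pixels, and averaged.
    """
    frames = np.asarray(frames, dtype=float)
    # Rank frames by the sharpness metric, best first.
    scores = frames.var(axis=(1, 2))
    n_keep = max(1, int(len(frames) * keep_frac))
    best = np.argsort(scores)[::-1][:n_keep]
    ref = frames[best[0]]
    stack = np.zeros_like(ref)
    for idx in best:
        # Integer-pixel registration via FFT cross-correlation:
        # the correlation peak sits at the shift aligning this frame to ref.
        corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frames[idx]))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        stack += np.roll(frames[idx], (dy, dx), axis=(0, 1))
    return stack / n_keep
```

Real pipelines use subtler sharpness metrics (e.g. power in a high-frequency band) and sub-pixel registration, but the structure is the same: score, select, align, add.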

Multi-Frame Blind Deconvolution (MFBD)

MFBD is a further numerical technique that models and removes these residual image degradations, allowing observers to extract more information from their image data. An observed image is the true image convolved with an unknown point spread function, or PSF (the blur), created by the atmosphere and the telescope. With MFBD, astronomers use a set of image frames to estimate a PSF as close to the real one as possible, and then remove its effect from the data.

Fig. 3: Mosaic of fitted PSFs to the data shown in one of the pictures in Figure 1
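The core idea, estimating the object and a per-frame PSF jointly from many frames, can be shown with a toy alternating scheme in the Fourier domain. This is a deliberately simplified sketch of the principle, not the algorithm used at the SST or by any particular MFBD code; the regularisation constant and initialisation are my own choices:

```python
import numpy as np

def mfbd(frames, n_iter=20, eps=1e-3):
    """Toy multi-frame blind deconvolution.

    Models each observed frame as the unknown true image convolved
    with an unknown per-frame PSF, and alternately re-estimates the
    object and the PSFs by regularised least squares in the Fourier
    domain, where convolution becomes multiplication.
    """
    D = [np.fft.fft2(f) for f in frames]
    # Initialise each PSF estimate as a delta function (identity blur),
    # whose Fourier transform is all ones.
    P = [np.ones_like(D[0]) for _ in D]
    for _ in range(n_iter):
        # Object update: Wiener-style combination over all frames.
        num = sum(np.conj(p) * d for p, d in zip(P, D))
        den = sum(np.abs(p) ** 2 for p in P) + eps
        O = num / den
        # PSF updates, one per frame, holding the object fixed.
        P = [np.conj(O) * d / (np.abs(O) ** 2 + eps) for d in D]
    return np.fft.ifft2(O).real
```

Production MFBD codes add physical constraints on the PSFs (positivity, limited support, phase-diversity information), which is what makes the inversion well-posed on real, noisy data.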

In this technique, the more frames acquired, the better the average and, of course, the better the resulting image, so a very fast method of capturing frames is required. Michiel van Noort, a scientist at the Max Planck Institute for Solar System Research, has been working with our FireBird Quad CXP-6 CoaXPress frame grabber and our FireBird Camera Link 80-bit (Deca) frame grabbers to obtain accurate, real-time performance and unlock enhanced solar imaging.

Michiel is now using the FireBird CXP acquisition card to grab frames from the latest generation of large-format image sensors from AMS/CMOSIS (CMV12000, CMV20000 and CMV50000) needed for a new type of high-resolution hyperspectral instrument. Such instruments require very large detectors, but also a high frame rate to "freeze" the Earth's atmosphere, which tests our boards to their limits. This is how he explains the process:

“The hyperspectral instrument I’m using is called an integral field spectrograph, which differs from a traditional long-slit spectrograph by trying to recover high-resolution spectral information not only along a one-dimensional slit, but over a two-dimensional field, without any scanning or tuning. To do this, the light in the focal plane is “reformatted” in such a way that space is created between the image elements, in which the spectral information can be dispersed. The Microlensed Hyperspectral Imager (MiHI) does this, as the name suggests, using a double-sided microlens array.

“This works very well, but it generates a major challenge: even for a small field of view (say, 256×256 pixels), if you want to capture 256 spectral elements, you actually need to capture 16M pieces of information. On top of that, you need to critically sample each of these elements with at least 4 pixels, to avoid confusing the information from different pixels. This is why such instruments need very large sensors, and the atmospheric turbulence needs them to be fast as well (100fps is ideal, but difficult to attain). The MiHI has 128×128 image elements (so-called spaxels: spatial pixels that each carry a full spectrum) and about 324 spectral elements, and that needs more than 20Mpx, which I couldn’t really get earlier in the process, but now it’s no problem.”
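The pixel budget quoted above is easy to verify. A small sketch (the function name is my own, purely for illustration) multiplies spaxels by spectral elements by the sampling factor:

```python
def ifs_pixel_budget(n_spaxels_x, n_spaxels_y, n_spectral, pixels_per_element=4):
    """Detector budget for an integral field spectrograph:
    every spaxel carries a full spectrum, and each spectral element
    must be sampled by several pixels to keep elements distinguishable.
    Returns (information elements, detector pixels required)."""
    elements = n_spaxels_x * n_spaxels_y * n_spectral
    return elements, elements * pixels_per_element

# 256 x 256 field with 256 spectral elements: ~16M information elements.
elements, pixels = ifs_pixel_budget(256, 256, 256)

# MiHI: 128 x 128 spaxels, ~324 spectral elements, 4 pixels each
# -> more than 20 Mpx of detector needed, as stated in the quote.
mihi_elements, mihi_pixels = ifs_pixel_budget(128, 128, 324)
```

This is why only the recent generation of very large, fast sensors makes instruments like MiHI practical.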

As Fig. 1 shows, there is a direct correlation between the number of frames and the outcome: more frames give a better average and therefore a better restored image.

SST solar observatory
Photo Credit: Pablo Bonet/IAC; Source: IAC

We’re proud that our industry-leading products are contributing to scientists’ knowledge of our sun’s full variety of observable and measurable phenomena. But we also customize image capture and processing products for more down-to-earth applications including industrial inspection, medical imaging, surveillance and more. From space missions to large scale deployment of industrial vision systems, we’ve provided imaging components and embedded systems that help our customers provide world-class solutions. Get in touch with your vision challenges to hear more.

Featured image credit: IAC.
