Given the advancing Starlink and OneWeb satellite constellations, Amazon’s planned Project Kuiper internet network, and other initiatives, particularly from China, there is escalating worry among astronomers about being “blinded by the light” of a projected 400,000 current and planned low Earth orbit satellites.
There’s a pushback theme from some advocates of megaconstellations: What’s the worry? After all, shouldn’t any instrument or eyepiece time required to scrutinize the cosmos be spent off-Earth in the first place?
What’s more, isn’t that why the pricey Hubble Space Telescope and the James Webb Space Telescope, along with future space-based scopes, have been or will be rocketed off Earth? Get away from the soupy, fuzzy view of the surrounding universe that comes with landlocked looking!
A new tool is available to help de-fuzz astronomical imagery, but coping with satellite constellation flyovers that can foul telescopic observations remains a global worry.
Indeed, the cosmos would look a lot better if Earth’s atmosphere weren’t photobombing it all the time. Now researchers at Northwestern University in Evanston, Illinois, and Tsinghua University in Beijing have unveiled a new strategy to improve ground-based telescope imagery.
The technique adapts a well-known computer-vision algorithm used for sharpening photos and, for the first time, applies it to astronomical images from ground-based telescopes.
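For a sense of what “sharpening” means here: ground-based images are blurred by the atmosphere’s point-spread function (PSF), and deconvolution tries to undo that blur. The snippet below is purely illustrative, not the team’s code; it sketches Richardson–Lucy deconvolution, one of the classic deblurring methods this kind of work builds on:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30):
    """Classic Richardson-Lucy deconvolution: repeatedly re-blur the current
    estimate, compare it to the observed image, and correct the estimate by
    the back-projected ratio of the two."""
    estimate = blurred.copy()
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")
        ratio = blurred / np.maximum(reblurred, 1e-12)
        estimate = estimate * fftconvolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy demo: blur a point source with a Gaussian "atmosphere" PSF, then deconvolve.
size = 33
yy, xx = np.mgrid[:size, :size] - size // 2
psf = np.exp(-(xx**2 + yy**2) / (2 * 3.0**2))
psf /= psf.sum()

truth = np.zeros((size, size))
truth[size // 2, size // 2] = 1.0
blurred = fftconvolve(truth, psf, mode="same")

restored = richardson_lucy(blurred, psf)
# The deconvolved point source is more sharply peaked than the blurred one.
print(f"blurred peak {blurred.max():.4f} -> restored peak {restored.max():.4f}")
```

Classic methods like this assume the PSF is known and the noise is well behaved; the new work's pitch is to do better than such baselines by learning from data.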
What’s involved here is training an artificial intelligence (AI) algorithm on data simulated to match the Vera C. Rubin Observatory’s imaging parameters. When that ambitious observatory opens next year, the AI tool will be instantly compatible.
Northwestern’s Emma Alexander explains that images are used for science. “By cleaning up images in the right way, we can get more accurate data. The algorithm removes the atmosphere computationally, enabling physicists to obtain better scientific measurements. At the end of the day, the images do look better as well.”
Alexander is the senior author of research just published in the Monthly Notices of the Royal Astronomical Society and an assistant professor of computer science at Northwestern’s McCormick School of Engineering. Alexander’s prime focus: low-level, physics-based, bio-inspired artificial vision.
Highly anticipated data
Alexander and Tianao Li, an undergraduate in electrical engineering at Tsinghua University and a research intern in Alexander’s lab, combined an optimization algorithm with a deep-learning network trained on astronomical images.
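The paper’s title names the optimization algorithm in question: unrolled Plug-and-Play ADMM, in which a denoiser is “plugged in” as the image prior inside each step of an ADMM solver (the unrolled variant treats those steps as layers of a trainable network). The sketch below is an illustration of that idea only, not the authors’ implementation; a simple hand-written smoothing filter stands in for the trained deep network:

```python
import numpy as np

def pnp_admm_deconvolve(y, psf, denoiser, rho=0.1, n_iter=20):
    """Plug-and-Play ADMM sketch: alternate (1) a closed-form Fourier-domain
    deblurring step, (2) a denoising step acting as the image prior, and
    (3) a dual update tying the two together. In the published method the
    denoiser is a trained deep network and the loop is unrolled for
    end-to-end training; here it is just a function argument."""
    H = np.fft.fft2(np.fft.ifftshift(psf))  # blur operator, diagonal in Fourier space
    Y = np.fft.fft2(y)
    x, v, u = y.copy(), y.copy(), np.zeros_like(y)
    for _ in range(n_iter):
        # (1) x-update: least-squares deblur, pulled toward the denoised v - u
        rhs = np.conj(H) * Y + rho * np.fft.fft2(v - u)
        x = np.real(np.fft.ifft2(rhs / (np.abs(H) ** 2 + rho)))
        # (2) v-update: "plug in" the denoiser as the prior
        v = denoiser(x + u)
        # (3) u-update: dual ascent on the consensus constraint x = v
        u = u + x - v
    return x

def smoothing_denoiser(img):
    """Stand-in for the learned network: a separable (0.25, 0.5, 0.25) smoother."""
    out = img
    for ax in (0, 1):
        out = 0.25 * np.roll(out, 1, ax) + 0.5 * out + 0.25 * np.roll(out, -1, ax)
    return out

# Toy demo: blur a point source with a Gaussian PSF, then restore it.
size = 32
yy, xx = np.mgrid[:size, :size] - size // 2
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))
psf /= psf.sum()
truth = np.zeros((size, size))
truth[size // 2, size // 2] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(psf)) * np.fft.fft2(truth)))
restored = pnp_admm_deconvolve(blurred, psf, smoothing_denoiser)
```

Swapping the hand-written smoother for a network trained on realistic galaxy images is, roughly speaking, where the deep-learning half of the team’s approach comes in.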
Among the training images, the team included simulated data that matches the Rubin Observatory’s expected imaging parameters. The resulting tool produced images with 38.6% less error compared to classic methods for removing blur and 7.4% less error compared to modern methods, according to a Northwestern University press statement.
The Rubin Observatory officially opens next year. Its telescopes will begin a decade-long deep survey across a vast portion of the night sky. Because the researchers trained the new tool on data specifically designed to simulate Rubin’s upcoming images, the university statement adds, it will be able to help analyze the survey’s highly anticipated data.
For astronomers interested in using the tool, the open-source, user-friendly code and accompanying tutorials are available online at https://github.com/Lukeli0425/Galaxy-Deconv
For more information, see the paper, “Galaxy Image Deconvolution for Weak Gravitational Lensing with Unrolled Plug-and-Play ADMM,” at: