Maximum-A-posteriori estimate with Deep generative NEtworks for Source Separation (MADNESS)
The next generation of astronomical surveys will collect massive amounts of data, presenting challenges not only because of the sheer volume of data but also because of its complexity. In the Legacy Survey of Space and Time (LSST) at the Vera C. Rubin Observatory, more than 60% of objects are expected to overlap with other objects along the line of sight in the images. Classical methods for solving the inverse problem of source separation, so-called “deblending”, either fail to capture the diverse morphologies of galaxies or are too slow to analyze billions of galaxies. To overcome these challenges, we propose a deep learning-based approach to deal with the size and complexity of the data.
Our algorithm, called MADNESS, deblends galaxies from a field by finding the maximum-a-posteriori (MAP) solution parameterized by the latent-space representations of galaxies obtained with deep generative models. We first train a Variational Autoencoder (VAE) as a generative model and then model the underlying latent-space distribution so that it can be sampled to simulate galaxies. To deblend galaxies, we perform gradient descent in this latent space to find the MAP estimate.
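The core idea of MAP estimation by gradient descent in a latent space can be illustrated with a minimal sketch. This is a hypothetical toy setup, not the MADNESS implementation: a fixed linear map stands in for the trained VAE decoder, the latent prior is an isotropic standard normal, and the noise is Gaussian, so the negative log-posterior is a simple quadratic we can descend.

```python
# Toy sketch of MAP estimation in a latent space (hypothetical setup:
# a linear "decoder" replaces the trained VAE decoder, with a standard
# normal latent prior and Gaussian pixel noise).
import numpy as np

rng = np.random.default_rng(0)

n_pix, n_latent = 64, 8
D = rng.normal(size=(n_pix, n_latent))          # toy decoder: latent -> image
z_true = rng.normal(size=n_latent)               # latent that generated the scene
sigma = 0.1                                      # pixel noise level
y = D @ z_true + sigma * rng.normal(size=n_pix)  # observed noisy image

def grad_neg_log_posterior(z):
    # Gradient of -log p(z|y) up to a constant:
    # Gaussian likelihood term + standard-normal prior term.
    return -D.T @ (y - D @ z) / sigma**2 + z

# Plain gradient descent toward the MAP estimate, starting from the
# prior mean (the origin of the latent space).
z = np.zeros(n_latent)
lr = 5e-5
for _ in range(2000):
    z -= lr * grad_neg_log_posterior(z)
```

Because this toy posterior is Gaussian, the descent converges to the closed-form ridge-regression solution; in the VAE setting the decoder is nonlinear, so the optimization is run with automatic differentiation instead of an analytic gradient.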
In my talk, I will outline the methodology of our algorithm, evaluate its performance, and compare it against state-of-the-art techniques using flux reconstruction as a metric.
Biswajit Biswas (APC)