Atomic auto-encoders for learning sparse representations

At Gretsi 2025, one session was devoted to inverse problems. A typical inverse problem consists of recovering a high-dimensional signal \(x\) from low-dimensional observations \(y\) under a linear model \(y = A x + z\), where \(A\) is the sensing matrix and \(z\) is measurement noise.
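To make the setting concrete, here is a minimal sketch of the forward model \(y = A x + z\) in NumPy. The dimensions, sparsity level, and noise level are illustrative choices, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 256, 64  # signal dimension n, number of measurements m (m << n)

# Sparse high-dimensional signal: only a few nonzero coefficients.
x = np.zeros(n)
support = rng.choice(n, size=8, replace=False)
x[support] = rng.standard_normal(8)

A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
z = 0.01 * rng.standard_normal(m)             # measurement noise
y = A @ x + z                                  # low-dimensional observations
```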

One approach is compressed sensing, where the signal is decomposed as a combination of the smallest possible number of dictionary elements. In this family of approaches, (Newson & Traonmilin, 2023) proposed atomic auto-encoders; a classical sparse-recovery baseline is sketched below for comparison.
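As a point of reference for the classical dictionary-based route (not the atomic auto-encoder method of the paper), the sparse code can be estimated with Orthogonal Matching Pursuit from scikit-learn, here using the sensing matrix itself as the dictionary and reusing `A`, `y`, and `x` from the sketch above. The sparsity level is assumed known for simplicity.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

# Greedy sparse recovery: pick at most 8 dictionary columns to explain y.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=8, fit_intercept=False)
omp.fit(A, y)
x_hat = omp.coef_

print("recovered support:", np.flatnonzero(x_hat))
print("relative error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```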

References

  1. Disentangled latent representations of images with atomic autoencoders
     Alasdair Newson and Yann Traonmilin
     In Sampling Theory and Applications Conference, July 2023


