| title | Posterior sampling in imaging with learnt priors: from Langevin to diffusion models |
|---|
| start_date | 2024/12/03 |
|---|
| schedule | 14h-16h |
|---|
| online | no |
|---|
| location_info | amphi Yvette Choquet-Bruhat (bât. Perrin) |
|---|
| summary | In this talk we explore some recent techniques for posterior sampling in ill-posed imaging inverse problems when the likelihood is known explicitly and the prior is only known implicitly, via a denoising neural network pretrained on a large collection of images. We show how to extend the Unadjusted Langevin Algorithm (ULA) to this setting, leading to the Plug-and-Play ULA (PnP-ULA). We explore the convergence properties of PnP-ULA, the crucial role of the step size, and its relationship with the smoothness of the prior and the likelihood. To relax the stringent constraints on the step size, annealed Langevin algorithms have been proposed, which are closely related to generative denoising diffusion probabilistic models (DDPM). The image prior implicit in these generative models can be adapted to perform posterior sampling through a clever use of Gaussian approximations, with varying degrees of accuracy, as in Diffusion Posterior Sampling (DPS) and Pseudoinverse-Guided Diffusion Models (PiGDM). We conclude with an application to blind deblurring, where DPS and PiGDM are combined with an Expectation-Maximization algorithm to jointly estimate the unknown blur kernel and sample sharp images from the posterior.
Collaborators (in alphabetical order): Guillermo Carbajal, Eva Coupeté, Valentin De Bortoli, Julie Delon, Alain Durmus, Ulugbek Kamilov, Charles Laroche, Rémi Laumont, Jiaming Liu, Pablo Musé, Marcelo Pereyra, Marien Renaud, Matias Tassano. |
|---|
| responsibles | Leclaire |
|---|
Workflow history
| from state | to state | comment | date |
| submitted | published | | 2024/11/27 13:13 UTC |
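The PnP-ULA iteration mentioned in the summary replaces the unknown prior score in ULA with a denoiser residual, via Tweedie's identity. A minimal NumPy sketch, assuming a toy 1D Gaussian deblurring problem and a closed-form linear shrinkage denoiser standing in for the pretrained network; all names, operators, and parameter values here are illustrative, not from the talk:

```python
import numpy as np

# PnP-ULA update sketched here:
#   x_{k+1} = x_k + delta * grad log p(y | x_k)
#                 + (delta / eps) * (D_eps(x_k) - x_k)   # Tweedie: (D_eps(x)-x)/eps ~ grad log p_eps(x)
#                 + sqrt(2 * delta) * z_k,   z_k ~ N(0, I)
# The step size delta must be small relative to the Lipschitz constants of both drift terms.

rng = np.random.default_rng(0)

n = 16
A = 0.8 * np.eye(n)            # hypothetical forward (blur) operator
sigma = 0.1                     # observation noise standard deviation
x_true = rng.standard_normal(n)
y = A @ x_true + sigma * rng.standard_normal(n)

eps = 0.05                      # denoiser noise variance (smoothing level of the prior)

def denoiser(x):
    # MMSE denoiser for a standard Gaussian prior observed with noise variance eps:
    # E[x | x + noise] = x / (1 + eps); a stand-in for the pretrained network D_eps.
    return x / (1.0 + eps)

def grad_log_likelihood(x):
    # Gaussian likelihood y = A x + N(0, sigma^2 I)
    return A.T @ (y - A @ x) / sigma**2

delta = 1e-3                    # small step size, within the stability range for this toy problem
x = np.zeros(n)
samples = []
for k in range(5000):
    z = rng.standard_normal(n)
    x = (x
         + delta * grad_log_likelihood(x)
         + (delta / eps) * (denoiser(x) - x)
         + np.sqrt(2.0 * delta) * z)
    if k >= 1000:               # discard burn-in
        samples.append(x.copy())

post_mean = np.mean(samples, axis=0)  # Monte Carlo estimate of the posterior mean
```

In this linear-Gaussian toy the chain targets the posterior associated with the smoothed prior N(0, (1 + eps) I), so the sample mean can be checked against the analytic posterior mean; with a neural denoiser only the generic convergence guarantees discussed in the talk apply.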