| title | Diffusion-based image and video inpainting with internal learning |
|---|
| start_date | 2024/04/02 |
|---|
| schedule | 15h-16h |
|---|
| online | no |
|---|
| location_info | Salle 314 |
|---|
| summary | Diffusion models are now the undisputed state of the art for image generation and image restoration. However, they require large amounts of computational power for training and inference. We propose lightweight diffusion models for image inpainting that can be trained on a single image or a few images. We develop a dedicated training and inference strategy that significantly improves the results over our baseline. On images, we show that our approach competes with large state-of-the-art models in specific cases. Training a model on a single image is particularly relevant for image acquisition modalities that differ from the RGB images of standard training databases and for which no pre-trained model is available. We present results in three different contexts: texture images, line drawings, and material BRDFs, for which we achieve state-of-the-art realism with a computational load that is greatly reduced compared to competing methods. On videos, we present the first diffusion-based video inpainting approach. We show that our method outperforms existing techniques in difficult situations such as dynamic textures and complex motion; other methods rely on supporting elements such as optical flow estimation, which limits their performance on dynamic textures, for example. |
|---|
| responsibles | Vacher, Blusseau |
|---|
Workflow history
| from state | to state | comment | date |
| submitted | published | | 2024/03/28 14:48 UTC |