Vision · June 19, 2020 · UC Berkeley
Denoising Diffusion Probabilistic Models
Jonathan Ho, Ajay Jain, Pieter Abbeel
Abstract
We present high-quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our models produce samples competitive with state-of-the-art GANs while enjoying desirable properties such as broad distribution coverage and a stationary training objective.
Key Findings
1. Demonstrated that diffusion models can generate high-quality images competitive with GANs
2. Introduced a simplified training objective based on denoising score matching
3. Showed better mode coverage and training stability compared to GANs
4. Established the theoretical framework for modern diffusion-based generation
5. Achieved state-of-the-art FID scores on image generation benchmarks
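The simplified objective above can be sketched in a few lines: the forward process admits a closed-form sample of x_t given x_0, and training minimizes the MSE between the true noise and the model's noise prediction. This is a minimal NumPy sketch under an assumed linear beta schedule; the `model` here is a hypothetical stand-in for the paper's U-Net noise predictor.

```python
import numpy as np

# Assumed linear beta schedule over T timesteps (a common DDPM choice).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)  # cumulative product \bar{alpha}_t

def q_sample(x0, t, eps):
    """Closed-form sample x_t ~ q(x_t | x_0): scaled data plus scaled noise."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def simple_loss(model, x0, rng):
    """L_simple: MSE between the true noise and the model's prediction of it."""
    t = rng.integers(0, T)                 # timestep drawn uniformly at random
    eps = rng.standard_normal(x0.shape)    # the noise the model must recover
    x_t = q_sample(x0, t, eps)
    return np.mean((eps - model(x_t, t)) ** 2)

# Toy "model" that always predicts zero noise, just to exercise the loss.
rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))
loss = simple_loss(lambda x_t, t: np.zeros_like(x_t), x0, rng)
```

A real training loop would average this loss over minibatches and optimize the noise predictor's parameters by gradient descent; the sketch only illustrates the shape of the objective.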
Impact & Significance
DDPM revived interest in diffusion models and directly led to the development of Stable Diffusion, DALL-E 2, Imagen, and the entire class of modern AI image generators. It replaced GANs as the dominant paradigm for image synthesis.
Related Papers
LLM · July 23, 2024
The Llama 3 Herd of Models
Meta AI

LLM · July 15, 2024
Qwen2 Technical Report
Alibaba Cloud / Qwen Team

Efficiency · May 7, 2024
DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model
DeepSeek AI

LLM · March 4, 2024
The Claude 3 Model Family: Opus, Sonnet, and Haiku
Anthropic