Envisioning is an emerging technology research institute and advisory.


Diffusion Models

Generative models that learn to reverse a noise-addition process to synthesize new data.

Year: 2020 · Generality: 796

Diffusion models are a class of generative models that learn to create new data by reversing a gradual noising process. During training, clean data samples — most commonly images — are progressively corrupted by adding small amounts of Gaussian noise across hundreds or thousands of discrete timesteps until the original signal is completely destroyed and only random noise remains. A neural network, typically a U-Net or transformer architecture, is then trained to predict and remove that noise one step at a time, effectively learning the statistical structure of the original data distribution.
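The forward (noising) process described above can be sketched in a few lines. This is a minimal illustration, not any particular library's API: the linear beta schedule and the names `T`, `betas`, and `alpha_bars` are assumptions chosen for clarity. A convenient closed form lets us jump straight to any timestep t instead of noising step by step.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # per-step noise variances (illustrative schedule)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative fraction of original signal retained

def q_sample(x0, t, rng):
    """Sample x_t directly: x_t = sqrt(a_bar_t) * x0 + sqrt(1 - a_bar_t) * noise."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return x_t, eps

rng = np.random.default_rng(0)
x0 = np.ones((8, 8))                     # toy "image" of constant pixels
x_mid, _ = q_sample(x0, t=500, rng=rng)  # partially noised
x_end, _ = q_sample(x0, t=T - 1, rng=rng)
# By the final timestep alpha_bars[-1] is near zero, so x_end is almost pure noise.
```

The denoising network is trained to recover `eps` from `x_t` and `t`; minimizing that prediction error is what teaches it the data distribution.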

At inference time, the model starts from pure random noise and iteratively denoises it, step by step, until a coherent, high-quality sample emerges. This reverse diffusion process can be guided by conditioning signals — such as a text prompt, a class label, or a reference image — allowing precise control over what gets generated. Techniques like classifier-free guidance amplify this conditioning, dramatically improving output relevance and quality. The mathematical framework draws on stochastic differential equations and score matching, connecting diffusion models to earlier work on energy-based and score-based generative approaches.
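Classifier-free guidance, mentioned above, blends the model's conditional and unconditional noise predictions at each denoising step. The sketch below assumes those two predictions are already available as arrays (`eps_cond`, `eps_uncond` stand in for real network outputs); the guidance scale `w` is a hypothetical but typical knob.

```python
import numpy as np

def cfg_epsilon(eps_uncond, eps_cond, w=7.5):
    """Classifier-free guidance: push the prediction toward the conditional
    direction. w = 1 recovers plain conditional sampling; w > 1 amplifies
    the conditioning signal at some cost in sample diversity."""
    return eps_uncond + w * (eps_cond - eps_uncond)

# Toy stand-ins for network outputs at one denoising step.
eps_u = np.zeros(4)          # unconditional prediction
eps_c = np.ones(4)           # prediction conditioned on e.g. a text prompt
guided = cfg_epsilon(eps_u, eps_c, w=2.0)
print(guided)                # each entry is 2.0: twice the conditional offset
```

In practice the unconditional prediction is obtained from the same network by dropping the conditioning input during training, so a single model supports both branches.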

Diffusion models rose to prominence around 2020 with the publication of Denoising Diffusion Probabilistic Models (DDPM), which demonstrated that the approach could match and exceed the sample quality of generative adversarial networks (GANs) while being more stable to train and avoiding mode collapse. Subsequent advances — including DALL·E 2, Stable Diffusion, and Imagen — extended the framework to text-to-image synthesis, video generation, audio synthesis, and molecular design, cementing diffusion models as one of the most versatile and powerful tools in modern generative AI.

The significance of diffusion models lies in their combination of training stability, output diversity, and scalability. Unlike GANs, they do not require adversarial training between competing networks, and unlike variational autoencoders, they impose fewer restrictive assumptions on the latent space. Their ability to produce photorealistic images, coherent audio, and even novel protein structures has made them central to both commercial AI products and cutting-edge research across science and creative industries.

Related

Latent Diffusion Backbone

A generative framework combining latent variable models with diffusion processes for high-dimensional data synthesis.

Generality: 520
Large Language Diffusion Models

Generative architectures applying diffusion-based denoising processes to large-scale natural language generation.

Generality: 337
Diffusion Forcing

Training diffusion models with mixed noise levels to enable flexible, controllable generation.

Generality: 174
Policy-Guided Diffusion

Using a learned policy to steer diffusion model sampling toward desired outcomes.

Generality: 292
Adaptive Dual-Scale Denoising

A diffusion model denoising technique that dynamically balances local detail and global structure.

Generality: 94
Full-Sequence Diffusion

A diffusion modeling approach that processes entire data sequences simultaneously rather than in segments.

Generality: 293