Envisioning is an emerging technology research institute and advisory.

Deepfakes

AI-generated synthetic media that realistically replaces or manipulates faces and voices.

Year: 2017
Generality: 678

Deepfakes are synthetic media — images, videos, or audio — generated by deep learning models that convincingly replace or alter a person's likeness. The term blends "deep learning" and "fake," capturing both the technology and its output. At their core, deepfakes typically rely on generative architectures such as autoencoders or Generative Adversarial Networks (GANs), which learn to map facial features from a source identity onto a target subject. The result can be nearly indistinguishable from authentic footage, with the model capturing subtle details like lighting, skin texture, and lip movement.

The technical pipeline generally involves training an encoder-decoder pair on large collections of images from both the source and target individuals. The encoder learns a shared latent representation of facial structure, while separate decoders reconstruct each person's unique appearance. At inference time, swapping decoders allows the model to render one person's expressions and movements onto another's face. More recent approaches use diffusion models and transformer-based architectures to achieve even higher fidelity and require less training data, making the technology increasingly accessible.
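The shared-encoder, decoder-swapping idea above can be sketched in a few lines. This is a minimal NumPy illustration of the data flow only: the linear maps stand in for deep convolutional networks, the dimensions and variable names are invented for the example, and no actual training is performed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder: maps a flattened face image into a common latent
# space of facial structure. In a real system this is a trained deep
# network; here it is a random linear map purely to show the shapes.
IMG_DIM, LATENT_DIM = 64 * 64, 128
W_enc = rng.standard_normal((LATENT_DIM, IMG_DIM)) * 0.01

# One decoder per identity. Each would be trained to reconstruct that
# person's unique appearance from the shared latent representation.
W_dec_a = rng.standard_normal((IMG_DIM, LATENT_DIM)) * 0.01  # identity A
W_dec_b = rng.standard_normal((IMG_DIM, LATENT_DIM)) * 0.01  # identity B

def encode(face):
    return W_enc @ face

def decode(latent, W_dec):
    return W_dec @ latent

# Face swap at inference: encode a frame of person A, then decode it
# with person B's decoder, rendering A's expressions on B's face.
frame_of_a = rng.standard_normal(IMG_DIM)
latent = encode(frame_of_a)
swapped = decode(latent, W_dec_b)
```

The key design point the sketch captures is that only the decoders are identity-specific: because both people pass through the same encoder, the latent vector carries pose and expression, and swapping decoders transfers them across identities.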

Deepfakes have legitimate applications across entertainment, education, and accessibility — enabling seamless film dubbing in foreign languages, resurrecting historical figures for documentaries, or generating personalized avatars. However, the same capabilities carry serious risks. Non-consensual explicit content, political disinformation, and identity fraud represent the most documented harms, prompting legislative responses in multiple jurisdictions and an active research field dedicated to deepfake detection. Detection methods typically analyze subtle artifacts — unnatural blinking patterns, inconsistent lighting, or frequency-domain anomalies — that generative models tend to leave behind.
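One of the detection cues mentioned above, frequency-domain anomalies, can be illustrated with a toy statistic: the fraction of an image's spectral energy above a radial frequency cutoff. This is a deliberately crude sketch, not any specific published detector; the cutoff value and the test images are illustrative assumptions.

```python
import numpy as np

def high_freq_energy_ratio(image, cutoff=0.25):
    """Fraction of spectral energy beyond a radial frequency cutoff.

    Some generative models leave characteristic high-frequency
    artifacts, so an unusual spectral energy distribution can serve
    as one (weak) signal that a frame is synthetic. The cutoff here
    is arbitrary and untuned.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Radial distance from the spectrum's centre, in normalised units.
    radius = np.hypot((yy - h / 2) / h, (xx - w / 2) / w)
    return spectrum[radius > cutoff].sum() / spectrum.sum()

# A smooth low-frequency pattern vs. broadband noise: the noisy image
# concentrates far more of its energy above the cutoff.
x = np.linspace(0, 2 * np.pi, 64)
smooth = np.sin(x)[:, None] * np.cos(x)[None, :]
noisy = np.random.default_rng(1).standard_normal((64, 64))

smooth_ratio = high_freq_energy_ratio(smooth)
noisy_ratio = high_freq_energy_ratio(noisy)
```

Real detectors combine many such cues, typically learned rather than hand-crafted, precisely because any single artifact can be suppressed by the next generation of synthesis models.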

The societal impact of deepfakes extends beyond individual misuse. Widespread awareness of the technology has contributed to an "epistemic crisis," where even authentic media can be dismissed as fabricated. This erosion of trust in visual evidence has implications for journalism, legal proceedings, and public discourse. As generative models continue to improve, the arms race between synthesis and detection remains one of the more consequential challenges at the intersection of AI research and media integrity.

Related

Image Synthesis

AI techniques that generate novel, realistic images by learning from training data.

Generality: 794
Generative AI

AI systems that produce original content by learning patterns from training data.

Generality: 871
Image-to-Video Model

AI system that animates static images by synthesizing realistic motion and temporal dynamics.

Generality: 521
Synthetic Data Generation

Artificially creating data to train ML models when real data is scarce or sensitive.

Generality: 650
Hallucination

When AI models confidently generate plausible but factually incorrect or fabricated outputs.

Generality: 794
Generator-Verifier Gap

The asymmetry between an AI model's ability to generate versus verify outputs.

Generality: 416