Envisioning is an emerging technology research institute and advisory.


Dream-to-Video Decoders

Systems that reconstruct visual imagery from brain activity recorded during dreaming or perception

Dream-to-video decoders align brain imaging data—fMRI voxels, MEG, or dense EEG—with visual latent spaces so they can reconstruct coarse video of what a subject sees or imagines. Training requires hours of paired data per participant: subjects watch clips while networks learn mappings between neural activity patterns and latent image tokens. During recall, the system samples from the learned distribution to generate impressionistic clips that reflect color, motion, and gist.
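The training step described above, learning a mapping from neural activity patterns to latent image tokens and then decoding new activity, can be sketched in miniature. This is a minimal illustration with synthetic data: the shapes, the ridge regression, and the nearest-neighbour retrieval step are assumptions for clarity, not any lab's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for paired training data: per-frame voxel
# activity X and the matching image-latent vectors Z.
n_frames, n_voxels, latent_dim = 400, 100, 16
true_map = rng.normal(size=(n_voxels, latent_dim)) / np.sqrt(n_voxels)
X = rng.normal(size=(n_frames, n_voxels))                      # "fMRI" responses
Z = X @ true_map + 0.1 * rng.normal(size=(n_frames, latent_dim))  # noisy latents

# Fit a ridge regression from voxels to latents (closed form).
alpha = 1.0
W = np.linalg.solve(X.T @ X + alpha * np.eye(n_voxels), X.T @ Z)

# "Decode": map a noisy re-presentation of frame 0 to a latent,
# then retrieve the nearest training latent as a coarse proxy
# for reconstruction.
x_new = X[:1] + 0.05 * rng.normal(size=(1, n_voxels))
z_pred = x_new @ W
nearest = int(np.argmin(np.linalg.norm(Z - z_pred, axis=1)))
```

Real systems replace the linear map with deep encoders and feed the predicted latent into a generative video model, which is why outputs capture gist and motion rather than pixel-accurate detail.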

Although fidelity is low, the technology hints at new storytelling forms: scientists visualizing dreams, therapists externalizing traumatic memories, or artists collaborating with their subconscious. Media labs explore legal, ethical, and consent frameworks, imagining future “neuro-cinemas” where audiences share cognitive content directly.

The field is firmly TRL 2, limited to labs with MRI machines. Ethical concerns (privacy, coercion) dominate regulatory discussions, with neuro-rights advocates pushing for explicit protections before commercialization. Advances in non-invasive sensors and shared (cross-subject) decoders may eventually bring the technology into creative toolkits, but for now it remains a provocative glimpse of direct imagination capture.

TRL: 2/9 (Theoretical)
Impact: 3/5
Investment: 2/5
Category: Software

Related Organizations

Osaka University · Japan · University · 100% · Researcher
A major national university in Japan.

Meta · United States · Company · 95% · Researcher
Developer of the Llama series of open-source LLMs.

National University of Singapore (NUS) · Singapore · University · 95% · Researcher
Singapore's flagship university.

Gallant Lab (UC Berkeley) · United States · University · 90% · Researcher
Neuroscience lab at UC Berkeley led by Jack Gallant.

Radboud University (Donders Institute) · Netherlands · University · 90% · Researcher
Leading research centre for cognitive neuroscience.

University of Texas at Austin · United States · University · 88% · Researcher
Major public research university.

Kernel · United States · Company · 85% · Developer
Neuroscience company developing non-invasive brain recording technology (Flow and Flux).

Neuralink · United States · Company · 80% · Developer
Neurotechnology company developing implantable brain-machine interfaces.

Snap Inc. · United States · Company · 75% · Acquirer
Social media and camera company developing AR spectacles.

Supporting Evidence

Evidence data is not available for this technology yet.

Connections

Targeted Dream Incubation · Applications
Audio-visual cues timed to sleep stages to guide dream narratives
TRL 3/9 · Impact 2/5 · Investment 2/5

Brain-Computer Media Interfaces (BCMI) · Hardware
Neural interfaces that translate brain signals into media control and content creation commands
TRL 3/9 · Impact 5/5 · Investment 5/5

Cognitive Liberty Frameworks · Ethics & Security
Legal and technical standards that protect mental privacy and neural data from unauthorized access
TRL 2/9 · Impact 4/5 · Investment 1/5

Neural Light-Field Cameras · Hardware
Cameras that record light direction and intensity to enable post-capture focus and viewpoint editing
TRL 4/9 · Impact 4/5 · Investment 4/5

Real-Time NeRF Engines · Software
Live 3D scene capture and rendering from multiple camera angles in real time
TRL 6/9 · Impact 5/5 · Investment 5/5

Neuromorphic Vision Sensors · Hardware
Event-driven vision chips with on-sensor neural processing for real-time motion and edge detection
TRL 5/9 · Impact 4/5 · Investment 3/5
