
Envisioning is an emerging technology research institute and advisory.




Neural Foundation Models

AI models pre-trained on brain recordings to enable faster, personalized neural decoding

Neural foundation models are large-scale AI models, similar in concept to GPT for language but trained on neural signals. They are pre-trained on massive, diverse datasets of brain recordings from both intracranial (implanted) and non-invasive (EEG, MEG) sources, learning general patterns of neural activity that apply across different individuals and tasks. Sometimes described as 'NeuroGPT' concepts, these models enable few-shot decoding for new users, requiring only minimal calibration data, as well as transfer learning across different brain-computer interface (BCI) tasks. By leveraging knowledge learned from many previous users and tasks, they dramatically reduce the time and data needed to set up BCIs for new users or new applications.
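The pre-training idea can be sketched in a few lines. The toy NumPy example below is a hypothetical illustration, not any lab's actual method: synthetic "recordings" share latent structure across users, and a masked-prediction objective (hide some channels, predict them from the rest) stands in for large-scale self-supervised training, with closed-form ridge regression standing in for gradient descent.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical pooled recordings from many users: 32 channels whose
# activity is driven by a few shared latent sources, plus sensor noise.
n_trials, n_channels, n_latent = 2000, 32, 3
W = rng.normal(size=(n_channels, n_latent))
X = rng.normal(size=(n_trials, n_latent)) @ W.T \
    + 0.5 * rng.normal(size=(n_trials, n_channels))

# Masked-prediction objective, GPT-style but over channels instead of
# tokens: hide a subset of channels and predict them from the rest.
masked = np.arange(0, n_channels, 4)                  # channels to hide
visible = np.setdiff1d(np.arange(n_channels), masked)

# Closed-form ridge regression stands in for gradient-based training.
Xv, Xm = X[:, visible], X[:, masked]
ridge = np.linalg.solve(Xv.T @ Xv + 1e-2 * np.eye(len(visible)), Xv.T @ Xm)

# On held-out "recordings", the pre-trained predictor should beat the
# naive baseline of predicting the mean (zero here), because it has
# captured the shared structure of the signals.
Z = rng.normal(size=(500, n_latent)) @ W.T \
    + 0.5 * rng.normal(size=(500, n_channels))
mse_model = np.mean((Z[:, visible] @ ridge - Z[:, masked]) ** 2)
mse_baseline = np.mean(Z[:, masked] ** 2)
print(f"masked-channel MSE: model {mse_model:.2f} vs baseline {mse_baseline:.2f}")
```

Real systems replace the linear predictor with a deep transformer and operate on time series rather than static channel vectors, but the objective is the same: learn the regularities of neural activity without task labels.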

This innovation addresses a major limitation of current BCIs: each user requires extensive calibration and training data, making setup time-consuming and limiting usability. By pre-training on large datasets, these models can generalize across users and tasks, and a growing number of research institutions and companies are developing them.
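The calibration benefit can also be sketched concretely. In the hypothetical NumPy example below (all data synthetic, all names invented), plain PCA on pooled multi-user data stands in for a pre-trained encoder; a new user then needs only 20 labeled trials to fit a small decoding head on top of the frozen encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "trial" is a 64-channel feature vector. A shared
# latent structure (the general patterns a foundation model would learn)
# underlies every simulated subject.
n_channels, n_latent = 64, 4
W_true = rng.normal(size=(n_channels, n_latent))    # shared generative basis

def subject_trials(n, label_weights, noise=1.0):
    """Generate n labeled trials for one simulated subject."""
    z = rng.normal(size=(n, n_latent))              # latent neural state
    y = (z @ label_weights > 0).astype(int)         # binary task label
    x = z @ W_true.T + noise * rng.normal(size=(n, n_channels))
    return x, y

# "Pre-training": learn a shared encoder from many subjects' unlabeled data.
# PCA via SVD stands in for large-scale self-supervised pre-training.
pool = np.vstack([subject_trials(200, rng.normal(size=n_latent))[0]
                  for _ in range(10)])
pool -= pool.mean(axis=0)
_, _, Vt = np.linalg.svd(pool, full_matrices=False)
encoder = Vt[:n_latent].T                           # channels -> latent

# "Few-shot calibration": a brand-new user supplies only 20 labeled trials.
w_new = rng.normal(size=n_latent)
x_cal, y_cal = subject_trials(20, w_new)
x_test, y_test = subject_trials(500, w_new)

def fit_linear(features, y):
    """Least-squares classifier on +/-1 targets with a bias term."""
    A = np.hstack([features, np.ones((len(features), 1))])
    w, *_ = np.linalg.lstsq(A, 2.0 * y - 1.0, rcond=None)
    return lambda F: (np.hstack([F, np.ones((len(F), 1))]) @ w > 0).astype(int)

clf = fit_linear(x_cal @ encoder, y_cal)            # tiny head, frozen encoder
acc = (clf(x_test @ encoder) == y_test).mean()
print(f"held-out accuracy from 20 calibration trials: {acc:.2f}")
```

Fitting only a small head on pre-trained features is what keeps the per-user data requirement low; training the 64-dimensional decoder from scratch on 20 trials would overfit badly.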

The technology is particularly significant for making BCIs practical and accessible: reducing calibration time could dramatically improve usability and, as the models mature, enable plug-and-play BCIs. However, ensuring generalization across individuals, managing data privacy, and achieving reliable performance remain open challenges. Neural foundation models represent an important evolution in BCI software, but they must still prove that they can generalize reliably across the diversity of human brains before widespread use is feasible.

TRL: 4/9 (Formative)
Impact: 5/5
Investment: 5/5
Category: Software

Related Organizations

Meta Fundamental AI Research (FAIR)

United States · Research Lab

95%

Published research on decoding visual perception from MEG signals using AI models trained on large datasets.

Researcher
University of Texas at Austin

United States · University

95%

Major public research university.

Researcher
Osaka University

Japan · University

90%

A major national university in Japan.

Researcher
EPFL (École Polytechnique Fédérale de Lausanne)

Switzerland · University

85%

Home to the 'Digital Bridge' project and CEBRA, a machine learning method for mapping neural data to low-dimensional spaces.

Researcher
Google DeepMind

United Kingdom · Research Lab

85%

Developers of the Gemini family of models, which are trained from the start to be multimodal across text, images, video, and audio.

Researcher
NeuroPace

United States · Company

85%

Holds one of the world's largest datasets of chronic intracranial EEG recordings from human patients.

Deployer

OpenNeuro

United States · Nonprofit

85%

A free and open platform for sharing MRI, MEG, EEG, iEEG, and ECoG data.

Standards Body
Radboud University (Donders Institute)

Netherlands · University

85%

Leading research centre for cognitive neuroscience.

Researcher
Hugging Face

United States · Company

70%

The global hub for open-source AI models and datasets. Founded by French entrepreneurs with a major office in Paris.

Deployer

Supporting Evidence

Evidence data is not available for this technology yet.

Connections

Software
Neuroprosthetic Calibration AI

AI that auto-tunes brain–computer interfaces to maintain performance as neural signals drift

TRL: 6/9 · Impact: 4/5 · Investment: 4/5
Software
Brain-State Decoders

Machine learning models that classify cognitive states like attention or fatigue from neural signals

TRL: 6/9 · Impact: 4/5 · Investment: 4/5
Hardware
Next-Gen Noninvasive BCIs

Wearable brain sensors using magnetic fields and light to decode neural activity outside labs

TRL: 6/9 · Impact: 4/5 · Investment: 4/5
Hardware
High-Density Cortical Arrays

Electrode arrays recording thousands of neurons simultaneously for brain–machine interfaces

TRL: 6/9 · Impact: 5/5 · Investment: 5/5
Ethics & Security
Neural Biometric Authentication

Authenticates identity using unique brainwave patterns captured via EEG

TRL: 5/9 · Impact: 4/5 · Investment: 3/5
Software
Digital Neuro-Twins

Personalized brain simulations for testing treatments before applying them to patients

TRL: 3/9 · Impact: 5/5 · Investment: 3/5
