Envisioning is an emerging technology research institute and advisory.


LFMs (Liquid Foundation Models)

Efficient generative AI models using dynamical systems principles to handle diverse data types.

Year: 2024
Generality: 102

Liquid Foundation Models (LFMs) are a class of generative AI models developed by Liquid AI that depart from the dominant transformer paradigm by grounding their architecture in principles drawn from dynamical systems theory and numerical linear algebra. Rather than relying on the attention mechanisms central to transformers, LFMs use structured state-space representations that allow the model to process sequential data — including text, audio, and video — with a fundamentally different computational profile. This design enables them to handle long-context inputs of up to 32,000 tokens without the quadratic memory scaling that burdens standard attention-based architectures.
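The contrast with attention can be made concrete with a minimal sketch. The recurrence below is a generic linear state-space model, not Liquid AI's actual architecture: the hidden state is a fixed-size vector that is updated once per token, so memory stays constant over the sequence, whereas an attention layer's key/value cache grows with every token it sees.

```python
import numpy as np

def ssm_scan(A, B, C, xs):
    """Run a linear state-space recurrence over a 1-D input stream:
    h_t = A @ h_{t-1} + B * x_t,  y_t = C @ h_t.

    The state h has fixed size regardless of sequence length, unlike an
    attention cache, which grows with every processed token.
    """
    h = np.zeros(A.shape[0])
    ys = []
    for x in xs:
        h = A @ h + B * x   # fold the new input into the fixed-size state
        ys.append(C @ h)    # read out an output for this step
    return np.array(ys)

# Toy example: a 32,000-step input through a 4-dimensional state.
rng = np.random.default_rng(0)
A = 0.9 * np.eye(4)         # decaying dynamics keep the state bounded
B = rng.normal(size=4)
C = rng.normal(size=4)
ys = ssm_scan(A, B, C, rng.normal(size=32_000))
print(ys.shape)  # (32000,)
```

The point of the sketch is the memory profile: processing the 32,000-step input never allocates more than the 4-dimensional state, which is why state-space designs avoid attention's scaling burden on long contexts.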

The core innovation of LFMs lies in their adaptive, self-regulating computation. The models adjust their internal complexity based on the demands of the task at hand, drawing inspiration from liquid neural networks — a family of recurrent networks whose dynamics are governed by ordinary differential equations. This lineage gives LFMs a natural capacity for continuous-time sequential reasoning, making them well-suited for applications like document summarization, conversational AI, and autonomous systems that require sustained coherence over long input sequences. Their reduced memory footprint also makes them deployable not just on large cloud infrastructure but on resource-constrained edge devices.
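The "liquid" lineage can be illustrated with a simplified liquid time-constant (LTC) cell, the ODE-governed unit behind liquid neural networks. This is an illustrative Euler-integration sketch, not Liquid AI's implementation, and all parameter names are hypothetical: the key idea is that the gate f depends on both state and input, so the effective time constant of the dynamics adapts continuously to the data.

```python
import numpy as np

def ltc_step(h, x, dt, tau, W, U, b, A):
    """One explicit Euler step of a simplified liquid time-constant cell.

    ODE: dh/dt = -(1/tau + f) * h + f * A, where f = tanh(W h + U x + b).
    Because f depends on the input, the state's decay rate changes with
    the data -- the 'liquid', continuously adapting time constant.
    """
    f = np.tanh(W @ h + U @ x + b)       # state- and input-dependent gate
    dh = -(1.0 / tau + f) * h + f * A    # liquid ODE right-hand side
    return h + dt * dh                   # explicit Euler integration

rng = np.random.default_rng(1)
d, k = 8, 3                              # state and input dimensions
h = np.zeros(d)
params = dict(tau=1.0,
              W=0.1 * rng.normal(size=(d, d)),
              U=0.1 * rng.normal(size=(d, k)),
              b=np.zeros(d),
              A=rng.normal(size=d))
for t in range(100):                     # drive the cell with a slow sine
    x = np.sin(0.1 * t) * np.ones(k)
    h = ltc_step(h, x, dt=0.05, **params)
print(h.shape)  # (8,)
```

Continuous-time units like this are what give the family its natural fit for sustained sequential reasoning: the state evolves under an ODE between observations rather than being overwritten at fixed discrete steps.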

LFMs were publicly introduced in 2024 by Liquid AI, a company founded by MIT researchers including Ramin Hasani, Mathias Lechner, and Daniela Rus, whose earlier work on liquid neural networks laid the conceptual groundwork. Despite using significantly fewer parameters than leading models from Meta and OpenAI, LFMs achieved competitive benchmark performance, positioning them as a credible efficiency-focused alternative in the foundation model landscape.

The significance of LFMs extends beyond their benchmark numbers. They represent a broader challenge to the assumption that transformer architectures are the inevitable substrate for large-scale AI. By demonstrating that dynamical systems principles can underpin capable, scalable foundation models, LFMs open a research direction that may prove especially valuable as AI deployment shifts toward edge computing, real-time inference, and domains where memory and energy efficiency are hard constraints.

Related

AFMs (Analog Foundation Models)
Large pretrained AI models designed to run on analog hardware for dramatic efficiency gains.
Generality: 96

RFM (Robotics Foundation Model)
A large-scale pretrained model providing general-purpose capabilities across diverse robotic tasks.
Generality: 322

LNN (Liquid Neural Network)
A recurrent neural network that continuously adapts its internal state to process time-varying data.
Generality: 339

LLM (Large Language Model)
Massive neural networks trained on text to understand and generate human language.
Generality: 905

LCMs (Large Concept Models)
Large-scale models that represent and reason over abstract, compositional concepts rather than raw tokens.
Generality: 381

LVLMs (Large Vision Language Models)
Large AI models that jointly understand and reason over images and text.
Generality: 694