Efficient generative AI models using dynamical systems principles to handle diverse data types.
Liquid Foundation Models (LFMs) are a class of generative AI models developed by Liquid AI that depart from the dominant transformer paradigm by grounding their architecture in principles drawn from dynamical systems theory and numerical linear algebra. Rather than relying on the attention mechanisms central to transformers, LFMs use structured state-space representations that process sequential data, including text, audio, and video, through a fixed-size internal state rather than a cache that grows with the input. This design enables them to handle long-context inputs of up to 32,000 tokens without the quadratic memory scaling that burdens standard attention-based architectures.
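To make the memory argument concrete, the sketch below shows a generic discretized linear state-space recurrence of the kind the paragraph alludes to. It is illustrative only: Liquid AI has not published LFM internals, and the `ssm_scan` helper and its matrices are hypothetical names chosen for this example, not part of any LFM API.

```python
import numpy as np

def ssm_scan(A, B, C, inputs):
    """Run a discretized linear state-space recurrence over a sequence.

    x_t = A @ x_{t-1} + B @ u_t   (state update, fixed-size state)
    y_t = C @ x_t                 (readout)

    Memory per step is O(d_state), independent of sequence length,
    whereas attention stores pairwise scores that grow as O(T^2).
    """
    x = np.zeros(A.shape[0])
    outputs = []
    for u in inputs:            # single pass over the sequence
        x = A @ x + B @ u       # constant-size state carries all context
        outputs.append(C @ x)
    return np.stack(outputs)

# Toy usage: a 32,000-step sequence processed with a fixed 16-dim state.
rng = np.random.default_rng(0)
d_state, d_in, d_out, T = 16, 8, 4, 32_000
A = 0.99 * np.eye(d_state)                      # stable, near-identity dynamics
B = 0.01 * rng.standard_normal((d_state, d_in))
C = 0.1 * rng.standard_normal((d_out, d_state))
y = ssm_scan(A, B, C, rng.standard_normal((T, d_in)))
print(y.shape)  # (32000, 4); memory use did not grow with T
```

The point of the toy run is that the only quantity carried across the 32,000 steps is the 16-dimensional state vector, which is the structural reason state-space models avoid attention's quadratic memory cost.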
The core innovation of LFMs lies in their adaptive, self-regulating computation. The models adjust their internal complexity based on the demands of the task at hand, drawing inspiration from liquid neural networks — a family of recurrent networks whose dynamics are governed by ordinary differential equations. This lineage gives LFMs a natural capacity for continuous-time sequential reasoning, making them well-suited for applications like document summarization, conversational AI, and autonomous systems that require sustained coherence over long input sequences. Their reduced memory footprint also makes them deployable not just on large cloud infrastructure but on resource-constrained edge devices.
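The continuous-time lineage can be illustrated with the liquid time-constant (LTC) update from the published liquid neural network literature (Hasani et al.). The following is a simplified explicit-Euler sketch under that assumption, not Liquid AI's production formulation; `ltc_step` and its parameter names are introduced here purely for illustration.

```python
import numpy as np

def ltc_step(x, u, dt, W, U, b, tau, A):
    """One explicit-Euler step of a liquid time-constant (LTC) cell.

    dx/dt = -(1/tau + f(x, u)) * x + f(x, u) * A

    f is a learned gating nonlinearity, so the effective time constant
    of each unit depends on the current input and state; this is the
    "adaptive, self-regulating" behavior described above.
    """
    f = np.tanh(W @ x + U @ u + b)        # input- and state-dependent gate
    dx = -(1.0 / tau + f) * x + f * A     # liquid time-constant dynamics
    return x + dt * dx

# Toy usage: 4 hidden units driven by a 2-dimensional input stream.
rng = np.random.default_rng(1)
n, m = 4, 2
W = 0.1 * rng.standard_normal((n, n))
U = 0.1 * rng.standard_normal((n, m))
b = np.zeros(n)
tau = np.ones(n)   # base time constants
A = np.ones(n)     # equilibrium term of the LTC equation
x = np.zeros(n)
for t in range(100):
    u = np.array([np.sin(0.1 * t), np.cos(0.1 * t)])
    x = ltc_step(x, u, dt=0.1, W=W, U=U, b=b, tau=tau, A=A)
print(x)  # hidden state after integrating the input for 100 steps
```

Because the gate f modulates the decay rate at every step, the cell effectively speeds up or slows down its own dynamics in response to the input, which is the property the paragraph credits for sustained coherence over long sequences.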
LFMs were publicly introduced in 2024 by Liquid AI, a company founded by MIT researchers including Ramin Hasani, Mathias Lechner, and Daniela Rus, whose earlier work on liquid neural networks laid the conceptual groundwork. Despite using significantly fewer parameters than leading models from Meta and OpenAI, LFMs achieved competitive benchmark performance, positioning them as a credible efficiency-focused alternative in the foundation model landscape.
The significance of LFMs extends beyond their benchmark numbers. They represent a broader challenge to the assumption that transformer architectures are the inevitable substrate for large-scale AI. By demonstrating that dynamical systems principles can underpin capable, scalable foundation models, LFMs open a research direction that may prove especially valuable as AI deployment shifts toward edge computing, real-time inference, and domains where memory and energy efficiency are hard constraints.