
Envisioning is an emerging technology research institute and advisory.


2011 — 2026


LCMs (Large Concept Models)

Large-scale models that represent and reason over abstract, compositional concepts rather than raw tokens.

Year: 2024 · Generality: 381

Large Concept Models (LCMs) are a class of foundation models designed to represent, compose, and reason over high-level conceptual primitives rather than—or in addition to—raw tokens or pixels. Where conventional large language models operate primarily on surface-level token sequences, LCMs aim to encode semantically meaningful units such as objects, relations, affordances, procedures, and abstract categories into structured latent spaces where these concepts can be explicitly manipulated, combined, and queried. This orientation toward concept-level representations draws on ideas from disentangled representation learning, symbolic AI, and multimodal alignment.

Architecturally, LCMs span a wide range of approaches: contrastive and self-supervised pretraining that clusters semantically related inputs, structured latent-variable models that expose interpretable concept dimensions, neuro-symbolic hybrids that interface neural representations with symbolic reasoning engines, and concept bottleneck models that route predictions through human-interpretable intermediate variables. A defining characteristic is that the learned concept space supports compositional generalization—the ability to correctly handle novel combinations of known concepts—which standard token-prediction objectives do not explicitly encourage. Evaluation therefore emphasizes compositional benchmarks, causal intervention tests, and concept disentanglement metrics rather than perplexity or accuracy alone.
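To make the concept bottleneck idea concrete, here is a deliberately minimal sketch in plain Python, not any real LCM implementation. All names (`concept_scores`, `predict_label`, the feature thresholds, and the toy animal domain) are hypothetical. It illustrates the two properties mentioned above: the final prediction is routed only through named, human-interpretable concept variables, and a human can intervene on those variables to test or correct the model's behavior.

```python
# Toy concept bottleneck: features -> interpretable concepts -> label.
# Illustrative sketch only; all names and thresholds are hypothetical.

def concept_scores(features):
    """Map raw input features to named, human-readable concept activations."""
    return {
        "has_wings": 1.0 if features.get("wing_span", 0.0) > 0.0 else 0.0,
        "has_beak":  1.0 if features.get("beak_len", 0.0) > 0.0 else 0.0,
        "has_fur":   1.0 if features.get("fur_density", 0.0) > 0.5 else 0.0,
    }

def predict_label(concepts):
    """The final prediction depends ONLY on the concept layer."""
    if concepts["has_wings"] and concepts["has_beak"]:
        return "bird"
    if concepts["has_fur"]:
        return "mammal"
    return "unknown"

def classify(features, interventions=None):
    """Full pipeline, with optional human override of concept values.

    Overriding a concept and observing the downstream label change is
    the simplest form of the causal intervention tests described above.
    """
    concepts = concept_scores(features)
    if interventions:
        concepts.update(interventions)  # intervene at the concept level
    return predict_label(concepts)

# A blurry input where the beak detector fails: wings fire, beak does not.
sparrow = {"wing_span": 0.2, "beak_len": 0.0, "fur_density": 0.0}
print(classify(sparrow))                                # -> "unknown"
# A human inspects the concept layer, corrects the faulty concept:
print(classify(sparrow, interventions={"has_beak": 1.0}))  # -> "bird"
```

The design choice being demonstrated is the bottleneck itself: because `predict_label` never sees raw features, every prediction is explainable in terms of the concept vector, and errors can be localized to either the feature-to-concept mapping or the concept-to-label rule.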

The practical motivation for LCMs is substantial. By exposing concept-level interfaces, these models promise improved sample efficiency when transferring to new tasks, stronger causal understanding, greater interpretability for human oversight, and more reliable behavior in safety-critical settings. Downstream applications include robotics (where grounding actions in object affordances matters), scientific modeling (where abstract variables must be identified and manipulated), and human–AI collaboration (where a shared conceptual vocabulary aids communication). The term gained explicit traction in the research community around 2024–2025 as interest grew in augmenting large multimodal models with structured concept representations and APIs for compositional reasoning.

LCMs sit at the intersection of several long-running research threads—cognitive-science-inspired concept learning, structured generative models, and large-scale pretraining—and represent an effort to reconcile the empirical power of modern foundation models with the systematic, interpretable reasoning that symbolic approaches have historically offered. Whether through soft concept vectors, discrete symbolic APIs, or hybrid architectures, the central bet is that making concepts first-class citizens in model design will yield more robust, transferable, and trustworthy AI systems.

Related

LRM (Large Reasoning Models)
Large-scale neural systems explicitly optimized for multi-step, structured reasoning tasks.
Generality: 384

LVLMs (Large Vision Language Models)
Large AI models that jointly understand and reason over images and text.
Generality: 694

LLM (Large Language Model)
Massive neural networks trained on text to understand and generate human language.
Generality: 905

L2M (Large Memory Model)
A decoder-only Transformer with addressable auxiliary memory enabling reasoning far beyond its attention window.
Generality: 189

MLLMs (Multimodal Large Language Models)
AI systems that understand and generate content across text, images, audio, and more.
Generality: 794

LAM (Large Action Model)
AI systems that interpret human intent and execute actions directly within digital applications.
Generality: 337