
Envisioning is an emerging technology research institute and advisory.


2011 — 2026


SLM (Sparse Linear Model)

A linear model that makes predictions using only a small subset of input features.

Year: 1996 · Generality: 520

A Sparse Linear Model (SLM) is a predictive framework that constrains most of its learned coefficients to zero, effectively selecting a compact subset of input features to explain the target variable. Rather than weighting every available feature, the model is trained with a sparsity-inducing penalty—most commonly an L1 regularization term—that drives irrelevant or redundant coefficients toward zero during optimization. The result is a model that depends on far fewer features than are present in the original data, making it especially well-suited for high-dimensional settings where the number of predictors can vastly exceed the number of observations.
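The optimization described above can be stated concretely. In the standard Lasso formulation (written here in common textbook notation, not tied to any particular implementation), the training objective is:

```latex
\min_{\beta \in \mathbb{R}^p} \;\; \frac{1}{2n} \lVert y - X\beta \rVert_2^2 \;+\; \lambda \lVert \beta \rVert_1
```

where X is the n × p design matrix, y the vector of targets, and λ ≥ 0 controls how aggressively coefficients are driven to exactly zero: at λ = 0 the model reduces to ordinary least squares, while a sufficiently large λ zeroes out every coefficient.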

The mechanics of sparsity are typically enforced through regularization techniques such as Lasso (Least Absolute Shrinkage and Selection Operator), Elastic Net, or basis pursuit. During training, the optimization objective balances fitting the data well against a penalty proportional to the sum of absolute coefficient values. This trade-off encourages the model to zero out features that contribute little predictive signal, performing simultaneous variable selection and parameter estimation in a single step. Variants like group Lasso extend this idea to structured sparsity, zeroing out entire groups of related features at once.
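As a minimal sketch of this trade-off, the following uses scikit-learn's Lasso on synthetic data in which only three of fifty features carry signal. The data shape, noise level, and `alpha` value are illustrative assumptions, not details from the entry above:

```python
import numpy as np
from sklearn.linear_model import Lasso

# Synthetic regression problem: 100 samples, 50 features,
# but only the first 3 features actually influence the target.
rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[:3] = [2.0, -1.5, 1.0]
y = X @ true_coef + 0.1 * rng.normal(size=n)

# The L1 penalty (controlled by alpha) zeroes out uninformative features
# while estimating the surviving coefficients in the same fit.
model = Lasso(alpha=0.1).fit(X, y)

n_selected = np.count_nonzero(model.coef_)
print(f"features selected: {n_selected} of {p}")
```

Raising `alpha` strengthens the penalty and prunes more coefficients; lowering it toward zero recovers a dense ordinary-least-squares-like fit.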

Sparse linear models matter for several practical reasons. First, they are highly interpretable: a model that uses ten features out of ten thousand is far easier to audit, explain, and trust than a dense alternative. Second, they generalize better in data-scarce regimes by reducing overfitting through implicit dimensionality reduction. Third, they are computationally efficient at inference time, since predictions require only a handful of multiplications. These properties have made SLMs a foundational tool in genomics, finance, natural language processing, and any domain where practitioners need both predictive accuracy and transparent feature attribution.
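The inference-time efficiency claim can be illustrated directly: once training has zeroed most coefficients, a prediction needs only the handful of surviving terms. The coefficient values and indices below are invented for illustration:

```python
import numpy as np

# Hypothetical learned coefficients: 10,000 features, only 3 nonzero.
coef = np.zeros(10_000)
coef[[7, 42, 512]] = [1.5, -2.0, 0.8]
intercept = 0.3

x = np.ones(10_000)  # one input example

# Dense inference touches every feature...
dense_pred = x @ coef + intercept

# ...but sparse inference needs only the selected features.
idx = np.flatnonzero(coef)
sparse_pred = x[idx] @ coef[idx] + intercept

print(np.isclose(dense_pred, sparse_pred))  # True: 3 multiplications suffice
```

The same sparsity is what makes the model auditable: a reviewer can inspect three named features and their signs rather than ten thousand weights.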

Related

Sparsity
A principle where models use mostly zero values to improve efficiency.
Generality: 752

Sparse Autoencoder
An autoencoder that learns compact data representations by enforcing sparsity in hidden activations.
Generality: 595

Sparsability
A model or algorithm's capacity to exploit sparse data for computational efficiency.
Generality: 339

SSM (State-Space Model)
A mathematical framework modeling dynamic systems through evolving hidden state variables.
Generality: 720

Sparse Coupling
A design strategy using fewer connections between model components to boost efficiency and scalability.
Generality: 340

Sparse Crosscoders
A mechanistic interpretability tool using sparse autoencoders to analyze features across model layers.
Generality: 94