
Envisioning is an emerging technology research institute and advisory.


2011 — 2026


LLE (Locally Linear Embedding)

Nonlinear dimensionality reduction that preserves local neighborhood geometry across a manifold.

Year: 2000 · Generality: 575

Locally Linear Embedding (LLE) is an unsupervised manifold learning algorithm that reduces high-dimensional data to a lower-dimensional representation while preserving the local geometric structure of the original space. Unlike linear methods such as PCA, LLE assumes that data lies on or near a curved, nonlinear manifold, and exploits the fact that small neighborhoods on that manifold can be well-approximated as locally flat. This makes it especially powerful for uncovering intrinsic structure in complex datasets where global linearity assumptions break down.

LLE operates in two main stages. First, for each data point, the algorithm identifies its nearest neighbors and computes a set of reconstruction weights that best express that point as a linear combination of those neighbors — minimizing reconstruction error under a sum-to-one constraint. These weights encode the local geometry around each point. In the second stage, LLE finds a low-dimensional embedding in which each point is still reconstructed by the same weights from the same neighbors, effectively preserving the local relationships discovered in the first step. This is solved as a sparse eigenvalue problem, making it computationally tractable for moderately sized datasets.
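The two stages above can be sketched in a compact NumPy implementation. This is an illustrative sketch, not a production algorithm: the function name, neighbor search by brute-force distances, and the regularization constant are our own choices. In practice, scikit-learn's `sklearn.manifold.LocallyLinearEmbedding` is the standard route.

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Minimal Locally Linear Embedding sketch (illustrative, not optimized)."""
    n = X.shape[0]

    # Stage 1: for each point, find neighbors and solve for reconstruction
    # weights that express the point as a linear combination of its neighbors,
    # under the sum-to-one constraint.
    dists = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(dists, np.inf)
    nbrs = np.argsort(dists, axis=1)[:, :n_neighbors]

    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                 # neighbors centered on x_i
        C = Z @ Z.T                           # local Gram (covariance) matrix
        C += reg * np.trace(C) * np.eye(n_neighbors)  # regularize for stability
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()           # enforce sum-to-one constraint

    # Stage 2: find low-dimensional coordinates reconstructed by the SAME
    # weights. This is the eigenvalue problem on M = (I - W)^T (I - W).
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    eigvals, eigvecs = np.linalg.eigh(M)      # eigenvalues in ascending order

    # Discard the constant eigenvector (eigenvalue ~ 0); the next
    # n_components eigenvectors give the embedding coordinates.
    return eigvecs[:, 1:n_components + 1]
```

For example, calling `lle` on points sampled from a 3D "roll" recovers a 2D embedding; in scikit-learn the equivalent is `LocallyLinearEmbedding(n_neighbors=10, n_components=2).fit_transform(X)`. Note that dense eigendecomposition as written scales cubically, which is why real implementations exploit the sparsity of M.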

LLE belongs to a broader family of manifold learning methods that emerged in the early 2000s alongside Isomap and Laplacian Eigenmaps, all motivated by the need to go beyond linear dimensionality reduction for tasks like visualization, feature extraction, and data exploration. It is particularly well-suited for datasets where the underlying structure is smooth and the neighborhood relationships are meaningful — such as images of faces under varying pose or lighting, or motion capture data. However, LLE can struggle with sparse data, noisy neighborhoods, or manifolds with holes, and it does not naturally generalize to new out-of-sample points without extensions.

Despite being largely superseded in practice by methods like t-SNE and UMAP for visualization tasks, LLE remains a foundational concept in geometric machine learning and continues to inform modern approaches to representation learning and graph-based dimensionality reduction.

Related

Manifold Learning
Nonlinear dimensionality reduction that uncovers low-dimensional structure hidden in high-dimensional data.
Generality: 792

Dimensionality Reduction
Transforming high-dimensional data into fewer dimensions while preserving essential structure.
Generality: 838

Embedding Space
A learned vector space where similar data points cluster geometrically close together.
Generality: 794

Latent Space
A compressed, learned representation where similar data points cluster geometrically.
Generality: 794

PCA (Principal Component Analysis)
Dimensionality reduction technique that projects data onto its highest-variance directions.
Generality: 871

Hyperspherical Representation Learning
Learning data representations constrained to a hypersphere to exploit its geometric properties.
Generality: 314