
Envisioning is an emerging technology research institute and advisory.


2011 — 2026


Equivariance

A function property where input transformations produce corresponding, predictable transformations in the output.

Year: 2016 · Generality: 694

Equivariance describes a property of a function or model whereby applying a transformation to the input produces a corresponding, well-defined transformation in the output. Formally, a function f is equivariant with respect to a transformation g if f(g·x) = g·f(x) — the function and the transformation commute. This stands in contrast to invariance, where the output remains entirely unchanged under input transformations. In machine learning, equivariance is most naturally illustrated in image processing: a translation-equivariant model that detects an edge in one region of an image will detect the same edge if it shifts to another region, with the detection shifting correspondingly rather than disappearing or requiring relearning.
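The commuting property f(g·x) = g·f(x) can be checked directly in code. The following is a minimal NumPy sketch (the signal, kernel, and function names are illustrative): a circular 1-D cross-correlation plays the role of f, and translation plays the role of g, so transforming the input first and transforming the output first give the same result.

```python
import numpy as np

def conv1d_circular(x, k):
    """Circular 1-D cross-correlation: out[i] = sum_j x[(i+j) % n] * k[j]."""
    n = len(x)
    return np.array([sum(x[(i + j) % n] * k[j] for j in range(len(k)))
                     for i in range(n)])

def shift(x, s):
    """Translate a signal by s positions (circularly)."""
    return np.roll(x, s)

x = np.array([0., 1., 3., 2., 0., 0., 0., 0.])  # signal with an "edge"
k = np.array([1., -1.])                          # edge-detecting kernel

# Equivariance check: f(g.x) == g.f(x) for a translation g
lhs = conv1d_circular(shift(x, 3), k)   # transform input, then apply f
rhs = shift(conv1d_circular(x, k), 3)   # apply f, then transform output
assert np.allclose(lhs, rhs)
```

The detected edge simply moves with the input, which is exactly the "detection shifting correspondingly rather than disappearing" behavior described above.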

The practical importance of equivariance in neural networks lies in its ability to encode known symmetries directly into model architecture, rather than learning them implicitly from data. Standard convolutional neural networks (CNNs) are translation-equivariant by construction — a core reason for their success in vision tasks. Extending this principle to other symmetry groups (rotations, reflections, permutations, or more abstract group actions) gave rise to the field of geometric deep learning. Group equivariant convolutional networks (G-CNNs), introduced around 2016, generalized the convolution operation to act over broader symmetry groups, dramatically improving sample efficiency and generalization on tasks with known geometric structure.
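The lifting step of a G-CNN can be sketched in a few lines, assuming NumPy and the group C4 of 90° rotations (this toy layer is illustrative, not a specific library's API). The layer produces one feature plane per rotated copy of the filter; rotating the input then rotates each plane while cyclically permuting the channels. A centered odd-size kernel and circular correlation make the identity hold exactly:

```python
import numpy as np

def corr2d_circular(x, k):
    """Circular 2-D cross-correlation with a centered odd-size square kernel."""
    n, m = x.shape[0], k.shape[0]
    c = m // 2
    out = np.zeros_like(x)
    for i in range(n):
        for j in range(n):
            out[i, j] = sum(x[(i + a - c) % n, (j + b - c) % n] * k[a, b]
                            for a in range(m) for b in range(m))
    return out

def lift(x, k):
    """G-CNN lifting layer over C4: one output plane per 90-degree filter rotation."""
    return [corr2d_circular(x, np.rot90(k, r)) for r in range(4)]

rng = np.random.default_rng(0)
x = rng.normal(size=(6, 6))   # toy input image
k = rng.normal(size=(3, 3))   # toy filter

# Rotating the input rotates each plane and cyclically permutes the channels:
# f_r(rot(x)) == rot(f_{r-1}(x))
y = lift(x, k)
y_rot = lift(np.rot90(x, 1), k)
for r in range(4):
    assert np.allclose(y_rot[r], np.rot90(y[(r - 1) % 4], 1))
```

Because the symmetry is handled structurally, the filter never needs to see rotated training examples to respond consistently to rotated inputs, which is the sample-efficiency gain described above.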

Equivariance has since become a foundational design principle across many domains. In molecular property prediction and drug discovery, equivariant graph neural networks respect the rotational and translational symmetries of 3D atomic structures, enabling physically consistent predictions without requiring exhaustive data augmentation. In climate modeling, robotics, and particle physics, similar principles allow models to respect the underlying symmetries of the problem domain. By baking symmetry constraints into architecture rather than learning them from scratch, equivariant models typically require less data, generalize more reliably, and produce outputs that are consistent with known physical or geometric laws.
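Permutation equivariance, the relevant symmetry for graph neural networks, can be made concrete with a toy message-passing layer in NumPy (the layer and all names are illustrative): relabeling the graph's nodes relabels the layer's output in exactly the same way, with no retraining.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d_in, d_out = 5, 3, 4
A = rng.integers(0, 2, size=(n, n)).astype(float)   # random adjacency matrix
A = np.maximum(A, A.T)                               # make the graph undirected
X = rng.normal(size=(n, d_in))                       # node features
W = rng.normal(size=(d_in, d_out))                   # weights shared across nodes

def gnn_layer(A, X, W):
    """One message-passing step: aggregate self + neighbors, then a shared linear map."""
    return np.tanh((A + np.eye(len(A))) @ X @ W)

# Permutation equivariance: f(P A P^T, P X) == P f(A, X)
P = np.eye(n)[rng.permutation(n)]                    # random permutation matrix
lhs = gnn_layer(P @ A @ P.T, P @ X, W)               # relabel graph, then apply f
rhs = P @ gnn_layer(A, X, W)                         # apply f, then relabel output
assert np.allclose(lhs, rhs)
```

The same shared-weight construction, extended with rotation and translation symmetries of 3-D coordinates, underlies the equivariant graph networks used in molecular property prediction.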

Related

Invariance

A model property where outputs remain unchanged under specified transformations of the input.

Generality: 792
Symmetry

Transformations that leave model predictions or data representations unchanged.

Generality: 720
Geometry-Informed Neural Networks

Neural networks that embed geometric structure as inductive bias for spatial data.

Generality: 337
Geometric Deep Learning

Deep learning extended to graphs, manifolds, and other non-Euclidean data structures.

Generality: 644
GCN (Graph Convolutional Networks)

Neural networks that apply convolution-like operations to learn from graph-structured data.

Generality: 694
Hierarchy of Generalizations

A layered framework where neural networks learn increasingly abstract data representations.

Generality: 695