A computing paradigm using high-dimensional random vectors to represent and process information robustly.
Hyperdimensional computing (HDC) is a computational framework that represents data as dense, high-dimensional binary or bipolar vectors — typically with 10,000 or more dimensions — called hypervectors. These hypervectors are generated randomly but exploit the mathematical properties of high-dimensional spaces, where random vectors are nearly orthogonal to one another with overwhelming probability. This near-orthogonality provides a natural mechanism for distinguishing between distinct concepts while still allowing meaningful similarity comparisons, forming the foundation for encoding, storing, and retrieving information in a noise-tolerant way.
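The near-orthogonality property is easy to check empirically. A minimal sketch in plain Python (the dimension, seed, and function names here are illustrative choices, not part of any standard API):

```python
import random

D = 10_000  # dimensionality; HDC typically uses ~10,000

def random_hypervector(rng: random.Random) -> list[int]:
    """Draw a random bipolar hypervector with entries in {-1, +1}."""
    return [rng.choice((-1, 1)) for _ in range(D)]

def cosine(a: list[int], b: list[int]) -> float:
    """Cosine similarity; for bipolar vectors both norms are sqrt(D)."""
    return sum(x * y for x, y in zip(a, b)) / D

rng = random.Random(0)
a = random_hypervector(rng)
b = random_hypervector(rng)

# Two independently drawn hypervectors are nearly orthogonal: their cosine
# similarity concentrates around 0 with standard deviation 1/sqrt(D) = 0.01.
print(f"{cosine(a, b):+.4f}")  # close to 0
print(f"{cosine(a, a):+.4f}")  # identical vectors: exactly +1
```

The concentration around zero tightens as the dimension grows, which is why hypervectors are made so large in the first place: with 10,000 dimensions, an observed similarity of even 0.1 is overwhelmingly unlikely to occur by chance.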
The core operations in HDC are simple and hardware-friendly: binding (element-wise multiplication or XOR) combines two hypervectors into a new one representing their association, bundling (element-wise addition or majority vote) creates a superposition of multiple hypervectors, and permutation (cyclic shifting) encodes sequential or positional information. These operations are closed over the hypervector space, meaning the results remain valid hypervectors. By composing these primitives, HDC systems can encode complex structured data — sequences, graphs, sets — into single fixed-width vectors that support fast associative lookup via similarity search against a stored item memory.
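The three primitives can be sketched in a few lines of plain Python; the role/filler names (`color`, `shape`, `red`, `circle`) and helper functions below are made up for illustration, not a standard interface:

```python
import random

D = 10_000
rng = random.Random(1)

def hv():            # random bipolar hypervector
    return [rng.choice((-1, 1)) for _ in range(D)]

def bind(a, b):      # element-wise multiplication: associates two hypervectors
    return [x * y for x, y in zip(a, b)]

def bundle(*vs):     # element-wise majority (sign of the sum): superposition
    return [1 if sum(col) >= 0 else -1 for col in zip(*vs)]

def permute(a, k=1): # cyclic shift: encodes position or order
    return a[-k:] + a[:-k]

def sim(a, b):       # normalized dot product in [-1, 1]
    return sum(x * y for x, y in zip(a, b)) / D

# Encode the record {color: red, shape: circle} as one fixed-width vector.
color, shape, red, circle = hv(), hv(), hv(), hv()
record = bundle(bind(color, red), bind(shape, circle))

# Binding with a role vector recovers a noisy copy of its filler, because
# bind is its own inverse for bipolar vectors: bind(x, x) is the all-ones vector.
probe = bind(record, color)
print(sim(probe, red))     # high: red is the filler bound to color
print(sim(probe, circle))  # near 0: circle is not

# Permutation makes the sequence "ab" distinct from "ba".
a_sym, b_sym = hv(), hv()
ab = bind(permute(a_sym), b_sym)
ba = bind(permute(b_sym), a_sym)
print(sim(ab, ba))         # near 0: order is encoded
```

Note that the recovered filler is only similar to `red`, not identical: bundling discards information, and the residue shows up as noise. Retrieval therefore works by comparing the noisy probe against a stored item memory of clean hypervectors and taking the nearest match.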
In machine learning contexts, HDC has attracted interest as a lightweight alternative to deep neural networks for classification and pattern recognition tasks, particularly in resource-constrained environments like edge devices and neuromorphic hardware. Because training often reduces to a single pass of vector accumulation rather than iterative gradient descent, HDC models can learn incrementally and adapt quickly to new classes without catastrophic forgetting. This makes them appealing for continual learning and few-shot learning scenarios where conventional deep learning struggles or is computationally prohibitive.
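A single-pass "accumulate and compare" classifier can be sketched as follows. The class encodings and noise model here are synthetic stand-ins for a real feature encoder, chosen only to make the example self-contained:

```python
import math
import random

D = 10_000
rng = random.Random(2)

def hv():
    """Random bipolar hypervector."""
    return [rng.choice((-1, 1)) for _ in range(D)]

def noisy(v, flip=0.2):
    """A corrupted copy of v: each entry is flipped with probability `flip`."""
    return [-x if rng.random() < flip else x for x in v]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# "Training" is one pass of accumulation: each class prototype is the
# element-wise sum of that class's encoded training examples.
classes = {name: hv() for name in ("A", "B", "C")}  # hypothetical ideal encodings
prototypes = {name: [0] * D for name in classes}
for name, ideal in classes.items():
    for _ in range(5):                              # five noisy examples per class
        for i, x in enumerate(noisy(ideal)):
            prototypes[name][i] += x

# Inference: compare the query against each prototype, pick the closest.
query = noisy(classes["B"])
pred = max(prototypes, key=lambda name: cosine(query, prototypes[name]))
print(pred)  # "B"
```

Adding a new class means allocating one more accumulator and summing its examples in; the existing prototypes are untouched, which is the mechanism behind the incremental-learning behavior described above.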
The framework draws theoretical inspiration from Pentti Kanerva's work on sparse distributed memory in the late 1980s and was formalized as a computing paradigm in Kanerva's 2009 article introducing hyperdimensional computing; interest in the ML community grew through the 2010s as researchers demonstrated competitive accuracy on real-world benchmarks with dramatically lower energy and latency costs. Today HDC is actively explored for biosignal classification, natural language processing, robotics, and brain-inspired AI architectures, positioning it as a bridge between cognitive science and practical machine learning.