AI that encodes knowledge implicitly through distributed representations rather than explicit symbols.
Subsymbolic AI refers to a broad class of artificial intelligence approaches that represent and process information not through explicit, human-readable symbols and logical rules, but through the collective behavior of many simple, interconnected computational units. Rather than encoding knowledge as discrete facts or if-then rules, subsymbolic systems distribute information across parameters — such as the weights of a neural network — in ways that are not directly interpretable as symbolic propositions. This stands in contrast to classical symbolic AI, which manipulates structured representations like predicates, ontologies, and formal grammars.
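The contrast can be made concrete with a minimal, hypothetical sketch: the same yes/no decision written once as an explicit symbolic rule and once as a weighted sum over numeric features. The feature names and weight values below are illustrative assumptions, not drawn from any particular system.

```python
# Symbolic style: the knowledge is stated directly as a
# human-readable if-then rule.
def is_bird_symbolic(has_feathers, lays_eggs):
    # The proposition "feathers AND eggs => bird" is explicit.
    return has_feathers and lays_eggs

# Subsymbolic style: the same decision emerges from numeric
# parameters (hypothetical "learned" values); no single number
# states the rule, and none is readable as a proposition.
weights = [2.0, 1.5]   # illustrative learned weights
bias = -2.5            # illustrative learned bias

def is_bird_subsymbolic(features):
    # A weighted sum followed by a threshold: in effect, a
    # one-neuron network.
    score = sum(w * x for w, x in zip(weights, features)) + bias
    return score > 0.0
```

Both functions classify the input `[1.0, 1.0]` (feathers and eggs present) the same way, but only the first can be inspected as a discrete fact; the second distributes the decision across its parameters.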
The primary mechanisms of subsymbolic AI include artificial neural networks, connectionist models, and evolutionary algorithms such as genetic algorithms. In neural networks, learning occurs by adjusting millions of numerical weights through optimization procedures like backpropagation and gradient descent. No single weight encodes a specific fact; instead, knowledge emerges from the statistical patterns captured across the entire parameter space. This distributed encoding gives subsymbolic systems remarkable robustness to noise and a strong ability to generalize from examples — properties that rule-based symbolic systems struggle to match.
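The weight-adjustment process described above can be sketched in a few lines. The example below is a deliberately minimal sketch, not a full multilayer network with backpropagation: a single logistic neuron learns the logical AND function by gradient descent, nudging every parameter slightly on each example. The learning rate and epoch count are illustrative choices.

```python
import math

# Training data for logical AND: (inputs, target output).
data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 0.0),
        ([1.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]

w = [0.0, 0.0]  # weights: adjusted gradually, never set by hand
b = 0.0         # bias term
lr = 0.5        # learning rate (illustrative value)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(x):
    # Weighted sum of inputs squashed to (0, 1).
    return sigmoid(w[0] * x[0] + w[1] * x[1] + b)

# Gradient descent: for each example, move each parameter a small
# step against the gradient of the cross-entropy loss.
for epoch in range(5000):
    for x, target in data:
        err = predict(x) - target   # gradient of loss w.r.t. the pre-activation
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b    -= lr * err
```

After training, `predict([1.0, 1.0])` is close to 1 and the other inputs score close to 0, yet the AND "rule" appears nowhere in the code as a symbol: it exists only implicitly in the final values of `w` and `b`, which is the distributed encoding the paragraph describes.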
Subsymbolic AI gained significant momentum in the mid-1980s, particularly following the popularization of the backpropagation algorithm for training multilayer perceptrons. Researchers in the connectionist movement, including those behind the influential Parallel Distributed Processing volumes, argued that cognition itself might be better modeled through distributed, subsymbolic processes rather than sequential symbol manipulation. This framing positioned subsymbolic AI as both a practical engineering approach and a theoretical alternative to the dominant symbolic paradigm of the time.
The practical importance of subsymbolic AI has grown enormously with the deep learning revolution. Modern applications — image recognition, speech synthesis, machine translation, and large language models — are almost entirely subsymbolic in nature, relying on learned distributed representations rather than hand-crafted rules. The tension between subsymbolic and symbolic approaches remains active in AI research, with hybrid neuro-symbolic systems attempting to combine the pattern-learning strengths of subsymbolic methods with the interpretability and reasoning capabilities of symbolic ones.