An AI paradigm that represents knowledge as explicit symbols manipulated through logical rules.
Symbolic AI is an approach to artificial intelligence in which knowledge about the world is encoded as human-readable symbols — words, predicates, logical statements — and reasoning is performed by applying explicit rules to manipulate those symbols. Rather than learning patterns from raw data, a symbolic system operates on structured representations: an expert system might encode thousands of if-then rules about medical diagnosis, while a theorem prover works through formal logical proofs step by step. The approach draws heavily from mathematical logic, philosophy of mind, and linguistics, and rests on the assumption that intelligent behavior can be fully captured by symbol manipulation alone — a claim Newell and Simon formalized as the physical symbol system hypothesis.
In practice, symbolic systems are built by knowledge engineers who interview domain experts and translate their expertise into rule bases, ontologies, or semantic networks. Inference engines then traverse these structures — using techniques like forward chaining, backward chaining, or resolution — to answer queries or reach conclusions. This makes the reasoning process highly transparent and auditable: every conclusion can be traced back through the chain of rules that produced it. Systems like MYCIN (medical diagnosis) and DENDRAL (chemical structure inference) demonstrated that symbolic methods could match or exceed human expert performance in narrow, well-defined domains during the 1970s and 1980s.
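The forward-chaining inference described above can be sketched in a few lines. This is a minimal illustration, not the implementation used by any particular system: the facts, rules, and predicate names below are hypothetical stand-ins for what a knowledge engineer would elicit from a domain expert.

```python
# Facts are symbols; each rule maps a set of premise symbols to a conclusion.
# These toy medical rules are illustrative only, not real diagnostic knowledge.
rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),
    ({"suspect_measles", "unvaccinated"}, "recommend_isolation"),
]

def forward_chain(facts: set) -> set:
    """Repeatedly fire any rule whose premises are all known,
    adding its conclusion, until no new fact can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

# From two observed symptoms and one patient attribute, the engine
# derives a diagnosis and then a recommendation built on that diagnosis.
print(forward_chain({"has_fever", "has_rash", "unvaccinated"}))
```

Backward chaining runs the same rules in reverse, starting from a goal such as `recommend_isolation` and working back to the facts that would support it. Either way, the derivation is fully auditable: each conclusion names exactly the rule and premises that produced it.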
Symbolic AI fell out of favor as researchers encountered the brittleness problem: rule-based systems struggle with ambiguity, incomplete information, and domains where knowledge is difficult to formalize. The rise of statistical machine learning in the 1990s and deep learning in the 2010s shifted the field toward data-driven approaches that learn representations automatically. Nevertheless, symbolic methods remain relevant — they underpin knowledge graphs, formal verification, and planning systems used in robotics and logistics.
Today there is renewed interest in neurosymbolic AI, which seeks to combine the pattern-recognition strengths of neural networks with the structured reasoning and interpretability of symbolic systems. This hybrid direction aims to address the weaknesses of both paradigms: neural networks lack explicit reasoning and struggle with systematic generalization, while pure symbolic systems cannot handle perceptual data or uncertainty gracefully. Symbolic AI thus remains a live research area rather than a historical artifact.