A resource-bounded reasoning framework that performs adaptive, defeasible inference under uncertainty.
NARS is a formal reasoning framework built on Non-Axiomatic Logic (NAL) that models intelligent behavior when both knowledge and computational resources are inherently limited. Developed by Pei Wang, it operationalizes what he calls the Assumption of Insufficient Knowledge and Resources (AIKR) — the premise that any realistic intelligent system must act and learn without complete information and within finite time and memory budgets. Rather than treating these constraints as engineering inconveniences, NARS treats them as foundational design principles, producing a system that reasons adaptively and incrementally in open-world environments.
At the core of NARS is a distinctive truth representation: instead of single probabilities, beliefs are encoded as pairs of (frequency, confidence) derived from accumulated evidence. This allows the system to distinguish between a belief that is uncertain because little evidence exists and one that is uncertain because the evidence is mixed. Inference proceeds through a rich set of local rules — deduction, induction, abduction, analogy, and revision — that transform these truth-value pairs in ways that are resilient to inconsistency and incompleteness. Crucially, NARS also incorporates mechanisms for attention allocation, memory budgeting, and controlled forgetting, enabling it to prioritize reasoning tasks dynamically under real-time constraints.
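The evidence-based truth representation can be sketched in a few lines of Python. The formulas below follow the published NAL truth functions, assuming the common evidential horizon k = 1; the class and function names are illustrative choices, not any official implementation:

```python
from dataclasses import dataclass

K = 1.0  # evidential horizon; k = 1 is a common default, chosen here for illustration

@dataclass(frozen=True)
class Truth:
    frequency: float   # w+ / w : proportion of positive evidence
    confidence: float  # w / (w + K) : how strongly the total evidence backs the belief

def from_evidence(positive: float, total: float) -> Truth:
    """Derive a (frequency, confidence) pair from raw evidence counts."""
    return Truth(positive / total, total / (total + K))

def revision(a: Truth, b: Truth) -> Truth:
    """Pool two beliefs about the same statement by summing their evidence."""
    # Recover each belief's total evidence from its confidence: w = K * c / (1 - c)
    wa = K * a.confidence / (1 - a.confidence)
    wb = K * b.confidence / (1 - b.confidence)
    return from_evidence(a.frequency * wa + b.frequency * wb, wa + wb)

def deduction(a: Truth, b: Truth) -> Truth:
    """NAL-style deduction, chaining 'S -> M' and 'M -> P' into 'S -> P'."""
    f = a.frequency * b.frequency
    return Truth(f, f * a.confidence * b.confidence)
```

Note how the representation keeps the two kinds of uncertainty apart: `from_evidence(1, 1)` yields frequency 1.0 but confidence only 0.5 (scarce evidence), while `from_evidence(5, 10)` yields frequency 0.5 with confidence about 0.91 (plentiful but mixed evidence) — a distinction a single probability cannot express.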
NARS differs meaningfully from both classical symbolic AI and standard probabilistic approaches. Unlike logic systems that assume a consistent, complete knowledge base, NARS embraces contradiction and partial knowledge as normal operating conditions. Unlike Bayesian systems, it does not require a global probability distribution or closed-world assumptions. This makes it particularly well-suited to continual learning scenarios, autonomous agents, and cognitive architectures where the environment changes and new information must be integrated without restarting inference from scratch.
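The attention-allocation and forgetting mechanisms mentioned above can likewise be sketched as a bounded, priority-weighted store, loosely modeled on the "bag" structure used in OpenNARS. Everything below (class name, capacity handling, decay rate) is a simplified illustration under those assumptions, not the actual OpenNARS data structure:

```python
import random

class Bag:
    """Sketch of NARS-style attention allocation: a bounded store that
    samples items with probability proportional to priority and evicts
    the lowest-priority item when full (controlled forgetting)."""

    def __init__(self, capacity: int, decay: float = 0.9, rng=None):
        self.capacity = capacity
        self.decay = decay        # multiplicative priority decay per selection cycle
        self.items = {}           # item -> priority in (0, 1]
        self.rng = rng or random.Random(0)

    def put(self, item, priority: float) -> None:
        self.items[item] = priority
        if len(self.items) > self.capacity:
            # Memory budget exceeded: forget the least urgent item.
            victim = min(self.items, key=self.items.get)
            del self.items[victim]

    def take(self):
        """Sample one item, biased toward high priority; decay the rest."""
        if not self.items:
            return None
        r = self.rng.random() * sum(self.items.values())
        for item, p in self.items.items():
            r -= p
            if r <= 0:
                chosen = item
                break
        for other in self.items:
            if other != chosen:
                self.items[other] *= self.decay
        return chosen
```

Because selection is probabilistic rather than a strict priority queue, low-priority tasks still occasionally get processed, while the decay step gradually shifts attention away from items that keep losing the draw — a rough analogue of reasoning under real-time resource pressure.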
The system gained broader attention in the AI community during the 2000s and 2010s as interest grew in resource-bounded cognition, lifelong learning, and alternatives to deep learning for structured reasoning. The OpenNARS project and related implementations have extended the framework into robotics, natural language processing, and cognitive modeling, positioning NARS as a reference architecture for researchers exploring general, adaptive machine intelligence beyond pattern recognition.