AI systems designed to excel at one specific task or domain.
Narrow AI, also called weak AI, refers to artificial intelligence systems built and optimized to perform a specific task or a tightly bounded set of tasks. Unlike the theoretical concept of artificial general intelligence (AGI), which would match or exceed human cognitive flexibility across any domain, narrow AI systems are purpose-built: they excel within their designated scope but cannot transfer that competence to unrelated problems. Every commercially deployed AI system today — from spam filters to medical image classifiers — falls into this category.
These systems work by training on large, domain-specific datasets using techniques such as supervised or reinforcement learning, typically implemented with deep neural networks. The model learns statistical patterns relevant to its target task and encodes them in its parameters. A language translation model, for instance, learns mappings between linguistic structures across languages but has no inherent understanding of chess strategy; a chess engine, likewise, knows nothing of translation. This specialization is both a strength and a limitation: narrow AI can achieve superhuman performance on its target task while remaining completely blind to anything outside its training distribution.
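The idea that a narrow system is nothing more than statistical patterns encoded in parameters can be made concrete with a toy example. The sketch below trains a naive Bayes spam classifier on a tiny hypothetical dataset (the example sentences and labels are invented for illustration, not drawn from any real system): the word counts it accumulates are its entire "knowledge," useful only for the one task they were gathered for.

```python
import math
from collections import Counter

def train(labeled_docs):
    """Count word occurrences per class; these counts are the model's parameters."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in labeled_docs:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the class with the highest log prior plus smoothed word log-likelihoods."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    best_label, best_score = None, float("-inf")
    for label in counts:
        score = math.log(totals[label] / sum(totals.values()))  # class prior
        n = sum(counts[label].values())
        for w in text.lower().split():
            # Laplace smoothing so unseen words don't zero out the score
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training data for the single task this model will ever handle
data = [
    ("win free money now", "spam"),
    ("claim your free prize", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the team", "ham"),
]
model = train(data)
print(classify("free money prize", *model))    # classified as spam
print(classify("team meeting monday", *model)) # classified as ham
```

Asked anything outside its training distribution — a chess position, a medical image — this model can only force the input through the same spam/ham word statistics, which is exactly the blindness the paragraph above describes.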
The practical significance of narrow AI is enormous. It underpins modern recommendation engines, autonomous vehicle perception systems, voice assistants, protein structure prediction tools, and fraud detection pipelines. The rapid improvement of narrow AI capabilities — particularly following the deep learning breakthroughs of the early 2010s — has driven most of the economic and scientific value attributed to AI in recent decades. On landmark tasks such as ImageNet classification, the game of Go, and protein structure prediction, narrow AI systems have each surpassed human expert performance.
Understanding the narrow/general distinction matters for setting realistic expectations about AI capabilities and risks. A system that outperforms radiologists at detecting tumors in chest X-rays is not thereby capable of diagnosing a patient's symptoms in conversation, writing a treatment plan, or reasoning about ethics. Recognizing these boundaries helps practitioners deploy AI responsibly, avoid over-reliance on systems outside their competence envelope, and frame ongoing research toward the harder, unsolved problem of generalization.