A hypothetical AI system capable of performing any intellectual task a human can.
Artificial General Intelligence (AGI) refers to a class of AI systems that can understand, learn, and apply knowledge across virtually any intellectual domain at a level matching or exceeding human capability. Unlike today's narrow AI systems — which excel at specific tasks like image classification, language translation, or game-playing but fail outside their training distribution — AGI would generalize fluidly across domains, transfer knowledge between contexts, and tackle novel problems without task-specific programming. It remains a central long-term goal of AI research, though no system has achieved it to date.
The challenge of building AGI is not merely one of scale or compute. It requires solving deep problems in areas like causal reasoning, common-sense understanding, continual learning, and goal-directed behavior under uncertainty. Current large language models and multimodal systems have demonstrated surprising breadth, leading some researchers to argue that AGI may be closer than previously thought, while others maintain that today's systems are sophisticated pattern matchers that lack the grounded understanding true AGI would require. This debate has intensified as frontier models like GPT-4 and Gemini exhibit emergent capabilities that blur the line between narrow and general intelligence.
The concept has significant implications for how AI systems are designed, evaluated, and governed. Researchers use AGI as a conceptual benchmark when assessing whether a system can generalize, reason abstractly, or operate autonomously across open-ended environments. Evaluation frameworks like the ARC benchmark (Abstraction and Reasoning Corpus) and proposals for "general" capability testing reflect ongoing efforts to measure progress toward AGI-like behavior rigorously.
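To make the evaluation idea concrete, the sketch below mimics the structure of an ARC-style task: a solver sees a few input/output grid pairs, must infer the underlying transformation, and is then scored on an unseen test input. This is a deliberately tiny illustration, not the real ARC harness; the rule names, functions, and task format are hypothetical.

```python
# Toy ARC-style few-shot task: infer a grid transformation from demo pairs,
# then apply it to a held-out test input. Names and rules are illustrative.

CANDIDATE_RULES = {
    "identity":  lambda g: g,
    "transpose": lambda g: [list(row) for row in zip(*g)],
    "flip_rows": lambda g: g[::-1],            # reverse row order
    "flip_cols": lambda g: [row[::-1] for row in g],
}

def infer_rule(demos):
    """Return the name of the first candidate rule consistent with every demo pair."""
    for name, rule in CANDIDATE_RULES.items():
        if all(rule(inp) == out for inp, out in demos):
            return name
    return None  # no candidate explains the demonstrations

def solve(demos, test_input):
    """Apply the inferred rule to the test input, or return None if inference fails."""
    name = infer_rule(demos)
    return CANDIDATE_RULES[name](test_input) if name else None

# Two demonstrations of the same hidden transformation (rows flipped):
demos = [
    ([[1, 2], [3, 4]], [[3, 4], [1, 2]]),
    ([[5, 6], [7, 8]], [[7, 8], [5, 6]]),
]
print(solve(demos, [[0, 1], [2, 3]]))  # -> [[2, 3], [0, 1]]
```

The point of the exercise is that a narrow system hard-coded for one rule fails on the next task, whereas a general solver must infer each task's rule from a handful of examples — the kind of fluid generalization AGI-oriented benchmarks try to probe.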
AGI also carries profound ethical and societal stakes. Questions about alignment — ensuring that a general-purpose AI system pursues goals consistent with human values — become dramatically more urgent as systems approach general capability. Organizations like OpenAI, DeepMind, and Anthropic have explicitly framed their missions around the safe development of AGI, making it one of the most consequential and contested concepts in contemporary AI discourse.