An autonomous system that perceives its environment and acts to achieve goals.
In artificial intelligence, an agent is any system that perceives its environment through inputs—whether physical sensors, data streams, or simulated observations—and takes actions intended to achieve a specified goal or maximize some measure of performance. This perception-action loop is the defining characteristic of an agent, distinguishing it from passive systems that merely process data without influencing the world. Agents can range from simple reflex systems that respond directly to current inputs, to goal-based and utility-maximizing agents that plan sequences of actions, to fully autonomous learning agents that improve their behavior over time through experience.
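The simplest of these, a reflex agent, can be sketched in a few lines. The following is an illustrative example (a hypothetical thermostat, not drawn from any particular system): the agent maps its current percept directly to an action, with no memory, model, or planning.

```python
def reflex_thermostat(temperature, setpoint=20.0, band=0.5):
    """A simple reflex agent: map the current percept (temperature)
    directly to an action, with no internal state or lookahead."""
    if temperature < setpoint - band:
        return "heat"
    if temperature > setpoint + band:
        return "cool"
    return "off"

# The perception-action loop: sense, act, repeat.
for reading in [18.0, 19.8, 22.5]:
    action = reflex_thermostat(reading)
```

Goal-based and learning agents replace this fixed condition-action mapping with search, planning, or a learned policy, but the outer perceive-act loop is the same.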
The theoretical backbone of agent design draws heavily from decision theory, control systems, and reinforcement learning. In reinforcement learning specifically, an agent interacts with an environment modeled as a Markov Decision Process, receiving rewards or penalties that guide it toward optimal behavior. The agent's policy—a mapping from perceived states to actions—is refined through trial and error, enabling it to handle complex, high-dimensional environments without explicit programming of every contingency. Modern deep reinforcement learning agents, such as those developed by DeepMind for playing Atari games or mastering Go, combine neural network function approximation with this framework to achieve superhuman performance on challenging tasks.
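The agent-environment loop described above can be made concrete with tabular Q-learning, one standard trial-and-error method. This is a minimal sketch on an assumed toy MDP (a five-state chain where reaching the rightmost state yields reward +1); the environment, hyperparameters, and names are illustrative, not from the text.

```python
import random

N_STATES = 5          # states 0..4; state 4 is terminal
ACTIONS = [-1, +1]    # move left or move right

def step(state, action):
    """Toy environment dynamics: a deterministic chain walk."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if next_state == N_STATES - 1 else 0.0
    done = next_state == N_STATES - 1
    return next_state, reward, done

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Refine a state-action value table by trial and error."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy policy: mostly exploit, sometimes explore.
            if rng.random() < epsilon:
                action = rng.choice(ACTIONS)
            else:  # greedy w.r.t. q, breaking ties randomly
                action = max(ACTIONS, key=lambda a: (q[(state, a)], rng.random()))
            next_state, reward, done = step(state, action)
            best_next = 0.0 if done else max(q[(next_state, a)] for a in ACTIONS)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q = train()
# The learned policy: a mapping from each non-terminal state to an action.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

After training, the greedy policy moves right in every state, even though no behavior was explicitly programmed: the reward signal alone shaped the value table. Deep RL agents follow the same scheme but replace the table with a neural network.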
The concept of agents is foundational across nearly every subfield of AI. In robotics, agents must navigate physical uncertainty and real-time constraints. In natural language processing, conversational agents must manage dialogue state and user intent. Multi-agent systems—where multiple agents interact, compete, or cooperate—introduce additional complexity around coordination, communication, and emergent behavior, with applications in autonomous vehicles, financial markets, and distributed computing. As AI systems become more capable and are deployed in higher-stakes settings, the design of agents that are safe, aligned with human values, and robust to distributional shift has become one of the central challenges in the field.
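The interaction dynamics of multi-agent systems can be illustrated with a classic toy example, the iterated prisoner's dilemma, where each agent conditions its action on the other's observed behavior. The payoff values and strategy names below are conventional illustrations, not from the text.

```python
# Payoff for (my_move, their_move); "C" = cooperate, "D" = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(agent_a, agent_b, rounds=10):
    """Run the repeated game; each agent perceives the other's past moves."""
    moves_a, moves_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a = agent_a(moves_b)
        b = agent_b(moves_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b
```

Two tit-for-tat agents sustain mutual cooperation, while a defector gains only a short-lived advantage before being punished, a small instance of the emergent behavior that makes multi-agent coordination hard to predict from any single agent's design.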