A goal or desired outcome that guides an intelligent agent's planning and actions.
In artificial intelligence, intention refers to the internal representation of a goal or desired future state that drives an agent's decision-making and behavior. Unlike simple reactive systems that respond only to immediate stimuli, intention-based agents maintain persistent commitments to objectives, allowing them to plan multi-step action sequences, allocate resources, and coordinate behavior over time. This concept is central to the Belief-Desire-Intention (BDI) model of agency, where beliefs represent the agent's knowledge of the world, desires represent possible goals, and intentions represent the goals the agent has committed to pursuing. The BDI framework, formalized in the late 1980s and 1990s, became one of the most influential architectures for building rational, goal-directed software agents.
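The separation of beliefs, desires, and intentions can be sketched in a few lines of code. This is a minimal illustrative sketch, not the API of any real BDI framework (such as PRS or Jason); all names here are hypothetical. Desires are modeled as (goal, precondition) pairs, and deliberation promotes a desire to an intention only when its precondition holds under the agent's current beliefs.

```python
from dataclasses import dataclass, field

@dataclass
class BDIAgent:
    beliefs: dict = field(default_factory=dict)     # knowledge of the world
    desires: list = field(default_factory=list)     # candidate (goal, precondition) pairs
    intentions: list = field(default_factory=list)  # goals committed to

    def deliberate(self):
        # Commit to desires whose preconditions hold under current beliefs.
        for goal, precondition in self.desires:
            if precondition(self.beliefs) and goal not in self.intentions:
                self.intentions.append(goal)

# Toy example: a robot with a charged battery commits to patrolling,
# not recharging.
agent = BDIAgent(beliefs={"battery": 80})
agent.desires = [
    ("recharge", lambda b: b["battery"] < 20),
    ("patrol",   lambda b: b["battery"] >= 20),
]
agent.deliberate()
# agent.intentions == ["patrol"]
```

The key point the sketch captures is that intentions are a filtered, committed subset of desires: once adopted, they persist across deliberation cycles rather than being recomputed from scratch on every stimulus.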
Implementing intention in AI systems requires mechanisms for goal prioritization, conflict resolution, and plan revision. An agent must not only select which intentions to pursue but also recognize when circumstances have changed enough to warrant abandoning or revising a committed plan. This balance between commitment and flexibility is a core challenge: an agent that abandons intentions too readily wastes planning effort, while one that persists rigidly may fail to adapt to a changing environment. Planning algorithms, hierarchical task networks, and reinforcement learning approaches all offer different ways to operationalize this trade-off in practice.
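The commitment-versus-flexibility trade-off can be made concrete with a small execution loop: the agent persists with its current plan, reconsidering only when the environment invalidates it. This is an illustrative sketch under assumed toy semantics (a one-dimensional world with a blocked cell); the function names and the domain are hypothetical, not drawn from any particular planner.

```python
def run_agent(world, plan, still_valid, replan, max_steps=10):
    """Pursue a committed plan, replanning only when it becomes invalid."""
    trace = []
    for _ in range(max_steps):
        if not plan:
            break
        if not still_valid(world, plan):
            plan = replan(world)  # revise the intention rather than persist blindly
        step = plan.pop(0)
        world = step(world)       # each action transforms the world state
        trace.append(step.__name__)
    return world, trace

# Toy domain: reach position 3; a blockage at cell 2 forces a detour.
def move_right(w): return {**w, "pos": w["pos"] + 1}
def detour(w):     return {**w, "pos": w["pos"] + 1, "blocked": False}

def still_valid(w, plan):
    # The straight-line plan fails if the next cell is the blocked one.
    return not (w.get("blocked") and w["pos"] + 1 == 2)

def replan(w):
    return [detour, move_right]

final, trace = run_agent({"pos": 0, "blocked": True},
                         [move_right, move_right, move_right],
                         still_valid, replan)
# final["pos"] == 3; trace == ["move_right", "detour", "move_right"]
```

The `still_valid` check is where the trade-off lives: a cheap, permissive check keeps the agent committed (risking rigidity), while an expensive or strict check triggers frequent replanning (wasting effort). Real systems tune or learn this reconsideration policy.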
The concept of intention has grown increasingly relevant as AI systems become more autonomous and are deployed in complex, dynamic environments such as robotics, autonomous vehicles, and multi-agent systems. In modern deep learning contexts, intention modeling appears in areas like imitation learning and inverse reinforcement learning, where systems attempt to infer the goals behind observed behavior. It also plays a role in human-robot interaction, where robots must model human intentions to collaborate safely and effectively. As AI agents take on longer-horizon tasks and greater autonomy, robust representations of intention become essential for building systems that are both capable and predictable.