Intelligence emerges from an agent's dynamic interaction with its physical and social environment.
The situated approach in AI holds that intelligent behavior cannot be fully understood or replicated in isolation from the environment in which it occurs. Rather than treating cognition as pure symbol manipulation happening inside a self-contained system, this perspective argues that perception, action, and context are inseparable components of intelligence. An agent's surroundings are not merely inputs to be processed — they actively shape and constrain the agent's behavior in real time, making the environment a constitutive part of cognition itself.
In practice, the situated approach has most visibly influenced robotics and autonomous systems. Rodney Brooks's subsumption architecture, for example, abandoned centralized world models in favor of layered reactive behaviors that respond directly to sensory input. Similarly, Lucy Suchman's work on plans and situated actions challenged the assumption that human activity follows pre-specified scripts, demonstrating instead that people improvise intelligently in response to unfolding circumstances. These ideas pushed AI researchers to design systems that sense and act in tight feedback loops with their environments rather than planning exhaustively before acting.
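
The layered-reactive idea behind subsumption can be illustrated with a toy controller. The sketch below is a simplified rendering of the architecture as priority arbitration, not Brooks's actual implementation: the behavior names and the sensor dictionary are hypothetical, and a real subsumption system runs its layers concurrently with explicit suppression wires rather than a sequential loop.

```python
# Toy sketch of subsumption-style layered control (hypothetical behaviors).
# Each layer maps raw sensor readings directly to an action, with no
# central world model; a higher-priority layer overrides lower ones.

def avoid(sensors):
    """Safety reflex: turn away when an obstacle is close."""
    if sensors["obstacle_distance"] < 0.5:
        return "turn_left"
    return None  # no opinion; defer to lower layers

def wander(sensors):
    """Default exploratory behavior: keep moving."""
    return "move_forward"

LAYERS = [avoid, wander]  # earlier entries take priority

def control_step(sensors):
    # One tick of the sense-act loop: the first layer with an
    # opinion determines the action for this time step.
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action
```

Because each layer reads sensors directly every tick, behavior tracks the environment in real time: move an obstacle in front of the robot and the reflex fires immediately, with no plan to revise.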
The approach also connects to broader frameworks such as embodied cognition and ecological psychology, which argue that the body and its sensorimotor engagement with the world are prerequisites for genuine understanding. This has implications well beyond robotics: situated thinking informs how researchers design conversational agents, social robots, and reinforcement learning systems that must adapt to dynamic, partially observable environments. It raises fundamental questions about whether systems trained purely on static datasets can ever fully capture the adaptive, context-sensitive character of natural intelligence.
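
What "adapting to a partially observable environment" means can be made concrete with a minimal sense-act loop. The sketch below is a toy example under stated assumptions: a hypothetical two-state world ("hot"/"cold") observed through a sensor that reports correctly 80% of the time. The agent maintains a belief via a Bayes update and acts on it, rather than executing a fixed plan.

```python
# Toy sense-act loop under partial observability (hypothetical setup).
# The hidden state is "hot" or "cold"; the sensor is right with
# probability p_correct, so the agent tracks a belief instead of
# trusting any single reading.

def update_belief(belief_hot, obs, p_correct=0.8):
    """Bayes update of P(state = 'hot') after a noisy reading."""
    like_hot = p_correct if obs == "hot" else 1 - p_correct
    like_cold = (1 - p_correct) if obs == "hot" else p_correct
    unnorm_hot = like_hot * belief_hot
    unnorm_cold = like_cold * (1 - belief_hot)
    return unnorm_hot / (unnorm_hot + unnorm_cold)

def act(belief_hot):
    """Choose the action matching the more probable hidden state."""
    return "cool" if belief_hot > 0.5 else "heat"

belief = 0.5  # uninformative prior
for obs in ["hot", "hot", "cold"]:  # stream of noisy sensor readings
    belief = update_belief(belief, obs)
action = act(belief)
```

Even this toy loop shows the situated point: the agent's behavior is a running function of its ongoing interaction with the environment, and a contradictory observation shifts the belief rather than breaking a pre-specified script.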
The situated approach matters because it reframes the benchmark for AI success. Instead of asking whether a system can solve abstract problems in controlled conditions, it asks whether a system can behave appropriately and flexibly in the messy, unpredictable contexts where intelligence actually needs to function. This shift in perspective has helped motivate advances in sim-to-real transfer, embodied AI research, and the growing emphasis on agents that learn through interaction rather than passive observation.