AI finds abstract reasoning easy but struggles with basic human sensorimotor skills.
Moravec's Paradox is the counterintuitive observation that tasks humans consider intellectually demanding — chess, calculus, logical deduction — are relatively straightforward to implement in software, while tasks that feel effortless to any toddler — recognizing a face, picking up an object, navigating a room — are extraordinarily difficult for machines. First articulated by roboticist Hans Moravec in the 1980s, and echoed by Marvin Minsky and Rodney Brooks, the paradox reframed what AI researchers should consider "hard" and fundamentally reshaped priorities in the field.
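The "straightforward in software" half of the paradox can be made concrete. As an illustrative sketch (tic-tac-toe is my stand-in here, not an example from the original text), a provably optimal player for a full game of strategy fits in about thirty lines of exhaustive minimax search — while no comparably short program can recognize a face:

```python
# Illustrative sketch of the "easy" half of Moravec's Paradox:
# perfect play at tic-tac-toe via brute-force minimax.
# (Tic-tac-toe stands in for the "intellectually demanding" games the
# paradox mentions; chess needs the same idea plus heuristics and pruning.)

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) for `player`: +1 forced win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        # The previous player just won, so it is a loss for `player`.
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # board full: draw
    opponent = 'O' if player == 'X' else 'X'
    best = (-2, None)
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, opponent)
        if -score > best[0]:          # opponent's score, negated for us
            best = (-score, m)
    return best

# With perfect play from the empty board, tic-tac-toe is a draw:
score, move = minimax(' ' * 9, 'X')
print(score)  # 0 — neither side can force a win
```

The search visits the entire game tree (a few hundred thousand positions) with no domain knowledge beyond the rules, which is exactly the point: the "hard" intellectual task reduces to a small, inspectable procedure, whereas the toddler-easy perceptual tasks admit no such compact program.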
The explanation lies in evolutionary history. Abstract reasoning is a recent cognitive development, and humans perform it consciously and deliberately — meaning the underlying computational structure is relatively shallow and inspectable. Sensorimotor skills, by contrast, are the product of hundreds of millions of years of biological refinement, encoded in massively parallel, low-level neural architecture that operates entirely below conscious awareness. Replicating that depth of optimization in silicon requires enormous computational resources and sophisticated algorithms that took decades to develop.
For machine learning, the paradox proved prescient. Early AI systems excelled at symbolic reasoning and game-playing but failed catastrophically at perception and motor control. It was only with the rise of deep learning — particularly convolutional neural networks for vision and reinforcement learning for control — that machines began making meaningful progress on sensorimotor tasks. Even so, robotic manipulation and real-world navigation remain active research challenges, while language models now surpass human performance on many abstract reasoning benchmarks.
Moravec's Paradox continues to inform how researchers allocate effort and set expectations. It cautions against equating benchmark performance on structured tasks with general intelligence, and it explains why embodied AI and robotics remain harder problems than they superficially appear. The paradox also has philosophical implications: it suggests that the most distinctly "human" capabilities are not our highest reasoning faculties, but the ancient, unconscious competencies we share with much simpler animals.