
Envisioning is an emerging technology research institute and advisory.




Moravec's Paradox

AI finds abstract reasoning easy but struggles with basic human sensorimotor skills.

Year: 1988 · Generality: 678

Moravec's Paradox is the counterintuitive observation that tasks humans consider intellectually demanding — chess, calculus, logical deduction — are relatively straightforward to implement in software, while tasks that feel effortless to any toddler — recognizing a face, picking up an object, navigating a room — are extraordinarily difficult for machines. First articulated by roboticist Hans Moravec in the 1980s, and echoed by Marvin Minsky and Rodney Brooks, the paradox reframed what AI researchers should consider "hard" and fundamentally reshaped priorities in the field.
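The "easy" side of the asymmetry is simple to demonstrate: a complete, perfect-play game solver for a classic "intellectual" task fits in a few dozen lines of plain Python, while no comparably short program can pick a piece up off a physical board. A minimal sketch using exhaustive negamax search over tic-tac-toe:

```python
# Perfect play at a game via exhaustive search: the kind of "hard" abstract
# task that turns out to be computationally shallow and easy to write down.
from functools import lru_cache

WIN_LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WIN_LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Best (score, move) for `player` to move: +1 win, 0 draw, -1 loss."""
    if winner(board):            # previous player just won
        return -1, None
    moves = [i for i, cell in enumerate(board) if cell == " "]
    if not moves:
        return 0, None           # board full: draw
    opponent = "O" if player == "X" else "X"
    best = (-2, None)
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        score, _ = minimax(child, opponent)
        score = -score           # opponent's gain is our loss
        if score > best[0]:
            best = (score, m)
    return best

score, move = minimax(" " * 9, "X")  # perfect play from an empty board
```

The search visits every reachable position, yet the whole "reasoning" problem is dispatched by a short recursive function; the sensorimotor act of physically placing the mark has no such compact description.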

The explanation lies in evolutionary history. Abstract reasoning is a recent cognitive development, and humans perform it consciously and deliberately — meaning the underlying computational structure is relatively shallow and inspectable. Sensorimotor skills, by contrast, are the product of hundreds of millions of years of biological refinement, encoded in massively parallel, low-level neural architecture that operates entirely below conscious awareness. Replicating that depth of optimization in silicon requires enormous computational resources and sophisticated algorithms that took decades to develop.

For machine learning, the paradox proved prescient. Early AI systems excelled at symbolic reasoning and game-playing but failed catastrophically at perception and motor control. It was only with the rise of deep learning — particularly convolutional neural networks for vision and reinforcement learning for control — that machines began making meaningful progress on sensorimotor tasks. Even so, robotic manipulation and real-world navigation remain active research challenges, while language models now surpass human performance on many abstract reasoning benchmarks.

Moravec's Paradox continues to inform how researchers allocate effort and set expectations. It cautions against equating benchmark performance on structured tasks with general intelligence, and it explains why embodied AI and robotics remain harder problems than they superficially appear. The paradox also has philosophical implications: it suggests that the most distinctly "human" capabilities are not our highest reasoning faculties, but the ancient, unconscious competencies we share with much simpler animals.

Related

AI Effect

Achieved AI tasks are dismissed as 'not real intelligence,' perpetually moving the goalposts.

Generality: 520
Motor Learning

How AI and robotic systems acquire and refine physical motor skills through experience.

Generality: 608
Embodied AI

AI systems that perceive and act in the physical world through a body.

Generality: 694
Lump of Task Fallacy

The mistaken belief that AI can fully replicate any task human intelligence performs.

Generality: 293
Situated Approach

Intelligence emerges from an agent's dynamic interaction with its physical and social environment.

Generality: 658
Embodied Intelligence

Intelligence arising from an agent's physical interaction with its environment.

Generality: 694