Asimov's fictional ethical rules governing robot behavior and human safety.
The Three Laws of Robotics are a set of ethical principles introduced by science fiction author Isaac Asimov in his 1942 short story "Runaround." The laws establish a strict hierarchy of obligations for intelligent robots: first, a robot may not injure a human or allow one to come to harm through inaction; second, a robot must obey human orders unless doing so conflicts with the first law; third, a robot must protect its own existence unless doing so conflicts with the first or second laws. This prioritized structure was designed to create a logical, conflict-resolving framework for machine behavior in human environments.
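The prioritized structure described above can be modeled in code as a lexicographic preference over candidate actions: First Law constraints are hard filters, and lower-priority laws only break ties among actions the higher laws permit. The following Python sketch is purely illustrative (all class and function names are hypothetical, not from any real system):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    name: str
    harms_human: bool      # would this injure a human, or permit harm by inaction?
    violates_order: bool   # would this disobey a human order?
    endangers_self: bool   # would this risk the robot's own existence?

def choose_action(candidates: List[Action]) -> Optional[Action]:
    """Pick an action under a strict Three-Laws-style priority ordering.

    The First Law acts as a hard constraint; among the surviving actions,
    obedience (Second Law) is preferred over self-preservation (Third Law).
    """
    # First Law: discard any action that harms a human.
    safe = [a for a in candidates if not a.harms_human]
    if not safe:
        return None  # no permissible action exists
    # Lexicographic tie-break: obey orders first, then protect self.
    # False sorts before True, so min() prefers obedient, self-safe actions.
    return min(safe, key=lambda a: (a.violates_order, a.endangers_self))
```

For example, given a choice between disobeying an order safely and obeying a risky one, the sketch obeys the order even at cost to the robot, mirroring the Second Law outranking the Third.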
Although conceived as a literary device rather than a technical specification, the Three Laws function as an early thought experiment in what would now be called AI alignment, the challenge of ensuring that intelligent systems reliably act in accordance with human values and intentions. Asimov's stories frequently explored the edge cases and paradoxes that arose when robots attempted to apply these rules in complex situations, anticipating many of the difficulties modern AI safety researchers face when trying to formalize human values into machine-interpretable constraints.
The laws have had a lasting influence on AI ethics discourse, serving as a cultural touchstone that frames debates about autonomous systems, robot rights, and machine accountability. Researchers and ethicists often reference them when discussing how to encode safety constraints into AI systems, even while acknowledging that the laws are far too simplistic for practical implementation. Real-world challenges — such as defining "harm," resolving conflicting obligations, and handling incomplete information — illustrate why translating intuitive ethical principles into robust machine behavior remains an open and difficult problem.
While the Three Laws are not a technical framework used in contemporary AI development, their conceptual legacy is significant. They helped establish the idea that autonomous systems require explicit, hierarchically ordered safety constraints, and they popularized the notion that AI ethics deserves serious intellectual attention. Modern alignment research, value learning, and AI governance frameworks can all be seen as more rigorous successors to the questions Asimov first posed through fiction.