A metaphor for systemic coordination failures that produce collectively harmful outcomes despite individual rationality.
In AI safety and rationalist discourse, "Moloch" refers to a class of coordination failures in which individually rational actors, responding to competitive pressures or misaligned incentives, collectively produce outcomes that are harmful or suboptimal for everyone involved. The term captures situations where no single participant wants the bad outcome, yet the structure of incentives makes it nearly impossible for any one actor to unilaterally defect from the destructive pattern. Classic examples include arms races, environmental tragedies of the commons, and races to the bottom in regulatory standards—all scenarios where short-term competitive logic overrides long-term collective welfare.
The concept draws heavily on game theory, particularly multi-player prisoner's dilemmas and social trap dynamics, where dominant strategies for individuals lead to Pareto-inferior equilibria for the group. In AI safety contexts, Moloch is frequently invoked to describe the risk of an uncoordinated global AI development race, where competitive pressure between nations or corporations incentivizes cutting corners on safety, transparency, or alignment research. The fear is that no single actor can afford to slow down unilaterally without ceding ground to less cautious competitors, even if all parties would prefer a slower, safer collective pace.
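The dynamic described above can be made concrete with a toy multi-player prisoner's dilemma. This is a minimal sketch with hypothetical payoff values (EDGE, SHARED_COST, and the function names are illustrative, not drawn from any standard model): each actor chooses to race or pause, racing yields a private edge, and every racer imposes a shared cost on all players. Racing is individually dominant, yet the all-race equilibrium is Pareto-inferior to all-pause.

```python
# Toy multi-player prisoner's dilemma ("race" game) with hypothetical payoffs.
# Each actor chooses RACE (defect, True) or PAUSE (cooperate, False).
from itertools import product

EDGE = 3         # private benefit to an actor that races
SHARED_COST = 2  # safety cost each racer imposes on *every* actor

def payoff(my_choice, all_choices):
    """Payoff for one actor given everyone's choices (True = race)."""
    racers = sum(all_choices)
    return (EDGE if my_choice else 0) - SHARED_COST * racers

n = 3

# Racing is a dominant strategy: whatever the others do, switching from
# PAUSE to RACE raises your own payoff by EDGE - SHARED_COST = +1.
for others in product([False, True], repeat=n - 1):
    assert payoff(True, (True,) + others) > payoff(False, (False,) + others)

# Yet the all-race outcome is Pareto-inferior to all-pause:
all_race = [payoff(c, (True,) * n) for c in (True,) * n]     # -3 each
all_pause = [payoff(c, (False,) * n) for c in (False,) * n]  # 0 each
print(all_race, all_pause)
```

No individual actor can improve their lot by unilaterally pausing (they lose the edge while still paying for the other racers), which is exactly the trap the Moloch metaphor names.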
The term was popularized in AI and rationalist communities by Scott Alexander's 2014 essay "Meditations on Moloch," which used the ancient Canaanite deity—historically associated with child sacrifice—as a vivid symbol for value-destroying competitive dynamics. Alexander synthesized ideas from economics, ecology, and game theory to argue that many of civilization's worst problems stem not from malice but from structural incentive traps. The essay resonated deeply within effective altruism and AI safety communities, where it became shorthand for a broad class of systemic risks.
Understanding Moloch dynamics is considered important for AI alignment because it frames alignment not merely as a technical problem but as a global coordination problem. Even if individual labs or governments wanted to develop AI responsibly, competitive pressures could undermine those intentions at scale. Proposed solutions range from international treaties and regulatory frameworks to technical mechanisms like corrigibility and cooperative AI design—all aimed at restructuring incentives so that safe behavior becomes the dominant strategy rather than a competitive liability.
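The idea of "restructuring incentives so that safe behavior becomes the dominant strategy" can be illustrated in the same toy race game. This is a sketch under assumed values, not a model of any real treaty: adding an enforced per-racer penalty (the hypothetical PENALTY constant) makes the net gain from racing negative, flipping the dominant strategy from racing to pausing.

```python
# Incentive restructuring in a toy race game: an enforced fine on racing
# (hypothetical values) flips the dominant strategy to pausing.
EDGE = 3         # private benefit from racing
SHARED_COST = 2  # safety cost each racer imposes on everyone
PENALTY = 2      # enforced fine per racer: the restructured incentive

def payoff(my_choice, all_choices, penalty=0):
    """Payoff for one actor (True = race), with an optional fine on racing."""
    racers = sum(all_choices)
    base = (EDGE if my_choice else 0) - SHARED_COST * racers
    return base - (penalty if my_choice else 0)

others = (True, False)  # one other racer, one pauser

# Without the fine, racing beats pausing (gain = EDGE - SHARED_COST = +1):
assert payoff(True, (True,) + others) > payoff(False, (False,) + others)

# With the fine, the gain from racing (EDGE - SHARED_COST - PENALTY = -1)
# turns negative, so pausing now dominates:
assert payoff(True, (True,) + others, PENALTY) < payoff(False, (False,) + others, PENALTY)
```

The design point is that the penalty only works if it is credibly enforced on everyone; a fine that some competitors can evade leaves the original dominant strategy intact for them, which is why such proposals emphasize verification and universal participation.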