An ideology advocating rapid, unconstrained AI development to solve humanity's greatest challenges.
Effective Accelerationism (often abbreviated e/acc) is a techno-optimist ideology that emerged from online communities in the early 2020s, advocating the fastest possible development and deployment of artificial intelligence with minimal regulatory constraint. It draws loosely from accelerationist philosophy, which holds that intensifying existing technological and economic trends hastens transformative change, and borrows its naming convention from the effective altruism movement. Proponents argue that AI represents humanity's best tool for solving existential problems, including disease, poverty, and climate change. The movement gained visible momentum on platforms such as Twitter around 2022, associated with pseudonymous figures such as "Beff Jezos" (later identified as former Google quantum physicist Guillaume Verdon) and a broader community of Silicon Valley technologists.
At its core, e/acc rests on the belief that sufficiently advanced AI, potentially approaching artificial general intelligence, will act as a virtually unlimited problem-solving force, compressing centuries of scientific and social progress into decades. Proponents treat attempts to slow AI development, whether through safety research, regulation, or governance frameworks, as net-negative interventions that delay this transformative potential. The ideology thus positions itself in direct opposition to the AI safety and alignment communities, which argue that uncontrolled AI development poses catastrophic risks.
Within AI discourse, e/acc occupies one ideological pole in ongoing debates about how society should govern transformative technology. Critics, including AI safety researchers, ethicists, and policymakers, argue that the movement dangerously underweights tail risks, conflates speed of development with beneficial outcomes, and provides intellectual cover for commercial interests that profit from reduced oversight. Supporters counter that excessive caution and regulatory capture pose their own civilizational risks by concentrating AI power in the hands of incumbents.
Though e/acc is not a technical concept and has no formal academic grounding, its prominence in AI communities makes it relevant to understanding the cultural and political landscape surrounding machine learning research and deployment. It reflects genuine tensions between innovation velocity, safety, and governance that practitioners, researchers, and policymakers must navigate as AI systems grow more capable.