
Existential and catastrophic risk funding represents a paradigm shift in philanthropic priorities, directing resources toward threats that could fundamentally alter or end human civilization. The approach emerged from effective altruism and longtermism, philosophical frameworks which argue that preventing low-probability but high-consequence events deserves attention alongside relieving immediate suffering. The technical challenge lies in assessing risks that have never materialized: how does one quantify the probability that an advanced artificial intelligence system becomes uncontrollable, or the likelihood that an engineered pathogen escapes containment? Funders in this space support research institutions focused on AI alignment, biosecurity protocols, nuclear de-escalation strategies, and resilience planning for civilizational-scale disruptions. The work involves not just traditional grantmaking but also building entirely new fields of study, from technical AI safety research to pandemic preparedness infrastructure, and often requires deep engagement with cutting-edge scientific and technological developments.
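The longtermist argument rests on simple expected-value arithmetic: probability times consequence. A minimal sketch, using purely hypothetical numbers rather than any funder's actual estimates, shows how an unlikely but enormous loss can dominate a certain smaller one in expectation:

```python
# Toy expected-value comparison. All figures are hypothetical illustrations,
# not estimates drawn from any organization mentioned in this article.

def expected_loss(probability: float, lives_at_stake: float) -> float:
    """Expected loss = probability of the event times its human cost."""
    return probability * lives_at_stake

# An assumed 0.1% chance of a catastrophe affecting 8 billion people...
catastrophe = expected_loss(0.001, 8_000_000_000)  # 8,000,000 expected lives
# ...versus a certain harm affecting 1 million people.
certain_harm = expected_loss(1.0, 1_000_000)       # 1,000,000 expected lives

print(f"Low-probability catastrophe: {catastrophe:,.0f} expected lives lost")
print(f"Certain smaller harm:        {certain_harm:,.0f} expected lives lost")
```

On these assumed numbers the improbable catastrophe is eight times worse in expectation, which is the core of the case for funding prevention; critics respond that probabilities this small cannot be estimated reliably in the first place.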
This funding category addresses a critical gap in traditional philanthropy and government spending, both of which typically operate on shorter time horizons and struggle to justify investments in preventing events that may never occur. Conventional risk assessment frameworks break down when confronting scenarios like recursive self-improvement in AI systems or the deliberate engineering of highly transmissible pathogens, threats that existing institutions were not designed to anticipate or prevent. This creates space for philanthropic capital to fund unconventional research, support policy development in emerging areas, and build communities of researchers working on problems that lack established academic homes. The approach has enabled the growth of specialized organizations focused on technical AI safety, nuclear security policy, and biosecurity governance, fields that might otherwise struggle to attract sustained funding. It has also sparked intense debate about opportunity costs: whether resources directed toward speculative future risks would be better spent on immediate global health challenges, poverty alleviation, or climate change mitigation.
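Part of the reason short horizons dominate is exponential time discounting, the standard tool in government cost-benefit analysis. At any conventional discount rate, benefits that arrive a century or more from now are worth almost nothing today. A brief sketch, using an assumed 5% annual rate chosen purely for illustration, makes the effect concrete:

```python
# Exponential discounting at an assumed 5% annual rate (illustrative only;
# actual policy analyses use a range of rates).

def present_value(future_benefit: float, rate: float, years: int) -> float:
    """Discount a future benefit back to today at a fixed annual rate."""
    return future_benefit / (1 + rate) ** years

benefit = 1_000_000_000  # a hypothetical $1B benefit from an averted disaster

for years in (10, 50, 100, 200):
    pv = present_value(benefit, 0.05, years)
    print(f"{years:>3} years out: present value = ${pv:,.0f}")

# At 100 years the $1B benefit discounts to roughly $7.6M; at 200 years,
# to about $58K. Long-horizon prevention rarely clears such hurdles, which
# is the gap philanthropic capital is being asked to fill.
```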
Recent developments have tested the resilience of this funding ecosystem, most notably the collapse of major cryptocurrency-backed philanthropic ventures that had become significant supporters of existential risk work. Despite these setbacks, the fundamental concerns driving the movement continue to attract philanthropic attention: the accelerating pace of technological development, the proliferation of dual-use research capabilities, and the increasing complexity of global systems. Established foundations have begun incorporating catastrophic risk considerations into their portfolios, while new funding vehicles explore how to balance immediate humanitarian needs with long-term civilizational resilience. The field still faces challenges in developing robust evaluation frameworks, building diverse coalitions beyond its philosophical origins, and demonstrating concrete progress on problems that may take decades to fully understand. As technological capabilities advance and geopolitical tensions create new threat vectors, the question of how philanthropy should weigh present against future risks remains central to debates about the sector's role in shaping humanity's trajectory.
A research and grantmaking foundation with a major focus on global catastrophic risks.
Conducts research on AI risks, including the philosophical and safety implications of AI moral status and suffering.
Centre for the Study of Existential Risk
Interdisciplinary research centre at the University of Cambridge studying risks that could lead to human extinction.
Focuses on existential risks and the long-term future of life, including the ethical treatment of advanced AI systems.
Longview Philanthropy
A philanthropic advisory firm designing bespoke giving portfolios focused on safeguarding the long-term future of humanity.
Survival and Flourishing Fund
A fund primarily backed by Jaan Tallinn that uses 'S-process' software to distribute grants to organizations working on existential security.
A think tank analyzing the risks of human extinction and other global catastrophes.
Nuclear Threat Initiative
United States · Nonprofit
A nonprofit, nonpartisan global security organization focused on reducing nuclear and biological threats imperiling humanity.
Bulletin of the Atomic Scientists
A media and research organization that sets the Doomsday Clock, assessing man-made threats to human existence.
An institution for research and education in international ethics.