
The philosophical framework of longtermism posits that humanity's moral priorities should extend far beyond the present generation, arguing that the sheer number of potential future people—possibly trillions across millennia—creates an overwhelming ethical imperative to safeguard their wellbeing. This perspective draws on consequentialist ethics, particularly utilitarian reasoning, which suggests that actions should be evaluated based on their total impact across all affected individuals, regardless of when they exist. Proponents argue that existential risks threatening humanity's long-term survival, such as advanced artificial intelligence misalignment, engineered pandemics, or nuclear warfare, deserve disproportionate attention and resources because their prevention could preserve countless future lives. The framework operates through systematic cause prioritization methodologies that attempt to quantify expected value across vast time horizons, weighing factors like tractability, neglectedness, and scale to determine where philanthropic capital can achieve the greatest aggregate good.
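The scale, tractability, and neglectedness weighing described above is often operationalized as a simple multiplicative score. The sketch below is a minimal, hypothetical illustration of that arithmetic only: the cause names, the 1-10 scales, and the scores are invented placeholders, not estimates drawn from any real prioritization exercise.

```python
# Illustrative sketch of scale/tractability/neglectedness-style cause
# prioritization. All causes and scores are made-up placeholders.

causes = {
    # (scale, tractability, neglectedness) on hypothetical 1-10 scales
    "ai_safety":     (9, 3, 8),
    "biosecurity":   (8, 5, 6),
    "global_health": (7, 8, 3),
}

def priority_score(scale, tractability, neglectedness):
    # One common simplification treats the three factors as multiplicative,
    # so a cause must score reasonably well on all of them to rank highly.
    return scale * tractability * neglectedness

ranked = sorted(causes, key=lambda c: priority_score(*causes[c]), reverse=True)
for cause in ranked:
    print(cause, priority_score(*causes[cause]))
```

A real prioritization exercise would replace these point scores with contested probability distributions, which is precisely where much of the disagreement discussed below enters.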
Within the philanthropic sector, longtermism has sparked intense controversy about resource allocation and moral responsibility. Critics argue that prioritizing speculative future scenarios diverts urgently needed resources from addressing present suffering—poverty, disease, inequality—that affects billions of living people with certainty. This tension reflects deeper philosophical disagreements about moral standing: whether future people who do not yet exist possess comparable ethical claims to those currently alive, and whether we can meaningfully compare the prevention of hypothetical future harms against the alleviation of concrete present suffering. The debate extends to questions of epistemic humility—whether we can reliably predict or influence outcomes across centuries or millennia—and concerns about power dynamics, as longtermist frameworks risk concentrating decision-making authority among small groups of predominantly Western, affluent individuals making choices that affect all of humanity's trajectory. Some observers note that longtermism can inadvertently justify paternalistic interventions or technological development paths that serve narrow interests while claiming universal benefit.
These philosophical tensions have tangible implications for how billions of dollars in philanthropic capital are deployed. Major foundations and individual donors influenced by longtermist reasoning have redirected substantial funding toward existential risk research, artificial intelligence safety, biosecurity preparedness, and institutional reform aimed at improving long-term decision-making. Meanwhile, alternative frameworks like "presentism" or "near-termism" emphasize immediate impact and measurable outcomes for current populations, and "person-affecting views" grant moral priority to existing individuals over potential future people. The debate has also catalyzed important methodological discussions about how to evaluate philanthropic effectiveness when impacts span vastly different time scales, how to incorporate uncertainty and moral pluralism into funding decisions, and whether certain ethical frameworks inadvertently encode cultural assumptions that may not reflect global values. As computational tools and forecasting methods become more sophisticated, these debates will likely intensify, shaping not only philanthropic strategy but broader questions about humanity's collective responsibility across time.
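One way to see why the time-scale question matters is to compare a near-certain, immediate intervention with a speculative, far-future one under a single discount rate. The figures below are entirely invented for illustration; the point is only that the ranking hinges on contested parameters such as the discount rate and the success probability.

```python
# Hedged illustration of comparing interventions whose impacts arrive on
# very different time scales. Impact magnitudes, probabilities, and the
# discount rate are made-up parameters, not empirical claims.

def expected_present_value(impact, prob_success, years_until_impact, discount_rate):
    """Expected value of an intervention, discounted back to the present."""
    return prob_success * impact / (1.0 + discount_rate) ** years_until_impact

near_term = expected_present_value(
    impact=1_000, prob_success=0.9, years_until_impact=1, discount_rate=0.03)
long_term = expected_present_value(
    impact=10_000_000, prob_success=0.001, years_until_impact=100, discount_rate=0.03)

print(f"near-term: {near_term:.1f}, long-term: {long_term:.1f}")
# At a 3% discount rate the near-term option wins; at a rate near zero the
# long-term option dominates. Much of the longtermism debate is, in effect,
# a disagreement over this parameter and the probability estimates.
```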
These debates play out concretely across a cluster of organizations that fund and conduct longtermist work, including the following:
Global Priorities Institute (University of Oxford): An interdisciplinary research centre conducting foundational research on how to do the most good, with a specific focus on longtermism and cause prioritization.
Open Philanthropy: A research and grantmaking foundation with a major focus on global catastrophic risks.
Longview Philanthropy: A philanthropic advisory firm that designs bespoke giving portfolios focused on safeguarding the long-term future of humanity.
Centre for the Study of Existential Risk (University of Cambridge): An interdisciplinary research centre studying risks that could lead to human extinction.
Survival and Flourishing Fund: A fund primarily backed by Jaan Tallinn that uses "S-process" software to distribute grants to organizations working on existential security (a simplified sketch of this style of allocation appears after this list).
Future of Life Institute: Focuses on existential risks and the long-term future of life, including the ethical treatment of advanced AI systems.
An organization that works with multilateral organizations and governments to embed longtermism into policy-making.
Berkeley Existential Risk Initiative (BERI): Provides operational support and funding logistics for university research groups working on existential risk.
The Long Now Foundation: Fosters long-term thinking and responsibility through projects like the 10,000 Year Clock.
Institute for Futures Studies (Stockholm): A Swedish research foundation studying future threats and social conditions, often addressing population ethics and future generations.
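As a rough illustration of the marginal-value style of grant allocation mentioned in the Survival and Flourishing Fund entry above, the sketch below greedily assigns money wherever the next dollar is judged most valuable. This is a simplified interpretation, not the actual S-process software, which involves multiple recommenders and funder negotiation; the organization names and value curves are hypothetical.

```python
# Toy sketch of marginal-value grant allocation, loosely in the spirit of
# the S-process idea of funding wherever the marginal dollar is judged to
# do the most good. All names and curves here are illustrative.

def allocate(budget, orgs, step=1000.0):
    """Greedily give `step` dollars at a time to the org whose
    marginal-value function is highest at its current funding level."""
    grants = {name: 0.0 for name in orgs}
    remaining = budget
    while remaining >= step:
        # Evaluate each org's marginal value at its current grant total.
        best = max(orgs, key=lambda n: orgs[n](grants[n]))
        if orgs[best](grants[best]) <= 0:
            break  # No org values another marginal dollar.
        grants[best] += step
        remaining -= step
    return grants

# Diminishing-returns marginal-value curves (hypothetical).
orgs = {
    "org_a": lambda x: 10.0 / (1.0 + x / 50_000),
    "org_b": lambda x: 6.0 / (1.0 + x / 200_000),
}
print(allocate(1_000_000, orgs))
```

The diminishing-returns curves capture the intuition, common to most prioritization frameworks, that an organization's best opportunities are funded first, so additional dollars buy progressively less impact.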