A cluster of techno-utopian ideologies deeply influential among AGI researchers and Silicon Valley elites.
TESCREAL is an acronym coined by philosopher Émile Torres and AI researcher Timnit Gebru to describe a tightly interwoven bundle of ideologies prevalent in Silicon Valley and among those working on artificial general intelligence. The letters stand for Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism, and Longtermism. While each ideology has its own distinct history and emphasis, Torres and Gebru argue they share enough philosophical DNA — a belief in technological progress as humanity's highest calling, an orientation toward far-future outcomes, and a tendency to concentrate decision-making authority among a technically sophisticated elite — to be analyzed as a coherent worldview.
The component ideologies span a wide range of concerns. Transhumanism and Extropianism advocate using technology to transcend biological limitations. Singularitarianism predicts a near-future inflection point at which machine intelligence surpasses human cognition and transforms civilization irreversibly. Cosmism extends this vision to interstellar scales, imagining humanity's eventual colonization of the universe. Rationalism, as used here, refers not to the classical philosophical tradition but to the contemporary epistemic community, associated with forums such as LessWrong, centered on Bayesian reasoning, debiasing, and forecasting. Effective Altruism applies utilitarian calculus to philanthropy, often prioritizing speculative future harms over present-day suffering. Longtermism holds that the vast majority of moral weight lies in the trillions of potential future lives, making existential risk reduction the paramount ethical priority.
The concept matters to AI and ML discourse because these ideologies have had an outsized influence on how leading AI labs frame their missions, allocate research priorities, and justify their organizational structures. Many prominent figures at organizations like OpenAI, DeepMind, and Anthropic have been publicly affiliated with one or more TESCREAL-adjacent belief systems. Critics argue this ideological cluster can rationalize present-day harms — labor exploitation, environmental costs, concentration of power — by appealing to speculative long-run benefits for a hypothetical future humanity.
The term is primarily a critical lens rather than a self-descriptor; few people identify as TESCREALists. Its analytical value lies in making visible the shared assumptions that might otherwise remain implicit across seemingly distinct intellectual communities, enabling more rigorous scrutiny of how philosophical commitments shape technical and organizational decisions in AI development.