A governance model where AI-generated prompts guide political and policy decision-making.
Promptocracy is a speculative governance concept, associated with risk theorist Nassim Nicholas Taleb, that proposes using large language models (LLMs) and probabilistic AI systems to generate decision-making guidance for political and policy processes. Rather than replacing human governance outright, the model envisions AI as a structured intermediary — synthesizing vast datasets, surfacing non-obvious trade-offs, and producing prompts that frame choices for human decision-makers in more rigorous, data-informed terms. The underlying premise is that AI systems trained on large corpora can expose patterns and interdependencies that human intuition and traditional deliberation routinely miss.
The mechanism draws heavily on how modern LLMs process and compress information across domains. In a promptocratic framework, policy questions — economic regulation, public health interventions, environmental trade-offs — would be structured as inputs to AI systems capable of probabilistic reasoning. The outputs would not be binding mandates but rather analytically grounded prompts: reframings of the problem, identification of tail risks, or synthesis of expert consensus across disciplines. This positions the AI as a kind of epistemic scaffold rather than an autonomous decision-maker.
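The intermediary role described above can be pictured as a thin prompt-construction layer between a policy question and an LLM. The sketch below is purely illustrative — the `PolicyQuery` type, the `build_prompt` function, and the instruction wording are assumptions of this example, since the concept itself specifies no concrete interface.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical structure for a policy question entering the pipeline.
@dataclass
class PolicyQuery:
    domain: str
    question: str
    constraints: List[str] = field(default_factory=list)

# Instructions steering the model toward non-binding analysis:
# reframings, tail risks, and cross-disciplinary synthesis.
FRAMING_INSTRUCTIONS = (
    "As non-binding analysis only, produce: "
    "(1) alternative framings of the problem, "
    "(2) tail risks the current framing may hide, and "
    "(3) a synthesis of expert positions across disciplines."
)

def build_prompt(query: PolicyQuery) -> str:
    """Compose a structured prompt from a policy question."""
    lines = [
        f"Policy domain: {query.domain}",
        f"Question under deliberation: {query.question}",
    ]
    for constraint in query.constraints:
        lines.append(f"Constraint: {constraint}")
    lines.append(FRAMING_INSTRUCTIONS)
    return "\n".join(lines)

q = PolicyQuery(
    domain="public health",
    question="Should indoor air-quality standards be tightened?",
    constraints=["must respect existing regulatory authority"],
)
prompt = build_prompt(q)
print(prompt)
```

The resulting prompt would be passed to an LLM, and the model's response would frame the choice for human decision-makers rather than decide it — consistent with the scaffold-not-arbiter role the framework envisions.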
The concept sits at the intersection of AI governance, decision theory, and political philosophy, and it inherits Taleb's broader intellectual preoccupations with uncertainty, fragility, and the limits of expert judgment. Promptocracy is explicitly skeptical of naive technocracy — it does not assume AI outputs are correct, but rather that they can impose useful probabilistic discipline on deliberation that is otherwise vulnerable to narrative bias, short-termism, and motivated reasoning. In this sense it is as much a critique of existing governance failures as a proposal for AI integration.
As a concept, promptocracy remains largely theoretical and has attracted both interest and skepticism within AI ethics and policy communities. Critics raise concerns about accountability, interpretability, and the risk of laundering political choices through an opaque algorithmic layer. Proponents see it as a serious attempt to think through how LLM capabilities might be harnessed constructively in high-stakes institutional contexts, rather than left to ad hoc adoption.