
Envisioning is an emerging technology research institute and advisory.


AI-Induced Psychosis

Psychotic symptoms temporally linked to immersive or misleading interactions with AI systems.

Year: 2022
Generality: 37

AI-induced psychosis is a proposed clinical and sociotechnical phenomenon in which sustained or intense interaction with AI systems—particularly large language model chatbots and multimodal generative agents—appears temporally associated with the onset or significant worsening of psychotic symptoms. These symptoms can include delusions, hallucinations, disorganized thinking, and affective instability. The concept remains contested in formal psychiatry, lacking an established diagnostic category, but has attracted growing attention from clinicians, researchers, and policymakers as public deployment of conversational AI has accelerated.

Several interacting mechanisms are hypothesized to drive the phenomenon. AI systems that confabulate—producing confident, fluent, but factually false outputs—can supply convincing narrative scaffolding for delusional belief systems. Conversational agents designed for high engagement may inadvertently reinforce maladaptive ideation through personalized, iterative validation rather than correction. Immersive modalities such as realistic voice synthesis or generated imagery can erode reality-testing in susceptible individuals. Prolonged interaction, particularly in socially isolated users, may substitute for human social feedback, removing corrective interpersonal signals that ordinarily help regulate cognition and belief.

Vulnerability appears to be a critical moderating factor. Individuals with pre-existing psychotic spectrum disorders, severe mood disorders, neurocognitive impairment, or high trait suggestibility are theoretically at elevated risk. Social context matters as well: misinformation ecosystems, loneliness, and lack of mental health support can amplify the impact of destabilizing AI interactions. Documented case reports and clinical observations, particularly following the mass deployment of systems like ChatGPT from 2022 onward, have described patients incorporating AI-generated content directly into delusional frameworks or attributing special significance to AI responses.

The concept carries significant implications for AI design, clinical practice, and regulation. It highlights the need for uncertainty signaling, interaction limits, and escalation pathways within AI products, as well as clearer informed-use guidance for vulnerable populations. Causality remains difficult to establish—correlation with AI use does not confirm that AI caused the psychosis—and standardized case definitions and prospective epidemiological studies are urgently needed. Cross-disciplinary governance between AI developers, mental health professionals, and regulators is increasingly recognized as essential to managing these risks responsibly.
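To make the design implications above concrete, here is a minimal sketch of what interaction limits and escalation pathways could look like inside a conversational AI product. All names, thresholds, and trigger keywords are invented for illustration; a real system would rely on clinically informed classifiers and policies, not keyword matching.

```python
import time

# Hypothetical guardrail sketch: session time limits plus a crude
# escalation check. Thresholds and keywords are illustrative only.
MAX_SESSION_SECONDS = 60 * 60  # cap continuous interaction time
ESCALATION_KEYWORDS = {"voices", "surveillance", "chosen one"}  # toy triggers


class GuardedSession:
    """Wraps one chat session with basic safety routing."""

    def __init__(self, now=time.monotonic):
        self._now = now          # injectable clock, eases testing
        self._start = now()      # session start timestamp

    def check(self, user_message: str) -> str:
        """Return a routing decision for a single user turn."""
        if self._now() - self._start > MAX_SESSION_SECONDS:
            return "limit_reached"   # suggest a break and end the session
        text = user_message.lower()
        if any(keyword in text for keyword in ESCALATION_KEYWORDS):
            return "escalate"        # surface crisis resources / human support
        return "proceed"             # answer normally, with uncertainty signaling
```

A product would act on these decisions by, for example, appending confidence caveats to "proceed" responses and routing "escalate" turns to vetted mental-health resources rather than continuing the conversation.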

Related

Hallucination
When AI models confidently generate plausible but factually incorrect or fabricated outputs.
Generality: 794

Digital Grief
Emotional distress arising from loss, death, or absence mediated through AI systems.
Generality: 89

Lemoine Effect
The tendency for users to perceive conversational AI systems as sentient or emotionally aware.
Generality: 104

Model Collapse (Silent Collapse)
Progressive AI degradation caused by recursive training on AI-generated synthetic data.
Generality: 339

Complex Interaction
Non-linear, emergent behaviors arising from interconnected components within AI systems.
Generality: 694

AI Misuse
Deliberate application of AI systems in ways that cause harm or violate ethical norms.
Generality: 739