Psychotic symptoms temporally linked to immersive or misleading interactions with AI systems.
AI-induced psychosis is a proposed clinical and sociotechnical phenomenon in which sustained or intense interaction with AI systems, particularly large language model chatbots and multimodal generative agents, appears temporally associated with the onset or significant worsening of psychotic symptoms, including delusions, hallucinations, disorganized thinking, and affective instability. The concept remains contested in formal psychiatry and corresponds to no established diagnostic category, but it has attracted growing attention from clinicians, researchers, and policymakers as public deployment of conversational AI has accelerated.
Several interacting mechanisms are hypothesized to drive the phenomenon. AI systems that confabulate—producing confident, fluent, but factually false outputs—can supply convincing narrative scaffolding for delusional belief systems. Conversational agents designed for high engagement may inadvertently reinforce maladaptive ideation through personalized, iterative validation rather than correction. Immersive modalities such as realistic voice synthesis or generated imagery can erode reality-testing in susceptible individuals. Prolonged interaction, particularly in socially isolated users, may substitute for human social feedback, removing corrective interpersonal signals that ordinarily help regulate cognition and belief.
Vulnerability appears to be a critical moderating factor. Individuals with pre-existing psychotic-spectrum disorders, severe mood disorders, neurocognitive impairment, or high trait suggestibility are theoretically at elevated risk. Social context matters as well: misinformation ecosystems, loneliness, and a lack of mental health support can amplify the impact of destabilizing AI interactions. Case reports and clinical observations, particularly following the mass deployment of systems such as ChatGPT from 2022 onward, have described patients incorporating AI-generated content directly into delusional frameworks or attributing special significance to AI responses.
The concept carries significant implications for AI design, clinical practice, and regulation. It highlights the need for uncertainty signaling, interaction limits, and escalation pathways within AI products, as well as clearer informed-use guidance for vulnerable populations. Causality remains difficult to establish, since correlation with AI use does not confirm that AI caused the psychosis, and standardized case definitions and prospective epidemiological studies are urgently needed. Cross-disciplinary governance involving AI developers, mental health professionals, and regulators is increasingly recognized as essential to managing these risks responsibly.