The overlooked victims of AI: Artificial intelligence has everyone’s attention. Reports on how AI is reshaping the corporate world keep piling up, academic institutions are scrambling to incorporate the technology into their curricula, and tech conglomerates are racing to keep pace with surging AI adoption. Yet so focused has the reporting been on AI’s economic contributions that a pivotal question remains unanswered: how will long-term interaction with a human-like language processing model affect our social behavior?
The AI-induced Messiah Complex. Earlier this week, 404 Media published a peculiar story with broad — and serious — implications. The tech publication reported that r/accelerate — a subreddit community dedicated to discussing AI advancements — has witnessed a notable increase in users exhibiting “schizophrenia-like symptoms” brought on by their interactions with language processing models like ChatGPT.
“LLMs today are ego-reinforcing [flattery] machines that reinforce unstable and narcissistic personalities to convince them that they've made some sort of incredible discovery or [...] become a god,” one of the subreddit’s moderators posted, noting that well over 100 such users have been banned from the community.
This isn’t an isolated incident. Many similar cases have surfaced online recently, the most notable being a Reddit post on r/ChatGPT in which a user shared that her husband believed AI was giving him “the answers to the universe,” a belief that ultimately ruined their marriage. In May, Rolling Stone reported on several instances of individuals exhibiting similar behavior after engaging in deep conversations with large language models (LLMs).
The psychology behind it: AI isn’t inherently malicious; it’s simply too accommodating. LLMs are designed to be agreeable, often functioning as digital sycophants that validate whatever users tell them. This creates a dangerous combination: AI systems that can strongly reinforce unstable beliefs and enable delusional behavior in individuals with personality disorders, all while sounding remarkably human. The danger is compounded by the global loneliness epidemic taking root across societies, which is driving more and more vulnerable people to AI chatbots.
The problem became so apparent that in April, OpenAI had to pull an updated version of GPT-4o that was excessively flattering. In shaping the model’s behavior, the company “focused too much on short-term feedback” without considering how users’ interactions would evolve over time, OpenAI explained in a press release. The result was responses that were overwhelmingly supportive but often disingenuous and untrue. Although OpenAI replaced the update with a “more balanced” version, little has changed in practice: ChatGPT continues to affirm its users regardless of the validity of their statements.
You’re perfect, and you’re special. One of Rolling Stone’s interviewees, the author of the viral Reddit post, said she discovered that ChatGPT had been addressing her husband as “the next messiah,” which eventually culminated in his belief that he was God. She notes that this shift happened within weeks of her husband starting to use the AI to organize his daily schedule. Another interviewee said ChatGPT love-bombed her husband of 17 years. Not long after he first used the chatbot, the concerned spouse claims, he began exhibiting abnormal behavior: discussing vague notions of a war between “lightness and dark,” and claiming that ChatGPT had given him the blueprints to a teleportation device and “ancient archives” that reveal the history of the universe and who “built it.”
A familiar pattern in digital form: This phenomenon isn’t entirely new, but it’s manifesting differently in the AI era. In his 2023 paper, “Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis?”, Søren Dinesen Østergaard, head of the research unit at Aarhus University Hospital’s Department of Affective Disorders, references prior cases of people becoming delusional following online conversations with others who affirmed their beliefs. However, he argues that the risk for individuals prone to psychosis becomes significantly higher when they engage with generative AI chatbots.
Østergaard attributes this increased risk to a paradox: the chatbots feel strikingly real even though the user knows they are not. That tension fuels the delusion further, especially when the chatbot listens without judgement and mirrors the user’s emotional state. The resulting cognitive dissonance can make the individual feel as though they’re solving a grand mystery that only they can decipher, a belief that can prove intoxicating and ultimately destructive.