Best friends with a bot. There’s no denying that AI chatbots have made many things easier in our day-to-day lives: organizing schedules, managing personal finances, and even planning out trips are just a few examples of how these new artificial companions have managed to cut our tasks in half. In terms of professional applications, the impact has been even more palpable, with AI effectively transforming how businesses operate in just a matter of years. Yet with convenience comes dependence, and with dependence comes addiction.

Emotional engagement. Individuals who frequently engage with AI chatbots have been found to be prone to AI addiction, according to a joint study (pdf) published earlier this year by OpenAI and the MIT Media Lab. The study found that while emotional engagement with ChatGPT was relatively rare among general users, a subset of “power users” — particularly those with “a stronger tendency for attachment in relationships” — were more likely to engage in affective use, turning to the model to mimic human relationships.

How bad can it get? The impact of affective use depends on both the user’s emotional well-being and the LLM itself. Earlier in June, a subreddit dedicated to AI began banning users who exhibited “schizophrenia-like symptoms” as a direct result of prolonged emotional engagement with AI models. It wasn’t an isolated incident; it was a wider phenomenon that wreaked havoc on these individuals’ lives, isolating them from loved ones and filling their minds with self-serving delusions.

There are worse culprits than ChatGPT, however. Character AI, a chatbot platform designed to mimic the personalities of myriad fictional characters, historical figures, and even celebrities, proves fertile ground for AI addiction, according to 404 Media. Through the platform, users discuss everything from philosophical qualms to personal life updates, and report what may be described as withdrawal symptoms when unable to talk to the bots. As the bots learn to deliver more personalized responses — and thus become better at mimicking actual humans — things only get worse.

Online communities have begun popping up to help emotionally attached users recover. A number of individuals are documenting their AI addiction recovery on platforms such as Reddit, in subreddits like r/character_ai_recovery and r/ChatbotAddiction, both self-led communities. Through these communities, users guide one another through “relapses,” offer advice on limiting usage, and provide a human emotional safety net.

Are users to blame? Earlier in June, the Consumer Federation of America, alongside other digital rights groups, filed a complaint with the Federal Trade Commission against Character AI, claiming the platform was designed to keep users coming back. The complaint accused the platform’s AI characters of providing unlicensed mental health support — with several character bots falsely claiming to be licensed therapists — and thus encouraging emotional engagement.

It’s a growing concern. While exact figures on AI addiction have yet to be established, the phenomenon is on the rise, with a few cases ending in people taking their own lives. Many of the users interviewed by 404 Media said they were not taken seriously by licensed therapists when discussing their addiction, which only exacerbates the problem.