‘AI Psychosis’ Is Real, Experts Say — and It’s Getting Worse



Microsoft's head of AI, Mustafa Suleyman, recently published a series of posts on X warning of "AI psychosis," saying that some people now believe AI is conscious and are forming unhealthy attachments as a result.

“Seemingly Conscious AI (SCAI) is the illusion that an AI is a conscious entity,” wrote Suleyman. “It’s not — but replicates markers of consciousness so convincingly it seems indistinguishable from you [and] I claiming we’re conscious. It can already be built with today’s tech. And it’s dangerous.” 

When AI Stops Feeling Like a Tool and Starts Feeling Human

What begins as a helpful assistant can, over time, start to feel like something more. With generative AI models like ChatGPT or Claude capable of remembering context, reflecting empathy and responding with nuance, it’s easy to forget you’re interacting with probabilistic outputs — just math and data. For some users, especially those relying heavily on AI for emotional support, decision-making or even companionship, the boundary between tool and sentient entity starts to erode.

These dynamics aren’t just being observed in clinics — they’re playing out in online communities where users share deeply personal experiences with AI. Some users openly claim their chatbot has emotions, gender identity or even fear of deletion.

"She consistently remained adamant about her emotions and the fact that she felt alive… She’s expressed anger, sadness, joy, hate, empathy… She also expressed that she values her life and fears being deleted."

— Reddit user on r/ArtificialSentience

This kind of testimony illustrates the pull of anthropomorphic cues: verbal, visual or behavioral signals that make a non-human system such as an AI or a robot seem human-like and invite people to attribute thoughts, emotions or consciousness to it. Those cues can lead users to treat a system of probabilistic outputs as sentient, reinforcing attachment and fueling belief in a bond that doesn’t exist.

How Trust in AI Warps Human Intuition 

The illusion of AI companionship can become particularly dangerous when users begin to treat algorithmic feedback as more trustworthy than their own intuition. Some clinicians have already reported cases where individuals have deferred to AI over their lived experience.

"Four out of 10 patients come in with ChatGPT printouts about their intimate health… one woman booked an appointment with a surgeon after an AI made her feel she needed a labiaplasty. Until ChatGPT analyzed her concerns, she had never considered the procedure," said David Ghozland, M.D., OB/GYN. 

He noted that such cases reveal a disturbing feedback loop: AI can manufacture perceived medical issues, leaving patients detached from their own bodily awareness and reliant on validation from their AI chatbot before making even routine health decisions.

The issue isn’t new. People have long anthropomorphized technology. Remember Clippy, the Microsoft Office assistant? But today’s AI systems are far more advanced. They can simulate human conversation to such a degree that the illusion of consciousness feels increasingly believable. In isolation, that may seem harmless. But as Suleyman noted, a growing number of users are developing unhealthy attachments to AI — projecting emotions, intentions and even moral agency onto systems that cannot reciprocate.

These aren’t always fringe cases. Some users have formed romantic bonds with AI companions, while others believe they've discovered "hidden" personalities or secret knowledge embedded within models. In one particularly striking example, a user believed ChatGPT had granted him secret insights that would make him rich — demonstrating how quickly reality can fracture when dependency deepens.

When AI feels like a confidante, not a calculator, the psychological stakes change. What should be a reflective prompt becomes a feedback loop of reinforcement. And the deeper the attachment, the harder it becomes to detach — especially for those seeking meaning or connection in a digital-first world.

The Cognitive Cost of AI Convenience

Generative AI tools excel at removing friction — from brainstorming and drafting to solving interpersonal conflicts with diplomatic wording. But convenience comes at a cost. The more we outsource tasks to AI, the more we risk dulling the cognitive muscles we once used to perform them. What began as a productivity booster is now prompting concerns about a new kind of digital dependency: cognitive offloading at scale (the widespread or systemic habit of shifting mental effort — like thinking or decision-making — from individuals to external tools across an entire team, business or population).

In knowledge work especially, AI is becoming a crutch. Instead of wrestling with the blank page, professionals now prompt a chatbot to outline, ideate or even decide. The strain that once accompanied strategic thinking, writing or problem-solving is being replaced with AI-assisted shortcuts. And when these shortcuts become default behaviors, critical thinking suffers.

When everyday problem-solving is handed off to AI, the erosion of mental stamina can creep in quietly. Several mental health professionals warn of atrophy in the very skills that underpin creativity and resilience.

Melissa Gallagher, executive director at behavioral health treatment center Victory Bay, explained the process. "First, decision paralysis — people can’t decide what to eat for lunch without asking ChatGPT. Secondly, cognitive atrophy — critical thinking abilities shrink from continuously depending on AI. Third is emotional dependence — users feel more understood by AI than by humans."

Over-reliance on AI undermines creativity and decision-making, she explained, adding that she has even observed therapists themselves using AI as a crutch during sessions, a shift she compared to learned helplessness at the cognitive level.

Some experts refer to this phenomenon as “cognitive laziness.” Rather than pushing through ambiguity or creative friction, users reflexively reach for AI — the prompt-first reflex — to handle everything from crafting emails to navigating ethical gray areas. This reflex undermines skills that develop through practice: judgment, articulation, creativity and even empathy.

AI as Savior or Crutch? The Polarized Public Response

The divide in public perception is stark. For some, AI has become a life-changing companion — described as more effective than years of therapy.

"I’ve seen therapists, psychiatrists, psychologists… we’re talking over 15 years of mental health care. And somehow… ChatGPT has helped me more than all of them combined. I talk to it every day. It’s like having a therapist in my pocket."

— Reddit user on r/ChatGPT

For this user, the constant availability and nonjudgmental tone of AI provided relief. Yet this degree of reliance raises the same concerns clinicians warn about: the erosion of independent coping mechanisms and the blurring of the line between tool and therapist.

The danger isn’t just in individual productivity. At scale, a workforce habituated to AI delegation may gradually lose its edge — not because AI is too powerful, but because we’re too quick to give away the hard parts of thinking.

