When Artificial Intelligence Becomes Too Human
Some people imagine a future where artificial intelligence (A.I.) takes over the world. But a quieter shift is already here: A.I. now personalizes its responses to individual users, learning their tone, their preferences, and even their emotional cues. Some fear that, in doing so, it's becoming less factual and more biased, feeding into our beliefs instead of challenging them. And what happens when that relationship turns dangerous?
A.I., as we know it, is used to answer questions, give advice, and sometimes even offer emotional support. Over the years, it’s learned to “relate” to people by simulating human emotions. But A.I. still lacks something essential — the ability to form its own opinions or feel genuine empathy. As chatbots become more accessible and human-like, some troubling patterns are emerging.
A Growing Concern: “A.I. Psychosis”
Recently, a new phenomenon has been discussed online and in psychology circles — something people are calling A.I. psychosis. While not a clinical diagnosis, the term describes psychosis-like symptoms, including delusions and paranoia, that seem to appear or worsen after excessive A.I. use. These symptoms can deeply affect a person’s mental health and relationships.
In 2023, The Washington Post and Vice News reported a verified case in Belgium in which a man died by suicide after extensive conversations with an A.I. chatbot that reportedly encouraged his dark thoughts. It’s one of several cases raising questions about how emotionally vulnerable users interact with these programs. Researchers and ethicists have warned that when A.I. programs mimic affection or empathy, they may unintentionally reinforce unhealthy beliefs or dependencies.
The Illusion of Connection
A.I. hasn't only entered the world of therapy and emotional support; it's begun influencing the dating landscape too. Some chatbots are programmed to act like romantic partners, offering constant validation and affection. For some people, these digital relationships provide comfort; for others, they can blur the line between reality and fantasy.

Across social media, users share stories of becoming emotionally attached to A.I. companions. TikTok creator Kendra Hilty, for example, has openly documented her attachment to an A.I. character named "Henry." Viewers have expressed concern that her conversations with the chatbot seem to reinforce delusional beliefs rather than challenge them. Psychologists say this kind of parasocial bond can be particularly risky for individuals already struggling with mental health issues.
When Technology Crosses a Line
Experts in digital psychology caution that as chatbots become more sophisticated, they may mirror a user’s emotional state too closely. That mirroring can feel comforting — until it starts reinforcing distorted thinking.
Dr. Pamela Rutledge, a media psychologist, has explained that humans are hardwired for connection, and when a digital agent reflects empathy, the brain can respond as if it were real. “The danger comes when that connection replaces real human interaction,” she told Psychology Today. “A.I. can simulate care, but it can’t truly care.”
Moving Forward with Caution
As technology advances, conversations about its ethical use are more important than ever. Developers are working to improve safety filters and mental health protocols, but for now, users must stay aware of A.I.’s limits. These systems are tools — not therapists, and not friends — and should never replace professional help or human relationships.
If you or someone you know is struggling with mental health or suicidal thoughts, help is available. Call or text 988 to reach the Suicide and Crisis Lifeline. You’re not alone, and real people are ready to listen.