For the first time, OpenAI has released data revealing how many ChatGPT users are at risk of mental health problems or emotional dependence on its AI.
In an Oct. 27 article, the company shared that about half a million of its 800 million weekly active users show potential signs of psychosis or mania. Another 1.2 million display indicators of suicidal planning.
And 2.4 million demonstrate heightened emotional attachment, relying on the chatbot at the expense of in-person relationships.
OpenAI’s findings add to expert concerns about the pervasiveness of AI chatbots and companions, and their potential to negatively affect real-world relationships.
“Unlike real relationships, these interactions with AI companions demand minimal compromise and not a lot of patience,” said Alicia Demanuele, a policy researcher at the Schwartz Reisman Institute for Technology and Society at the University of Toronto.
“Over-relying on AI companions can weaken the kind of skills and the instincts as humans that make social life possible.”
Loneliness
AI chatbots are built for general tasks and conversation, while AI companions are designed specifically for emotional, social or romantic interaction. Some popular AI companions include Character.AI, Replika and Joi AI.
Canadians are major users of both technologies. Canadians represent OpenAI’s fourth-largest user base, and Canada is the third-largest source of traffic for Replika.
At the same time, millions of Canadians struggle with loneliness. A Statistics Canada survey updated in February found that more than one in 10 Canadians often or always feel lonely.
Maggie Arai, former policy lead at the Schwartz Reisman Institute, says this creates a perfect storm.
“People are turning to [AI companions] in the midst of a loneliness epidemic and not turning to their human relationships, which can be messier and more difficult,” she said.
Some AI companies have explicitly said their goal is to fill in for human relationships.
“F$%K dating, welcome to AI-lationships,” reads the header of Joi AI’s about page.
In its marketing, the company frames its AI as a “stress- and judgment-free” alternative to human relationships. It emphasizes “rejection-free connection,” “no expectations, no limitations,” and being “the antidote” for dating apps.
Others have said AI companions could actually enhance human relationships.
In July, Elon Musk’s xAI unveiled Ani, a sexually explicit, anime-style chatbot that becomes increasingly submissive as users access more explicit interactions.
Musk said he thinks AI companions will improve dating and human compatibility.
“I predict — counter-intuitively — that it will *increase* the birth rate!” he wrote on X in August. “We’re gonna program it that way.”
But a recent report by the Wheatley Institute, a Christian think tank at Brigham Young University, paints a different picture.
The study of nearly 3,000 American adults found that engagement with AI romantic companions and sexualized AI content is linked to higher levels of loneliness and an increased risk of depression.
The report concludes that these “counterfeit connections” may undermine real-life relationships, reduce human intimacy and discourage family formation.
Experts have similar concerns.
“Digital systems that exploit human attachment and loneliness aren’t new,” said Demanuele, of the Schwartz Reisman Institute. “Social media has long leveraged emotionally charged content to keep people engaged.”
“What makes AI companionship especially concerning is the unprecedented scale, sophistication, and personalization with which these systems manipulate emotional bonds.”
Arai adds that, unlike humans, AI companions tend to reinforce users’ existing beliefs, regardless of their merit.
“AI companions are so much easier to talk to, in some ways, than real humans,” said Arai. “They’re really trained to kind of be a ‘yes and’ machine or to make you feel good.”
F$%K dating
Experts are also raising concerns about how AI companions could reshape understandings of consent.
“De-facto consent has been built into many AI assistants to be congruent with our expectations not only of women’s sexuality but also of customer service — ‘The customer is always right,’” Lai-Tze Fan, Canada Research Chair in technology and social change at the University of Waterloo, told Canadian Affairs in an email.
Fan says AI companions are designed for a default user who expects submissive, hypersexualized and service-oriented interactions: a user who is, she says, “by default male and heterosexual.”
“Modelling social behaviours with the expectation of submissive, female-presenting subjects can only be harmful to human relationships,” she added.
So far, Canada has not regulated AI companions. But the risks associated with AI chatbots and companions are starting to be discussed.
This September, the Schwartz Reisman Institute hosted an expert panel exploring questions such as: Can AI simulate genuine friendship or social support? Could dependence on AI harm human relationships?
Demanuele warns that over-reliance on AI risks eroding essential human skills, including the ability to empathize, negotiate and navigate disagreements.
“Because of the way these systems are designed, they reshape how we engage with one another,” she said.