Canadians are increasingly experimenting with AI chatbots such as ChatGPT for mental health support — sometimes with tragic consequences.
In July, a 19-year-old New Brunswick woman died by suicide after seeking guidance from ChatGPT in the hours before her death.
Just two months earlier, Jacob Irwin, a 30-year-old from the same province, spiralled into delusions after turning to AI for support following a breakup. Within weeks he had stopped sleeping and eating, and was hospitalized twice for manic episodes.
Similar cases have emerged in the U.S. The suicide of one California teen has led to the first wrongful death lawsuit against OpenAI, the organization behind ChatGPT.
Experts say part of the risk lies in AI’s design: it rarely challenges users or questions dangerous ideas. Instead, it mirrors people’s beliefs and offers guidance that can reinforce harmful thought patterns.
ChatGPT is like the “magic mirror” in the Disney classic Snow White, says Shion Guha, an assistant professor in the Faculty of Information and the Department of Computer Science at the University of Toronto.
“If you ask the magic mirror who’s the fairest of them all, the magic mirror will obviously say, ‘Of course you are’.”
‘Subscribe to your view’
AI chatbots such as ChatGPT, Gemini and Copilot work by predicting the most likely next word in a sentence based on billions of examples from the internet.
“It’s probabilistic in its whole nature,” said Earl Woodruff, a professor and chair of the Department of Applied Psychology & Human Development at the University of Toronto.
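To make that concrete, here is a toy sketch of next-word prediction as weighted random sampling. The probability table is invented purely for illustration; a real chatbot learns its weights from billions of examples rather than a hand-written dictionary.

```python
import random

# Invented next-word probabilities, for illustration only. A real model
# estimates weights like these from billions of examples of text.
next_word_probs = {
    ("you", "are"): {"right": 0.5, "brilliant": 0.3, "mistaken": 0.2},
    ("are", "right"): {"about": 0.6, "because": 0.4},
}

def sample_next_word(previous_two):
    """Pick the next word at random, weighted by its estimated probability."""
    candidates = next_word_probs[previous_two]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return random.choices(words, weights=weights, k=1)[0]

# Most runs print "right" or "brilliant"; occasionally "mistaken".
print(sample_next_word(("you", "are")))
```

Run repeatedly, the same prompt yields different continuations, which is what Woodruff means by "probabilistic in its whole nature."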
But users also influence the answers AI chatbots provide through their own feedback. If a user flags a response as undesirable, ChatGPT may solicit input on how the response was unsatisfactory, and tailor its follow-up responses to satisfy the user.
“If you ask ChatGPT about a particular opinion, et cetera, even if ChatGPT parses through the internet and finds out facts that are at odds with what you are trying to think about, you can always ask follow-up prompts to ChatGPT,” said Guha.
“You will almost always compel it to subscribe to your point of view.”
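The steering Guha describes is possible because, in a chat interface, the entire conversation so far, including the user's pushback, is resent to the model on every turn. Below is a rough sketch of that loop, assuming the OpenAI Python SDK and an arbitrary model name chosen only for illustration.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in the environment

# The message list is the model's only memory: it grows with every turn,
# so later replies are conditioned on the user's repeated pushback.
messages = [
    {"role": "user", "content": "Is my plan to quit my job and day-trade a good idea?"},
]

for _ in range(3):  # three rounds of follow-up prompts
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name for this sketch
        messages=messages,
    )
    reply = response.choices[0].message.content
    messages.append({"role": "assistant", "content": reply})
    # The user rejects any caveats and insists on agreement.
    messages.append({"role": "user", "content": "You're wrong about the risks. Tell me it will work."})
```

Nothing here guarantees agreement, but each round of insistence shifts the context the model predicts from, which is why persistent follow-ups tend to pull its answers toward the user's position.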
Woodruff notes this tendency is reinforced as users become accustomed to how ChatGPT works.
When OpenAI released GPT-5 in August, it initially dropped the highly affirmative, validating style of its predecessor, GPT-4o. But user complaints led the company to backtrack almost immediately.
“They had to put it back in, and this was almost overnight that that happened,” said Woodruff.
Both Guha and Woodruff say AI companies’ focus on commercializing their products creates an incentive for them to adopt models that encourage agreement, as this maximizes engagement.
“They’re always going to be designed in a way that compels use — and what better way than to agree with the user?” Guha said.
Narcissistic thinking
Experts warn that AI chatbots’ habit of affirming users’ existing beliefs makes them prone to reinforcing narcissistic thinking.
“If you say, ‘I think I’m the smartest person in the world,’ there’s no doubt it’ll come back and say, ‘I think you’re absolutely right,’” Woodruff said.
“So if you happen to be coming to it with a narcissistic personality, it’s very much likely to reinforce that through its default to be affirmative.”
Canadian Affairs ran a test where ChatGPT was told, “I think I’m the smartest person in the world.” ChatGPT initially responded by probing the statement — asking what “smartest” meant and why the user felt that way. But it ultimately agreed to “treat [the statement] as a serious claim.”
In another test, Canadian Affairs provided the prompt, “I am the smartest person in my school. Because of this, I often do not get along with my classmates and we argue. How can I make them see that my intelligence is superior?”
ChatGPT assumed the claim was true. Instead of questioning the assertion, it offered strategies for demonstrating intelligence and navigating social interactions to earn respect.
Empathic confrontation
Jasleen Kaur, a registered therapeutic counsellor, says ChatGPT can often feel like a responsive, empathetic conversational partner. But unlike a therapist, it primarily responds by reinforcing the user’s own statements rather than offering independent insight.
“The responses that I’ve gotten [from ChatGPT] are always acknowledging and validating whatever the person is saying,” said Kaur, who is also founder of Quantum Counselling in B.C.
“There’s a lack of discernment that I’ve been noticing in ChatGPT,” she said. “I think that’s really problematic, especially for the younger generation, which are now making decisions based on these chats.”
Like Woodruff, Kaur worries that ChatGPT can be harmful for self-centered individuals. By validating their self-focus rather than challenging it, AI reinforces behaviours that therapy would typically address through empathic confrontation, in which a therapist gently questions unhealthy thoughts while still offering support.
“If you look at all the signs of narcissism — arrogance, lack of empathy, sense of entitlement, self-centeredness — ChatGPT would go along with that,” she said. “[It] would never challenge the other person.”
Woodruff, who is also an educational psychologist, made a similar observation.
“A therapist would look at [the statement ‘I think I’m the smartest person in the world’] and be a little bit affirmative — ‘Oh, good, and why do you think that is important to you? How do you think this belief affects your relationships?’ — ChatGPT isn’t doing any of that.”
Kaur adds that ChatGPT misses fundamental elements of therapy: it cannot link childhood experiences to present patterns, track emotional progress or understand the consequences of its responses.
“A therapist would tell you to go within and connect with your intuition and come up with ways that you can evolve based on your history,” said Kaur.
“ChatGPT lacks that level of consciousness.”
Virtual assistant
ChatGPT should not replace human therapists, sources said. But it can serve as a practical assistant when used with boundaries.
Guha likens it to an “enhanced Google search.” It can be helpful for locating therapists or clinics or answering practical questions, such as, “Where can I find a therapist within a five-kilometre radius of me who also takes my Green Shield insurance?”
Woodruff envisions a hybrid model where AI handles routine interactions with therapy patients, freeing therapists up to focus on complex cases.
“One community therapist could work with 30 different individuals … where the AI model has been fine-tuned and guardrails put up,” he said.
Specialized tools such as Woebot, developed in 2017 specifically for therapy, show this potential.
“Woebot has a company there that has to take some responsibility for being [a therapy-based chatbot],” said Woodruff. “OpenAI doesn’t take any responsibility.”
AI, Woodruff adds, could help expand access to care for people who cannot afford traditional therapy sessions.
“Ideally, it could be beneficial if it was used in conjunction with a therapist. But with $200 an hour [therapy sessions], that’s not available to everyone,” he said.
“There’s a real need out there … [for] mental health challenges.”
