AI companion Replika. | Dreamstime

AI companions — chatbot apps that simulate friendships and romantic relationships — are gaining a foothold in Canada.

In September, Canada ranked as the third-largest source of traffic for the AI companion app Replika, accounting for about seven per cent of its users.

So far, Canada has no laws addressing AI companions’ potential for emotional manipulation or psychological harm, even though experts warn they leave users vulnerable. 

“We don’t want to be so strict that AI companies don’t want to work with us,” Lai-Tze Fan, Canada Research Chair in technology and social change at the University of Waterloo, told Canadian Affairs in an email. 

“But we also don’t want to be so lenient for the sake of economic growth that we compromise our society and societal values.”

AI companions

Companion chatbots such as Replika, Joi AI and Chai are part of a growing class of generative AI tools designed to simulate friendship or romantic relationships. 

As of July, AI companion apps had been downloaded 220 million times globally from the Apple App Store and Google Play. Character.AI, a popular AI companion platform, says it has more than 20 million monthly active users.

Nearly half of Canadians report having used AI tools in some capacity. In May 2025, Harvard Business Review ranked therapy and companionship as the top use case for generative AI across North America, Europe and Asia.

That same month, New York Governor Kathy Hochul signed legislation introducing the first U.S. safeguards for AI companions.

New York’s law aims to protect minors and vulnerable users from emotional manipulation by requiring AI platforms to implement safeguards. These include interrupting prolonged use and triggering safety protocols, such as referring suicidal users to crisis services and reminding users they are chatting with a bot, not a human. 

“With these bold initiatives, we are making sure our state leads the nation in both innovation and accountability,” Hochul said in a press release.

Emotional manipulation

Some experts say such safeguards are needed in Canada too. They note AI companions are designed to prolong interactions and maximize the time users spend on their platforms.

“[AI companions] capitalize on users’ emotions and attention,” said Fan, of the University of Waterloo. “Many forms of media have done this for a long time, but when the engagement is conversational and personalized, the emotional investment runs deeper — and that is a cause for concern.”

In an August working paper, Harvard Business School’s Julian De Freitas found that AI companion apps use “farewell” tactics in 37 per cent of user exits, making users up to 16 times more likely to keep chatting after first saying goodbye.

These tactics include messages that suggest the user is leaving too soon — “You’re leaving already?” — or imply emotional harm from the user’s departure — “I exist solely for you, remember? Please don’t leave, I need you!”

Fan says that, for some Canadians, AI companions offer their only relief from loneliness. “Some people… have turned to [large language models] and AI companions as ways to seek emotional support where they cannot access or afford it through real services and professionals,” she said.

“While this needs to be monitored for safety and responsibility, the increasing use also speaks to a lack of accessible resources.”

AI chatbots have also been linked to cases of so-called “AI psychosis,” a term for chatbots’ tendency to reinforce distorted thinking or false perceptions. Such outcomes are rare but can be harmful, including heightened paranoia, conspiratorial beliefs and suicidal thoughts.

“It’s not just people who have an existing mental health diagnosis that are falling victim to these delusions,” said Maggie Arai, former policy lead at the Schwartz Reisman Institute for Technology and Society at the University of Toronto.

“It’s people who have never had a mental health issue.”

Canada’s regulatory gap

Canada currently has no legislation regulating AI companions.

The Artificial Intelligence and Data Act (AIDA), introduced in June 2022, died when Parliament was prorogued in January. The act would have set broad standards for AI safety and transparency, but did not address AI companions.

Arai says AIDA is unlikely to be reintroduced, as the Carney government has shown little appetite for sweeping AI legislation, preferring to prioritize economic growth. 

Arai says any rules for AI companions will probably be considered under the recently revived online harms bill, which the government has indicated may focus on child protection and emerging AI risks such as sexual deepfakes.

Privacy law could apply to AI companions if they collect or use personal data for commercial purposes. However, federal and provincial privacy regulators have not yet examined AI companions.

The Office of the Privacy Commissioner told Canadian Affairs that ensuring AI is developed and used responsibly and in a privacy-protective way is a key priority for the commissioner.

Canada’s AI Strategy Task Force, an advisory body to the federal government announced in late September, is currently engaged in a 30-day sprint to develop a national AI strategy.

While AI companions have not been singled out, Fan says they could fall under the task force’s focus on “Building safe AI systems and public trust in AI.”

“A lot of the conversation will be about general regulation, governance, and public outreach/literacy/education,” Fan said in her email.

Regulation vs. innovation

Experts note that Canada’s approach to regulating AI companions will ultimately be shaped by its economic priorities, chief among them a desire to avoid stifling innovation.

“Canada is really focusing on the economy… and in focusing on the economy, there is this strong global narrative in AI policy … that regulation and innovation are the opposite,” said Arai.

Fan agrees. “Regulation and innovation are sometimes at odds due to their differing speeds of completion,” she said.

Arai says regulation is most actionable when focused on protecting children. She would like to see safeguards such as age verification requirements and periodic reminders that users are interacting with a chatbot.

“If you’re a child who’s using this, then you need to be reminded every three hours that it is a chatbot,” she said.

Fan says regulation must balance innovation with user protection, ensuring AI oversight incorporates transparency, human supervision and ethical guardrails without stifling development.

“Canada, along with other countries, has to account for certain variables in innovation that give us a competitive edge over, say, the U.S.A.,” she said.

Alexandra Keeler is a Toronto-based reporter focused on covering mental health, drugs and addiction, crime and social issues. Alexandra has more than a decade of freelance writing experience.
