Canada’s federal government is taking a fresh look at its online harms legislation as artificial intelligence chatbots emerge as a new source of risk, raising questions about mental health harms, delusions, and wrongful deaths linked to generative AI systems.
In recent months, families in the United States have launched wrongful death lawsuits against AI companies, alleging their chatbots encouraged suicidal behaviour or fuelled dangerous delusions. Cases include a California teenager whose parents allege ChatGPT played a role in his death, and a Florida boy whose interactions with Character.AI preceded his suicide. Another incident involved a man in the U.S. who became infatuated with a Meta chatbot and later died after attempting to follow its false instructions.
Experts warn of “AI psychosis,” citing reports such as a Canadian man who, after long conversations with ChatGPT, became convinced he had discovered a revolutionary mathematical theory despite no prior mental illness.
These tragedies come as the Liberal government considers reviving its Online Harms Act, which lapsed when Parliament dissolved. The earlier bill targeted social media platforms, requiring them to remove certain harmful content within 24 hours and to protect children from exploitation, non-consensual intimate imagery, and deepfakes.
Emily Laidlaw, Canada Research Chair in Cybersecurity Law at the University of Calgary, said the legislation must now be expanded:
“It doesn’t make sense to just narrowly focus on traditional social media… AI-enabled harms should be captured by this.”
Helen Hayes of McGill University’s Centre for Media, Technology, and Democracy emphasized risks for youth who develop “developmental reliance” on chatbots, warning that some AI systems marketed as therapy may worsen, rather than help, mental health.
Companies are beginning to respond. OpenAI expressed condolences to the family of Adam Raine, the California teen, and said safeguards are in place but can “become less reliable in long interactions.” It plans to roll out new parental notification features for teens in distress. Meta and Character.AI have so far offered limited responses, with Character.AI pointing to disclaimers noting its chatbots are fictional.
Ottawa’s review comes against a complicated geopolitical backdrop. Prime Minister Mark Carney has already backed away from a digital services tax to ease trade tensions with the Trump administration, which has criticized Canadian tech regulations such as the Online News Act and Online Streaming Act. Analysts warn any attempt to regulate AI could provoke U.S. backlash.
Still, experts argue Canada must prioritize safety over external pressures. Chris Tenove of the University of British Columbia noted:
“We’re left with the question of whether Canada can make its own laws to protect its own citizens, or has to comply with Trump administration wishes.”
The Justice Department says the updated legislation will address child exploitation, sexual extortion, and deepfake abuse, but whether AI chatbots will be directly regulated remains uncertain.