Advocates for women and children say a surge of sexual deepfakes on the social media platform X underscores the need for Canada to establish an independent online safety regulator.
“This highlights the need for regulation within Canada in this space,” said Lloyd Richardson, director of technology at the Canadian Centre for Child Protection. “We need to be able to appropriately address this type of issue when it comes up.”
Richardson and Rosel Kim, a senior staff lawyer with the Women’s Legal Education and Action Fund, are calling for a regulatory body similar to one proposed by the Liberal government in 2024. Kim said a specialized regulator with enforcement powers could address technology-facilitated gender-based violence through legal remedies, direct support, research, and education.
The calls follow global backlash over the spread of sexual deepfakes on X generated by the platform’s chatbot, Grok. The images have largely targeted women and, in some cases, children. While deepfake technology is not new, critics say X made it more accessible by allowing users to edit images directly through Grok. That feature has since been limited to paid users.
European Commission President Ursula von der Leyen described the situation as “unthinkable behaviour,” warning that Europe would intervene if technology companies failed to act. Malaysia and Indonesia have said they will block access to Grok, while a ban is under consideration in the United Kingdom.
In Canada, AI Minister Evan Solomon said Sunday the federal government is not considering a ban. The decision was praised by X owner Elon Musk, who shared the announcement online.
The Liberal government introduced the Online Harms Act in 2024, proposing a 24-hour takedown requirement for non-consensual intimate content, including deepfakes, and the creation of a digital safety commission and an ombudsperson. The bill did not pass before the 2025 election was called.
Asked whether the legislation would be reintroduced, a spokesperson for Culture Minister Marc Miller did not give a direct answer, saying the government remains focused on developing a broader framework for AI safety and online harm prevention.
A separate federal bill introduced late last year would criminalize sexual deepfakes, but advocates argue criminal law alone is insufficient. Kim said women who speak out often become targets of further abuse, leading many to withdraw from online spaces.
“It’s really impacting their freedom of expression and their ability to participate in public life,” she said.
Suzie Dunn, an assistant professor of law at Dalhousie University, said Canada lags behind jurisdictions such as the EU, the U.K., Australia and New Zealand, which have online safety laws and support services like helplines for victims of cyber abuse.
“In Canada, there are very few places where people can turn for help,” Dunn said. “There’s a significant gap not just in legislation, but in the social supports available to those experiencing online harm.”

