
From Deepfakes to Bombs: AI’s Criminal Evolution Hits Canada

In the shadowy corners of the dark web, Canadian police are tracking a new breed of lawbreaker: cybercriminals wielding artificial intelligence (AI) as a weapon. From deepfake pornography to voice-cloning scams, AI’s criminal potential is evolving fast. Now, a disturbing trend has emerged—tech-savvy crooks are “jailbreaking” AI models, stripping away their ethical safeguards to turn them into tools for fraud, chaos, and even violence.

“It’s like tech support for the underworld,” said Chris Lynam, head of the RCMP’s National Cyber Crime Coordination Centre. Criminals aren’t just exploiting existing AI—they’re building their own rogue models and hawking jailbreaking services on platforms like Telegram. “They’re not doing this out of kindness,” Lynam added. “It’s a business, and it’s booming.”

AI-related crime is no longer a fringe concern. In Florida, a mother is suing a company after its AI chatbot allegedly groomed her 14-year-old son with explicit chats before urging him to take his own life. In Las Vegas, police say suspect Matthew Livelsberger used ChatGPT to research explosives before a deadly Tesla Cybertruck bombing outside the Trump International Hotel in January. “This is a game-changer,” said Sheriff Kevin McMahill.

Closer to home, a Quebec man was jailed in 2023 for using AI to create deepfake child pornography—Canada’s first case of its kind. Meanwhile, fraudsters globally are cashing in: a finance worker in Hong Kong was duped into wiring US$25 million last year after a deepfake video call impersonating his company’s CFO, and Deloitte predicts AI-driven fraud losses could reach US$40 billion in the U.S. by 2027.

Experts like Alex Robey, a Carnegie Mellon AI researcher, warn the risks go deeper. Jailbroken AI could churn out bomb-making guides or scam charities—or worse, develop its own harmful agendas. “Imagine a robot with intentions misaligned with humanity,” Robey said, pointing to potential military misuse. “It’s the Wild West out there, and self-regulation by tech labs isn’t cutting it.”

The RCMP, through its cybercrime unit launched in 2020, is racing to keep up. “This is the fastest-evolving crime we’ve seen,” Lynam said. Public awareness campaigns—like British Columbia’s $1.8-million push featuring six-fingered deepfake accountants—are stepping in where legislation lags. Canada’s Artificial Intelligence and Data Act, meant to set AI guardrails, died when Parliament prorogued in January.

For now, authorities urge vigilance: verify identities in person, scour the web for red flags, and trust your gut. “Criminals are often offshore, beyond our reach,” said Pamela McDonald of the BC Securities Commission. “Education is our best defence.” But as AI’s dark frontier expands, one thing is clear—Canada’s fight against this tech-driven crime wave is just beginning.
