Fri. Apr 24th, 2026

Growing Use of AI in Canadian Courts Raises Risks of Errors, Sanctions, and Higher Legal Costs

As artificial intelligence becomes more deeply embedded in everyday life, its presence in Canadian courtrooms is increasing — and lawyers warn that improper use of the technology carries significant risks.

Toronto family lawyer Ron Shulman says it has become common for clients to rely on AI tools to draft emails, organize materials, or even determine legal strategy. What once might have raised suspicion — a client suddenly sending a lengthy, highly structured message — now prompts a straightforward question: Did you use AI?

“Most of the time, the answer is yes,” Shulman said in an interview, noting his firm now encounters AI-generated content almost every week.

While AI can be useful for summarizing information or organizing thoughts, Shulman said some clients treat it as a form of “super intelligence,” relying on it to guide decisions in their legal cases. That reliance can be problematic, he said, because AI systems are not always accurate and often reinforce the assumptions of the user.

The growing use of AI has also led some individuals to represent themselves in court, generating large volumes of AI-produced documents. According to Shulman, this can slow proceedings and increase costs for all parties involved as courts sift through material that may be irrelevant or incorrect.

In recent years, courts, tribunals, and administrative boards across Canada and the United States have seen submissions generated using tools such as ChatGPT. In some cases, those submissions have included so-called “hallucinations” — legal authorities or references that are inaccurate or entirely fabricated — resulting in serious consequences.

Earlier this year, a Toronto lawyer became the subject of criminal contempt proceedings after submitting court cases invented by ChatGPT and later denying the use of AI when questioned by a judge. In a subsequent letter to the court, the lawyer said she misrepresented the situation out of fear and embarrassment.

Courts have also imposed financial penalties. In Quebec, a court ordered a $5,000 sanction against a man who relied on generative AI to prepare filings after dismissing his lawyer. In Alberta, the province’s top court ordered a self-represented litigant to pay $500 in additional costs after her submissions cited three non-existent cases, warning that harsher penalties could follow for future misuse.

In response, courts and professional regulators in several provinces have issued guidance on AI use. Some, including the Federal Court, now require parties to disclose when generative AI has been used in preparing submissions.

Despite the risks, lawyers acknowledge that AI can be helpful when used carefully. Ksenia Tchern McCallum, a Toronto-based immigration lawyer licensed in both Canada and the United States, said she increasingly sees clients arrive with AI-generated research or completed applications for her review.

In other cases, clients use AI to “fact-check” legal documents she has prepared, potentially exposing sensitive personal information and undermining trust in the lawyer-client relationship.

“It can put a lot of strain on client relations,” McCallum said. “AI can tell you what typically happens in a process, but it can’t replace experience or judgment about what actually works.”

She added that online forums often encourage people to use AI to save legal fees, but those efforts can backfire. Courts have rejected AI-generated submissions that cite laws, pathways, or cases that do not exist, sometimes awarding costs against self-represented litigants as a result.

Shulman shared a similar example from his own practice, where a client submitted several pages of AI-generated material about exclusive possession of a matrimonial home — a concept that did not apply because the client was not married.

“You’ve just spent half an hour of fees reading something that was never relevant,” he said.

To manage expectations, Shulman now provides clients with a disclaimer explaining that all materials they send must be reviewed. He also encourages clients to ask lawyers for explanations rather than relying solely on AI, or at least to seek guidance on using it responsibly.

That type of education is increasingly in demand, said Jennifer Leitch, executive director of the National Self-Represented Litigants Project. The organization recently hosted a webinar on safe and appropriate AI use for people without legal representation, drawing about 200 participants.

Leitch described the approach as a form of harm reduction. “People are going to use it,” she said. “So let’s use it responsibly.”

Her advice includes verifying any cases cited by AI, following court rules on disclosure, and adhering to filing length limits. While AI has the potential to improve access to justice, she said its reliability remains uneven — especially compared with professional-grade tools used within law firms.

Those tools, however, are often behind paywalls, she noted, while freely available AI platforms are more prone to errors.

Toronto-area personal injury and disability lawyer Nainesh Kotak said law firms will likely need to adopt some form of AI to remain competitive. The key, he said, is ensuring that lawyers review and correct AI-generated content and comply with privacy, security, and professional standards.

Ultimately, Kotak said, AI remains just that — a tool.

“It cannot replace legal judgment, ethical obligations, or human understanding,” he said.
