Wed. Apr 1st, 2026

AI Error Sparks Outrage After Canadian Immigration Refusal Cites Fake Job Duties

OTTAWA — A controversial permanent residence refusal is raising serious concerns about the use of artificial intelligence in Canada’s immigration system after an applicant’s job duties were incorrectly generated by an AI-assisted review.

Kémy Adé, a post-doctoral research fellow and teacher at McMaster University, was stunned when her application was rejected with a description of job duties that bore no resemblance to her actual work.

The refusal letter claimed she performed technical tasks such as wiring control circuits, assembling robot panels, and troubleshooting machinery — responsibilities completely unrelated to her field in health sciences and immunology.

“I was disoriented,” Adé said, wondering how this could have happened and noting that none of those duties appeared in her original application.

The letter included a disclaimer stating that generative AI had been used to support processing, marking what is believed to be the first explicit acknowledgment of such technology in an immigration refusal. While officials maintained that a human officer made the final decision and verified the content, critics argue the case highlights serious flaws in oversight.

Immigration lawyers and experts warn that generative AI tools — similar to systems like ChatGPT — can produce “hallucinations,” generating incorrect or fabricated information that can influence outcomes if not carefully reviewed.

Toronto-based immigration lawyer Zeynab Ziaie described the process as a “black box,” raising concerns about transparency and accountability in high-stakes decisions.

The controversy comes as Canada’s immigration department rolls out its first AI strategy aimed at improving efficiency and reducing backlogs, which currently exceed one million applications. Officials say AI tools are used for tasks like summarizing information, analyzing files, and identifying potential fraud.

However, critics argue that relying on such tools without clear safeguards risks undermining trust in the system.

Legal experts are also questioning how a human officer could have approved a decision containing such obvious inaccuracies. “Something seriously went wrong here,” said Adé’s lawyer, who has now requested a reconsideration of the case.

The department has since reopened the file.

The incident has sparked a broader debate about the role of AI in public administration, particularly in sensitive areas like immigration where decisions can have life-altering consequences.

Experts emphasize that while automation can help manage large volumes of applications, it must be paired with rigorous human oversight to prevent errors that could unjustly impact applicants.

As Canada moves toward greater use of AI in governance, this case may become a defining example of both its risks and the urgent need for transparency and accountability.
