Canada’s national security and intelligence agencies are increasingly adopting artificial intelligence (AI) tools to enhance their ability to protect the country, as oversight bodies move to examine how these powerful technologies are being governed and used.
The National Security and Intelligence Review Agency (NSIRA) has confirmed it is reviewing the use and governance of AI across Canada’s security agencies. The watchdog has formally notified key federal ministers and organizations of the study, reflecting growing interest in ensuring AI tools are deployed responsibly, legally, and ethically in sensitive national security operations.
Canadian Security Intelligence Service (CSIS)
The Canadian Security Intelligence Service (CSIS), which is responsible for countering espionage, terrorism, and foreign interference, is preparing to launch a pilot project in early 2026 to evaluate AI tools that can assist analysts.
According to CSIS spokesperson Eric Balsam, the agency will assess AI applications for audio transcription, language translation, and document analysis, as well as a chatbot-style tool to help draft, edit, and summarize reports. CSIS emphasized that all AI-generated outputs will remain subject to human review to ensure accuracy and appropriateness.
CSIS also applies algorithmic impact assessments to proposed AI tools and consults with the Department of Justice on legal considerations before deployment, underscoring a cautious and structured approach.
Royal Canadian Mounted Police (RCMP)
The Royal Canadian Mounted Police (RCMP) has already incorporated AI-enabled tools into certain investigations. A 2024 RCMP report confirmed the use of face-matching technology embedded in software used to analyze large volumes of images and videos.
RCMP officials stress that such tools are used only with lawfully obtained evidence. Spokesperson Robin Percival noted that while AI can significantly improve efficiency and analysis, it also presents challenges related to privacy, ethics, data accuracy, and unauthorized access.
To address these concerns, the RCMP has established an AI policy and solutions team tasked with developing a framework for the responsible adoption of artificial intelligence, supported by internal working groups and updated policies.
Communications Security Establishment (CSE)
Canada’s cyber intelligence agency, the Communications Security Establishment (CSE), has long been at the forefront of technological innovation and uses AI extensively to defend federal government networks and critical infrastructure.
According to its AI strategy, the CSE relies on machine learning to detect patterns across massive data sets, enabling the identification of cyber threats that traditional antivirus tools may miss. AI is also used for malware classification, particularly against sophisticated custom malware deployed by foreign adversaries.
The agency says AI will become even more critical in coming years, allowing analysts to process larger volumes of data faster and with greater precision, provided the technology is deployed safely and securely.
Global Affairs Canada
At Global Affairs Canada, officials are using an AI-powered document search and analysis platform known as Document Cracker. The tool enables users to scan large volumes of material, monitor emerging trends, and track references to key individuals, locations, and organizations.
According to the federal government’s AI project register, the system helps officials quickly identify pressing international issues, shape policy positions, and monitor the evolving stances of other countries.
Immigration, Refugees and Citizenship Canada
The federal passport program at Immigration, Refugees and Citizenship Canada (IRCC) uses facial recognition technology to authenticate identities, detect fraud, and prevent the issuance of passports and travel documents to ineligible applicants.
Transport Canada
Transport Canada is developing the Risk Evaluation and Conflict Tool, a data-driven initiative designed to improve Canada’s ability to monitor threats to passenger aircraft. The AI-enabled system automates and streamlines the traditionally labour-intensive process of monitoring open-source media, analyzing data, and assessing risks related to conflict zones.
Oversight and Accountability
As AI becomes more deeply embedded in national security operations, NSIRA’s review signals a broader effort to balance innovation with accountability. The study is expected to examine governance frameworks, risk management practices, and safeguards designed to protect privacy, civil liberties, and public trust.
While federal agencies emphasize that AI tools are intended to support—not replace—human judgment, the expanding use of artificial intelligence marks a significant evolution in how Canada approaches intelligence, law enforcement, cybersecurity, and risk assessment in an increasingly complex global environment.