OTTAWA — Canada’s national security watchdog has launched a broad review of how artificial intelligence is being defined, used, and governed across the country’s security and intelligence agencies.
The National Security and Intelligence Review Agency (NSIRA) has notified key federal ministers and senior officials that it is examining the role of AI in national security activities, including oversight frameworks and potential risks associated with emerging technologies.
Canadian security agencies already rely on artificial intelligence for a range of functions, from translating documents to identifying malware threats.
In a letter sent to ministers and agency heads, NSIRA chair Marie Deschamps said the review will offer insights into the growing use of AI tools, help inform future oversight work, and identify “potential gaps or risks” that may require attention.
Under its mandate, NSIRA has the legal authority to access all relevant information held by departments and agencies under review, including classified and privileged material, with the exception of cabinet confidences.
The letter, published on the agency’s website, states that information requests may include documents, written submissions, briefings, interviews, surveys, system access, and, in some cases, independent inspections of technical systems.
The correspondence was sent to Prime Minister Mark Carney and several cabinet ministers, including Artificial Intelligence and Digital Innovation Minister Evan Solomon, Public Safety Minister Gary Anandasangaree, Defence Minister David McGuinty, Foreign Affairs Minister Anita Anand, and Industry Minister Mélanie Joly.
It was also addressed to the heads of major security agencies, including the Canadian Security Intelligence Service (CSIS), the RCMP, and the Communications Security Establishment (CSE), Canada’s cyber intelligence agency. In addition, NSIRA notified organizations not typically associated with national security, such as the Canadian Food Inspection Agency, the Canadian Nuclear Safety Commission, and the Public Health Agency of Canada.
In response to questions about the review, the RCMP said it supports independent scrutiny of national security and intelligence activities.
“The RCMP believes that establishing transparent and accountable external review processes is critical to maintaining public confidence and trust,” the force said in a media statement.
The review follows a 2024 report by the National Security Transparency Advisory Group, which urged Canadian security agencies to publish more detailed information about their current and planned uses of AI systems. The advisory body forecast increasing reliance on the technology to analyze large volumes of text and images, identify patterns, and interpret trends and behaviours.
At the time, both CSIS and CSE acknowledged the importance of transparency but noted that national security considerations limit what can be disclosed publicly.
Federal principles governing the use of artificial intelligence emphasize openness about when and how AI is used, early identification and management of risks to legal rights and democratic norms, and training for public servants on legal, ethical, privacy, and security issues.
In its most recent annual report, CSIS said it is rolling out AI pilot projects across the agency in line with those principles.
The RCMP notes on its website that responsible AI use depends on careful system design to avoid bias and discrimination, respect for privacy during data analysis, transparency in decision-making, and accountability mechanisms to ensure systems function properly.
CSE’s artificial intelligence strategy outlines commitments to developing new AI and machine-learning capabilities, promoting responsible and secure use of the technology, and countering threats posed by AI-enabled adversaries.
According to the agency, safe and effective deployment of AI would allow it to analyze larger volumes of data more quickly and precisely, improving both the quality and speed of decision-making.
“We will always be thoughtful and rule-bound in our adoption of AI, keeping responsibility and accountability at the core of how we will achieve our goals,” CSE chief Caroline Xavier said in a message included in the strategy.
She added that, recognizing the limitations of AI, the agency plans to scale its use gradually, with rigorous testing, evaluation, and continued human oversight.