National security agencies should explain how they're using AI: federal advisory body

A federal advisory body is calling on Canada's security agencies to publish detailed descriptions of their current and intended uses of artificial intelligence systems and software applications.

'Secrecy breeds suspicion,' says National Security Transparency Advisory Group

People listen to an artificial intelligence seminar June 25, 2024, in Cumming, Ga. In a new report, the National Security Transparency Advisory Group is urging the government to look at amending legislation being considered by Parliament to ensure oversight of how federal agencies use AI. (AP Photo/Mike Stewart)

In a new report, the National Security Transparency Advisory Group also urges the government to look at amending legislation being considered by Parliament to ensure oversight of federal agencies' use of AI.

The recommendations are among the latest proposed by the group, created in 2019 to increase accountability and public awareness of national security policies, programs and activities.

The government considers the group an important means of implementing a six-point federal commitment to be more transparent about national security.

Security agencies are already using AI for tasks ranging from translation of documents to detection of malware threats. The report foresees increased reliance on the technology to analyze large volumes of text and images, recognize patterns, and interpret trends and behaviour.

As use of AI expands across the national security community, "it is essential that the public know more about the objectives and undertakings" of national border, police and spy services, the report says.

"Appropriate mechanisms must be designed and implemented to strengthen systemic and proactive openness within government, while better enabling external oversight and review."

The 'black box' problem

As the government collaborates with the private sector on national security objectives, "openness and engagement" are crucial enablers of innovation and public trust, while "secrecy breeds suspicion," the report says.

A key challenge in explaining the inner workings of AI to the public is the "opacity of algorithms and machine learning models" — the so-called "black box" that could lead even national security agencies to lose understanding of their own AI applications, the report notes.

Ottawa has issued guidance on federal use of artificial intelligence, including a requirement to carry out an algorithmic impact assessment before creating a system that assists or replaces the judgment of human decision-makers.

It has also introduced the Artificial Intelligence and Data Act, currently before Parliament, to ensure responsible design, development and rollout of AI systems.

Surveillance cameras and metal fences surround the Communications Security Establishment in Ottawa. (Olivier Plant/Radio-Canada)

However, the act and a new AI commissioner would not have jurisdiction over government institutions such as security agencies. The advisory group is recommending that Ottawa look at extending the proposed law to cover them.

The Communications Security Establishment, Canada's cyberspy agency, has long been at the forefront of using data science to sift and analyze huge amounts of information.

Harnessing the power of AI does not mean removing humans from the process, but rather enabling them to make better decisions, the agency says.

In its latest annual report, the CSE describes using its high-performance supercomputers to train new artificial intelligence and machine learning models, including a custom-made translation tool.

The tool, which can translate content from more than 100 languages, was introduced in late 2022 and made available to Canada's main foreign intelligence partners the following year.

The CSE's Cyber Centre has used machine learning tools to detect phishing campaigns targeting the government and to spot suspicious activity on federal networks and systems.

In response to the advisory group report, the CSE noted its various efforts to contribute to the public's understanding of artificial intelligence.

However, it indicated CSE "faces unique limitations within its mandate to protect national security" that could pose difficulties for publishing details of its current and planned AI use.

"To ensure our use of AI remains ethical, we are developing comprehensive approaches to govern, manage and monitor AI and we will continue to draw on best practices and dialogue to ensure our guidance reflects current thinking."

CSIS says there are limits on discussing operations

The Canadian Security Intelligence Service, which investigates such threats as extremist activity, espionage and foreign meddling, welcomed the transparency group's report.

The spy service said work is underway to formalize plans and governance concerning use of artificial intelligence, with transparency underpinning all considerations.

"Given CSIS's mandate," it added, "there are important limitations on what can be publicly discussed in order to protect the integrity of operations, including matters related to the use of AI."

In 2021, Daniel Therrien, the federal privacy commissioner at the time, found the RCMP broke the law by using cutting-edge facial-recognition software to collect personal information.

Therrien said the RCMP failed to ensure compliance with the Privacy Act before it gathered information from U.S. firm Clearview AI.

Clearview AI's technology allowed for the collection of vast numbers of images from various sources that could help police forces, financial institutions and other clients identify people.

In response to the concern over Clearview AI, the RCMP created the Technology Onboarding Program to evaluate compliance of collection techniques with privacy legislation.

The transparency advisory group report urges the Mounties to tell the public more about the initiative. "If all activities carried out under the Onboarding Program are secret, transparency will continue to suffer," it says.

The RCMP said it plans to soon publish a transparency blueprint that will provide an overview of the onboarding program's key principles for responsible use of technologies, as well as details about tools the program has assessed.

The Mounties said they are also developing a national policy on the use of AI that will include a means of ensuring transparency about tools and safeguards.

The transparency advisory group also chides the government for a lack of public reporting on the progress or achievements of its transparency commitment. It recommends a formal review of the commitment with "public reporting of initiatives undertaken, impacts to date, and activities to come."

Public Safety Canada said the report's various recommendations have been shared with the department's deputy minister and the broader national security community, including relevant committees.

However, the department stopped short of saying whether it agreed with recommendations or providing a timeline for implementing them.
