
Ontario privacy commissioner feels urgency to address 'Wild West' risks of AI

Patricia Kosseim, Ontario's information and privacy commissioner, says she feels "a sense of urgency" around artificial intelligence as her list of concerns with the technology mounts. Kosseim is shown in Toronto, Wednesday, Jan. 24, 2024. THE CANADIAN PRESS/Cole Burston

TORONTO — Ontario's information and privacy commissioner says she feels "a sense of urgency" to act on artificial intelligence as her list of concerns with the technology mounts.

Patricia Kosseim's worries about AI include it being used to spread misinformation, dupe Canadians, entrench biases and cause discrimination.

She says AI chatbots like ChatGPT, which can quickly turn simple prompts from users into detailed text, are also concerning.

"When you prompt systems like ChatGPT, what you're getting back is not an organized, curated librarian reference material," she said in an interview.

"It's the Wild West, what you're getting; not knowing what the source is, not knowing how it was created."

Kosseim's remarks come as Canada marks Data Privacy Week, an occasion that coincides with rapid advances in AI that have made the technology a talking point in nearly every industry.

Since OpenAI released ChatGPT in November 2022, a growing number of companies have been exploring how they can deploy AI, while regulators consider how they can protect the public from its risks without quelling opportunities.

Striking the right balance is often on Kosseim's mind as she considers the rapid advances AI has made over the last year.

Those advances are particularly evident when it comes to deepfakes — videos, audio clips or photos in which technology is used to make someone appear to say or do things they have not said or done.

"This is where conspiracy theorists and other disinformants are just too ready to pounce in and fill those information gaps with disinformation," she said.

Malicious actors have found ways to synthetically mimic executives' voices down to their exact tone and accent, duping employees into thinking their boss is asking them to transfer funds to a perpetrator's account.

Kosseim has said viral fake images such as one depicting an explosion at the U.S. Pentagon that triggered a brief drop in the stock market and a fabricated video of director Michael Moore voicing support for U.S. presidential candidate Donald Trump are "no laughing matter."

Kosseim feels now is the time to address such risks.

"The penny dropped. Data affects every one of us, every organization will be using and integrating AI in its processes. It is the most fundamental paradigm shift of our generation," she said.

"Technology is not just a remote sort of thing that happens in the corporate tech board rooms or laboratories."

Legislators hold this view too. The federal government tabled a bill in June meant to place some regulation around AI.

The bill is expected to be implemented no earlier than 2025. In the meantime, the feds have courted tech companies to agree to a voluntary code of conduct, which asks signatories to screen datasets for potential biases and assess any AI they create for “potential adverse impacts.”

Meanwhile, Ontario has created an AI framework to set out risk-based rules for guiding the public sector's use of AI. Kosseim provided comments during the government's public consultation, when the framework was being developed.

Ontario's Trustworthy Artificial Intelligence Framework includes ensuring AI is not used in secret by disclosing when and how it is being deployed. The framework's other elements include instilling trust in the technology by defining and preventing its risks, and guaranteeing there is a way to challenge decisions made with AI.

But Kosseim says the province needs to go even further.

Since May, she and the Ontario Human Rights Commissioner have been calling on the province to develop and implement effective guardrails on the public sector's use of AI technologies.

While their initial demand didn't outline exactly what the guardrails would be, Kosseim said any mechanisms the province lands on should be "more comprehensive, more robust, granular" and "binding."

"Binding rules backed up by enforcement creates the incentives that organizations need to focus on the right things," she said.

"It's not about punishing them after the fact. It's about encouraging them to pay attention at the front end."

Asked whether she feels her calls will be heeded, Kosseim said, "I'm hopeful. I think they're going to have to."

In response to Kosseim's push, a spokesperson for the Ministry of Public and Business Service Delivery said Ontario has a working group of experts in the sector providing advice on the province's approach to AI.

"Their expertise will help ensure that the Ontario government’s use of AI is responsible, transparent, and accountable," Nicholas Rodrigues said.

This report by The Canadian Press was first published Jan. 25, 2024.

Tara Deschamps, The Canadian Press

Note to readers: This is a corrected story. A previous version stated Patricia Kosseim worked with tech and privacy experts to create an AI framework for Ontario.