Canadian officials grill OpenAI after it banned, but didn’t report, B.C. mass shooter



Summary

OpenAI's response

OpenAI banned the shooter from using ChatGPT last year after receiving concerning messages, though the company did not specify what those messages were.

Canadian government investigation

Canadian officials, including the country's minister of AI, Evan Solomon, held a meeting with OpenAI leaders earlier this week.

Mandatory reporting laws

In Canada, mandatory reporting means certain professionals like psychiatrists and clergy members must alert law enforcement to child abuse and neglect.


Full story

Revelations that an artificial intelligence company knew about plans for a mass shooting in Canada months in advance and chose not to report them to authorities have exposed a conflict between safety and privacy in a burgeoning technology.

Following a mass shooting in Canada that left eight people dead and dozens wounded, Canadian officials want to know what AI companies knew about the shooter. Jesse Van Rootselaar, 18, killed two members of her family before killing six people at a school, including five children.


Last year, that shooter was banned from using OpenAI’s ChatGPT after the chatbot received concerning messages. OpenAI has not specified what those messages were, and it chose not to relay its findings to law enforcement.

Canada probe

Minister of AI Evan Solomon and other Canadian officials met with OpenAI leaders earlier this week — a meeting the Canadians would later call “disappointing,” according to Politico.

Members of Canadian Prime Minister Mark Carney’s cabinet hoped to learn more about why law enforcement wasn’t notified about the shooter’s account.

“I’m not surprised by that at all,” Laura Huey, professor of sociology at the University of Western Ontario, told Straight Arrow News. “That would make perfect sense. The reality, though, is where we’re at with artificial intelligence, we needed to be having discussions with tech companies like years ago. We are almost always behind the curve when it comes to new technologies, and AI is just one classic example of that.”

Canadian leaders also reportedly wanted clearer explanations of OpenAI’s safety protocols and of when information is shared with law enforcement.

“There’s only so much that can ever come out of a meeting like that,” Emily Laidlaw, Canada Research Chair in cybersecurity law and associate law professor at the University of Calgary, told SAN.

Solomon met with OpenAI’s head of policy and six others from the company, but the Canadian officials were reportedly not happy with what they heard.

When asked by a CBC News reporter if he heard anything troubling in the meeting, one federal minister simply replied, “Yes.”

Mandatory reporting

“The government has an important role to play,” Laidlaw said. “They have decisions to make about what laws they want to pass and if they want to implement any sort of mandatory reporting requirements, and what that would look like.”

In Canada, mandatory reporting means certain professionals, like psychiatrists and clergy members, must alert law enforcement to child abuse and neglect.

It does not apply to AI companies.

“They can create their own internal policies about how to handle things and decide when and where they want to share information with law enforcement,” Huey said.

Without specific guidelines in place, Laidlaw said the decision is up to the companies.

“It’s really left to them to figure out,” she said.

Huey added that law enforcement typically needs to approach AI or social media companies first. The shooter also had a history of concerning behavior on social media.

“Police get notification that there’s some interesting information that’s out on social media platforms, on an AI platform, etc., and they would then approach the company and ask for access,” Huey said. “And the company might turn around and require them to go before a judge to get a production order to get access to that information. That happens a lot.”

In recent years, some Canadian lawmakers have tried to alter those laws, including a 2024 measure that would have changed who is subject to mandatory reporting requirements regarding child pornography.

The bill would have reformed mandatory reporting, extending the laws to all types of internet services and expanding reporting obligations. However, those changes still focused mostly on forms of child abuse.

“Canada is just as polarized in many ways as the U.S. right now,” Laidlaw said. “And so, when a bill was introduced, there were talks of censorship, there were important criticisms of the bill itself, it just kind of got swept up into that polarized debate.”

When the Canadian Parliament was prorogued last month, the bill officially died.

“There’s plans to reintroduce the bill in some form,” Laidlaw said. “So, I would expect that, both, chatbots will be scoped in which they were not before and that mandatory reporting of some sort might be added.”

AI policy and regulation

With OpenAI able to set its own policy on this, the company told Politico that it considered reporting the shooter’s behavior to police but ultimately opted not to.

“We’ve committed to follow up in the coming days with an update on additional steps we’re taking, as we continue to support law enforcement and work with the government on strengthening AI safety for all Canadians,” the company told Politico.

Right now, Canada has few laws on the books regulating AI companies compared with the U.S.

“We haven’t really tackled this issue, and we don’t have the same litigious culture here,” Huey said. “So, there isn’t going to be as much case law.”

If Canada does pass stricter regulations, they could be hard to enforce, especially because OpenAI is an American company.

“There was a Supreme Court of Canada judgment that required Google to delist worldwide some search results, and a lower court in California held that it was unenforceable based on Section 230, so that challenge about enforceability is very real,” Laidlaw said.

For decades, social media companies have used Section 230 to skirt liability for content posted on their sites, including in an ongoing trial against Meta. So far, there is no precedent on how it applies to AI providers.

Canada experiences significantly fewer mass shootings per year than the U.S. This latest one shook much of the country, and Laidlaw said the new information about the shooter’s AI history could drive new regulations.

“This might be a bit of a sea change with what happened,” she said. “Because it’s such a horrific tragedy that it just sheds this light on the role of these companies and made the public acutely aware of their decision-making power and that so much rests on their shoulders.”

Huey isn’t so sure.

“Who’s going to stand behind and champion increased police access to information about your personal stuff on the internet if there aren’t as many groups that are willing to speak up, and the public is not as sympathetic to that, even in situations where you have a significant mass shooting?” she asked.



Why this story matters

Canadian officials are questioning whether AI companies should be legally required to report threatening user behavior to police after a banned ChatGPT user killed eight people, but no such obligation currently exists in Canada or the U.S.

AI companies set their own rules

OpenAI and other AI platforms decide internally when to share user information with law enforcement, with no legal requirement to report threatening behavior.

Police must request access first

Law enforcement agencies must approach AI companies for user data and may need a court order to obtain it, rather than receiving proactive alerts.

Proposed reporting laws remain stalled

A Canadian bill that would have expanded mandatory reporting requirements for internet services died when parliament prorogued and has not been reintroduced.

Get the big picture

Behind the numbers

OpenAI banned Van Rootselaar's account in June 2025, seven months before the Feb. 10, 2026 shooting. Eight people were killed, including five students ages 12 to 13 and a 39-year-old teaching assistant.

Community reaction

British Columbia Premier David Eby stated he was angry, saying it looks like OpenAI had the opportunity to prevent the tragedy. B.C. Green Party Leader Emily Lowan said the reports sickened her and called OpenAI's actions wildly irresponsible.

Policy impact

Canadian officials are considering regulatory options for AI chatbots, with all options on the table according to Minister Solomon. British Columbia Premier Eby called for national standards for AI companies on reporting potential threats to prevent similar tragedies.


