Following last month's mass shooting in Tumbler Ridge, B.C., questions are mounting about what artificial intelligence companies should do when users post disturbing content online.
It comes after OpenAI, the company behind ChatGPT, acknowledged it flagged and banned an account belonging to 18-year-old Jesse Van Rootselaar about half a year before she killed eight people, most of them children, and then herself on Feb. 10.
The U.S. tech company said it did not alert police at the time because the account's activity in June 2025 didn't meet the "higher threshold required."
OpenAI’s response prompted anger and frustration from provincial and federal officials, including from B.C. Premier David Eby, who said the tragedy might have been prevented had the company alerted authorities earlier.
However, some experts say knowing when to flag a user interacting with a chatbot is complicated.
OpenAI has said that Van Rootselaar's account was detected via automated tools and human investigations that "identify misuses of our models in furtherance of violent activities."
The account was banned but the company said the content did not meet its internal standard for referral to law enforcement, which requires signs of "imminent and credible risk" of serious physical harm.
After the shooting, OpenAI discovered the teen had created a second account and gotten around the ban. It said that after learning of the shooting, it proactively reached out to RCMP with information on Van Rootselaar.
What exactly the 18-year-old discussed with ChatGPT has not been disclosed, nor is it known what the chatbot said in response.
Canada does not currently have a regulatory framework specific to AI.
While existing laws in areas such as health and criminal activity apply to certain uses of AI, there is no federal law requiring AI companies to report potentially violent users to police.
Alan Mackworth, a professor emeritus of computer science at the University of British Columbia, says reporting standards are voluntary and set by individual companies.
“We just can't rely on the companies to voluntarily stand up,” Mackworth said. “There needs to be some public accountability by having a regulatory agency with enforcement powers to check standards.”
The UBC professor says Canada is behind the European Union, which passed its AI Act in 2024, and the United Kingdom, which has enacted the Online Safety Act.
Canada’s Liberal government introduced an online harms bill in 2024, which would have imposed new requirements on social media companies and created an online regulator. But the bill never became law because the 2025 election was called.
Mackworth argues that AI companies should have something like a “duty to report,” similar to teachers or doctors who are legally required to report suspected harm to a minor.
OpenAI has made several commitments in the wake of the tragedy, including establishing a direct point of contact with Canadian law enforcement, upgrading its model so the company can direct users to local mental health supports when warranted, and strengthening its detection systems.
According to the tech firm, under its updated referral system it "would refer the account banned in June 2025 to law enforcement if it were discovered today."
Moira Aikenhead, a lecturer at UBC’s Peter A. Allard School of Law, cautions against assuming that reporting conversations with AI would necessarily have stopped the tragedy.
“People in the wake of tragedies want answers," she said. “But, when we're looking at creating a new digital policy, we need to be really cautious that we avoid knee-jerk reactions.”
ChatGPT is not a public forum but a private interaction between a user and a company, and the UBC lecturer says that if companies began reporting private queries, many Canadians would have serious privacy concerns.
Context is another issue Aikenhead raises: people can ask chatbots almost anything without really intending to cause harm.
“You can have kids typing in ‘How could I commit the perfect crime?’ out of curiosity to see what ChatGPT says,” she added. “That could potentially put this child on the RCMP's radar.”
Aikenhead argues that if reporting thresholds are expanded, they must be transparent and clearly defined and shaped by government regulation, not private firms.
Even with regulation, experts say there are technical limits.
Vered Shwartz, an assistant professor at UBC who specializes in AI, says tech companies review massive volumes of conversations, and judging whether content reflects fantasy, curiosity or real intent is not straightforward.
“The question of reporting someone before something happens is a very hard question,” Shwartz said. “It's kind of similar to how the police can't arrest someone unless they have grounds to believe that they are going to commit a crime.”
She says that users could be wrongly flagged, and gave the example of a father whose account was disabled by Google after a photo he sent of his infant son to a doctor was flagged as “harmful content.”
Artificial Intelligence Minister Evan Solomon says OpenAI's recent commitments to adjust its policies do not go far enough.
Solomon says he is meeting with OpenAI CEO Sam Altman this week to seek further clarity on stronger safety protocols.
He says he will also sit down with other major tech companies, adding that all regulatory options remain on the table.