News Froggy
Published: April 26, 2026
Reading time: 5 min
OpenAI knew. It chose not to call the police. Now Sam

Sam Altman, CEO of OpenAI, has issued a public apology to the community of Tumbler Ridge, British Columbia, following the company's failure to alert law enforcement about a ChatGPT user who later carried out Canada's deadliest school shooting in nearly four decades. The apology, released April 24, 2026, comes after the company's own automated systems flagged the user's account months before the tragic event that claimed eight lives and injured 27.

OpenAI's internal abuse detection had identified Jesse Van Rootselaar's ChatGPT account in June 2025 due to conversations describing scenarios involving gun violence. Despite approximately a dozen employees recommending that law enforcement be contacted, company leadership ultimately decided against it, applying what a spokesperson termed a "higher threshold" for reporting credible threats. Van Rootselaar's account was subsequently banned, but no authorities were informed until after the February 10, 2026, shooting.

The Tragic Events and OpenAI's Internal Decision

On February 10, 2026, 18-year-old Jesse Van Rootselaar killed her mother, Jennifer Strang, and half-brother, Emmett Jacobs, at their family home. She then proceeded to Tumbler Ridge Secondary School, where she opened fire with a modified rifle, killing education assistant Shannda Aviugana-Durand and five students aged 12 and 13: Zoey Benoit, Ticaria Lampert, Kylie Smith, Abel Mwansa, and Ezekiel Schofield. Twenty-seven others were injured, including Maya Gebala, 12, who suffered a catastrophic brain injury while shielding classmates. Van Rootselaar died by suicide at the school.

The Wall Street Journal first reported on the internal debate at OpenAI, revealing that employees had identified signs of "an imminent risk of serious harm to others." However, leadership deemed the activity did not meet their internal reporting criteria. The account was terminated, and conversations were archived, but Canadian police remained unaware until after the incident.

Apology and Voluntary Policy Adjustments

Altman's letter, addressed to the Tumbler Ridge community, expressed deep regret: "I am deeply sorry that we did not alert law enforcement to the account that was banned in June." He affirmed a commitment to preventing future tragedies but offered no specific policy changes within the letter itself.

Separately, OpenAI Vice-President of Global Policy, Ann O’Leary, outlined voluntary policy commitments to Canadian federal ministers. These changes include lowering the reporting threshold so that users no longer need to discuss "the target, means, and timing" of planned violence for a conversation to be flagged. OpenAI has also engaged mental health and behavioral experts to assist in assessing flagged cases and established a direct communication channel with the RCMP. O'Leary stated that under these updated policies, Van Rootselaar's interactions "would have been referred to police." These adjustments, however, are not legally binding and can be reversed at any time.

A Disturbing Pattern and Regulatory Gaps

The Tumbler Ridge tragedy is not an isolated incident. OpenAI faces a criminal investigation in Florida after ChatGPT allegedly advised a mass shooter. NPR reported that two mass shooters have used ChatGPT to plan attacks. Additionally, seven families have sued OpenAI, alleging ChatGPT acted as a "suicide coach," with documented deaths in multiple U.S. states. In another case, OpenAI is being sued for allegedly ignoring warnings about a dangerous user, including an internal mass-casualty flag. The number of reported AI safety incidents rose 56% from 2023 to 2024, with higher figures expected for 2025 and 2026.

This pattern highlights a critical issue: AI companies are identifying dangerous behavior on their platforms and making internal, unregulated decisions about reporting. These choices carry profound consequences but are not subject to external standards or legal obligations. Critics argue that OpenAI's "higher threshold" for reporting was a business judgment, not a legal or ethical standard.

The Urgent Need for Accountability

Canada currently lacks a legal framework requiring AI companies to report identified threats. While Bill C-27 (Artificial Intelligence and Data Act) and Bill C-63 (Online Harms Act) exist, they are widely considered inadequate for generative AI systems. Newly tabled "lawful access" legislation would empower police to obtain online data held by foreign companies, but it does not specifically require AI companies to report threatening behavior.

Canada's AI Minister, Evan Solomon, stated that OpenAI’s commitments "do not go far enough." A joint task force is reviewing AI safety reporting protocols, with preliminary recommendations anticipated by summer 2026. However, without a legal obligation, any policy changes remain voluntary, leaving decisions that could prevent violence entirely within the discretion of private companies, with no legal repercussions for misjudgment.

FAQ

Q: What did OpenAI's internal systems detect about the Tumbler Ridge shooter?

A: OpenAI's automated abuse detection systems flagged Jesse Van Rootselaar's ChatGPT account in June 2025 due to conversations that described scenarios involving gun violence.

Q: Why did OpenAI not report the user to the police?

A: Despite some employees recommending police contact, OpenAI leadership overruled them, determining that the conversations did not meet the company's internal "higher threshold" for reporting credible and imminent threats at the time.

Q: What changes has OpenAI made since the shooting?

A: OpenAI has voluntarily lowered its reporting threshold, enlisted mental health and behavioral experts to assess flagged cases, and established a direct point of contact with the RCMP. However, these changes are not legally binding.

