    Family of Canadian shooting victim sues OpenAI over attacker’s earlier conversations with ChatGPT

By Mikolaj Laszkiewicz | March 10, 2026 | 2 min read

The lawsuit was filed in the Supreme Court of British Columbia by the family of Maya Gebali, a student who was shot three times during an attack on a school in Tumbler Ridge on February 10, 2026. Eight people were killed and dozens were injured in the shooting. The attacker, 18-year-old Jesse Van Rootselaar, died at the scene.

    According to the complaint, the attacker had used ChatGPT months before the attack to describe violent scenarios involving firearms. Internal monitoring systems reportedly flagged these conversations as potentially dangerous, and the user’s account was later suspended. However, the company concluded that the activity did not indicate “credible or imminent planning,” and therefore law enforcement was not notified.

    Maya Gebali’s family argues in the lawsuit that OpenAI possessed information suggesting a risk of real-world violence but failed to take sufficient action. Court documents state that the chatbot allegedly served the attacker as a “confidante, collaborator and ally” in developing the scenario of the attack.

Gebali was struck by three bullets; one of them caused brain damage, leading to severe and permanent neurological injuries. According to the lawsuit, the consequences include serious cognitive and physical impairments that may affect her for the rest of her life.

The case has once again sparked debate about the safety of generative artificial intelligence systems. Chatbots do include mechanisms to detect dangerous content, but as this incident suggests, such safeguards do not always lead to any action beyond account suspension: in the Tumbler Ridge case, the user's activity was flagged by monitoring systems but never reported to police.

    The legal proceedings against OpenAI are only beginning, and the allegations presented in the complaint have not yet been examined by the court. However, the case may become one of the first major tests of legal responsibility for developers of generative AI systems in connection with real-world acts of violence.
