2digital.news
    Family of Canadian shooting victim sues OpenAI over attacker’s earlier conversations with ChatGPT

By Mikolaj Laszkiewicz | March 10, 2026

    The lawsuit was filed in the Supreme Court of British Columbia by the family of Maya Gebali, a student who was shot three times during the attack on a school in Tumbler Ridge on February 10, 2026. Eight people were killed and dozens were injured in the shooting. The attacker, 18-year-old Jesse Van Rootselaar, died at the scene.

    According to the complaint, the attacker had used ChatGPT months before the attack to describe violent scenarios involving firearms. Internal monitoring systems reportedly flagged these conversations as potentially dangerous, and the user’s account was later suspended. However, the company concluded that the activity did not indicate “credible or imminent planning,” and therefore law enforcement was not notified.

    Maya Gebali’s family argues in the lawsuit that OpenAI possessed information suggesting a risk of real-world violence but failed to take sufficient action. Court documents state that the chatbot allegedly served the attacker as a “confidante, collaborator and ally” in developing the scenario of the attack.

Maya Gebali was struck by three bullets; one caused brain damage, resulting in severe and permanent neurological injuries. According to the lawsuit, the consequences include serious cognitive and physical impairments that may affect her for the rest of her life.

The case has reignited debate about the safety of generative artificial intelligence systems. Chatbots typically include mechanisms to detect dangerous content, but as this incident suggests, such safeguards do not always lead to action beyond account suspension. In the Tumbler Ridge case, the user's activity was flagged by monitoring systems but never reported to police.

    The legal proceedings against OpenAI are only beginning, and the allegations presented in the complaint have not yet been examined by the court. However, the case may become one of the first major tests of legal responsibility for developers of generative AI systems in connection with real-world acts of violence.
