The lawsuit was filed in the Supreme Court of British Columbia by the family of Maya Gebali, a student who was shot three times during the attack on a school in Tumbler Ridge on February 10, 2026. Eight people were killed and dozens were injured in the shooting. The attacker, 18-year-old Jesse Van Rootselaar, died at the scene.
According to the complaint, the attacker had used ChatGPT months before the attack to describe violent scenarios involving firearms. Internal monitoring systems reportedly flagged these conversations as potentially dangerous, and the user’s account was later suspended. However, the company concluded that the activity did not indicate “credible or imminent planning,” and therefore law enforcement was not notified.
Maya Gebali’s family argues in the lawsuit that OpenAI possessed information suggesting a risk of real-world violence but failed to take sufficient action. Court documents allege that the chatbot served the attacker as a “confidante, collaborator and ally” in planning the attack.
Gebali was struck by three bullets; one caused brain damage that resulted in severe and permanent neurological injuries. According to the lawsuit, she faces serious cognitive and physical impairments that may affect her for the rest of her life.
The case has once again sparked debate about the safety of generative artificial intelligence systems. In theory, chatbots include mechanisms to detect dangerous content, but as this incident suggests, such safeguards do not always lead to action beyond suspending an account: in the Tumbler Ridge case, the flagged conversations were never reported to police.
The legal proceedings against OpenAI are only beginning, and the allegations presented in the complaint have not yet been examined by the court. However, the case may become one of the first major tests of legal responsibility for developers of generative AI systems in connection with real-world acts of violence.