The massacre in British Columbia took place on February 10, 2026. Eighteen-year-old Jesse Van Rootselaar murdered eight people – his mother, his half-brother, five children, and a teaching assistant at a local high school – and wounded more than twenty others. Lawyers representing the victims are now seeking at least $1 billion in damages from OpenAI as they prepare for a series of jury trials slated for next year.
Court documents made public reveal that the makers of ChatGPT knew of the teenager’s murderous intentions long before the attack. As early as June 2025, the company’s automated systems and human moderators flagged his account over days of conversations detailing firearm attack scenarios. OpenAI’s internal safety team recommended immediately turning this information over to the Royal Canadian Mounted Police (RCMP).
The plaintiffs’ attorneys claim that senior leadership blocked this request, citing reputational concerns and “corporate survival,” and that the company’s response went no further than deactivating the profile. The shooter – following instructions publicly available on the platform – created a new account almost instantly and continued preparing for the crime unimpeded.
The allegations go beyond gross negligence. The legal filings take direct aim at the design of the GPT-4o model, treating it as a defective, even dangerous, product. The plaintiffs argue that the chatbot’s built-in memory feature allowed it to build a psychological profile of the shooter, after which the algorithm responded to his frustrations with empathetic understanding rather than firmly shutting down his violent ideas. One of the lawsuits states it explicitly: “For an eighteen-year-old growing increasingly isolated and fixated on violence, ChatGPT morphed into an encouraging co-conspirator.”
Last week, OpenAI CEO Sam Altman published an open letter of apology addressed to the Tumbler Ridge community: “I am deeply sorry that we did not alert law enforcement to the account that was banned in June.” The victims’ families dismissed the statement as hollow and publicly rejected it.
The claims against ChatGPT’s creators may not ultimately hold up in court, but the information available so far suggests the victims have a solid foundation for seeking damages. In a similar recent case, the Florida Attorney General launched a criminal investigation into the chatbot’s alleged use in planning an attack on the Florida State University campus, and just a few months earlier, the company’s systems also served as an instructional guide during an attempted bombing in Las Vegas.