HAMILTON, Canada
Families of seven victims of a deadly school shooting in Canada’s British Columbia filed lawsuits Wednesday against OpenAI and CEO Sam Altman, alleging ChatGPT helped the shooter plan the February massacre at Tumbler Ridge Secondary School.
The lawsuits allege that the violent intentions of the shooter, identified as Jesse Van Rootselaar, 18, were well known to OpenAI.
OpenAI employees flagged the shooter’s account eight months before the attack, determined it posed a credible threat of gun violence and urged senior leadership to notify Canadian authorities. Instead of warning law enforcement, the company deactivated the account, the suits allege.
Van Rootselaar then created a new account by following OpenAI’s own published instructions for deactivated users and continued using ChatGPT to plan the attack.
On Feb. 10, Van Rootselaar killed her mother and 11-year-old brother at their family home, then drove to the school and opened fire, killing five students and a teaching assistant and wounding 27 others before dying by suicide. The victims included children aged 12 and 13.
The families accuse OpenAI and Altman of negligence, aiding and abetting a mass shooting, wrongful death and product liability.
According to a report in The Guardian newspaper, lead attorney Jay Edelson said, “The fact that Sam and the leadership overruled the safety team, and then children died, adults died, the whole town was ruined, is pretty close to the definition of evil to me.”
On April 23, Altman sent a letter to the Tumbler Ridge community apologizing for not notifying Canadian police.
“While I know words can never be enough, I believe an apology is necessary,” he wrote.
In a post on the US social media platform X, British Columbia Premier David Eby called the apology “grossly insufficient for the devastation done to the families of Tumbler Ridge.”
In a statement Tuesday, OpenAI said it has “zero tolerance” for the use of its tools to assist violence and has already strengthened its safeguards.
“We work to train ChatGPT to recognize the difference—and to draw lines when a conversation starts to move toward threats, potential harm to others, or real-world planning,” it said.
