Seven lawsuits were filed against OpenAI in a U.S. court on Wednesday on behalf of families impacted by the February mass shooting in the small Canadian mining town of Tumbler Ridge.
The artificial intelligence behemoth has faced intense criticism over its decision not to report the troubling ChatGPT usage of Jesse Van Rootselaar, the 18-year-old transgender woman who killed eight people at her home and a school.
OpenAI banned her account in June 2025 but said it did not report the account to Canadian police because it saw no evidence of an imminent attack.
The lawsuits filed in a U.S. federal court in California allege OpenAI decided not to report Van Rootselaar “because reporting one case would mean reporting thousands,” a statement from the legal team said.
The lawsuits also challenge the assertion that Van Rootselaar’s ChatGPT account was actually banned.
They allege that when an account is shut down for dangerous behaviour, OpenAI instructs the individual on how to resume usage, including tips on how to circumvent the 30-day suspension period.
“OpenAI also tells users that if they don’t want to wait, they can open a new account immediately using a different email address,” the statement said.
Van Rootselaar reportedly opened a second ChatGPT account after her first one was shut down.
The U.S. legal team said it is working in co-ordination with Canadian lawyers who had previously filed a lawsuit against OpenAI on behalf of the family of Maya Gebala, a 12-year-old gravely injured in the shooting.
But the U.S. actions will “supersede” the Canadian case, Wednesday’s statement said.
“There are more cases to come. Over the next several weeks, a cross-border team … will be filing over two dozen cases on behalf of the victims of the Tumbler Ridge mass shooting. The lawsuits will be filed in waves,” it added.
OpenAI CEO Sam Altman apologized to the remote community of Tumbler Ridge earlier this month, saying he was "deeply sorry that we did not alert law enforcement to the account that was banned in June."
The company has also said that under its current safety policies, which have been revised since June, Van Rootselaar's conduct would have been flagged to police.
Asked to comment on Wednesday’s legal filing, an OpenAI spokesperson said: “We have a zero-tolerance policy for using our tools to assist in committing violence. As we shared with Canadian officials, we have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress.”
Van Rootselaar killed her mother and brother at the family’s home before heading to the local secondary school, where she shot dead five children and a teacher.
She died of a self-inflicted gunshot wound after police entered the building.
