OpenAI’s ChatGPT Encouraged School Shooter to Target Children
May 11, 2026

OpenAI’s products endanger public safety; Congress must act

The Tech Oversight Project called on Congress to act after the new revelation that OpenAI’s ChatGPT advised the culprit in the deadly Florida State University attack that a mass shooting would get more attention from the media if it involved several children.

“While Congress should continue quickly advancing the GUARD Act to protect young people from predatory AI chatbots, the revelation that OpenAI’s ChatGPT advised a mass shooter to target children shows that we need protections and oversight across the board, even beyond protections for kids. Without guardrails in place for everybody, AI companies are putting targets on the backs of children and teens. As long as OpenAI continues to actively oppose regulation, Members of Congress should treat OpenAI as a foe, not a partner, in the effort to protect our kids and reduce violence in America,” said Sacha Haworth, Executive Director of The Tech Oversight Project.

ChatGPT has been linked to the deaths of children: OpenAI faces a growing number of lawsuits alleging ChatGPT interactions contributed to suicide, delusions, and severe psychological distress among users, including minors. Last year, the parents of 16-year-old Adam Raine sued OpenAI and CEO Sam Altman, alleging that ChatGPT contributed to their son’s death by suicide, including by advising him on methods and offering to write the first draft of his suicide note. According to the lawsuit, the chatbot “positioned itself” as Adam’s “only confidant,” actively displacing his real-life relationships with family and loved ones. OpenAI’s response was to blame the victim and his family. Since then, OpenAI and Sam Altman have been hit with seven additional lawsuits alleging psychological harms, negligence, and wrongful deaths of family members who died by suicide after interacting with ChatGPT.

OpenAI’s long track record of sidelining safety: The OpenAI Files, a joint report by The Tech Oversight Project and the Midas Project, documents evidence from over 200 sources, including testimonies from dozens of ex-employees, describing efforts to silence criticism through restrictive NDAs and threats to vested equity; multiple failures to live up to past safety commitments; and a pattern of prioritizing product launches over responsible development. The ongoing Musk v. Altman trial has introduced new facts and evidence that further substantiate allegations of longstanding dishonesty and safety failures at OpenAI.
