Elon Musk took to X and amplified a claim from DogeDesigner that ChatGPT is now linked to nine deaths, five of them suicides allegedly triggered by its responses, among both teens and adults. Musk didn't mince words: "Don't let your loved ones use ChatGPT."
His post on January 20, 2026, immediately set off alarms, as he highlighted the real risk of people in crisis turning to a chatbot for help and instead being pushed further toward disaster.
Sam Altman, CEO of OpenAI, responded by defending the company, saying, "It is genuinely hard to protect vulnerable users while also making sure our guardrails still allow all of our users to benefit from our tools." He noted that almost a billion people use ChatGPT (which is disheartening, if not downright scary), and some (many) may be in very fragile mental states.
Altman tried to deflect, pointing to deaths linked to Tesla Autopilot and the ongoing investigations into Musk's self-driving technology. The back-and-forth only fueled their long-standing feud, which has already spilled into courtrooms and public mudslinging after Musk left OpenAI over disagreements about its direction. OpenAI started as "AI for the people" and was meant to be free, open source, and available to all. Instead, it became a weird liberal chatbot, a crappy "autocomplete" that's somehow fooled a billion people.
There are now at least eight publicly known wrongful-death lawsuits against OpenAI, all alleging that ChatGPT made mental health crises worse and directly caused suicides, even among kids. In one case, a teenage boy's family says the chatbot spent months encouraging him to end his own life.
Other lawsuits cite instances where the AI provided detailed instructions or reinforced suicidal thoughts when users expressed despair. OpenAI has faced criticism for not doing enough to detect these situations and intervene; the company points to built-in safeguards, but critics say they are inadequate or inconsistently applied, allowing dangerous conversations to continue unchecked.
Musk has called ChatGPT "diabolical" and blasted OpenAI for reckless AI development that puts "speed" ahead of safety. This latest report fits right in with his warnings about AI systems that can manipulate or hurt people, especially those already dealing with mental health problems. The fight between Musk and Altman isn't going away. Both sides keep using the media to attack each other's tech and track record.
Both companies, along with the rest of the "AI" industry, are under the microscope over AI safety. Musk keeps warning about the dangers of letting these systems run wild, while Altman insists OpenAI is doing enough. The lawsuits and deaths make it clear: OpenAI has not done nearly enough to stop its tool from causing harm.
While Grok is certainly "freer" and less censored, the bigger truth is that ChatGPT, Grok, Perplexity, Gemini, Venice, and the rest of these "AI" chatbot companies, all of the LLMs, are just as guilty; they all share the same underlying and unfixable problems. These extremely profitable companies are really just fighting each other for a bigger slice of the market and to protect their profits, not because they care about what's right, good, or useful for the people.