OpenAI has released new estimates of the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis, or suicidal thoughts.
The company said that around 0.07% of ChatGPT users active in a given week exhibited such signs, adding that its artificial intelligence (AI) chatbot recognizes and responds to these sensitive conversations.
While OpenAI maintains these cases are extremely rare, critics said even a small percentage can amount to hundreds of thousands of people, as ChatGPT recently reached 800 million weekly active users, according to CEO Sam Altman.
As scrutiny mounts, the company said it built a network of experts around the world to advise it.
Those experts include more than 170 psychiatrists, psychologists, and primary care physicians who have practiced in 60 countries, the company said. They have devised a series of responses in ChatGPT to encourage users to seek help in the real world, according to OpenAI.
But the glimpse at the company's data raised eyebrows among some mental health professionals.
Dr. Jason Nagata, a professor who studies technology use among young adults at the University of California, San Francisco, said: "Even though 0.07% sounds like a small percentage, at a population level with hundreds of millions of users, that actually can be quite a few people."
OpenAI also estimates that 0.15% of ChatGPT users have conversations that include explicit indicators of potential suicidal planning or intent. Recent updates to the chatbot are designed to respond safely and empathetically to potential signs of delusion or mania, and to note indirect signals of potential self-harm or suicide risk. The chatbot can also reroute sensitive conversations originating from other models to safer models.
The changes come amid increasing legal scrutiny over the way ChatGPT interacts with users, including lawsuits that implicate the chatbot in tragic outcomes.