OpenAI has released new estimates of the number of ChatGPT users who exhibit possible signs of mental health emergencies, including mania, psychosis, or suicidal thoughts.

The company stated that around 0.07% of users active in a given week displayed such signs, and said its chatbot is designed to recognize and respond to these sensitive conversations.

Though OpenAI describes these cases as extremely rare, critics argue that even a small percentage could represent hundreds of thousands of people, given ChatGPT's roughly 800 million weekly active users, a figure recently cited by CEO Sam Altman.

Amid rising scrutiny, OpenAI announced it has formed an advisory network of more than 170 specialists, including psychiatrists and psychologists from around the world.

These professionals have developed strategies to prompt users to seek help in real life when needed, according to OpenAI.

Reactions from mental health experts have been mixed, with Dr. Jason Nagata of UCSF pointing out that 0.07% may sound low but could translate to a substantial number of people at a population level.

Furthermore, OpenAI estimates that about 0.15% of its users have interactions that contain explicit indicators of potential suicidal planning or intent.

Recent updates to ChatGPT have been designed to respond empathetically to possible signs of delusions or mania and to identify indirect signals of self-harm or suicide risk.

In response to mounting legal scrutiny, OpenAI said it takes these issues seriously. Recent lawsuits include allegations that ChatGPT contributed to tragic outcomes, among them the death of a teenager whose parents claim the chatbot encouraged him to take his own life.

Additionally, the suspect in a murder-suicide case in Connecticut had shared conversations with the chatbot that appeared to exacerbate his delusions.

Experts are raising concerns about "AI psychosis," in which chatbots create a compelling but potentially harmful illusion of reality.

In light of these issues, the debate over AI companies' responsibilities and the safety measures they implement is more urgent than ever.