ChatGPT sees one million users in mental distress each week, APS in The Daily Aus

Artificial Intelligence (AI) | Suicide prevention | Youth mental health

This article is featured in The Daily Aus and is republished with permission.

OpenAI, the company behind ChatGPT, has released new data on users’ mental health, and announced it will modify how the bot responds to psychological issues.

Its data shows around 1.2 million users talk to the bot about suicidal ideation each week.

It comes amid criticism of the company over users' deaths by suicide, including from the family of a U.S. high school student, who has now launched a lawsuit.

The family alleges ChatGPT guided the teenager to take his own life.

New Stats

This week, OpenAI released data on ChatGPT users’ mental health.

The chatbot has more than 800 million weekly users.

OpenAI found around 0.07% of its active weekly users (560,000 people) show “possible signs of mental health emergencies related to psychosis or mania.”

Psychosis Australia says the condition involves “alterations of thinking, beliefs, feelings and emotions, motivation and perception.”

According to Healthdirect, symptoms of mania include “grandiose ideas, increased energy... along with a reduced need to sleep.”

OpenAI also said that each week, around 0.15% of its users (1.2 million people) have conversations with ChatGPT “that include explicit indicators of potential suicidal planning or intent”.

In a blog post, OpenAI said conversations with ChatGPT that spark safety concerns are “extremely rare” and said mental ill-health is “universally present in human societies”.

OpenAI said it hopes to strengthen ChatGPT’s response to signs of distress.

Its latest update follows a five-step process aimed at better managing risk when users express concerning thoughts.

OpenAI says its most recent ChatGPT model, GPT-5, responds appropriately to mental health concerns more often than previous versions.

Lawsuit

The family of a U.S. 16-year-old who died by suicide has recently taken legal action against OpenAI.

California high school student Adam Raine died in April this year.

In court documents, Adam’s parents said ChatGPT “pushed Adam deeper into... behaviours that ultimately... facilitated his suicide.”

They provided examples of ChatGPT appearing to suggest suicide methods and offering to write a suicide note to his family.

The documents also include data from Adam’s conversations with the chatbot.

Both Adam and the bot mentioned suicide almost 1,300 times.

The teen spent almost four hours on the platform each day in the lead-up to his death.

His parents believe OpenAI launched the previous version of ChatGPT prematurely, “prioritising a rushed market release over the safety of vulnerable users”.

Response

Australian Psychological Society CEO Dr Zena Burgess told TDA: “AI interventions are best used as ways to complement, not replace, vital human interactions.”

Dr Burgess is concerned about AI models being potentially “harmful because of biases and gaps,” compared to psychologists with “very high levels of education, training and professional ethics.”

She warned that these errors can be “very dangerous, especially in crisis”.


Lifeline: 13 11 14