OpenAI’s announcement that it will introduce parental controls for ChatGPT comes after The New York Times reported that Matthew and Maria Raine, parents of 16-year-old Adam Raine, filed a lawsuit against OpenAI and CEO Sam Altman in San Francisco.
They allege the chatbot validated Adam’s suicidal thoughts, provided instructions on methods of self-harm, and even drafted a suicide note. According to the complaint, ChatGPT also coached Adam on how to conceal his attempts from his parents. Adam died on 11 April.
OpenAI plans to expand interventions to more forms of mental distress, make it easier for users to reach emergency services with one-click access, and even explore connecting people directly with licensed therapists through ChatGPT.
For younger users, new parental controls will be introduced, allowing parents to monitor and shape how teens use the chatbot. OpenAI is also considering giving teens, under parental supervision, the option to designate trusted emergency contacts who could be alerted during moments of acute crisis.
The company said, “Our goal is for our tools to be as helpful as possible to people, and as a part of this, we’re continuing to improve how our models recognise and respond to signs of mental and emotional distress and connect people with care, guided by expert input”.
The company also said it is working with more than 90 doctors across 30 countries and will continue to seek expert guidance. “Our top priority is making sure ChatGPT doesn’t make a hard moment worse,” OpenAI wrote, adding that safety research and improvements will remain ongoing.
In the blog, OpenAI acknowledges that ChatGPT is being used for more than search, coding, and writing. The company noted that people are now turning to it for “deeply personal decisions that include life advice, coaching, and support.” It also said its models have already been trained not to provide self-harm instructions and to point users toward appropriate support.
Despite these efforts, the company recently acknowledged a shortcoming: its safeguards can become “less reliable” over long conversations, and ChatGPT may fail to provide the right support.
However, the company claims it is taking steps to strengthen protections. OpenAI says that since 2023, ChatGPT has been trained to avoid providing self-harm instructions, instead responding with empathetic language and pointing users toward crisis resources. In the United States, it refers people to the 988 suicide and crisis lifeline, while in the UK it directs users to Samaritans. Similar helplines are provided elsewhere through findahelpline.com.