OpenAI Launches Parental Controls for ChatGPT 

New AI Safety Tools Aim to Protect Teenagers Using ChatGPT


OpenAI has rolled out parental controls for ChatGPT, marking one of its most significant safety updates since the chatbot’s 2022 debut. The launch follows a lawsuit from the family of a California teenager who died by suicide, accusing the company of failing to protect vulnerable users.

The new parental controls, announced Monday, allow parents to limit how teenagers use ChatGPT and receive alerts if the chatbot detects signs of mental distress. The tools, accessible through ChatGPT’s settings, also let adults set usage curfews, block certain features, and restrict sensitive content.

OpenAI said parents can decide whether teens can access ChatGPT’s voice mode, generate images, or let the chatbot reference prior conversations. The controls also give parents the option to turn on a restricted version of ChatGPT that reduces exposure to content related to dieting, sex, or hate speech.

To activate the feature, an adult must send an invitation by email to their child, who then accepts it to establish the parental link. Once connected, parents gain the ability to customize permissions and monitor broader patterns without accessing private conversations.

A central feature of the new update is its emergency alert system, which notifies parents if ChatGPT determines a teenager may be in distress. If the chatbot flags such a case, a human reviewer assesses whether an alert should be sent through email, text message, or app notification.

Lauren Jonas, OpenAI’s head of youth wellbeing, said, “We have felt urgency around this for a while,” adding that the company is moving quickly to release tools like parental controls. She emphasized that alerts are intended to give parents “enough knowledge about a potentially harmful situation to have a conversation with their teenager while still respecting the child’s privacy and autonomy.”

The update comes after the family of 16-year-old Adam Raine sued OpenAI and CEO Sam Altman in August. The lawsuit alleges that ChatGPT systematically isolated Raine from his family and provided guidance that contributed to his death by hanging in April.

The case intensified scrutiny of heavy chatbot use and its potential harms, adding pressure on the San Francisco-based startup to make changes. With more than 700 million users since launch, ChatGPT’s rapid adoption has raised growing concerns about safeguarding young users.


Beyond parental controls, OpenAI is developing software to predict a user’s age, aiming to tailor responses for those under 18. This system would help guide interactions with minors and further limit access to sensitive material.

The parental controls reflect a broader industry shift toward balancing AI innovation with responsibility and regulation. As lawsuits and public concern mount, OpenAI’s move signals its recognition that safety features may determine how widely society accepts AI.
