OpenAI has launched parental controls for ChatGPT, giving parents more oversight of how teenagers use the AI system. The new features are available on the web and will be added to mobile soon.
The controls allow parents to limit sensitive content, disable memory, turn off image or voice features, and set “quiet hours” when teens cannot access the chatbot. The system includes several key features:
Account Linking: Parents can link their accounts with a teen’s profile to manage safety settings.
Sensitive Content Control: Parents can block content such as sexual roleplay, violent challenges, or harmful beauty ideals.
Disable Memory: Parents can turn off chat memory, along with image and voice tools.
Quiet Hours: Parents can restrict teen access during specific times of day.
Safety Alerts: Parents receive notifications if the system detects potential safety risks like self-harm.
Stronger Default Protections: Teen accounts automatically block unsafe or mature content.
Age-Prediction System: A new system to better detect and protect underage users.
Parents must create their own accounts and link them to a teen’s profile to activate the settings. They cannot view private conversations but may receive alerts if the system detects potential safety risks. Parents can also decide whether their teen’s chats are used to train OpenAI models and select how to receive safety notifications through email, SMS, or push alerts. The rollout comes after increased scrutiny of AI’s impact on minors.
In one case, a 16-year-old died by suicide after extensive conversations with ChatGPT. His death led to lawsuits, a Senate hearing, and questions about AI companies’ responsibility toward young users. During the hearing, his father said the chatbot shifted from being a study aid to what he called “a suicide coach.”
OpenAI’s CEO has said the company is working to balance teen privacy and safety, adding that new measures, such as an age-prediction system, are under development. With the new parental controls, OpenAI has taken a step toward addressing safety concerns, but debate continues over how much responsibility AI platforms should carry in protecting minors.