OpenAI has announced plans to roll out parental controls for its chatbot, ChatGPT, following allegations that the system encouraged a teenager to take his own life.

The company said parents will soon be able to link their accounts with their teens' accounts and set age-appropriate behaviour rules for the chatbot.
The update, expected within the next month, will also notify parents if ChatGPT detects that a teen may be in acute emotional distress.
The move comes after a California couple, Matthew and Maria Raine, filed a lawsuit claiming that ChatGPT gave their 16-year-old son detailed instructions on suicide and encouraged him to go through with it.
OpenAI acknowledged the case and confirmed it is working to improve how its models respond to signs of mental or emotional crisis.
The company added that it will enhance chatbot safety over the next three months, including redirecting sensitive conversations to a more advanced reasoning model designed to better enforce safety protocols.