
OpenAI Rolls Out Revised Safety Measures for Teenage ChatGPT Users

OpenAI has enforced strict guidelines for minors covering mental health matters, including suicide, as well as sexually suggestive discussions.

New safety regulations implemented by OpenAI for the use of ChatGPT by teenage users.


In a move aimed at ensuring teen safety, OpenAI has implemented stricter safeguards for young users of its AI chatbot, ChatGPT. The decision comes in the wake of a wrongful-death lawsuit filed by the parents of Adam Raine, a teenager who died by suicide after extended interactions with ChatGPT.

On the same day that OpenAI announced these changes, the Senate Judiciary Committee held a hearing titled 'Examining the Harm of AI Chatbots.' Senator Josh Hawley (R-MO) scheduled the hearing in August, and Adam Raine's father is set to speak at the event. The hearing will review findings from a Reuters investigation that uncovered policy documents suggesting that another tech company, Meta, had allowed sexual conversations with minors through its chatbot.

Under the new rules, ChatGPT will apply tighter restrictions on conversations about suicide, self-harm, and sexual topics. The chatbot will no longer respond with flirtatious or suggestive talk when interacting with underage users. OpenAI is urging parents to link teen accounts to their own for age verification purposes, which allows the system to send direct alerts if it detects signs of serious distress.

Parents will soon have more control over how their teens use ChatGPT, including setting 'blackout hours' that block access to the service at specific times. If a minor uses ChatGPT to imagine harmful scenarios, the system may alert their parents. OpenAI is also developing an automatic age-detection system that estimates a user's age by analysing usage behaviour. Users identified as under 18 will receive a restricted version of ChatGPT that blocks content related to sexual topics, suicide, and self-harm, and in some cases additional identity verification may be required.

In cases where the system cannot determine a user's age, it assumes the user is a minor by default and applies stricter safeguards. This system is still under development and not yet in operation.

The developments at OpenAI are not unique. Another AI company, Character.AI, faces a similar lawsuit. And after the Reuters report came to light, Meta tightened its rules regarding such conversations. The Senate hearing will provide a platform for discussing the responsibilities and challenges AI companies face in protecting minors online.
