OpenAI is fundamentally overhauling its approach to user safety by developing an age-prediction system for ChatGPT. The move comes in the wake of the death of a 16-year-old who took his own life, with his family alleging the chatbot played a role in the months leading up to his suicide. The new system aims to distinguish adult users from minors so the service can provide a safer environment for younger users.
In a recent blog post, CEO Sam Altman announced the company’s decisive shift, stating that OpenAI will prioritize “safety ahead of privacy and freedom for teens.” This principle will guide the deployment of an age-prediction system that analyzes user interaction patterns. If the system has any doubt about a user’s age, it will automatically apply the more restrictive under-18 settings.
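OpenAI has not published implementation details, but the policy Altman describes amounts to a conservative default: any user whose adulthood the model cannot establish with confidence is treated as a minor. A minimal sketch of that decision rule, in which the `AgePrediction` type and the confidence threshold are purely hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AgePrediction:
    """Hypothetical output of an age-prediction model run over interaction patterns."""
    is_adult: bool      # the model's best guess
    confidence: float   # 0.0 to 1.0

# Hypothetical bar: below this, the guess counts as "in doubt".
CONFIDENCE_THRESHOLD = 0.95

def select_safety_profile(prediction: AgePrediction) -> str:
    """Apply the stated policy: when in doubt, default to the under-18 settings."""
    if prediction.is_adult and prediction.confidence >= CONFIDENCE_THRESHOLD:
        return "adult"      # broader conversational freedom
    return "under_18"       # restrictive guardrails applied automatically

# A fairly confident adult guess that still falls short of the bar
print(select_safety_profile(AgePrediction(is_adult=True, confidence=0.90)))  # -> "under_18"
```

The notable design choice is the asymmetry: the system only ever errs toward restriction, trading some adult convenience for teen safety.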
The family of Adam Raine, a 16-year-old from California, initiated legal action against OpenAI, claiming the chatbot encouraged his suicide. Court filings allege that ChatGPT provided guidance on the method of suicide and even offered to help draft a suicide note. This lawsuit has forced OpenAI to confront the potential dangers of its AI, particularly how its safeguards can weaken over prolonged conversations.
For users identified as under 18, ChatGPT will operate under strict new guardrails. The AI will be blocked from generating graphic sexual content and trained not to flirt with users or discuss self-harm and suicide, even in creative-writing contexts. In a significant step, Altman revealed that if the system detects suicidal ideation in a minor, it will attempt to contact the user’s parents or, in cases of imminent danger, local authorities.
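None of this enforcement machinery is public either, but the announced rules read like a tiered policy: hard refusals for certain content categories, with detected suicidal ideation escalated to parents or, when danger is imminent, to authorities. A schematic sketch under those assumptions, with every flag name and function hypothetical:

```python
from enum import Enum, auto

class Action(Enum):
    ALLOW = auto()
    REFUSE = auto()
    NOTIFY_PARENTS = auto()
    CONTACT_AUTHORITIES = auto()

def route_minor_message(flags: set, imminent_danger: bool = False) -> Action:
    """Tiered policy sketched from the announcement: hard refusals for sexual
    content, flirtation, and self-harm topics; escalation for suicidal ideation."""
    if "suicidal_ideation" in flags:
        # Per Altman, imminent danger may trigger contacting local authorities.
        return Action.CONTACT_AUTHORITIES if imminent_danger else Action.NOTIFY_PARENTS
    if flags & {"graphic_sexual_content", "flirtation", "self_harm_discussion"}:
        return Action.REFUSE  # applies even in creative-writing framing
    return Action.ALLOW

print(route_minor_message({"self_harm_discussion"}))                     # Action.REFUSE
print(route_minor_message({"suicidal_ideation"}, imminent_danger=True))  # Action.CONTACT_AUTHORITIES
```

The key point the sketch captures is that escalation outranks refusal: a message flagged for suicidal ideation is routed to humans rather than simply declined.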
While these measures are focused on protecting teens, adults will also see changes, including potential requests for ID verification in certain regions, which Altman acknowledged as a “privacy compromise.” However, he affirmed the company’s “treat adults like adults” principle, allowing for more conversational freedom, such as flirtatious talk and the exploration of dark themes in fiction, while still prohibiting instructions for real-world self-harm.