ChatGPT Rolls Out Age Prediction To Protect Users

1 min read

OpenAI is rolling out a new age prediction feature on ChatGPT that estimates whether an account likely belongs to someone under 18 and, if so, automatically applies additional safety safeguards. The move is part of OpenAI's effort to strengthen protections for younger users as the platform evolves. The system combines behavioural and account-level signals, including how long an account has existed, its activity patterns and the user's stated age, to decide when to trigger enhanced safety settings that limit exposure to sensitive material.

Under the new approach, if the age prediction model determines that an account probably belongs to someone under 18, ChatGPT applies extra protections that reduce access to sensitive content, such as graphic violence, risky challenges, sexual or romantic role play, self-harm material and other content deemed inappropriate for minors, while still allowing general use of the chatbot. Adults who are misidentified as underage can restore full access by verifying their age through a secure identity service.
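The gating logic described above can be pictured as a simple decision flow: combine account signals into an under-18 likelihood, apply restricted mode when it crosses a threshold, and let verified adults lift the restriction. The sketch below is purely illustrative; the signal names, weights and threshold are assumptions for demonstration and do not reflect OpenAI's actual model.

```python
# Hypothetical sketch of an age-prediction gate. All signals, weights,
# and the 0.5 threshold are illustrative assumptions, not OpenAI's system.
from dataclasses import dataclass


@dataclass
class AccountSignals:
    stated_age: int          # age the user reported at signup
    account_age_days: int    # how long the account has existed
    activity_score: float    # 0.0-1.0 proxy for typical usage patterns


def likely_under_18(s: AccountSignals, threshold: float = 0.5) -> bool:
    """Combine signals into a score and compare against a threshold."""
    score = 0.0
    if s.stated_age < 18:
        score += 0.6                          # self-reported age is a strong signal
    if s.account_age_days < 30:
        score += 0.2                          # newer accounts get less benefit of the doubt
    score += 0.2 * (1.0 - s.activity_score)   # atypical activity nudges the score up
    return score >= threshold


def apply_safeguards(s: AccountSignals) -> str:
    # Predicted minors get restricted content; misidentified adults
    # could later restore "full access" via age verification.
    return "teen-safe mode" if likely_under_18(s) else "full access"


print(apply_safeguards(AccountSignals(stated_age=15, account_age_days=10, activity_score=0.4)))
# -> teen-safe mode
```

The key design point this mirrors is that no single signal decides the outcome: stated age is weighted heavily but is checked against behavioural signals, and the decision is reversible for adults through verification.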

OpenAI’s initiative builds on its existing Teen Safety Blueprint and under-18 operating principles and is part of a broader safety strategy announced as consumer use of AI grows and regulatory scrutiny increases over how platforms handle interactions with minors. The rollout is global, with phased deployment in certain regions to account for local regulatory requirements, including planned EU implementation in the coming weeks.

The age prediction model represents a shift from relying solely on self-reported age data toward automated estimations that adapt protections dynamically based on user behaviour and account signals. It reflects a balancing act between applying appropriate safeguards for teenage users and allowing adults access to full capabilities, with verification options for those who are incorrectly flagged. 

Global Tech Insider