Meta has announced an expansion of its AI-based system that identifies teenage users on Instagram and automatically applies additional safety protections, even when those users have entered an adult date of birth.
The company said the technology, first introduced last year, will now be extended to all 27 European Union countries as well as Brazil. It will also be deployed on Facebook in the United States for the first time, with the Facebook rollout expected to reach the United Kingdom and the EU in June.
The system uses artificial intelligence to analyse account behaviour and profile details for signs that a user may be under 18, examining contextual indicators such as activity patterns, interactions, and account information to determine whether an account likely belongs to a teenager.
According to Meta, the goal is to ensure that users who are identified as minors are automatically placed into “Teen Accounts,” which come with stricter privacy settings, content restrictions, and safety measures.
In a statement shared through its newsroom, the company said it had already deployed the system in several countries, including the US, UK, Canada, and Australia, and that millions of accounts had been moved into teen-specific protections since the rollout began.
The expansion comes amid increasing regulatory scrutiny of social media platforms over child safety. Last week, the European Commission issued preliminary findings suggesting that Meta may have failed to adequately prevent children under 13 from accessing Instagram and Facebook. The Commission said these shortcomings could represent a breach of the EU’s Digital Services Act, which requires platforms to actively assess and reduce risks to minors.
Regulators argued that Meta had not done enough to identify and mitigate underage usage on its platforms. The investigation remains ongoing.
Meta has rejected the preliminary findings, stating that both Instagram and Facebook are intended for users aged 13 and above. The company also said it already uses systems designed to detect and remove accounts belonging to users under the minimum age requirement.
A company spokesperson reiterated that safety measures are in place and that enforcement of age restrictions remains a priority. However, Meta did not directly address the specific concerns raised by European regulators in its response.
The expansion of AI-based age detection reflects a broader shift in the company’s approach to online safety, as governments around the world increase pressure on technology firms to better protect younger users. The move also highlights growing reliance on automated systems to enforce platform rules at scale.
As regulatory scrutiny intensifies, further changes to age verification and content moderation practices are expected across major social media platforms in both Europe and other global markets.
