Meta has announced a new wave of AI-powered age assurance measures aimed at ensuring teenagers are placed in safer, age-appropriate experiences across its platforms, including Instagram and Facebook.
The update marks a significant expansion of Meta’s efforts to tackle underage usage and enforce protections for teens, combining advanced artificial intelligence with policy changes and parental engagement tools.
AI to Detect and Remove Underage Users
Meta said it is strengthening enforcement against users below the minimum age of 13 by deploying more sophisticated AI systems capable of identifying underage accounts even when users misrepresent their age.
The technology analyses a range of signals across profiles, including posts, captions, comments, and interactions. New visual analysis tools will also scan photos and videos for general age-related cues such as physical appearance and context.
Meta stressed that the system does not use facial recognition, but instead evaluates patterns and characteristics to estimate whether a user may be underage.
Accounts flagged as potentially belonging to users under 13 will be deactivated, with users required to complete age verification to restore access.
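The article describes a score built from multiple non-biometric signals rather than facial recognition. As a purely illustrative sketch (not Meta's actual system — the signal names, weights, and threshold below are all invented for demonstration), such a pipeline might combine independently estimated signals into a single score and flag accounts for verification rather than banning them outright:

```python
# Toy signal-based age estimator -- illustrative only, NOT Meta's system.
# Combines weighted, non-biometric signals into one score; accounts above
# a threshold are flagged for age verification, not auto-removed.

from dataclasses import dataclass

@dataclass
class AccountSignals:
    # Hypothetical signal scores in [0, 1]; higher means "more likely
    # under 13" according to some upstream model.
    text_signal: float      # from posts, captions, comments
    network_signal: float   # from interactions with other accounts
    visual_signal: float    # general appearance/context cues, no face ID

def underage_score(s: AccountSignals,
                   weights=(0.4, 0.3, 0.3)) -> float:
    """Weighted combination of the independent signals."""
    w_text, w_net, w_vis = weights
    return (w_text * s.text_signal
            + w_net * s.network_signal
            + w_vis * s.visual_signal)

def should_flag(s: AccountSignals, threshold: float = 0.8) -> bool:
    """Flag for age verification rather than deactivating outright."""
    return underage_score(s) >= threshold

# Example: strong text and visual cues, weaker network cues
signals = AccountSignals(text_signal=0.9, network_signal=0.6,
                         visual_signal=0.95)
print(should_flag(signals))  # True: 0.36 + 0.18 + 0.285 = 0.825 >= 0.8
```

Routing flagged accounts to a verification step, as in `should_flag`, matches the article's description that deactivated users can restore access by completing age verification.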
Expansion of Teen Account Protections
The company is also expanding its automated system that places suspected teens into “Teen Accounts”, which come with built-in safeguards such as:
- Restrictions on who can contact them
- Limits on the type of content they can view
- Default privacy and safety settings
Previously launched in countries including the United States, United Kingdom, Canada, and Australia, the feature is now being extended on Instagram to all 27 European Union member states and Brazil, with a Facebook rollout planned first in the U.S. and later in Europe and the U.K.
Meta said it aims to expand the system globally over the course of the year.
AI-assisted Moderation and Reporting
To improve enforcement, Meta is also upgrading its reporting systems. Users will find it easier to report suspected underage accounts, while AI models will assist moderation teams by reviewing reports more quickly and consistently.
According to the company, early testing shows that AI-assisted review delivers higher accuracy and faster resolution times compared to human review alone.
The firm is also working to prevent repeat violations by strengthening systems that block users flagged as underage from creating new accounts.
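One common way to implement that kind of re-registration block (a generic sketch under invented names — the article does not describe Meta's actual mechanism) is to keep a denylist of hashed fingerprints for flagged accounts and check new signups against it:

```python
# Illustrative sketch, not Meta's implementation: blocking re-registration
# by users already flagged as underage, keyed on hashed fingerprints so the
# raw identifying values are never stored directly.

import hashlib

class SignupGate:
    def __init__(self):
        self._blocked: set[str] = set()

    @staticmethod
    def _fingerprint(device_id: str, email: str) -> str:
        # Hash identifying attributes into a stable, opaque key.
        raw = f"{device_id}|{email.lower()}"
        return hashlib.sha256(raw.encode()).hexdigest()

    def flag_underage(self, device_id: str, email: str) -> None:
        """Record the fingerprint of an account flagged as underage."""
        self._blocked.add(self._fingerprint(device_id, email))

    def allow_signup(self, device_id: str, email: str) -> bool:
        """Reject signups whose fingerprint matches a flagged account."""
        return self._fingerprint(device_id, email) not in self._blocked

gate = SignupGate()
gate.flag_underage("device-123", "kid@example.com")
print(gate.allow_signup("device-123", "kid@example.com"))    # False
print(gate.allow_signup("device-456", "other@example.com"))  # True
```

A real system would draw on many more signals than a device ID and email, but the core pattern — flag once, deny matching signups thereafter — is the same.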
New Tools to Support Parents
Recognising the role of families in online safety, Meta will begin sending notifications to parents in the U.S. via Facebook and Instagram. These will include guidance on how to verify their teens’ ages and tips for discussing honest age disclosure online.
Parents globally can also access resources through Meta’s Family Center, which provides tools for monitoring and managing teens’ digital activity.
Industry-wide Challenge
Meta acknowledged that accurately verifying users’ ages online remains a broader industry challenge. While it continues to invest in its own systems, the company is advocating for a more centralised approach.
It argues that app stores and operating systems should handle age verification, allowing developers to build age-appropriate experiences based on verified data. Meta says such an approach would offer greater consistency and privacy protection than requiring each app to independently enforce age checks.
The announcement reflects growing global scrutiny of social media platforms over youth safety, harmful content exposure, and age verification gaps.
By combining AI detection, stricter enforcement, and parental tools, Meta is attempting to strike a balance between user privacy and stronger safeguards for younger users, though the effectiveness of these systems will likely remain under close watch from regulators and digital rights groups.