In the wake of two massive US court losses around child safety online, Meta is expanding its teen safety programs. The company said on Thursday that it is expanding the use of its PG-13 movie rating-inspired standard to teen accounts internationally, particularly to Europe.
The PG-13 rating was created by the Motion Picture Association. It’s essentially a warning sign to parents that a film has content that may not be suitable for kids under 13 years old due to violence, language, sex or other factors. Instagram’s teen accounts restrict the kinds of content those users are fed, and Meta says those restrictions are now “inspired” by the MPA’s PG-13 rating.
Meta rolled out these changes in the US, UK, Australia and Canada in October 2025. Shortly after, the MPA sent Meta a cease-and-desist letter in November 2025 for using the PG-13 term without permission. By March, the two organizations struck an agreement to clarify what the standard meant when used in reference to social media content.
“While of course there are differences between movies and social media, we made these changes so teens’ experiences in the 13-plus setting feel closer to the Instagram equivalent of watching a movie that’s been rated appropriate for 13-plus,” Meta said in a Thursday blog post announcing the policy.
It’s the latest move from Meta in an attempt to reassure the public that its platforms are safe for young users. On March 24, a New Mexico jury found Meta liable for negligence for misleading users about the safety of children and allowing child exploitation, ordering the company to pay $375 million in punitive damages. The next day, March 25, a Los Angeles jury found Meta and Google liable for deliberately building their social media platforms to be addictive. Meta disagreed with both verdicts, and Google said it plans to appeal.
Instagram’s 13-plus teen accounts
Instagram’s teen accounts program launched in the US in 2024. The goal was to move all users younger than 16 to these more private and secure accounts. Teen accounts are private by default, meaning it’s more difficult for strangers to interact with them. These accounts are also subject to Instagram’s most restricted content settings, meaning they shouldn’t be shown posts that are sexually suggestive, contain graphic or disturbing images, or depict adult substances like tobacco and alcohol. Teens need a parent’s permission to opt out of these default settings. Meta took these content restrictions a step further in October 2025 with its first version of 13-plus ratings.
Instagram teen accounts will “hid[e] or not recommend posts with strong language, certain risky stunts and additional content that could encourage potentially harmful behaviors, such as posts showing marijuana paraphernalia,” Meta reiterated in its blog post this week.
It added that teens will no longer be allowed to follow accounts, including ones they already follow, if those accounts have an “inappropriate” username and consistently share content that’s “age-inappropriate” for teens. And for parents who don’t feel these default settings are enough, Meta has a new “limited content” setting that’s even stricter around commenting and AI interactions. Meta came under fire last year when an internal report showed that its AI chatbots would have “sensual” interactions with children.
Critics have said that Instagram’s content controls for younger users could keep them from seeing informative content. Instagram’s moderation systems, increasingly powered by AI, can struggle to interpret the context around specific keywords. For example, posts about reproductive health or eating disorders may get flagged even when they’re educational.
