Meta introduces new teen safety measures and removes 635,000 accounts linked to child sexualization
### Meta Introduces New Safety Measures for Teens on Instagram
In response to mounting criticism and lawsuits over the impact of social media on young people's mental health and safety, Meta, the parent company of Instagram, has announced a series of new safety features aimed at protecting teen users.
The suite of measures includes stricter privacy settings, improved reporting tools, AI-driven age verification, and enhanced protections for child-focused accounts. These changes are designed to address both immediate safety risks and ongoing concerns about the well-being of young social media users.
#### Privacy Updates
Starting in 2024, all accounts identified as belonging to teens are set to private by default, restricting who can see their posts and stories. Teens can also only receive direct messages (DMs) from people they follow or are already connected with, reducing unsolicited and potentially harmful contact. Even adult-managed accounts that feature children under 13 now have strict message settings and Hidden Words comment filtering applied.
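To make these defaults concrete, here is a minimal, purely illustrative Python sketch of how account-level settings like those described above might be represented and applied. The class, field names, and policy values (`AccountSettings`, `dm_policy`, and so on) are hypothetical and are not part of any Meta API.

```python
from dataclasses import dataclass


@dataclass
class AccountSettings:
    """Illustrative account-level safety settings (all names hypothetical)."""
    is_private: bool = False
    dm_policy: str = "anyone"          # who may send direct messages
    hidden_words_filter: bool = False  # Hidden Words comment filtering


def apply_teen_defaults(settings: AccountSettings) -> AccountSettings:
    """Apply the stricter defaults described for teen accounts:
    private by default, DMs only from existing connections,
    and Hidden Words comment filtering enabled."""
    settings.is_private = True
    settings.dm_policy = "followed_or_connected"
    settings.hidden_words_filter = True
    return settings


def apply_child_featured_defaults(settings: AccountSettings) -> AccountSettings:
    """Adult-managed accounts featuring children under 13 get strict
    messaging settings and Hidden Words filtering, per the same policy."""
    settings.dm_policy = "restricted"
    settings.hidden_words_filter = True
    return settings


if __name__ == "__main__":
    teen = apply_teen_defaults(AccountSettings())
    print(teen)  # is_private=True, dm_policy='followed_or_connected', hidden_words_filter=True
```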
#### Improved Reporting Tools
Teens now have a quick option to block and report accounts directly from a safety notice, and Meta has streamlined the process for reporting and blocking suspicious accounts. The company has also taken aggressive action against accounts violating its policies, removing 135,000 Instagram accounts that left sexualized comments on, or requested sexual images from, adult-managed accounts featuring children, along with 500,000 additional linked accounts deleted for related inappropriate interactions.
#### AI-Driven Age Verification
Meta is testing artificial intelligence to identify accounts whose users may be lying about their age. If the system determines that a user is likely a teen despite claiming an adult birthdate, the account is automatically converted to a teen account with stricter privacy and communication defaults (Instagram’s minimum age remains 13).
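As a rough illustration of the enforcement flow described above, the sketch below assumes a hypothetical age-estimation model and shows how a flagged account could be switched to teen defaults. Meta's actual models, signals, and thresholds are proprietary and not public; every name here (`estimate_age`, `enforce_teen_defaults`, `dm_policy`) is invented for illustration only.

```python
MINIMUM_AGE = 13  # Instagram's stated minimum age


def estimate_age(profile_signals: dict) -> int:
    """Hypothetical stand-in for an age-estimation model; the real
    system and the signals it uses are not publicly documented."""
    return profile_signals.get("estimated_age", 18)


def enforce_teen_defaults(account: dict) -> dict:
    """Illustrative enforcement flow: if the model estimates the user is a
    teen while the account claims an adult age, convert it to a teen
    account with the stricter privacy and messaging defaults."""
    estimated = estimate_age(account["signals"])
    if estimated < 18 and account.get("claimed_age", 0) >= 18:
        account["account_type"] = "teen"
        account["is_private"] = True                    # private by default
        account["dm_policy"] = "followed_or_connected"  # limit unsolicited DMs
    return account


if __name__ == "__main__":
    flagged = {"claimed_age": 21, "signals": {"estimated_age": 15}}
    print(enforce_teen_defaults(flagged)["account_type"])  # -> teen
```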
#### Broader Context
These measures are part of Meta’s broader response to criticism over how its platforms affect youth mental health and safety. The company faces lawsuits from dozens of U.S. states alleging it knowingly designed addictive features harmful to children. The new features aim to reduce risks from predatory adults, scammers, and exploitative content, while giving teens more tools and information to protect themselves online.
The new safety features are a significant step toward making social media safer for teen users, especially given growing concerns about their mental health and well-being. However, it remains to be seen whether these measures will also be implemented on Meta's other platforms.
- Meta is rolling out these technology-driven safety measures as part of a broader response to concerns about teen users' mental health and well-being on its platforms.
- Artificial intelligence will be used to verify users' ages and apply stricter privacy and communication settings to accounts identified as belonging to teens.
- The announcement comes amid scrutiny and legal action over the platforms' influence on young people's mental health and safety.
- Enhanced reporting tools are helping Meta remove accounts that violate its policies, particularly those involved in inappropriate interactions and exploitative content.
- Together, the privacy updates and AI-driven age verification are intended to create a safer online environment for teenagers and to address the mounting criticism and lawsuits facing the company.