
Meta denies that its new 'nighttime nudge' telling your kid to log off Instagram has to do with an unsealed lawsuit about child safety

Meta has introduced a “nighttime nudge” to remind young Instagram users to limit their screen time when they should be in bed, part of the company’s plan to help parents better supervise their children online. The announcement comes a day after newly unredacted documents from a New Mexico lawsuit against Meta, reviewed by Fortune, highlighted claims that the company failed to protect children from solicitations for explicit photos and sexual exploitation.

“Our investigation into Meta’s social media platforms demonstrates that they are not safe spaces for children but rather prime locations for predators to trade child pornography and solicit minors for sex,” New Mexico Attorney General Raúl Torrez said in a statement Wednesday.

Described as a “wellness tool” to help teens prioritize sleep, the nudge will automatically show teen Instagram users a black screen asking them to take a break from the app if they use it after 10 p.m. The screen, which cannot be turned off, will appear if the user has spent more than 10 minutes on the app at night.

Meta says the nudge is part of a wider effort to help users limit their Instagram use and to give parents a larger role in managing and monitoring their teens’ time on the app. Parental supervision tools, introduced in June, allow parents to view how much time their kids spend on the app, who can message their kid (no one, friends, or friends of friends), and their kids’ privacy and safety settings.

Meta introduced a raft of policy changes on Jan. 9, including placing teens in the “most restrictive content control setting on Instagram and Facebook” and making it more difficult for users to find sensitive content through the apps’ Search and Explore functions.

Meta’s continued efforts to enhance protections for children using its apps have allegedly fallen short. In October, 41 states sued Meta, accusing the company of harming children by creating and designing apps with addictive features. While Meta’s recent policy updates indicate a keen awareness of these grievances, parents and state attorneys general are not letting the company off the hook easily.

A Meta spokesperson denied to Fortune that the company’s extensive policy changes are related to the pending lawsuits against it.

“Our work on teen safety dates back to 2009 and we’re continuously building new protections to keep teens safe and consulting with experts to ensure our policies and features are in the right place,” a Meta spokesperson told Fortune. “These updates are a result of that ongoing commitment and consultation and are not in response to any particular timing.”

Meta accused of ignoring past ‘red flags’ over child safety

Executives from Meta, X, Snap, Discord and TikTok will testify before the Senate on child safety on Jan. 31. 

Instances of sexual exploitation and endangering children outlined in the New Mexico lawsuit against Meta date back to 2016. A BBC investigation that year, cited in the case, focused on a Facebook group made up of pedophiles who circulated explicit photos of children.

Court documents show Instagram accounts selling child sexual abuse material and “minors marketed for sex work.” The complaint called Instagram and Facebook a “breeding ground for predators who target children for human trafficking, the distribution of sexual images, grooming, and solicitation.”

Meta has not always been as aggressive about protecting young users as it is now, according to the lawsuit’s complaint. It cites an April 2023 action by the Federal Trade Commission alleging that adults on Instagram were able to message children through the “Messenger Kids” feature, even though the feature was not supposed to permit messaging from accounts that under-13 users were not following.

The complaint states that Meta “systematically ignored internal red flags” showing that teen usage of its apps was harmful, and that the company instead prioritized “chasing profits.”

Internal Meta documents cited in the court filings indicate the company made it more difficult for users to report inappropriate content in order to curtail the number of reports.

“Meta knew that user reports undercounted harmful content and experiences on its platforms, but nonetheless made it harder, not easier, to report and act on this information,” the complaint read.


