
TikTok ‘French Scar’ challenge triggers safety probe in Italy

TikTok has another problem to add to its growing pile: Italy's competition and consumer watchdog, the AGCM, has opened an investigation over user safety concerns after a so-called "French scar" challenge went viral on the video-sharing platform, with users seen pinching their cheeks to create and show off red lines as mock scars. (Yes, really.)

In a press release today, the AGCM accused TikTok of lacking adequate moderation systems for user-generated content, asserting that the company is failing to uphold the community guidelines set out in its T&Cs, where it claims to remove dangerous content such as posts inciting suicide, self-harm and eating disorders. But apparently pinching yourself doesn't clear the bar.

The AGCM's investigation is targeting the Irish company TikTok Technology Limited, which it says handles the platform's relations with European consumers, as well as the U.K. and Italian TikTok entities. The authority said it carried out an inspection at TikTok's Italian headquarters today, aided by the Special Antitrust Unit of the Guardia di Finanza.

The authority said it decided to look into TikTok after numerous videos emerged of teens engaging in "self-injurious behavior," including the aforementioned "French scar" challenge, which last month prompted warnings from dermatologists that the practice could leave permanent marks or redness.

The AGCM said it’s concerned TikTok has not set up adequate content monitoring systems, especially given the presence of particularly vulnerable users such as minors. It is also accusing the platform of failing to apply its own rules and remove dangerous content that its T&Cs claim is not allowed.

Additionally, it wants to look into the role of TikTok’s artificial intelligence in spreading the problematic challenge.

The platform famously uses AI to select the content shown to users in the 'For You' feed, which is 'personalized' based on TikTok's tracking and profiling of each user, factoring in signals such as similar content they've previously viewed or interacted with via the like button. Exactly how it works, though, is a commercially guarded secret. So one question to consider is how much of a role TikTok's algorithm played in amplifying and spreading this potentially harmful challenge.
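To make that mechanism a little more concrete, here is a minimal, purely illustrative sketch of the kind of engagement-weighted ranking a personalized feed can use. The signals, weights and names below (affinity, rank_for_user and so on) are assumptions made for the example; TikTok's actual recommendation system is not public.

```python
# Purely illustrative: a toy content-based ranker of the general kind
# described above. The signals, weights and functions here are assumptions
# for the example and do not reflect TikTok's actual (non-public) system.
from dataclasses import dataclass, field


@dataclass
class Video:
    video_id: str
    tags: set[str]  # e.g. {"challenge", "beauty", "comedy"}


@dataclass
class UserProfile:
    # Assumed interaction signals: how often the user has watched or liked
    # videos carrying each tag.
    watch_counts: dict[str, int] = field(default_factory=dict)
    like_counts: dict[str, int] = field(default_factory=dict)


def affinity(profile: UserProfile, video: Video) -> float:
    """Score a candidate video by the user's past engagement with its tags.

    Likes are weighted more heavily than plain views (an arbitrary choice
    made for illustration only).
    """
    score = 0.0
    for tag in video.tags:
        score += 1.0 * profile.watch_counts.get(tag, 0)
        score += 3.0 * profile.like_counts.get(tag, 0)
    return score


def rank_for_user(profile: UserProfile, candidates: list[Video]) -> list[Video]:
    """Order candidate videos by affinity, highest first."""
    return sorted(candidates, key=lambda v: affinity(profile, v), reverse=True)


if __name__ == "__main__":
    user = UserProfile(
        watch_counts={"challenge": 12, "comedy": 4},
        like_counts={"challenge": 5},
    )
    feed = rank_for_user(user, [
        Video("a", {"cooking"}),
        Video("b", {"challenge", "beauty"}),
        Video("c", {"comedy"}),
    ])
    print([v.video_id for v in feed])  # ['b', 'c', 'a']: challenge-like content ranks first
```

Even in this toy version the feedback loop is visible: content resembling what a user has already watched or liked floats to the top, which is precisely why the regulator is asking how far the real algorithm may have amplified the challenge.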

We reached out to the AGCM with questions, including whether it intends to audit the TikTok algorithm. But the regulator told us it’s unable to provide further public comment at this time.

TikTok was also contacted about the investigation. A company spokesperson sent us this statement:

More than 40,000 dedicated safety professionals work to keep our community safe, and we take extra care to protect teenagers in particular. We do not allow content showing or promoting dangerous activities and challenges, suicide, self-harm or unhealthy eating behaviours. In addition, our recommendation policies help ensure content is suitable for a general audience and we provide access to well-being and safety resources across our app. We will fully cooperate with relevant authorities to address any questions about our policies and processes.

On the French scar challenge specifically, TikTok also said its Trust and Safety team has carried out evaluations and restricted content that shows cheek pinching. It added that only users aged 18 or older will be able to view such content, and that it will not be eligible for recommendation.

It did not specify when these reviews had taken place.
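As a rough illustration of what that kind of restriction amounts to in practice, here is a hypothetical sketch of an eligibility check. The 18-plus viewing gate and recommendation exclusion mirror what TikTok described, but the data model and functions are assumptions, not the company's actual moderation pipeline.

```python
# Hypothetical sketch only: the rules below mirror the restriction TikTok
# describes (adult-only viewing, no recommendation), but the data model and
# functions are assumptions, not the company's actual moderation pipeline.
from dataclasses import dataclass


@dataclass
class ModeratedVideo:
    video_id: str
    restricted: bool  # e.g. flagged by a trust and safety review for cheek pinching


def can_view(video: ModeratedVideo, viewer_age: int) -> bool:
    """Restricted content is only viewable by accounts aged 18 or over."""
    return not video.restricted or viewer_age >= 18


def is_recommendable(video: ModeratedVideo) -> bool:
    """Restricted content is excluded from feed recommendations entirely."""
    return not video.restricted


clip = ModeratedVideo("example-clip", restricted=True)
print(can_view(clip, viewer_age=16))   # False: under-18 account cannot view it
print(can_view(clip, viewer_age=21))   # True: adult account can still view it
print(is_recommendable(clip))          # False: never surfaced via recommendations
```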

It's not the first time TikTok safety concerns have triggered action by Italian regulators: Back in 2021, the country's data protection watchdog stepped in over child safety concerns linked to a "blackout" challenge apparently doing the rounds on the platform, after local media reported the death of an underage user (a ten-year-old girl). That intervention led TikTok to delete over half a million accounts it was unable to confirm did not belong to children.

The same regulator was also quick to warn TikTok against a planned privacy policy switch last summer, after the platform said it would stop asking users for consent to its ad targeting. After other European data protection authorities (DPAs) also intervened, TikTok went on to drop the plan.

More recently, TikTok has been fighting a growing tide of national security concerns that have led a number of Western governments to ban their staff from using the app on official devices. And the Biden administration has been amping up the pressure further by threatening a total U.S. ban of the app if the company doesn't split from its Chinese ownership.

This report was updated with responses from TikTok and the AGCM.
