Thursday, November 21, 2024

Workers who made ChatGPT less harmful ask lawmakers to stem alleged exploitation by Big Tech

Kenyan workers who helped remove harmful content from ChatGPT, OpenAI's AI chatbot that generates content based on user prompts, have filed a petition before the country's lawmakers, urging them to launch investigations into Big Tech's outsourcing of content moderation and AI work in Kenya.

The petitioners want investigations into the "nature of work, the conditions of work, and the operations" of the Big Tech companies that outsource services in Kenya through firms like Sama, which is at the heart of several lawsuits alleging exploitation, union-busting and illegal mass layoffs of content moderators.

The petition follows a Time report that detailed the meager pay of the Sama workers who made ChatGPT less toxic, and the nature of their job, which required reading and labeling graphic text, including descriptions of murder, bestiality and rape. The report stated that in late 2021 OpenAI contracted Sama to "label textual descriptions of sexual abuse, hate speech, and violence" as part of the work to build a toxicity-detection tool that was later built into ChatGPT.

The workers say they were exploited and offered no psychosocial support, even though the work exposed them to harmful content that left them with "severe mental illness." They are calling on lawmakers to enact legislation regulating the "outsourcing of harmful and dangerous technology work and protecting workers who are engaged through such engagements."

Sama says it counts 25% of Fortune 50 companies, including Google and Microsoft, as its clients. The San Francisco-based company’s main business is in computer vision data annotation, curation and validation. It employs more than 3,000 people across its hubs, including the one in Kenya. Earlier this year Sama dropped content moderation services to concentrate on computer vision data annotation, laying off 260 workers.

OpenAI's response to the alleged exploitation acknowledged that the work was challenging, and added that it had established and shared ethical and wellness standards with its data annotators, without detailing the exact measures, so that the work could be delivered "humanely and willingly."

The company noted that human data annotation was one of the many streams of work it uses to collect human feedback and guide its models toward safer behavior in the real world, as part of its effort to build safe and beneficial artificial general intelligence.

"We recognize this is challenging work for our researchers and annotation workers in Kenya and around the world — their efforts to ensure the safety of AI systems have been immensely valuable," an OpenAI spokesperson said.

Sama told TechCrunch it was open to working with the Kenyan government “to ensure that baseline protections are in place at all companies.” It said that it welcomes third-party audits of its working conditions, adding that employees have multiple channels to raise concerns, and that it has “performed multiple external and internal evaluations and audits to ensure we are paying fair wages and providing a working environment that is dignified.”
