Kenyan Workers Petition Lawmakers to Investigate Big Tech’s Outsourcing Practices


Workers in Kenya who were employed to remove harmful content from OpenAI’s AI chatbot, ChatGPT, have filed a petition urging lawmakers to investigate the outsourcing practices of big tech companies. The petition focuses on the nature of the work, working conditions, and operations of these companies, particularly Sama, which has faced allegations of exploitation, union-busting, and illegal mass layoffs.

The workers’ concerns were highlighted by a Time report that shed light on the low wages received by Sama employees who played a crucial role in making ChatGPT less toxic. Their job involved reading and labeling graphic text containing scenes of murder, bestiality, and rape. Sama was contracted by OpenAI in late 2021 to label textual descriptions of sexual abuse, hate speech, and violence as part of the effort to develop a tool for detecting toxic content.

The workers claim that they were exploited and lacked psychosocial support despite being exposed to distressing content that led to severe mental illness. They are demanding legislation to regulate the outsourcing of harmful technology work and stronger protections for the workers engaged in it.

Sama boasts high-profile clients such as Google and Microsoft, and counts a quarter of the Fortune 50 among its customers. The San Francisco-based company primarily focuses on computer vision data annotation and operates hubs worldwide, including one in Kenya. Earlier this year, Sama discontinued its content moderation services and laid off 260 workers to concentrate on computer vision data annotation.

OpenAI responded to the allegations by acknowledging the challenging nature of the work and emphasizing that ethical and wellness standards had been established and shared with data annotators, though it did not disclose specific measures. OpenAI stated that human data annotation was essential for building safe artificial general intelligence systems, as collecting human feedback helps guide models toward safer behavior.

OpenAI’s spokesperson recognized the efforts made by researchers and annotation workers in Kenya and worldwide, acknowledging their valuable contributions to ensuring AI system safety. Sama expressed its willingness to collaborate with the Kenyan government in implementing baseline protections across all companies. The company welcomed third-party audits of its working conditions, highlighting multiple channels available for employees to raise concerns and asserting that fair wages and dignified working environments were ensured through various external and internal evaluations.

The workers’ petition raises significant issues concerning the outsourcing practices of big tech companies and the treatment of workers involved in content moderation and AI-related tasks. Their demand for investigations, regulations, and worker protections reflects a growing awareness of the potential risks associated with outsourced content moderation work and underscores the need for industry-wide reforms to ensure the welfare of these workers.