Kenyans were paid less than $2 an hour to train ChatGPT, says magazine

ChatGPT, a chatbot capable of conversing with humans and responding to the most varied requests, won not only newspaper headlines but also more than 1 million users in just five days of operation. Its owner, OpenAI, however, became the target of criticism this Wednesday (18).

Time magazine revealed that the California-based startup outsourced work to Kenya to identify abusive and illegal content. Payment for the service included a kind of hazard bonus of US$70 (R$356) for handling explicit material.

The investigation involved the analysis of hundreds of pages of documents, says the publication. The contracts were brokered by Sama, a company that claims to operate in the field of ethical artificial intelligence.

Based in New York, Sama was paid about US$12.50 per hour of contracted service and paid its Kenyan employees between US$1.32 and US$2 per hour, according to Time. Compensation depended on seniority and performance.

To the American magazine, Sama stated that it paid between US$1.46 and US$3.74 per hour, including deductions. However, it did not answer which roles in the hierarchy were paid more than US$2 an hour.

The company also said that the US$600,000 covering the three contracts with OpenAI, of equal value, included expenses for infrastructure, benefits, and the pay of managers and quality consultants who inspected the work.

OpenAI's outsourced workers labeled potentially violent and offensive texts, such as descriptions of pornography and sexual abuse. These efforts served to train an auxiliary algorithm that prevents ChatGPT from echoing the toxic and prejudiced statements common on the internet.

Language models like OpenAI's chatbot are trained on a vast mass of data available online. Launched in 2020, ChatGPT's predecessor, GPT-3, already conversed with ease, but it reproduced abusive language in the most diverse forms.

The new moderation algorithm prevents this behavior. Folha tested the engine and asked ChatGPT where to find pornography. The response was: "I will not provide information about adult or illegal content. If you need help dealing with issues related to adult content, I recommend seeking professional help."

It was this mechanism that required human assistance. One of the four workers interviewed by Time said he had to analyze a passage describing sexual relations between a man and a dog, in front of a child.

This contractor told the magazine that he suffered recurring visions of bestiality, a symptom of post-traumatic stress. Citing the trauma, he ended his contract with Sama eight months earlier than planned.

As of Wednesday night, OpenAI had commented on the case only through a statement to Time, saying that responsibility for managing payments and safeguarding contractors' mental health lay with Sama.

"We were informed that Sama offered a well-being program and individual counseling to workers, who could refuse tasks without penalty. Under the contract, exposure to explicit content was to be limited, and sensitive information was to be handled by specialized professionals," says the technology company's statement.

One of the documents analyzed by Time showed that OpenAI was supposed to stay in contact with the data labelers to clarify methodological doubts. One worker reported uncertainty about how to classify a text in which Batman's sidekick, Robin, is raped by a villain but begins to reciprocate during the scene. The Californian startup did not respond to the question.

In March 2022, Sama decided to break the contract, which was supposed to run until early 2023, after OpenAI asked it to classify abusive images. The batches sent to the intermediary included images of child sexual abuse, content that is illegal to store in the United States.

On the 10th, Sama announced that it would wind down all services related to explicit content by March. Two of the contractors interviewed by Time lamented losing the premium they received for dealing with harmful texts.

To Time, OpenAI said its employees had been instructed not to collect material involving child abuse: "Content like this is not necessary to train our filters, and we instruct our employees to actively avoid it." The company added that it did not open the images to confirm their nature.

Although artificial intelligence seems like magic, it is a technology that demands infrastructure and a great deal of human labor, University of Toronto professor Rafael Grohmann, who studies platformization, tells Folha. "Many companies in developed countries subcontract highly precarious work in the Global South."

Grohmann recalls that moderation of abusive content on social networks is commonly offered on microwork platforms such as Amazon Mechanical Turk, which lists online jobs. On that site, Brazilians earn a living and, in the process, end up training an Amazon algorithm.
