“You have to turn talk into practice to have fair AI”, says Oxford researcher

Almost everyone says they are against discrimination and in favor of transparency and understanding, but each person may mean something different by those values. Building consensus on these principles is a key step toward making AI-based technologies more ethical, according to Oxford Internet Institute researcher Callum Cant, who coordinates the Fair Work for AI project.

Sponsored by the Global Partnership on Artificial Intelligence, which brings together some of the biggest technology companies, the initiative has developed ten principles to ensure that the development and use of artificial intelligence is more ethical in practice, not just in rhetoric. To implement them, it advises companies such as Microsoft and Uber and bodies such as the ILO (International Labor Organization).

See the ten principles:

  • Ensure decent work, in line with ILO standards
  • Build fair production chains
  • Promote understanding: it should be clear to supply-chain workers how the algorithm works
  • Pursue fairness: prevent AI systems from amplifying discrimination
  • Make decisions fairly: the actors involved in these projects must be accountable
  • Use data honestly: limit the concentration of information and protect personal data
  • Increase security: avoid overexploitation of work and surveillance
  • Create jobs that withstand change and ensure stability
  • Avoid inappropriate design: testing must meet high standards to prevent harm
  • Give voice to workers’ collectives: listen to different actors, since the risks and rewards of AI systems are understood in different ways

Can you describe the goals of the Fair Work for AI project and how your team works to achieve them? The advance of AI in the job market creates many opportunities, but also many risks, because these systems are often developed and deployed without prior testing. We have already seen how a number of applications carry risks in terms of bias, work intensification, surveillance, and so on.

We started by working with the Global Partnership on Artificial Intelligence to define a series of principles for what decent work would look like in this AI market. From there, we can offer global, tripartite consultancy to the International Labor Organization, Uber, Microsoft and others.

Our goal is to understand what these companies and organizations consider fair [within a labor relationship] and to produce a report listing ten principles.

Now that these principles are defined, we are measuring how much of this has been put into practice and investigating what is happening in the real world.

The case of the Kenyan workers doing content moderation for the development of ChatGPT made it clear that human labor, in various forms, lies behind artificial intelligence projects. Essentially every AI system developed since 2012 has relied on training machines with huge amounts of data. If you show an algorithm many pictures of a horse, each tagged with the animal’s name, over time it will learn to determine which images are horses and which are not. But the labeling of that information does not happen by itself. A lot of data depends on a lot of workers to do the tagging.
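
To make the dependence on human labels concrete, here is a minimal sketch in Python of the kind of supervised training loop described above. It uses scikit-learn and random stand-in arrays instead of real images; every value and name here is illustrative, not taken from any production system.

```python
# Minimal sketch of the labeling-dependent training described above.
# The "horse" example: an algorithm only learns the label "horse" because
# human annotators attached that label to each image first.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for 200 small grayscale images, flattened to feature vectors.
images = rng.random((200, 64 * 64))

# Stand-in for human-produced annotations: 1 = "horse", 0 = "not a horse".
# In real systems, producing this array is exactly the work outsourced
# to platform workers, one tag per example.
human_labels = rng.integers(0, 2, size=200)

model = LogisticRegression(max_iter=1000)
model.fit(images, human_labels)  # learning is driven entirely by the labels

new_image = rng.random((1, 64 * 64))
print("predicted label:", model.predict(new_image)[0])
```

Without the `human_labels` array, the model has nothing to learn from: the human tagging step is not incidental to the pipeline but its foundation.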

In this recent history of machine learning we have seen the rise of platforms such as Amazon Mechanical Turk, Clickworker, Microwork and others, whose aim is to create a global market by outsourcing data labeling and content moderation to the lowest-paid locations in the world. There are even records of children working on these platforms. This way, the work can be done cheaply and at scale.

In the current AI scenario, it seems as if all the reasoning behind a tool like ChatGPT comes from the machine itself, when in fact its ability to speak and produce text should be credited to two sources: the training data, which is the accumulated record of human language and writing, and the people who tag that data.

And is it possible today to develop artificial intelligence with social responsibility? If so, what would make the development of this technology responsible? It is possible, but it would demand attention at many points in the production chain. We need a way of accounting for what happens at every step of the machine learning process. Where do the resources spent on each operation come from? Graphics cards and computers are a big part of that. Globally, we have to consider responsible mining of the raw materials that go into these components, track where the waste goes, look at how energy is used and whether fossil fuels are burned, and also scrutinize the data labeling process.
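
One hypothetical way to picture this step-by-step accounting is as a structured ledger, with one record per stage of the production chain. The sketch below is an illustration of that idea only; the field names and figures are assumptions, not an existing standard or any real audit data.

```python
# Hypothetical sketch of per-stage accounting for an AI production chain:
# one record per stage, capturing resource origin, energy use and labor
# conditions. All field names and values are illustrative assumptions.
from dataclasses import dataclass, asdict
import json

@dataclass
class StageRecord:
    stage: str           # e.g. "mining", "hardware", "labeling", "training"
    supplier: str        # who performed the work or supplied the input
    energy_kwh: float    # energy consumed at this stage
    fossil_share: float  # fraction of that energy from fossil fuels
    labor_notes: str     # pay, conditions, child-labor checks, etc.

pipeline = [
    StageRecord("mining", "raw-material supplier", 1.2e5, 0.8,
                "responsible-sourcing certification pending"),
    StageRecord("labeling", "annotation contractor", 3.5e3, 0.4,
                "hourly pay above local living wage; no minors"),
    StageRecord("training", "GPU cluster operator", 9.0e4, 0.2,
                "renewable energy contract covers most of the load"),
]

# Emitting the ledger as JSON makes each step auditable after the fact.
print(json.dumps([asdict(r) for r in pipeline], indent=2))
```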

The people who sort this information can have decent work, but that requires much higher standards than the market currently practices. It is interesting to see some Google-funded think tanks studying how this could be done in a way that large corporations would also be comfortable putting into practice. Even so, it would require a significant change in how the industry operates right now.

It is also important to ask whether consumers are affected in some way. Do the biases built into these technologies lead to social fragmentation? To think about fairness in artificial intelligence in genuinely broad terms, we have to think of a society that approaches technology ethically.

You said you are working with big multinationals on the Fair Work for AI project. Are these companies interested in making this technology more sustainable now? There seems to be interest, though of course it is limited by commercial concerns. In a private economy, corporations do not make decisions because they seem nice or look good. They act to gain competitive advantage; they make changes to increase profits and position their products in the market. So there is a contradictory incentive, in which corporations want to make as much profit as possible. But there is still room for adjustments that gradually improve integrity over time.

Are there any laws or regulations in this sector under discussion right now that could serve as an example for the rest of the world? There is a lot in the early stages. The European Union’s Artificial Intelligence Act shows potential for creating guidelines on platform work, though it is being contested in the European Parliament. Even so, the measure makes interesting points about understanding the technology.

At the moment, there is no legislation that serves as a paradigm. Some of these laws are still being developed, but the drafting process looks set to be difficult, because many interests are at stake from the start and lobbyists are already trying to influence it.

I would not expect governments to move in that direction without being under pressure. If we really want this technological development to proceed in a way that respects workers, it is up to trade unions, civil society associations and social movements to demand higher standards.

Organizations such as Sama [which outsourced moderation work in Kenya to train ChatGPT] call themselves ethical AI companies. For you, what would ethical AI be? I have serious concerns about how those Kenyan workers were treated, and at present it seems they were not treated fairly. This raises the suspicion that ethics can become theater, with the risk that publicly announced guidelines are not carried out in actual operations. We need to be very careful with corporations that adopt the discourse of ethics and social responsibility without putting it into practice.

In practice, that concern should mean that the people labeling data have secure, well-paid jobs they can depend on in the long term, and that important safeguards, such as measures against child labor and other rights violations, are respected.


Callum Cant, 31

He coordinates the Fair Work for AI research project in partnership with the Global Partnership on Artificial Intelligence, which aims to develop artificial intelligence with social responsibility. He is a postdoctoral research fellow at the Oxford Internet Institute. His first book, “Delivery Fight! – The Fight Against the Faceless Bosses” (Veneta, 2021), recounts, in the first person, the experience of London delivery couriers.
