Opinion: AI companies should pay human artists for training data


A flat above a fried chicken shop in London’s Notting Hill is an odd place to be at the center of what has been called “one of the most important legal issues” of the 21st century: the artificial intelligence that is unsettling artists around the world.

Stability AI is run by Emad Mostaque, a computer scientist and former hedge fund manager. The firm makes Stable Diffusion, image-generation software described in a US lawsuit as “a 21st-century collage tool” that remixes the copyrighted works of millions of artists. Type in “Elon Musk in a Van Gogh painting” and it will produce a fun imitation.

The three female artists behind the US lawsuit have support. Getty Images, the stock photo agency with 135 million copyrighted images in its database, last week brought a separate legal action against Stability AI in the UK courts. Getty’s images, along with millions of others, were used to train Stable Diffusion so that it can perform its tricks.

The generative AI revolution has exploded quickly: Stable Diffusion launched in August and already promises to “empower billions of people to create stunning art in seconds.” Microsoft has made a multibillion-dollar investment in OpenAI, which last year unveiled both the Dall-E text-to-image generator and ChatGPT.

Visual art is not the only discipline where AI tools threaten to cause an upheaval. The music industry is shuddering at the prospect of millions of songs (and billions of dollars in intellectual property) being sifted through by AI to produce new tracks. Chinese entertainment group Tencent Music has already released more than 1,000 tracks featuring synthetic voices.

In theory, algorithmic art cannot escape copyright and other intellectual property laws any more than humans can: if an AI tool produces music or an image that does not transform the works on which it is based sufficiently to be original, the exploited artists can sue. Using a black box to disguise what has been dubbed “music laundering” is not a convincing legal strategy.

An AI tool learning from a database is also not entirely different from what humans have always done. Musicians listen to rival bands and painters study other artists to learn their techniques. Although the courts are full of disputes over whether composers copied illegally, no one tells them to cover their ears, or warns painters to close their eyes at exhibitions.

But scale makes all the difference, as the music industry knows only too well. It was fairly safe in the pre-digital era, when music was sold on vinyl and CDs and only occasionally copied onto tape by fans. When Napster enabled the mass downloading and distribution of digital tracks, the industry was in deep trouble until it was rescued by Spotify and licensed streaming.

AI tools do not just process databases; they manufacture images to order: why stop at Van Gogh when you can get a Musk by Monet, Gauguin or Warhol just by typing a prompt? It is not high-end art, but Estelle Derclaye, professor of intellectual property law at the University of Nottingham in the UK, notes that “if AI starts to replace human creativity, we’ll have a problem.”

Humans retain many advantages: a synthetic version of Harry Styles under another name would not be nearly as popular as the artist himself, even if the owner of the AI tool got away with it. But there are other uses (background music in video games, for example) for which a synthetic band that sounds like BTS might be good enough.

Trying to stop AI art would be as impossible as it would be undesirable. But a legal framework must be defined to prevent human creativity from being financially undermined. The question raised by Getty is whether companies such as Stability AI should be able to train their AI tools on large amounts of copyrighted material without asking permission or paying license fees.

Such training is legal for research purposes in many countries, and the UK government has proposed extending the exemption to commercial use. There have been similar calls in the US for AI models to be given a right of “fair learning” from such data, on the grounds that it would be impossible to track down all the rights holders of the gigabytes of material scraped from the web, seek their authorization and reward them.

This strikes me as very blasé, akin to the arguments from the days of illegal downloading that the digital horse had bolted and everyone had to get used to it. Stability AI was valued at $1 billion, and Microsoft’s investment in OpenAI shows that the money is available; what is missing is a mechanism to distribute more of it among creators.

Individuals need more protection: it is one thing to train AI software on a mass of material, but what if someone feeds it the works of a single living artist and then asks for a new design in that artist’s style? An illustrator in Los Angeles was recently subjected to exactly this kind of AI “fine-tuning” by a user of Stable Diffusion; it is unclear whether a court would call this fair use, but I would not.

“Please know that we take these matters seriously,” Stability AI promised last week. Was that statement written by a human or an AI tool? These days, it is hard to tell.
