About a year ago, millions of viewers across South Korea were watching the MBN channel to catch up on the latest news.
In prime time, the channel’s usual anchor, Kim Joo-Ha, began reading the day’s headlines. It was a fairly typical run of stories for the end of 2020, packed with updates on the Covid-19 pandemic.
However, this bulletin was far from normal as Kim Joo-Ha wasn’t actually on screen.
She had been replaced by a deepfake version of herself: a computer-generated copy designed to perfectly mirror her voice, gestures and facial expressions.
Viewers were informed in advance that this would happen, and according to the South Korean press, the public’s reaction was mixed.
While some people were surprised at how realistic it was, others said they were worried that the real Kim Joo-Ha might lose her job.
MBN said it would continue to use the deepfake for some breaking news reports, while the company behind the artificial intelligence technology, South Korea’s Moneybrain, announced it would be looking for other buyers in China and the US.
When most people think of deepfakes, they think of fake celebrity videos.
In fact, not long after this South Korean deepfake, a fake but highly realistic video of actor Tom Cruise made news around the world after it appeared on TikTok.
Despite the negative connotations surrounding the colloquial term deepfake (people generally don’t want to be associated with the word “fake”), the technology is increasingly being used commercially.
More diplomatically called artificial intelligence-generated videos, or synthetic media, their use is growing rapidly in some areas such as news, entertainment and education — and the technology is becoming increasingly sophisticated.
An early adopter was Synthesia, a London-based company that creates AI-powered corporate training videos for companies such as global advertising group WPP and business consultancy Accenture.
“This is the future of content creation,” says Synthesia CEO and Co-founder Victor Riparbelli.
To make an AI-generated video using Synthesia’s system, you simply choose from multiple avatars, type in the words you want them to say, and you’re done.
Riparbelli says this means global companies can easily make videos in different languages — for in-house training courses, for example.
“Let’s say you have 3,000 warehouse workers in North America,” he says. “Some speak English, but some may be more familiar with Spanish.”
“If you need to communicate complex information to them, a four-page PDF is not a good way. It would be much better to do a two- or three-minute video, in English and Spanish.”
“If you had to record each of these videos, it would be a huge job. Now we can do that with a small production cost and the time it takes someone to write the script. This illustrates very well how the technology is used today.”
Mike Price, chief technology officer at ZeroFox, a US cybersecurity firm that tracks deepfakes, says that their commercial use is “growing significantly year after year, but exact numbers are hard to pin down.”
Chad Steelberg, chief executive of Veritone, an American provider of artificial intelligence technology, notes, however, that growing concern about malicious deepfakes is holding back investment in the technology’s legitimate commercial use.
“The term deepfake definitely had a negative response in terms of capital investment in the sector,” he says. “The media and consumers, rightly, can clearly see the associated risks.”
“This has definitely stopped corporations and investors from investing in the technology. But I think you’re starting to see this opening.”
Mike Papas, chief executive of Modulate, an artificial intelligence company that lets users create the voice of a different character or person, says companies in the commercial synthetic media sector “really care about ethics.”
“It’s amazing to see the depth of thought they put into this,” he says.
“This has ensured that investors are concerned about it too. They are asking about ethical policies and how you think about them.”
Lilian Edwards, professor of law, innovation and society at Newcastle University in the UK, is an expert on deepfakes. One issue surrounding the commercial use of the technology that hasn’t been fully addressed, she says, is who owns the rights to the videos.
“For example, if a dead person is used, such as the actor Steve McQueen or the rapper Tupac, there is an ongoing debate over whether their family should own the rights and earn income from them,” she explains.
“Currently, it differs from country to country.”
Deborah Johnson, a professor of applied ethics at the University of Virginia in the US, recently co-authored an article titled What To Do About Deepfakes?
“Deepfakes are part of the larger misinformation problem that undermines trust in institutions and the visual experience — we can no longer trust what we see and hear online,” she says.
“Identification is probably the simplest and most important way to fight deepfakes — if viewers are aware that what they are seeing has been fabricated, they are less likely to be misled.”
Professor Sandra Wachter, an artificial intelligence researcher at the University of Oxford in the UK, says that deepfake technology “is advancing rapidly”.
“If you’ve watched the Tom Cruise video, you can see how good the technology is getting,” she says.
“It was much more realistic than President Obama’s four years ago.”
“We shouldn’t be too afraid of the technology, and there need to be different approaches to it. Yes, there should be laws to clamp down on harmful and dangerous things like hate speech and revenge pornography. Individuals and society should be protected from that.”
“But we shouldn’t have a total ban on deepfakes used for satire or free speech. And the growing commercial use of the technology is very promising, such as dubbing films into different languages or creating engaging educational videos.”
One example of the educational use of artificial intelligence-generated video is the Shoah Foundation at the University of Southern California, which houses more than 55,000 video testimonies from Holocaust survivors.
Its Dimensions In Testimony project allows visitors to ask questions and receive real-time responses from survivors in pre-recorded video interviews.
Steelberg believes that in the future, this technology will allow grandchildren to talk to artificial intelligence versions of deceased grandparents.
“That’s transformative, I think, for the way we think about our society.”
Additional reporting by Will Smalle