Analysis: Democracies must use artificial intelligence to defend open societies

The world would have been a much darker place had Nazi Germany beaten the United States to the world’s first atomic bomb. Fortunately, the self-destructive hatred of Adolf Hitler’s regime sabotaged its own efforts. A 1933 law dismissing “civil servants of non-Aryan descent” removed a quarter of German physicists from their university positions.

As historian Richard Rhodes has noted, 11 of these 1,600 dismissed scholars had already won, or would later win, the Nobel Prize. Refugee scientists from Nazi Europe went on to play a central role in the Manhattan Project, the US atomic bomb effort.

Scientists’ harrowing soul-searching over building nuclear weapons resounds loudly today, as researchers develop artificial intelligence (AI) systems that are increasingly adopted by militaries. However enthusiastic they are about AI’s peaceful uses, researchers know it is a general-purpose, dual-use technology that can have highly destructive applications.

The Stop Killer Robots coalition of more than 180 non-governmental organizations from 66 countries is campaigning to ban so-called killer robots: AI-powered lethal autonomous weapons systems.

The war in Ukraine has increased the urgency of the debate. Earlier this month, Russia announced that it had created a special department to develop AI-enabled weapons. It added that its experience in Ukraine would help make its weapons “more efficient and smarter”. Russian forces have already deployed the autonomous mine-clearing robot Uran-6, as well as the KUB-BLA unmanned “suicide” drone, which, according to its maker, uses AI to identify targets (a claim disputed by experts).

Russian President Vladimir Putin has spoken of the “colossal opportunities” of AI. “Whoever becomes the leader in this sphere will be the ruler of the world,” he said. However, the Kremlin’s efforts to develop AI-enabled weapons are likely to be hampered by the recent exodus of 300,000 Russians, many from the tech sector, and by the poor performance of its conventional forces.

The Russian initiative followed the Pentagon’s announcement last year that it was stepping up efforts to achieve superiority in AI. The US Department of Defense was “working to create a competitive military advantage by adopting and leveraging AI,” said Kathleen Hicks, deputy secretary of defense. China is also developing AI for economic and military uses, with the clear aim of overtaking the US in what has been called the AI arms race.

While the decades-long debate over nuclear weapons, however terrifying, has been relatively clear and contained, the AI discussion is far more confusing and kaleidoscopic. To date, only nine nation states have developed nuclear weapons. Only two atomic bombs have ever been used in warfare, on Hiroshima and Nagasaki in 1945. Their terrible destructive power made them weapons of last resort.

AI, on the other hand, is less visible, more diffuse, and more unpredictable because it has a lower initial barrier to use, as veteran strategist Henry Kissinger wrote. It is perhaps best seen as a force multiplier that can be used to enhance the capabilities of drones, cyber weapons, anti-aircraft batteries, or troops in combat.

Some strategists fear that Western democracies may be at a disadvantage against authoritarian regimes because democracies operate under greater ethical constraints. In 2018, more than 3,000 Google employees signed a letter saying the company “shouldn’t be in the business of war” and calling (successfully) for its withdrawal from the Pentagon’s Project Maven, created to apply AI to warfare.

The Pentagon now emphasizes the importance of developing “responsible” AI systems, governed by democratic values, controls and laws. The Ukrainian War may also be influencing public opinion, especially in Europe. “Young people worry about climate change. And now they worry about living in open societies,” Torsten Reil, co-founder of Helsing, a German startup that uses AI to integrate war data, told me. “If we want to live in an open society, we have to be able to deter and defend, and do it credibly.”

To some, it may sound like a hypocritical “rebranding” of the death industry. But, as physicists learned during World War II, it’s hard to be morally pure when you have to make terrible choices in the real world. To their great credit, many AI researchers are pushing today for significant international conventions to curb uncontrollable killer robots. But it would be dangerous to abandon the responsible use of AI technology in defense of democratic societies.
