Momentum to limit use of killer robots grows, but US and Russia resist

It may have seemed like an obscure UN conclave, but a meeting this week in Geneva was closely watched by experts in artificial intelligence, military strategy, disarmament and humanitarian law.

What motivated this interest? Killer robots, that is, drones, guns and bombs that use artificial intelligence to decide for themselves whether to attack and kill, and the question of what, if anything, should be done to regulate or ban them.

Once confined to sci-fi movies like the Terminator and RoboCop series, killer robots, technically known as Lethal Autonomous Weapon Systems (LAWS), have been developed and tested at a fast pace, with little oversight. Some prototypes have already been used in real conflicts.

The evolution of these machines is seen as a potentially seismic event in the art of warfare – something as impactful as the invention of gunpowder or nuclear bombs.

This year, for the first time, most of the 125 countries that are party to an agreement called the United Nations Convention on Certain Conventional Weapons (CCW) said they want restrictions on killer robots. But they ran into opposition from other signatory countries that are developing these weapons, especially the United States and Russia.

The group’s conference ended on Friday with nothing more than a vague statement about considering possible measures that would be acceptable to all. The Campaign to Stop Killer Robots, a disarmament advocacy group, said the result of the meeting “fell drastically short”.

What is the Convention on Certain Conventional Weapons?

Also known as the Inhumane Weapons Convention, the pact is a set of rules that prohibit or restrict weapons believed to cause unnecessary, unjustifiable or indiscriminate suffering, such as incendiary weapons, blinding lasers and booby traps that do not distinguish between combatants and civilians. The convention contains no provisions on killer robots.

What exactly are killer robots?

Opinions differ on the exact definition, but they are widely considered to be weapons that make decisions with little or no human involvement. Rapid advances in robotics, artificial intelligence and image recognition are making these weapons possible.

The drones that the US has used extensively in Afghanistan, Iraq and other countries are not considered robots because they are operated remotely by people, who select the targets and decide whether to fire.

Why are killer robots considered attractive?

For military planners, these weapons offer the possibility of preventing soldiers from being injured or killed. In addition, they make decisions in less time than a human would, giving more battlefield responsibilities to autonomous systems like drones and pilotless tanks that independently decide when to attack.

What are the objections?

Critics argue that it is morally repugnant to entrust lethal decision making to machines, no matter how technologically sophisticated. How can a machine tell an adult from a child, a bazooka-wielding combatant from a civilian with a broom in hand, a hostile combatant from an injured or surrendering soldier?

“Fundamentally, autonomous weapons systems raise ethical concerns for society about replacing human decisions about life and death with sensor, software and machine processes,” Peter Maurer, president of the International Committee of the Red Cross and a declared opponent of killer robots, told the Geneva conference.

Why was the Geneva conference important?

The conference was seen by many disarmament experts as the best opportunity so far to devise ways to regulate or ban the use of killer robots under the terms of the CCW.

The conference was the fruit of years of discussions by a group of experts who were invited to identify the challenges and approaches that could be taken to reduce the danger of killer robots. But experts could not agree on even the basics.

What do those who oppose a new treaty say?

Some countries, such as Russia, insist that any decision on limits on the use of killer robots must be unanimous. In practice, this would give veto power to those who oppose the imposition of restrictions.

The United States argues that existing international law is sufficient and that banning autonomous weapons technology would be premature. The chief US representative at the conference, Joshua Dorosin, proposed a non-binding “code of conduct” on the use of killer robots. Disarmament advocates rejected the proposal as a delaying tactic.

Franz-Stefan Gady, a researcher at the think tank International Institute for Strategic Studies, said “the arms race for autonomous weapons systems is already underway and will not be canceled anytime soon.”

Are there disagreements in the defense establishment regarding killer robots?

Yes. Despite advances in autonomous weapons technology, there is reluctance to deploy them in combat because of fears that they will make mistakes, Gady said.

“Can military commanders trust the judgment of autonomous weapons systems? The answer is evidently no, for now, and must continue to be no for the foreseeable future,” he said.

The debate over autonomous weapons has already reached Silicon Valley. In 2018, Google said it would not renew a contract with the Pentagon after thousands of its employees signed a letter protesting the company’s work with a program that used AI to interpret images that could be used to select drone targets. The company has drafted new ethical guidelines that prohibit the use of its technology in weapons and surveillance.

Others feel that the US is not doing enough to compete with its rivals.

Former Air Force director of software Nicolas Chaillan told the Financial Times in October that he resigned because of what he considers to be weak technological progress being made by the US military, especially in the use of AI. He said that while lawmakers are slowed down by ethical considerations, countries like China are moving forward.

Where have autonomous weapons been used?

There aren’t many confirmed examples of their use on the battlefield, but critics point to a few incidents that indicate the technology’s potential.

In March of this year, UN investigators said an “autonomous lethal weapons system” was used by Libyan government forces against militia fighters. A drone called the Kargu-2, manufactured by a Turkish defense company, tracked and attacked fighters as they were fleeing a rocket attack, investigators said. Their report did not make it clear whether any humans were in control of the drones.

In the 2020 war in Nagorno-Karabakh, Azerbaijan fought Armenia with attack drones and loitering munitions, missiles that hover in the air until they detect the signal of a designated target.

What will happen now?

Many disarmament advocates said the outcome of the conference had hardened their determination to push for a new treaty in the coming years, like the treaties that ban landmines and cluster munitions.

Daan Kayser, an autonomous weapons specialist at PAX, a Dutch peace organization, said the conference’s failure even to reach a decision to negotiate on killer robots was “a very clear indication that the CCW is not up to this challenge”.

AI expert Noel Sharkey, chairman of the International Committee for Robot Arms Control, said the meeting proved that a new treaty would be preferable to further Convention deliberations. According to him, “there was a sense of urgency in the room that, if there is no progress, we are not willing to stay stuck on this treadmill”.

