In recent years, autonomous computer systems and robots have demonstrated their ability to perform a remarkable array of complex tasks, from running and jumping to playing musical instruments to driving cars.

But there is one thing many scientists and ethicists say robots should never be allowed to do: target enemies on the battlefield and decide on their own when to take human life.

Once the stuff of science fiction, the idea of killer robots is no longer a remote possibility but an easy-to-imagine offshoot of current technology, experts warn. Drones and other military machines can already be piloted remotely. The step to machines that direct themselves in battle is not only plausible but practically inevitable unless the world acts to prohibit their use.

“We are not talking about Terminator,” said Toby Walsh, a researcher in artificial intelligence at the University of New South Wales in Australia. “We’re talking about much simpler technologies that, at best, are a few years away … and many of which we can see in development today.”

Dr. Walsh is among those pressing the United Nations to ban the use of robots as killing machines in war, a development he said would not only raise serious moral questions but also increase the likelihood of violent conflict by removing the inhibition on military and civilian leaders that comes with sending human soldiers into harm’s way.

Together with fellow panelists, he made his case for such a ban on Thursday ahead of a session devoted to the issue at the annual meeting of the American Association for the Advancement of Science in Washington. Rather than reducing civilian casualties and saving soldiers’ lives, Dr. Walsh predicted a new arms race if autonomous robots take on combat roles, as well as the possibility they would eventually be used by police and border agents.

In 2014, Clearpath Robotics of Kitchener, Ont., became the first company to state publicly that it would not manufacture weaponized robots capable of killing without human control or knowingly facilitate such systems.

Ryan Gariepy, the company’s chief technology officer and also a presenter at the conference, said one of the challenges with such technology is that it is relatively simple to convert it from benign to lethal ends. As a piece of engineering, an autonomous drone designed for search and rescue is effectively the same as one built to search and destroy. The absence of a technical barrier and the continuing development of technologies that could be precursors to robotic killers make it all the more important for industry and governments to step in and rule out their use, he said.

“It’s not a foregone conclusion that these things need to exist,” he added.

None of the panelists said they were seeking to bar the use of robotics in all military applications. However, a sharp moral line should be drawn when it comes to robots or autonomous systems that can decide on their own when to use lethal force without “supervision or meaningful human control,” said Peter Asaro, an associate professor at the New School in New York and co-founder of an organization of scientists and technologists in support of robot arms control.

Last year, the group authored a letter from 1,400 scientists in support of employees of tech giant Google who were protesting the company’s involvement in a U.S. Department of Defense project that sought to leverage commercial artificial intelligence for military use. The project, called Maven, involved developing algorithms that could recognize objects spotted by cameras on autonomous drones. Google has since said it would not continue the project after its contract with the Pentagon expires this year.

The group has also joined forces with other organizations seeking to advance talks toward an international ban under what is known as the United Nations Convention on Certain Conventional Weapons. The convention has previously prevented other technologies from becoming commonplace on the battlefield, including lasers capable of permanently blinding combatants.

Mary Wareham of the Washington-based advocacy group Human Rights Watch said the effort has already made it less palatable for weapons makers to portray the use of killer robots as a benefit. But, she added, it is clear from the progress of the UN negotiations that some countries, including the United States and Russia, are opposed to any legally binding measures that would curb the technology.

She noted the Canadian government is among those that have so far chosen not to take a stronger stand on the issue. Last year, in response to a letter from Canadian artificial intelligence experts asking that he join the call for an international ban, Prime Minister Justin Trudeau referred the matter to Cabinet ministers for defence, science and foreign affairs who, he said, would give the matter “every consideration.”
