Researchers debate how fault should be divided between the operator and the machine

The U.S. military is working hard to develop autonomous robotic warriors, and it is not alone -- the technology is widely seen as the future of warfare and is being pursued by dozens of countries worldwide. While expensive, weapons-toting robots can deliver deadly accuracy while keeping a nation's human soldiers out of harm's way.

[Image © Boeing] Autonomous and semi-autonomous robots like this stealthy Boeing X-45C may face new legal repercussions.

[Image © iRobot] iRobot's war robot is one of the autonomous robotic soldiers that may eventually appear on the battlefield, legality pending. Could such a machine commit atrocities?

[Image © The Wired Danger Room] Another deadly autonomous bot is the Foster-Miller MAARS (Modular Advanced Armed Robotic System), which implements advanced technology to reduce friendly-fire incidents.

The U.S. has already made extensive use of unmanned aerial vehicles in the war in Iraq, for both surveillance and offensive strikes, with drones such as the Hunter UAV firing missiles into enemy hideouts. The Army's machine-gun-armed SWORDS robots patrol the streets of Iraq, while a semi-autonomous version of the Bradley Fighting Vehicle, named the Black Knight, is under development in the U.S. Not to be left out, the Air Force says it wants unmanned heavy bombers by 2020.

The key feature shared by all these robotic killers, deployed or under development, is that they still need a human to pull the trigger. However, there is growing sentiment in military circles that robots should eventually become fully autonomous -- fighting and killing enemies without human intervention. Exactly when and how such robots could legally be fielded, along with the troubling moral issues they raise, were topics of discussion at the Royal United Services Institute's conference, "The Ethics of Autonomous Military Systems", a summit of international scientists and military officials held on February 27 in London.

The question "Can a robot commit a war crime?" was raised during the conference. The underlying concern -- that robots might malfunction and target civilians or friendly soldiers -- remains a frightening thought for many in the military.

Chris Elliot, an English barrister and engineer, carefully laid out his thoughts on the legality of autonomous robotic weapons systems under international criminal and civil law. He pointed out that, at a certain point, a robot's engineers can no longer reasonably be held culpable, and the blame for errors resulting in catastrophic loss of life may come to rest on the robot that committed the assault. As he put it, "We're getting very close to the point where the law may have to recognize that we can't always identify an individual -- perhaps an artificial system can be to blame."

The idea of a robot being charged with murder raises provocative questions about punishment and the fairness of such measures. Elliot took other hard stances as well, making clear that humans who choose to deploy systems lacking sufficient judgment already bear a clear legal burden: "Weapons intrinsically incapable of distinguishing between civilian and military targets are illegal."

Elliot argued that robots should only be allowed to wage war autonomously once they pass a "Military Turing Test." He explained, "That means an autonomous system should be no worse than a human at taking decisions [about valid targets]."

The original Turing test, devised by computing pioneer Alan Turing, holds that if a human judge conversing with both a machine and another human cannot reliably tell which is which, the machine has demonstrated intelligence.

Elliot could not say when or how such a test could or would be administered, stating simply, "Unless we reach that point, we are unable to [legally] deploy autonomous systems. Legality is a major barrier."
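
Elliot gave no scoring rule for such a test. Purely as an illustrative sketch -- not anything proposed at the conference -- one could imagine comparing a system's target-discrimination errors against a human baseline on the same set of labeled engagement scenarios; the data, the threshold, and the simple two-proportion z test below are all assumptions.

import math

def two_proportion_z(err_robot: int, n_robot: int,
                     err_human: int, n_human: int) -> float:
    """One-sided z statistic for whether the robot's error rate exceeds
    the human baseline. Purely illustrative -- no scoring rule like this
    was proposed at the conference."""
    p1 = err_robot / n_robot          # robot misidentification rate
    p2 = err_human / n_human          # human misidentification rate
    pooled = (err_robot + err_human) / (n_robot + n_human)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_robot + 1 / n_human))
    return (p1 - p2) / se

# Assumed data: misidentifications on 1,000 labeled scenarios each.
z = two_proportion_z(err_robot=18, n_robot=1000, err_human=25, n_human=1000)
# "No worse than a human": fail only if the robot errs significantly more.
verdict = "fails" if z > 1.645 else "passes"
print(f"z = {z:.2f} -> system {verdict} at the 5% level")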

Bill Boothby, of the UK Ministry of Defence, offered a slightly different perspective: if the situation were carefully controlled, an autonomous fighting machine would not need to be quite as intelligent as Elliot's test would require. Boothby hopes that by lowering the bar, robots could assist on the battlefield sooner rather than later. He described this stance, stating, "There is no technology today that can make that kind of qualitative decision about legitimate targets ... [but] it may be possible to take precautions in the sortie-planning phase that enable it to be used within the law."
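
Boothby did not describe those sortie-planning precautions in technical terms. A minimal sketch of the idea, assuming a hypothetical engagement box and a human-vetted target list (every name and coordinate here is invented for illustration), might gate weapon release like this:

from dataclasses import dataclass

@dataclass(frozen=True)
class Target:
    ident: str
    lat: float
    lon: float

# Hypothetical mission constraints fixed by human planners before launch.
ENGAGEMENT_BOX = (33.20, 33.40, 44.25, 44.50)   # lat_min, lat_max, lon_min, lon_max
CLEARED_TARGETS = {"T-101", "T-204"}            # targets vetted during planning

def release_authorized(target: Target) -> bool:
    """Permit weapon release only inside the planned box and only against
    targets cleared in advance by humans; the machine itself makes no
    qualitative judgment -- which is Boothby's point."""
    lat_min, lat_max, lon_min, lon_max = ENGAGEMENT_BOX
    in_box = lat_min <= target.lat <= lat_max and lon_min <= target.lon <= lon_max
    return in_box and target.ident in CLEARED_TARGETS

# A contact outside the box, or not on the cleared list, is refused.
print(release_authorized(Target("T-101", 33.30, 44.30)))  # True
print(release_authorized(Target("T-999", 33.30, 44.30)))  # False: not pre-cleared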

Boothby also argued that, in a sense, human operators might be no better, as they might simply "rubber stamp" a robot's targeting decisions. His comments appealed to many military commanders hoping for earlier deployment of robots such as the SWORDS fighters or Foster-Miller's MAARS combat robot, which implements advanced technology to reduce friendly-fire incidents.

The difference of opinion between Elliot and Boothby reflects society's mixed feelings about deploying independently operating killing machines to warzones. Many fears remain in the public mind: realistic ones, grounded in practical assessment of current limitations, and fantastic ones, fueled by popular culture such as the Terminator movies, which depict a future in which cold-blooded lethal robots have turned on mankind. The issue is sure to become only more contentious with time and technological advances.