[Image: Terminator © Terminator Wikia]
Long a staple of science fiction, the notion of autonomous robots that can kill is starting to take root in the U.S. military. It'll only be a matter of time before these "thinking" machines are unleashed on the battlefield - a prospect that's not sitting well with people both inside and outside of the Pentagon.

Is it really ethical to develop and deploy these terrible creations? And what, if anything, can we do to prevent it?

One person who believes this issue needs to be addressed immediately is Wendell Wallach, a scholar and consultant at Yale's Interdisciplinary Center for Bioethics and coauthor of Moral Machines: Teaching Robots Right From Wrong. io9 recently spoke with Wallach to get a better understanding of this issue, and to find out why he feels that autonomous killing machines should be declared illegal.

But first, it's worth doing a quick overview to get a sense of just how close the U.S. military is to deploying such weapons.

In nascent form

Autonomous killing machines aren't anything new. We already have various levels of autonomy in a number of weapons systems, including cruise and Patriot missiles. The Aegis Combat System, which is found aboard naval ships, has an autonomous mode in which it uses powerful computers and radars to track and guide weapons to destroy enemy targets.

But these are largely defensive systems with limited intelligence. They typically find their way to a target, or take certain actions without human oversight - but only after being launched or triggered by a human.

As time passes, however, these systems are getting more sophisticated, and their potential for increased autonomy is growing. Take Samsung Techwin's remote-operated sentry bot, for example, which works in tandem with cameras and radar systems. Deployed in the Korean DMZ, the device can detect intruders with heat and motion sensors and confront them with audio and video communications. It can also fire on its targets with a machine gun and grenade launcher. As of right now, the robot cannot automatically fire on targets; it requires human permission to attack. But a simple change to engagement policy could change all that.

Another example is the PackBot used by the U.S. military. These devices have an attachment called REDOWL, which uses sensors and software to detect the location of a sniper. Once it detects a threat, it shines a red laser on it, marking the sniper's position for human soldiers who can then choose to take it out. It wouldn't take much to modify this system such that the REDOWL could act on its own - and with its own weapons.

Deploying intelligent machines that can choose to kill

Indeed, when we talk about autonomous weaponry, we're usually referring to systems with a greater degree of intelligence. In theory, they'll be able to select a target and then make the decision to destroy it. Such systems could be either defensive or offensive; there's no reason to believe that these weapons would be used strictly for defensive purposes.

As we move forward, therefore, the Big Issue that emerges is whether or not such systems should ever be deployed. The United States, along with a number of other countries, will soon be confronted with this problem.

[Image: Drone © MilitaryPhotos]
Wallach told io9 that these systems aren't very complicated and that virtually any country has the potential to develop its own versions. "The larger question," asks Wallach, "is whether or not the U.S. military is producing such weapons - and other countries." He suspects that more than 40 countries are now developing unmanned vehicle programs, including drones, similar to those deployed by the United States.

Complicating the issue are ever-increasing levels of autonomy in military machines. The U.S. Air Force is starting to change the language surrounding its engagements, shifting from humans "in the loop" to humans "on the loop" to describe the level of future human involvement. By being "on the loop", humans are largely outside of the process, but can intervene if the weapons system is about to do something inappropriate. The trouble, says Wallach, is that the speed of modern warfare may preclude human involvement. "It's dubious to think that a human can always react in time," he says.

And take REDOWL, for example. Once the system points out an enemy sniper, the question emerges: Who is in whose loop? Is the soldier in the REDOWL's loop, or vice-versa?

A vulgar display of power

Critics of autonomous killing machines have expressed a multitude of concerns. The general public is uneasy with the possibility, worried by such scenarios as the ones portrayed in The Terminator, Battlestar Galactica, and Robopocalypse. Wallach in particular believes that these machines will be rejected outright by public opinion - even if those concerns are driven primarily by nightmarish sci-fi visions. But there is also growing concern among military thinkers who worry about going down this route.

[Image: Robot tank © Boston Dynamics]
"A common concern among some military pundits is that it lowers the barriers to starting new wars," says Wallach, "that it presents the illusion of a quick victory and without much loss of force - particularly human losses." It's also feared that these machines would escalate ongoing conflicts and use indiscriminate force in the absence of human review. There's also the potential for devastating friendly fire.

And once developed, the systems are likely to proliferate widely. The fear is that their presence would introduce a serious, unpredictable element in future conflicts. Just because, say, the United States adheres to international laws and restraints doesn't mean that other state actors and interests will, too. It could very well instigate an arms race.

"The difficulty with all of this is that even if you accept the honourable intent of military and policy planners using an autonomous weapon," says Wallach, "we live in world of asymmetric warfare." Such an imbalance, argues Wallach, will only serve to convince such forces to develop their own autonomous weaponry.

Wallach also believes that the proposed use of autonomous killing machines would be rejected under human rights law. "International law states that there has to be a human taking direct responsibility for lethal force," he says. "It's therefore unacceptable from a political perspective."

Lack of discussion

Wallach is alarmed at how little this issue is being discussed, which is something that he's hoping to change. "There are various policy makers, military thinkers, and academics who suggest that autonomous killing machines are science fiction and that no one is moving in that direction," he notes. Wallach cites the work of Werner Dahm, chief scientist of the Air Force, who he feels is not taking the issue seriously enough - and even potentially downplaying the threat.

Quite understandably, some military thinkers see the tremendous advantage that these systems could bring. Unmanned smart weapons could increase capabilities, reduce collateral damage through greater precision, decrease loss of personnel, lower manpower costs, and enable the projection of lethal force in a future where manpower resources will be far more limited.

And for better or worse, these rationales point to a future in which wars are fought by robots pitted against each other. "This is not just the concern of futurists or nay-sayers," says Wallach, "but also from both retired generals and active military leaders who are very concerned that this could lead to a robust lack of control and undermine the human levels of engagement."

Wallach dismisses arms control proposals outright: "There have been over three decades of discussions on various agreements to control such things as cruise missiles, but it hasn't worked." Arms control, he argues, is almost impossible to advance.

A Presidential Order

Instead, Wallach proposes an executive order from the President of the United States banning the use of autonomous killing machines. Such an action would make these systems illegal in the same way that space weapons, eye-blinding lasers, and cluster bombs are. The only question is whether or not such an injunction should be taken further and applied to international law, so that the United States could bring the larger community of nations on board.

[Image: Movie machine creature © Terminator Wikia]
Wallach calls this the "strong form" of his proposal, under which the ban would be extended into international humanitarian law. By having the President invoke such an order, the U.S. could align all of NATO behind it.

"Sure, there's always the fear that someone could still develop these weapons," he told io9, "and given the presence of asymmetrical warfare there's no guarantee that weaker parties will go along with international policies." But the onus, says Wallach, "will fall upon them." Such a development would justify the use of lethal force against those rogue interests, but that lethal force does not have to come from autonomous weaponry. "Moral principles don't get established because you can guarantee that people will go along with them, he says, "you get broad consensus because it's the right thing to do."

The next step for Wallach is to organize an invitation-only meeting in Washington to bring together all the relevant stakeholders to see if there's sufficient support for his proposal.

In his conversation with io9, Wallach seemed frustrated that some people see this issue as something that's too futuristic to care about. "We're at a potential inflection point in the development of autonomous weaponry," he said. "That inflection point won't last for a long period of time, and if we wait too long, other vested interests will take over that prospect."