The Terminator
According to a U.S. Navy official, the nightmare scenario of wars entrusted to machines could become a reality.

It has long been an image confined to popular culture: unstoppable robot killers firing high-powered rifles at clusters of helpless human soldiers, who have no choice but to flee the battlefield or risk tremendous losses.

The scenario of military robots and the artificial intelligence (AI) network "Skynet" spinning free from human control forms the basis of the Terminator film series starring Arnold Schwarzenegger, which has captivated moviegoers around the world. But now, according to a U.S. Navy official, the science fiction nightmare of wars entrusted to machines that "can't be reasoned with [and don't] feel pity, or remorse, or fear" — as one character in the original film says — could become a reality.

The comments come as the Navy continues to upgrade its autonomous capabilities and bulk up its ranks with more advanced robotic systems.

However, this has been accompanied by work meant to prevent the service from putting too much trust in a system that could, some fear, one day have a mind of its own.

Steve Olsen, deputy branch head of the Navy's mine warfare office, told Defense News:
"Trust is something that is difficult to come by with a computer, especially as we start working with our test and evaluation community.

"I've worked with our test and evaluation director, and a lot of times it's: 'Hey, what's that thing going to do?' And I say: 'I don't know, it's going to pick the best path.'"
Comparing the pitfalls of autonomous warfighting systems to car crashes involving semi-autonomous private automobiles, Olsen continued:
"And they don't like that at all because autonomy makes a lot of people nervous. But the flip side of this is that there is one thing that we have to be very careful of, and that's that we don't over-trust. Everybody has seen on the news [when people] over-trusted their Tesla car. That is something that we can't do when we talk about weapons' system.

"The last thing we want to see is the whole 'Terminator going crazy' [scenario], so we're working very hard to take the salient steps to protect ourselves and others."
The Navy is already experimenting with the Sea Hunter, a 135-ton autonomous unmanned surface vehicle (USV) meant to serve as a platform for anti-submarine and electronic warfare, as well as a decoy in any live-fire clash involving human forces.

Earlier this year, the Sea Hunter became the first ship of any kind to sail without a crew from San Diego, California, to Pearl Harbor, Hawaii, and back.

And according to Defense One, the Air Force will begin work on flying cars this fall in a program called Agility Prime. Will Roper, Assistant Secretary of the Air Force for Acquisition, Technology and Logistics, had this to say about the program:
"The task I gave the team was to prepare a series of challenges from things that would involve smaller vehicles, maybe moving a couple of special aviators around, to things involving smaller logistics sets, ammo, meals that kind of thing out of harm's way, up to moving heavy logistics, like weapons to reload on an aircraft, all the way to a bigger system."
In 2017, AI technology experts, including Tesla CEO Elon Musk, wrote an open letter to the United Nations warning of the potential dangers of weapons systems with integrated autonomous capabilities.

The letter noted:
"Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora's box is opened, it will be hard to close."
The Campaign to Stop Killer Robots has also pushed for a complete ban on autonomous weapons on the battlefield, noting on its website that fully autonomous weapon systems "would decide who lives and dies, without further human intervention, which crosses a moral threshold."

The campaign added:
"As machines, they would lack the inherently human characteristics such as compassion that are necessary to make complex ethical choices."

The campaign warns that the stage is being set for a potentially destabilizing "robotic arms race" that could see countries worldwide working to gain the upper hand in building their autonomous warfighting capabilities.

The militaries of the U.S., Russia, China, Israel, South Korea and the United Kingdom have already developed advanced systems that enjoy significant autonomy in their ability to select and attack targets, the campaign notes.

And while countries across the Global South have urged the UN to impose a ban on killer robots, states that possess these technologies have opposed such a ban at every turn — signaling that they are unwilling to let go of their revolutionary new implements of death.

On Sunday, former top Google engineer Laura Nolan told the Guardian that she had joined the Campaign to Stop Killer Robots because the robot systems envisioned by Big Tech firms and militaries could do "calamitous things that they were not originally programmed for."

The former Google worker, who resigned last year in protest over the company's involvement in Project Maven, a Pentagon program meant to dramatically upgrade the AI capabilities of U.S. military drones, has also briefed U.N. diplomats on the hazards of robotic weaponry.

Nolan explained:
"The likelihood of a disaster is in proportion to how many of these machines will be in a particular area at once. What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed.

"There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous."
According to a supplemental report to President Trump's budget request, the federal government is poised to spend nearly $1 billion on nondefense AI research and development in fiscal year 2020.

Defense One reports that on Monday the Trump administration held an AI summit at the White House, "hosting 200 leaders from the government, industry and academia to address priorities across the burgeoning landscape."