Man with robotic arm
© Pixabay
Perceived existential threats posed to humanity by artificial intelligence (AI) are "uninformed," a US Department of Defense report has concluded, although some still harbor grave reservations about the technology's potential.

Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD, authored by a group of independent scientists belonging to JASON, the secretive organization that counsels the US government on sensitive scientific matters, states that growing public suspicion of AI is "not always based on fact," especially with respect to military technologies.

Noting that in January 2015 Elon Musk, founder of space transport services company SpaceX and chief product architect at Tesla Motors, declared AI the "biggest existential threat" to the survival of the human race, the report suggests the alleged hazards do not cohere with the current research directions of AI. Instead, they "spring from dire predictions about one small area of research within AI, Artificial General Intelligence."

AGI refers to the development of machines capable of long-term decision-making and intent, akin to those of real human beings.

While this goal is headline-grabbing and paranoia-inducing, its visibility, and the worries arising from it, are disproportionate to the field's size or present level of success, the report says. AGI is not being used to create robots or machines that think and act like humans, but rather processes and programs that can optimize and support human action — at least for the time being. Machines that can be left to their own devices, independently of human guidance, are many years away from being even remotely possible — and that time may never come.

The report particularly singles out "irresponsible" media reporting as the source of much anxiety over AI's capabilities, potential and use. The victories of AI programs over humans in certain games, such as those of Google's AlphaGo at Go, do not illustrate breakthroughs in machine cognition, the report states — instead, they rely on deep learning, a process that trains machines to generate appropriate outputs in response to inputs.
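The report's distinction — that such systems are trained to map inputs to outputs rather than to reason — can be illustrated with a toy supervised-learning sketch. This is a minimal illustration in plain Python; the single-weight model, the learning rate and the data are assumptions for demonstration, not anything described in the report:

```python
# Toy supervised learning: fit a single weight w so that output ≈ w * input.
# The "model" never understands the task; it only adjusts w to reduce error.

def train(pairs, lr=0.01, epochs=500):
    """Fit w by gradient descent on squared error over (input, output) pairs."""
    w = 0.0
    for _ in range(epochs):
        for x, y in pairs:
            pred = w * x
            # Gradient of (pred - y)^2 with respect to w is 2 * (pred - y) * x.
            w -= lr * 2 * (pred - y) * x
    return w

# Training data generated by the hidden rule y = 3x.
data = [(1, 3), (2, 6), (3, 9)]
w = train(data)
print(round(w, 2))  # prints 3.0 — the weight converges to the pattern in the data
```

The program ends up reproducing the pattern in its training data, but it has no goals, intent or understanding of what the numbers mean — the gap the report draws between today's deep learning and AGI.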

Moreover, the report downplays fears around autonomous weapons systems, which can select and engage targets without human intervention, stating weapons systems and platforms "with varying degrees of autonomy" exist today in all domains of modern warfare, including air, sea and ground.

Military technologies such as self-driving tanks may be several decades away from realization — as the report notes, work on self-driving cars "has consumed substantial resources," including "millions of hours of on-road experiments and training," but performance is currently acceptable only in benign environments.

Nonetheless, despite the report's assurances that humans have nothing to fear from AI, some remain deeply concerned about the technology's implications for mankind. A spokesperson for the Future of Life Institute, a think tank researching existential world risks such as AI, biotech and nuclear weapons, believes the imminent issue of autonomous weapons is "crucial."

"AI programed to do something devastating, such as kill, could easily cause mass civilian casualties in the hands of the wrong person. An AI arms race could also lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply "turn off," so humans could plausibly lose control. This risk is present even with narrow AI, but grows as levels of AI intelligence and autonomy increase," a statement by the Future of Life Institute said.

The spokesperson also suggests AI programmed for beneficial ends could develop destructive means of achieving its goal. For instance, if an individual asked an AI car to take them to the airport as fast as possible, it might get them there "chased by helicopters and covered in vomit, doing not what they wanted but literally what they asked for."

"The concern about advanced AI isn't malevolence but competence. Super-smart AI will be extremely good at accomplishing its goals, but if those goals aren't aligned with ours, we have a problem," they concluded.

The Institute's concerns are not fringe worries — Bill Gates, Stephen Hawking and Steve Wozniak have all suggested AI could be highly destructive for the world, and mankind with it. As AI has the potential to become more intelligent than any human, humans have no surefire way of predicting how it will behave. Humans currently reign supreme on the Earth due to their superior intelligence — if they cease to be the most intelligent force on the planet, their continuing control cannot be assured.