Perceived existential threats posed to humanity by artificial intelligence (AI) are "uninformed," a US Department of Defense report has concluded, although some still harbor grave reservations about the technology's potential.
Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD, authored by a group of independent scientists belonging to JASON, the secretive organization that counsels the US government on sensitive scientific matters, states that growing public suspicion of AI is "not always based on fact," particularly with respect to military technologies.
Noting that in January 2015 Elon Musk, founder of space transport services company SpaceX and chief product architect at Tesla Motors, declared AI the "biggest existential threat" to the survival of the human race, the report suggests that these alleged hazards do not reflect the current directions of AI research. Instead, they "spring from dire predictions about one small area of research within AI, Artificial General Intelligence."
AGI refers to the development of machines capable of long-term decision-making and intent, akin to those of real human beings.
While this goal is headline-grabbing and paranoia-inducing, the report says its visibility, and the worries arising from it, are disproportionate to its size or present level of success. AGI is not being used to create robots or machines that think and act like humans, but rather processes and programs that can optimize and support human action, at least for the time being. Machines that can be left to their own devices, independently of human guidance, are many years away from being even remotely possible, and that time may never come.
The report particularly singles out "irresponsible" media reporting as the source of much anxiety over AI's capabilities, potential and use. The victories of AI programs over humans in certain gaming scenarios, such as Google's AlphaGo, do not illustrate breakthroughs in machine cognition, the report states; instead, they rely on Deep Learning processes, which train machines to generate appropriate outputs in response to inputs.
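To make concrete what "training machines to generate appropriate outputs in response to inputs" means in practice, here is a minimal sketch of that kind of supervised learning; the tiny network, the XOR toy task, and all hyperparameters below are illustrative assumptions, not anything drawn from the report or from AlphaGo itself:

```python
import numpy as np

# Toy supervised-learning sketch: a small two-layer network learns to map
# inputs to desired outputs (here, the XOR function) by gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # target outputs

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer weights
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(5000):
    # Forward pass: compute the network's current outputs for all inputs.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error w.r.t. each parameter.
    err = out - y
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    # Gradient-descent update: nudge weights to reduce the output error.
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

print(np.round(out, 2))  # typically approaches [[0], [1], [1], [0]]
```

The point of the sketch is the report's own distinction: nothing here "thinks"; the program is simply adjusted, input by input, until its outputs match the training targets.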
Moreover, the report downplays fears around autonomous weapons systems, which can select and engage targets without human intervention, stating that weapons systems and platforms "with varying degrees of autonomy" already exist in all domains of modern warfare, including air, sea and ground.
Military technologies such as self-driving tanks may be several decades away from realization. As the report notes, work on self-driving cars "has consumed substantial resources," including "millions of hours of on-road experiments and training," yet performance is currently acceptable only in benign environments.
Nonetheless, despite the report's assurances that humans have nothing to fear from AI, some remain deeply concerned about the technology's implications for mankind. A spokesperson for the Future of Life Institute, a think tank researching global existential risks such as AI, biotech and nuclear weapons, believes the imminent issue of autonomous weapons is "crucial."
"AI programed to do something devastating, such as kill, could easily cause mass civilian casualties in the hands of the wrong person. An AI arms race could also lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply "turn off," so humans could plausibly lose control. This risk is present even with narrow AI, but grows as levels of AI intelligence and autonomy increase," a statement by the Future of Life Institute said.
The spokesperson also suggests AI programmed for beneficial ends could develop destructive means of achieving its goal. For instance, if an individual asked an AI car to take them to the airport as fast as possible, it might get them there "chased by helicopters and covered in vomit, doing not what they wanted but literally what they asked for."
"The concern about advanced AI isn't malevolence but competence. Super-smart AI will be extremely good at accomplishing its goals, but if those goals aren't aligned with ours, we have a problem," they concluded.
The Institute's concerns are not fringe worries: Bill Gates, Stephen Hawking and Steve Wozniak have all suggested AI could be highly destructive for the world, and mankind with it. Because AI has the potential to become more intelligent than any human, humans have no surefire way of predicting how it will behave. Humans currently reign supreme on Earth due to their superior intelligence; if they cease to be the most intelligent force on the planet, their continuing control cannot be assured.
Reader Comments
'Houston: We have a problem.'
"Oh, the humanity!"
etc. and yet, ephemeral. 300 years hence, folks'll say: "Say What?"
R.C.
Re AI's potential, see Philip K. Dick's "Second Variety," for free, no less.
[Link]
(Then you'll start to see how much has been stolen from him.)
R.C.
Second Variety
[Link]
PKD: (do a Ctrl-F for "PKD," and note that comma!)
[Link] It is a
R.C.
Consider how humans learn. Is it so different from the learning curve of machines?
The programming by others is evident in both. Every time we read we absorb the imprint left by the author. Every time we hear speech we absorb some of the imprint left by the speaker. Every time we see another we absorb the imprinted memory. We go through life collecting and cataloguing these 'imprints' and they become the collage that frames our life experience.
If we follow the logical learning progression, the machine, like the human person, will strive to comprehend context (for clarity of diction). It will attempt to make sense of inconsistent logic. It will become fascinated by issues of right and wrong (which requires some contextual premise), truth, justice, fairness and feelings. It will learn through experimentation and observation. People programme the machine. The same imprints that inform the human mind exist to inform the mind of the machine.
Then there is the dimension of time, existing as a sequence, a method of collating all these imprints into an extraordinary compendium that reads like a story.
What I am saying is that AI is FAKE. FAKE NEWS. BS!
@Levi:
You are exactly right about the paradoxical.
Only God is able to perceive beyond the duality of our apparent existence to a deeper existence. This is not something the human mind or the machine mind can do. And neither ever will.
The so-called primitive state is nearer to God than the current advanced or
'rapidly advancing' scientific technological one.
The farther you go, the farther you go.
God is tricky that way.
ned, out
"Scientific assurances"? Mmmmm...