From self-driving cars to computers that can win game shows, humans have a natural curiosity about artificial intelligence (AI). As scientists continue making machines smarter and smarter, however, some are asking: what happens when computers get too smart for their own good? From "The Matrix" to "The Terminator," the entertainment industry has long pondered whether future robots will one day threaten the human race. Now, a new study concludes there may be no way to stop the rise of the machines. An international team says humans would not be able to prevent a super-intelligent AI from doing whatever it wanted to.

Scientists from the Center for Humans and Machines at the Max Planck Institute have started to picture what such a machine would look like. Imagine an AI program with an intelligence far superior to ours, one so advanced that it could learn on its own without new programming. If it were connected to the internet, researchers say, the AI would have access to all of humanity's data and could even take control of other machines around the globe.

Study authors ask: what would such an intelligence do with all that power? Would it work to make all of our lives better? Would it devote its processing power to fixing issues like climate change? Or would the machine look to take over the lives of its human neighbors?

Controlling the uncontrollable? The dangers of super artificial intelligence

Both computer programmers and philosophers have studied whether there's a way to keep a super-intelligent AI from potentially turning on its human makers, ensuring that future computers could not cause harm to their owners. The new study reveals that, unfortunately, it appears to be virtually impossible to keep a super-intelligent AI in line.

"A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity," says study co-author Manuel Cebrian, leader of the Digital Mobilization Group at the Center for Humans and Machines, in a university release.

The international team looked at two different ways to control artificial intelligence. The first would curb the power of the superintelligence by walling it off, preventing it from connecting to the internet or to any other technical devices in the outside world. The problem with this plan is fairly obvious: such a computer would not be able to do much of anything to actually help humans.

Being nice to humans does not compute

The second option focused on creating an algorithm that would give the supercomputer ethical principles. This would hopefully force the AI to act in the best interests of humanity.

To test this, the team considered a theoretical containment algorithm meant to keep AI from harming people under any circumstances: it would first simulate the AI's behavior and halt the program if its actions were deemed harmful. While this might keep the AI from attaining world domination on paper, the study authors say such an algorithm simply cannot be built in the real world.

"If you break the problem down to basic rules from theoretical computer science, it turns out that an algorithm that would command an AI not to destroy the world could inadvertently halt its own operations. If this happened, you would not know whether the containment algorithm is still analyzing the threat, or whether it has stopped to contain the harmful AI. In effect, this makes the containment algorithm unusable," says Iyad Rahwan, Director of the Center for Humans and Machines.

The study concludes that containing artificial intelligence is an incomputable problem: no single computer program can find a foolproof way to keep AI from acting harmfully if it wants to. Researchers add that humans may not even realize when super-intelligent machines actually arrive in the tech world. So, are they already here?

The study appears in the Journal of Artificial Intelligence Research.