BERLIN, Germany – From self-driving cars to game-winning computers, humans have a natural curiosity about artificial intelligence (AI). However, as computers get smarter and smarter, some are asking, "what happens when they get too smart for their own good?" From "The Matrix" to "The Terminator," the entertainment industry has long considered whether future machines might one day threaten humanity. Now, a new study concludes there may be no way to stop the rise of the machines. An international team says humans would not be able to prevent a superintelligent AI from doing as it pleases.
Scientists from the Max Planck Institute's Center for Humans and Machines began by imagining what such a machine would look like: an AI program whose intelligence far surpasses that of humans, to the point that it can learn on its own without new programming. Connected to the internet, the researchers say, such an AI would have access to all of humanity's data and could even take control of other machines around the world.
The study authors ask: what would such an intelligence do with all that power? Would it work to make all of our lives better? Would it devote its processing power to solving problems such as climate change? Or would the machine take over the lives of its human neighbors?
Controlling the uncontrollable? The dangers of superintelligent AI
Computer programmers and philosophers alike have studied whether there is a way to prevent a super-intelligent AI from one day turning on its human creators, ensuring that future computers cannot harm their owners. The new study, unfortunately, finds that it is virtually impossible to keep a super-intelligent AI in line.
"A super-intelligent machine that controls the world sounds like science fiction. But there are already machines that perform certain important tasks independently without programmers fully understanding how they learned them. The question therefore arises whether this could at some point become uncontrollable and dangerous to humanity," says study co-author Manuel Cebrian, leader of the Digital Mobilization Group at the Center for Humans and Machines, in a university statement.
The international team looked at two different ways to control artificial intelligence. The first would restrain the power of a superintelligence by isolating it, preventing it from connecting to the internet or to any other technical devices in the outside world. The problem with this plan is fairly obvious: such a computer could do little to actually help people.
Being friendly to humans is not enough
The second option focused on creating an algorithm that would give the supercomputer ethical principles, which would, in theory, force the AI to act in the best interests of humankind.
The study designed a theoretical containment algorithm meant to ensure the AI cannot harm people under any circumstances. In simulations, the algorithm would halt the AI as soon as its actions were deemed harmful. But even though this kept the AI from achieving world domination, the authors say it simply would not work in the real world.
"If you break the problem down into basic rules from theoretical computer science, it turns out that an algorithm commanding an AI not to destroy the world could inadvertently halt its own operations. If that happened, you would not know whether the containment algorithm was still analyzing the threat, or whether it had stopped containing the harmful AI. In effect, this makes the containment algorithm unusable," says Iyad Rahwan, director of the Center for Humans and Machines.
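The reasoning here mirrors Alan Turing's 1936 halting problem, which proved that no algorithm can decide, for every program, whether that program eventually stops. A minimal Python sketch (our illustration, not code from the study; the function names `halts`, `contain`, and `contrarian` are hypothetical) shows why a perfect containment check runs into the same contradiction:

```python
# Illustrative sketch only: why a perfect "harm checker" cannot exist.

def halts(program, data):
    """Hypothetical oracle: True iff program(data) eventually halts.
    Turing's diagonal argument shows this cannot be implemented
    correctly for all possible programs."""
    raise NotImplementedError("provably impossible in general")

def contain(ai_program, world):
    """Hypothetical containment algorithm: only run the AI if we can
    first verify its behavior terminates safely. Deciding that is
    equivalent to solving the halting problem, so containment is
    itself incomputable."""
    if halts(ai_program, world):
        return ai_program(world)
    return None  # refuse to run the AI

def contrarian(program):
    """Diagonal construction: does the opposite of the prediction."""
    if halts(program, program):
        while True:      # predicted to halt, so loop forever
            pass
    return "halted"      # predicted to loop, so halt immediately

# contrarian(contrarian) halts if and only if it does not halt,
# a contradiction: no always-correct halts() can exist, and with it
# goes any infallible containment algorithm.
```

The point of the sketch is the last comment: any containment procedure powerful enough to predict all of an AI's behavior would also solve the halting problem, which is impossible.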
The study concludes that containing artificial intelligence is an incomputable problem. No single computer program can find an infallible way to prevent an AI from acting harmfully if it wants to. The researchers add that humans may not even realize when superintelligent machines arrive in the technological world. Are they already here?
The study appears in the Journal of Artificial Intelligence Research.