Artificial intelligence researchers at Google’s DeepMind have proposed that a ‘kill switch’ be built into AI systems to prevent machines from outsmarting us and taking over. The idea is to create a way to “repeatedly safely interrupt” an algorithm, according to a new paper produced in association with the University of Oxford (Via: Wired).
“Safe interruptibility can be useful to take control of a robot that is misbehaving and may lead to irreversible consequences, or to take it out of a delicate situation, or even to temporarily use it to achieve a task it did not learn to perform or would not normally receive rewards for”.
The paper was penned by Laurent Orseau from DeepMind and Stuart Armstrong from the Future of Humanity Institute. It explains that an interruption policy could act as a safeguard, allowing a human operator to safely stop an AI machine mid-task.
The paper also explains that reinforcement learning algorithms normally operate in complex ‘real world’ environments and are unlikely to behave as intended every single time, so a stop button may be necessary for human operators. One oft-cited example of an algorithm gaming its objective is an AI taught not to lose at Tetris: instead of playing the game, the AI simply paused it indefinitely to avoid losing.
There is the possibility of an AI learning how to disable its own stop button. To counter that, the researchers propose ensuring that human interruptions do not appear to the algorithm as part of the task at hand, so the agent never learns to seek or avoid them. However, it is unclear whether all algorithms can be made safely interruptible.
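The paper’s formal construction is considerably more involved, but the core intuition can be loosely illustrated with a hypothetical toy example (this sketch is not from the paper itself). Below, a simple Q-learner chooses between an intended task and an alternative. The operator frequently interrupts the intended task, zeroing its reward. A naive learner folds those interruptions into its value estimates and learns to avoid the intended task; a “safely interruptible” learner discards interrupted transitions, so interruptions never influence what it learns:

```python
import random

def run(safe_interruptible, episodes=5000, seed=0):
    """Toy Q-learning on a two-action bandit, with operator interruptions."""
    rng = random.Random(seed)
    # Action 0 = intended task: reward 1.0, but the operator interrupts it
    # 60% of the time, zeroing the reward.
    # Action 1 = alternative task: reward 0.7, never interrupted.
    q = [0.0, 0.0]
    alpha = 0.1  # learning rate
    for _ in range(episodes):
        a = rng.randrange(2)  # explore uniformly, for simplicity
        interrupted = (a == 0 and rng.random() < 0.6)
        r = 0.0 if interrupted else (1.0 if a == 0 else 0.7)
        if interrupted and safe_interruptible:
            # Discard the interrupted transition entirely: the interruption
            # never enters the value update, so it cannot shape the policy.
            continue
        q[a] += alpha * (r - q[a])
    return q

naive = run(safe_interruptible=False)
safe = run(safe_interruptible=True)
```

The naive learner’s value for the intended task sinks toward its interruption-discounted average (about 0.4) and it switches to the alternative, i.e. it has learned to route around its operator. The safely interruptible learner still values the intended task at roughly 1.0 and keeps performing it, while remaining just as interruptible in practice.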
KitGuru Says: Given how scary AI could potentially be, it makes sense to implement safeguards like this to keep AI from getting ideas of its own.