In what sounds like the cast of the perfect tech buddy movie, a dream team of philosopher Huw Price, astrophysicist Martin Rees and Skype co-founder Jaan Tallinn has come together to work out the chances of technology turning on its masters and pulling a Matrix on us.
“In the case of artificial intelligence, it seems a reasonable prediction that some time in this or the next century intelligence will escape from the constraints of biology,” Huw Price said. He claimed that at this point, the technology we created would ultimately become smarter than us, and though it may not have malicious intent, it could certainly have goals of its own that don't involve humanity.
Fortunately, for now this is just a theory, and it is exactly the sort of scenario the three-man think tank hopes to examine. If such an outcome is somewhat inevitable, what safeguards need to be put in place? Concerns in movies usually focus on a deliberate extermination of our species. Price, however, believes the threat would more likely come from an AI simply not caring about things like the environment, since it wouldn't necessarily require an oxygen-carrying atmosphere as humans do.
These questions and more will be asked at the Centre for the Study of Existential Risk at Cambridge University, which is set to launch sometime next year.
KitGuru Says: Do you guys have a robotic doomsday picture in your head? Let us know how it plays out in the comments section.