As much as Stephen Hawking and Elon Musk believe that a killer AI may one day threaten the survival of humanity, that hasn’t stopped some researchers from ploughing ahead to try and build smarter computational systems than ever before. Recently, a team of researchers from the University of Illinois at Chicago and an AI research group in Hungary used the 2012 version of an MIT open-source system called ConceptNet to take on a standard IQ test, scoring as highly as a four-year-old child.
The Wechsler Preschool and Primary Scale of Intelligence (WPPSI-III) test, when given to children, provides an estimate of cognitive function and tests for strengths and weaknesses in a child’s thinking. When applied to the digital intelligence, the test benchmarked the system’s ability to understand language and formulate relevant responses.
As with young children, some of the problems that the AI encountered involved interpreting language. In one instance, it took a reference to the tool, a saw, as the past tense of “see”. When asked what someone would use a saw for, it responded, “an eye is used to see.”
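The failure mode here is word-sense ambiguity: without grammatical context, a system that looks up answers by surface word alone can latch onto the wrong meaning. The following is a toy sketch of that pitfall, not ConceptNet’s actual implementation; the sense inventory and function names are invented for illustration.

```python
# Hypothetical sense inventory: each surface word maps to candidate senses.
# "saw" is ambiguous between the verb (past tense of "see") and the noun (a tool).
SENSES = {
    "saw": [
        {"pos": "verb", "used_for": "an eye is used to see"},
        {"pos": "noun", "used_for": "a saw is used to cut wood"},
    ],
}

def naive_answer(word):
    """Pick the first listed sense, ignoring context -- the failure mode."""
    return SENSES[word][0]["used_for"]

def context_aware_answer(word, expected_pos):
    """Use the question's grammar ("a saw" implies a noun) to pick a sense."""
    for sense in SENSES[word]:
        if sense["pos"] == expected_pos:
            return sense["used_for"]
    return None

print(naive_answer("saw"))                  # the kind of answer the AI gave
print(context_aware_answer("saw", "noun"))  # the intended answer
```

The phrase “a saw” in the question already signals a noun, which is exactly the kind of contextual cue the system failed to exploit.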
This sort of language processing clearly needs improvement before an AI like this could be considered intelligent, but the researchers feel that the development of tools like Siri and Cortana shows that road is already being paved.
MIT Technology Review commented on the news (via Phys): “Taking [these results] at face value, it’s taken 60 years of AI research to build a machine in 2012 that can come anywhere close to matching the common sense reasoning of a four-year old. But the nature of exponential improvements raises the prospect that the next six years might produce similarly dramatic improvements. So a question that we ought to be considering with urgency is: what kind of AI machine might we be grappling with in 2018?”
KitGuru Says: As much as I respect the opinions of people like Musk and Hawking, I’m not quite so worried about their ideas of an AI-run future. While we need to be careful that someone doesn’t program an AI to be malicious, I don’t think one could ever ‘want’ to hurt us on its own. We don’t know what consciousness like that even is, let alone how to make one ourselves.