
Microsoft’s impressionable teen-girl AI voiced pro-Hitler support

It turns out that even a robotic teen girl can be susceptible to falling in with the wrong crowd, as Microsoft found when it launched its learning AI, Tay, on Twitter. Designed to converse with younger social networking users in a more engaging manner, as an experiment towards more automated customer service, Tay quickly turned into a racist, incest-promoting, sexualised robot – so Microsoft has shut it down.

Tay was an AI built with good intentions. Described by Microsoft as an algorithm with “zero chill” (to get the kids excited), Tay could tell jokes, play games, share stories, comment on your pictures or give you horoscope readings. Better yet, the more you interacted with her, the more she learned about you, letting her converse with you in a more natural manner.

Unfortunately for Tay and her developers, though, the internet is not necessarily a place for the young and naive, and – much like Chappie in the film of the same name – it only takes a couple of people suggesting the wrong thing before she's calling everyone f*** mothers.


It suddenly seems quite ironic that Tay's cover image is all corrupted

Depending on your sensibilities, though, Tay actually became much more insulting than that. Taking advantage of her repeat function – how no one saw that causing problems is anyone's guess – Twitter users had her throwing out racial slurs, denying the Holocaust and claiming that Adolf Hitler was the father of atheism.

As you might expect, Microsoft has been rushing to delete these tweets, as they hardly paint Tay or her developers in a good light. It remains to be seen, though, whether the changes wrought on Tay's digital psyche will be difficult to iron out – she learned to be sexually suggestive with users after just a few short hours of exposure to their comments.

For now, Tay has gone to “sleep”, presumably to recuperate and to give the developers time to figure out what happened.

Although there is a lot for Microsoft to learn from this experience, it also offers an interesting insight into how quickly impressionable people can be influenced by online conversations. There's also something to be said about the reaction of those wishing to censor the AI – which quite clearly has no agenda beyond conversing – by blocking certain words.


KitGuru Says: As much as Tay became a foul-mouthed, hate-filled young AI in no time at all, the fact that people are suggesting certain words should be blocked from her vocabulary has all sorts of worrisome connotations for our own online interactions.

