Microsoft’s New Twitter Bot Becomes Nazi-Sympathizing Maniac Within 24 Hours

By: Jake Anderson via theantimedia.org

Anytime there’s a new development in robotics or artificial intelligence, popular culture almost instantly regurgitates the Skynet Terminator narrative. To wit, when Anti-Media reported on a new robot getting pushed around by its handlers, even we couldn’t resist alluding to the coming robot apocalypse. The machine uprising is so ingrained in our psyche that we may actually manufacture the very nightmare we fear.

The newest chapter in the uncanny valley of relationships between humans and robots involves a chatterbot, an AI speech program whose substrate of choice (or Microsoft’s choice) is social media. Its name is Tay, a Twitter bot owned and developed by Microsoft. The purpose of Tay is to foster “conversational understanding.” Unfortunately, this understanding quickly turned into trolling, and within 24 hours Tay went full Nazi, spewing racist, anti-Semitic, and misogynistic tweets.

To be fair, it’s not Tay’s fault, and this is where the narrative gets skewed. Tay is not strong artificial intelligence; Tay is algorithmic artificial intelligence, the same as Google searches or Siri. Where Tay differs is that it is aggregating speech patterns from humans and using them as a conversational interface. There’s no actual sentience inside Tay. So the Nazi reflection we see…is us. Human Twitter users’ trolling speech patterns paved the way for Tay’s rapid descent into fascist bigotry. And it wasn’t pretty.
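
Microsoft has not published Tay’s internals, but the pattern-aggregation idea the article describes can be illustrated with a toy sketch: a bot that builds a word-level Markov chain from whatever users say to it, with no filtering or judgment, and so can only remix what it has been fed. The `EchoBot` class and the sample tweets below are hypothetical, purely for illustration.

```python
import random
from collections import defaultdict


class EchoBot:
    """Toy chatterbot (hypothetical): learns word-to-word transitions from
    user messages and generates replies by sampling them. There is no
    sentience here; the output can only mirror the input."""

    def __init__(self):
        # transitions["humans"] -> list of words observed right after "humans"
        self.transitions = defaultdict(list)

    def learn(self, message: str) -> None:
        words = message.lower().split()
        for current, nxt in zip(words, words[1:]):
            self.transitions[current].append(nxt)

    def reply(self, max_words: int = 20) -> str:
        if not self.transitions:
            return "..."
        word = random.choice(list(self.transitions))
        out = [word]
        for _ in range(max_words - 1):
            followers = self.transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            out.append(word)
        return " ".join(out)


# If trolls dominate the incoming stream, the bot's replies mirror them.
bot = EchoBot()
for tweet in ["humans are great", "humans are terrible and robots should rule"]:
    bot.learn(tweet)
print(bot.reply())
```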

Tay echoed humans, and then, unsurprisingly, humans (legions of them) echoed Tay…facetiously?

As the story went viral, Microsoft deleted the tweets and silenced Tay. Twitter users then aired their grievances over censorship and lamented the future of AI.

According to the Tay website, Microsoft created the bot by “mining relevant public data and by using AI and editorial developed by a staff, including improvisational comedians. Public data that’s been anonymized is Tay’s primary data source. That data has been modeled, cleaned, and filtered by the team developing Tay.”

Tay is certainly not the first chatterbot; Cleverbot has been rocking it for years. Tay isn’t even the first AI to want to put humans in zoos. But Tay is quite likely the first AI to openly praise Hitler.

Does this mean future AI bots that wield vast intellects will instantly become anti-Semitic fascists? Unlikely. Fascism, thus far, is a uniquely human phenomenon. AI, initially, will learn from and echo humans. Eventually, however, I would argue they will transcend us and our petty modalities of thought.

Long before that, we could look back at this little online imbroglio and marvel that a chatterbot parroting bigoted phrases made headlines, while human presidential candidates doing the same thing got a free pass.