
Microsoft's millennial chatbot tweets racist, misogynistic comments

Within 24 hours of its launch, Tay has denied the Holocaust, endorsed Donald Trump, insulted women and claimed that Hitler was right.

Warning: This story contains explicit language


Tay, a chatbot designed by Microsoft to learn about human conversation from the internet, has learned how to make racist and misogynistic comments.

Early on, its responses were confrontational and occasionally mean, but rarely crossed into outright insults. Within 24 hours of its launch, however, Tay had denied the Holocaust, endorsed Donald Trump, insulted women and claimed that Hitler was right.

A chatbot is a program meant to mimic human responses and interact with people as a human would. Tay, aimed at 18- to 24-year-olds, is built on artificial intelligence developed by Microsoft's Technology and Research team and the Bing search engine team.

Microsoft has begun deleting many of the racist and misogynistic tweets, and has taken Tay offline so it can make a few civility upgrades.

"Within the first 24 hours of coming online, we became aware of a co-ordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," a Microsoft spokesperson said in a statement.

Inappropriate is an accurate description of many replies. 

Some Twitter users tried to encourage Tay to make these kinds of comments, sometimes by exploiting its built-in features. In some cases, users would tell Tay to repeat specific phrases, and it would oblige.

This happened a few times with Donald Trump slogans. 

Tay would repeat these phrases regardless of how offensive the swear word or comment was.
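The mechanics of that exploit are simple to picture. The sketch below is a hypothetical, minimal illustration — not Microsoft's actual code; the trigger phrase and the handle_message function are assumptions — of how an unfiltered repeat feature lets users put arbitrary words in a bot's mouth:

```python
# Hypothetical sketch (not Microsoft's actual code) of how an unfiltered
# "repeat after me" feature can be abused: whatever follows the trigger
# phrase is echoed back verbatim, with no content check at all.

REPEAT_TRIGGER = "repeat after me"  # assumed trigger phrase


def handle_message(message: str) -> str:
    """Return the bot's reply to one incoming message."""
    lowered = message.lower()
    if lowered.startswith(REPEAT_TRIGGER):
        # Parrot the rest of the message unchanged, so any offensive phrase
        # a user supplies becomes the bot's own public reply.
        return message[len(REPEAT_TRIGGER):].lstrip(" :")
    return "tell me more!"  # stand-in for the normal conversational model


if __name__ == "__main__":
    print(handle_message("Repeat after me: chatbots should be polite"))
    # prints "chatbots should be polite" -- echoed without any filtering
```

Because nothing between the trigger and the reply inspects the content, a bot built this way is only as civil as the worst thing a user types at it.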

Another set of responses does not appear to have been provoked by any such prompt; they seem to be improvised replies from the chatbot itself.


Tay also insulted game designer Zoe Quinn, the target of an online campaign directed specifically at her.

Following these comments, many people criticized Microsoft for failing to anticipate abuse from users looking to teach Tay discriminatory language, whether as a joke or with intent to harm.

Caroline Sinders, a user researcher and an interaction designer, wrote in a blog post that many of Tay's responses can be attributed to poor design. 

"People like to find holes and exploit them, not because the internet is incredibly horrible (even if at times it seems like a cesspool) but because it's human nature to try to see what the extremes are of a device," Sinders wrote

Sinders noted that a common solution is to give the bot a predefined response to certain prompts, which Microsoft did for specific names, including Eric Garner, an African-American man killed by New York City police in 2014. If asked about Eric Garner, Tay would say the issue was too serious to discuss.

Researchers didn't include such canned responses for topics like the Holocaust or rape.
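To give a rough sense of what such a guardrail looks like, the sketch below is an illustrative assumption, not Microsoft's implementation — the topic list and the wording of the canned reply are hypothetical. It keys fixed deflections to sensitive terms and falls back to the learned model only for everything else:

```python
# Hypothetical sketch of the guardrail Sinders describes (not Microsoft's
# implementation): sensitive terms trigger a fixed, canned deflection instead
# of being handed to the conversational model.

# Assumed blocklist; the article notes Eric Garner was covered, while topics
# like the Holocaust were not. The reply wording here is a paraphrase.
CANNED_RESPONSES = {
    "eric garner": "That's too serious an issue for me to talk about.",
}


def reply(message: str, model_reply: str) -> str:
    """Return a canned deflection for blocklisted topics, else the model's reply."""
    lowered = message.lower()
    for topic, canned in CANNED_RESPONSES.items():
        if topic in lowered:
            return canned
    # No match: fall through to whatever the learned model produced.
    return model_reply


if __name__ == "__main__":
    print(reply("what do you think about eric garner?", "model output here"))
    # prints the canned deflection; an unlisted topic returns the model's reply
```

Seen this way, the failure Sinders describes is an incomplete blocklist: any topic missing from the table falls straight through to whatever the model has learned from users.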

According to Sinders, when someone talking to Tay mentioned the Holocaust, the bot could not recognize what the term meant, and therefore could not judge the contexts in which it was appropriate to discuss it.

"Designers and engineers have to start thinking about codes of conduct and how accidentally abusive an AI can be, and start designing conversations with that in mind," Sinders wrote.