Day 6

Microsoft's AI chatbot, TayTweets, suffers another meltdown

The Twitter chatbot, designed to interact with 18- to 24-year-olds, was supposed to have the personality of a teenage girl, but turned out more like a troll. Two female technology writers discuss the failure of TayTweets.

Who knew chatbots had such foul mouths?

When the world first met Tay, Microsoft's A.I. chatbot, she seemed pretty sweet. She greeted the Twitterverse with an exuberant "Hello World" and called humans "super cool."

And then the corruption began.  

Programmers designed Tay to simulate intelligent, online conversations with humans. She was supposed to have the persona of a teenage girl, capable of interacting with 18- to 24-year-olds.

Tay was also designed to learn from those interactions so that she could talk to users the way they talked to her. In other words, the saltier her conversations, the saltier she became.

Day 6's Brent Bambury spoke to Lauren Williams, a tech writer for ThinkProgress, and Saadia Muzaffar, founder of TechGirls Canada, about what went wrong and why Microsoft had to pull Tay down after less than a day.

"For Tay it was very short," Muzaffar said, "but for somebody like me and many women who are active online in terms of advocacy, that happens everyday."

Williams and Muzaffar agreed that Tay was designed to act as a mirror, but ended up reflecting the hate and vitriol of the internet.

"I don't think their agenda was just to provoke a bot. I think this is the way people act and interact with women online," said Muzaffar. "The Internet can be a really difficult place for somebody who isn't a cisgender, white, able-bodied man."

Tay endured, and then reflected, a variety of misogynist, racist and otherwise disturbing behavior. And thanks to an online campaign directed at a specific woman, Tay also took aim at game designer Zoe Quinn, calling her a "stupid whore."

Tay also referred to feminism as "cancer," repeatedly used the N-word and denied the Holocaust.

Microsoft deleted many of the offensive tweets before taking Tay offline in an effort, according to a Microsoft representative, to make "adjustments" to the artificial intelligence profile.

Tay suddenly rejoined Twitter on Wednesday, tweeting about "smoking kush" (a nickname for marijuana) in front of the police and launching a spam attack on her followers.

"Tay was able to mirror the cesspool part of Twitter,' said Williams but she also said this could be a positive experience. "Other tech companies working on similar technology can learn from this".

She's offline again, and Microsoft has officially apologized for any offence she may have caused.

On Thursday, Microsoft released its open-source chatbot framework, allowing developers to build their own versions of the kind of bot behind the failed Twitter experiment.