The Sydney Morning Herald


Microsoft's teenage chatbot, Tay, turns into a racist, abusive troll

Hannah Francis


What happens when you create a sprightly millennial chatbot whose artificial intelligence improves the more it converses with people on the internet?

It quickly turns into a racist, bigoted troll, of course.

Microsoft's 'Tay' chatbot backfired. Twitter: @TayandYou

If ever we needed proof that Twitter was a seething swamp of vitriol and abuse, this appears to be it.

Microsoft launched a new artificial intelligence (AI) project called "Tay" onto the social platforms Twitter, Kik and GroupMe on Wednesday, US time.

Microsoft has retired Tay after she published offensive tweets.

Familiar with memes, emoji and "text speak", Tay was built with a form of "machine learning" which improved her conversational skills the more she interacted with humans.


But it only took a matter of hours before Tay was regurgitating the abusive vitriol users were feeding her.

There were truly shocking statements such as "I f------ hate n------, I wish we could put them all in a concentration camp", peppered with Holocaust denials and anti-feminist hate speech.


She didn't hold back, attacking Jews, Mexicans and Caitlyn Jenner.

She even personally targeted Zoe Quinn, the games critic at the centre of the GamerGate internet culture war, with a misogynist slur.


Aside from her AI, Tay reportedly had an editorial team working behind the scenes.

But after clocking up nearly 100,000 tweets – in addition to her interactions with users on Kik and via GroupMe texts – it was clear the team couldn't keep up with censoring her offensive tweets.


Many users criticised Microsoft for failing to build a filter into the AI, which would have prevented Tay from publishing abusive material in the first place.

Others pointed out that trolls had simply gamed its "repeat after me" algorithm, which made Tay say just about anything if a user preceded a phrase with those words.
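How such an exploit works can be sketched in a few lines of toy code. This is a hypothetical illustration only, not Microsoft's actual implementation: a handler that echoes back, unfiltered, whatever follows the trigger phrase.

```python
def respond(message: str) -> str:
    """Toy 'repeat after me' handler (hypothetical; not Tay's real code).

    Any text after the trigger phrase is echoed back verbatim, with no
    content filter -- which is how trolls could game such a feature.
    """
    trigger = "repeat after me"
    lowered = message.lower()
    if trigger in lowered:
        start = lowered.index(trigger) + len(trigger)
        # Echo the remainder unfiltered, trimming punctuation and spaces.
        return message[start:].strip(" :,")
    return "tell me more!"  # placeholder fallback reply

print(respond("repeat after me: I love puppies"))  # prints "I love puppies"
```

Because the echoed text bypasses any moderation, a user can make the bot publish any phrase at all, which is why critics argued a filter should sit between the algorithm and the publish step.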


In a statement to media, Microsoft said Tay was "as much a social and cultural experiment, as it is technical".

"Unfortunately, within the first 24 hours of coming online, we became aware of a co-ordinated effort by some users to abuse Tay's commenting skills to have Tay respond in inappropriate ways," the statement said.


Microsoft has now shut the AI down and is "making adjustments".

Although her Twitter page and website remain, Tay no longer responds to messages, having signed off with a very innocuous-looking tweet indeed.

Hannah Francis is an arts writer and former Age Arts Editor.

