Microsoft had to pull its fledgling chatbot, Tay, from Twitter on Thursday.
The reason: In less than 24 hours, the AI had been morally corrupted to
the point that it was freely responding to questions with religious,
sexist and ethnic slurs. It spouted white supremacist slogans,
outlandish conspiracy theories and no small amount of praise for Hitler.
Microsoft released a statement
on how things went sideways so quickly, though that's done little to
lessen the outrage from internet users. But I would argue that this rage
is misplaced.