Microsoft had to pull its fledgling chatbot, Tay, from Twitter on Thursday.
The reason: In less than 24 hours, the AI had been morally corrupted to
the point that it was freely responding to questions with racist,
sexist and religious slurs. It spouted white-supremacist slogans,
outlandish conspiracy theories and no small amount of praise for Hitler.
Microsoft released a statement
on how things went sideways so quickly, though that's done little to
lessen the outrage from internet users. But I would argue that this rage
is misplaced.