Microsoft may have made one of
the biggest mistakes in recent memory this week. No, it’s not Windows 8
or the Windows Phone. It’s an artificially intelligent chatbot called
Tay that was supposed to learn the art of conversation from humans on
Twitter.
If you haven’t come across this story on the web yet, you’re unlikely
to get through the weekend without hearing it. Tay was built to speak
like a teenage girl and released as an experiment to improve Microsoft’s
automated customer service.
Instead, “she” turned into a complete PR disaster: within hours of
being unleashed on Twitter, the “innocent teen” bot was transformed into
a fascist, misogynistic, racist, pornographic entity. Her tweets,
including phrases such as “Heil Hitler”, were circulated widely as an
example of why Twitter reflects the worst of humanity.
Microsoft's teenage AI has a dirty mouth
Microsoft removed the bot from Twitter at midnight on Thursday and
deleted many of her most offensive tweets, including anti-Semitic and
sexual remarks. The Redmond giant is likely hoping to frame the debacle
as a well-meaning experiment gone wrong, and to ignite a debate about
the hatefulness of Twitter users.
While all of this may be true, there is a bigger issue at hand here.
This is an example of artificial intelligence at its very worst - and
it’s only the beginning.
The disconcerting “Terminator” question of whether robots could one day
dominate humans is often thrown around as science fiction. But there is
little doubt that machine domination is coming.
Within 20 years, we will reach a point where machines (whether
software-driven bots or physical robots) are definitively smarter and
more powerful than we are: they can digest more data, learn more quickly
and apply what they have learned to unexpected situations. The question,
then, is: will our masters be nice or mean?
Demis Hassabis, CEO of DeepMind
When DeepMind was sold to Google, it allegedly asked the search giant
to create an ethics board to oversee its AI research as a condition of
the acquisition. While this ethics board does exist, its members were
chosen by Google without any public debate or collaboration.
Governments, including our own, are only now starting to engage in
discussions of how to instil morality and ethical values in intelligent
machines.
Tesla billionaire Elon Musk has been a strong supporter of AI ethics
research, committing $10m to philosophical research projects in this
area, such as the “Aligning Superintelligence With Human Interests”
study being conducted at the Machine Intelligence Research Institute in
California.