(Bloomberg View) -- Uninformative as fake news may be, it's shedding light on an important limitation of the algorithms that have helped make the likes of Facebook and Google into multi-billion-dollar companies: They're no better than people at recognizing what is true or right.
Remember Tay, the Microsoft bot that was supposed to converse breezily with regular folks on Twitter? People on Twitter are nuts, so within 16 hours it was spewing racist and anti-Semitic obscenities and had to be yanked. More recently, Microsoft released an updated version called Zo, explicitly designed to avoid certain topics, on the smaller social network Kik. Zo's problem is that she doesn't make much sense.