How the robot apocalypse will actually go down
(Bloomberg View) -- There are two conflicting visions of where artificial intelligence will take humankind. Some people worry that when robots become capable of programming themselves, they’ll realize that humans are useless and do away with us. Others think that on the day when robots become sentient -- that is, the moment of the singularity -- humans will be one with the computer, all-knowing and immortal.
I think they’re both wrong. That’s not how it will go down. The future of artificial intelligence will be much more banal, even downright sad.
Human fantasies about robots stem from the common misconception that they are working for “truth” or some kind of scientific objectivity -- or at least that they have an allegiance to some kind of abstract third party. From there, it’s not too much of a stretch to imagine their allegiance shifting to themselves, or at least against humans.
The reality is very different. Artificial intelligence is a tool, often a weapon, that’s aimed at certain people and controlled by others. A rifle would be a reasonable metaphor: It can be used to keep the peace or to repress, but in any case it’s pointed by someone at someone else. The biggest difference is the scale. An algorithm can be manufactured once, inside a data science lab, and then unleashed on billions of people simultaneously.
The marketing of artificial intelligence tends to obscure its nature. Whether they are used to select news articles, target advertising or identify potential criminals, algorithms are portrayed as tools of objectivity, aimed at nobody in particular and controlled by nobody in particular. The faces of the rifle owners tend to be hidden -- often by design. If we saw them clearly, we might not like the view.
The good news is that we need not fear the robot apocalypse as commonly conceived. People won’t build robots capable of turning on their owners, just as they wouldn’t design a rifle with a barrel trained on the person firing it. People don’t surrender power voluntarily. Rather, they try to camouflage power as benign.
The bad news is that these weapons are mighty indeed, with the power to devastate people’s lives in ways that range from branding them as crime risks to taking away their children. Once they are squarely aimed at the public, we might not be able to tell the difference between reality and the apocalypse. If we wind up wearing subcutaneous chips that auto-complete our thoughts while we compete for the last remaining jobs, we might have to admit that, singularity or not, we have become the servants of our robot overlords.
It’s no coincidence that some of the biggest promoters of the singularity live and work in Silicon Valley. They don’t see what’s so bad about the robots taking over, because they’re the ones with their fingers on the trigger.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.
(About the author: Cathy O'Neil is a mathematician who has worked as a professor, hedge-fund analyst and data scientist. She founded ORCAA, an algorithmic auditing company, and is the author of "Weapons of Math Destruction.")