Skeptical of AI's future? You can blame the media
(Bloomberg Opinion) -- Artificial intelligence is going to change the world profoundly, although exactly how is still unclear. The CEO of one AI company recently declared that “working for a living will become obsolete” as smart robots begin providing everything we need, from self-driving cars to health care. That’s a little hard to believe. But business leaders think AI could soon reduce the human workforce by as much as 99 percent in certain sectors. Good or bad, AI is fast becoming a reality.
Unfortunately, we seem to be sleepwalking into our AI future without talking about what we want from it, or how to make sure it is used responsibly. Part of the blame lies with the news media’s coverage of AI. Recent studies find that media treatment of AI mostly follows industry announcements and new product launches, helping to purvey the industry’s self-interested view of AI’s value and desirability. The public, by contrast, seems to be more cautious — and overwhelmingly in favor of close management of AI, preferably not by tech companies themselves.
Those media habits seem to be tilting public discussion of AI toward the private interests of the tech industry. In his recent book, “Future Politics: Living Together in a World Transformed by Tech,” British lawyer Jamie Susskind writes that we risk becoming increasingly controlled, almost without noticing, by digital systems we don’t understand. Our lives and our political, social and legal realities would then be shaped by those who control the digital systems for their own purposes.
Of course, the news media — under increasing commercial pressure in recent years — have cut staff, especially in reporting on technology and science, and so now rely more on industry-provided news releases. These are, by nature, designed to persuade rather than educate.
Researchers at the Reuters Institute for the Study of Journalism examined 760 articles representing eight months of reporting on AI by six mainstream U.K. news outlets, including the BBC, the Daily Mail, the Telegraph and the Guardian. They found that the articles habitually presented AI technology in a positive light: as a solution to problems in health care, a route to cheaper and more efficient transport, or a tool for better business management. The articles rarely discussed alternatives to AI-based solutions or examined how effective AI approaches might be compared with others.
This dominant framing isn’t surprising given that nearly 60 percent of the articles were pegged to industry events: a CEO’s speech, the launch of some new product or research initiative, or news about startups, buyouts or conferences. They were much less likely to quote academics and government sources, who might offer more independent points of view.
Encouragingly, the public seems to hold more balanced views — openness to AI mixed with caution and a strong desire for careful oversight. In another study, researchers at Oxford University analyzed Americans’ attitudes toward AI, drawing on survey data gathered by the Center for the Governance of AI. Americans were especially concerned with preventing violations of privacy and civil liberties by AI-assisted surveillance, avoiding AI weaponization of fake news and other harmful online content, and stopping AI-assisted cyberattacks.
The overwhelming majority of Americans said that robots and AI should be carefully managed — views similar to those found in European surveys. But Americans’ trust in who might carry out such management in the public interest varies markedly. Most trusted were university researchers and the U.S. military, followed by scientific organizations, with government and tech companies far down the list and Facebook Inc. last of all.
The media have a responsibility to help us avoid such a future, in part by paying less attention to industry announcements and more to those working to promote public action and collective decision-making. We should learn from our experience of the internet, where concerns about security came only as an afterthought — an oversight we’re still paying for in fake news, hacking, spyware and identity theft.
We’ll regret it if we leave our AI future to the tech companies themselves.