Take Elon Musk seriously on the Russian AI threat
(Bloomberg View) -- Elon Musk is worried about governments, specifically the Russian one, competing for artificial intelligence superiority and sparking World War III. That shocking statement was made all the more shocking by the low expectations the world seems to have for Russia, which U.S. Senator John McCain dismissed just a few years ago as a "gas station masquerading as a country."
Recent remarks by Russian President Vladimir Putin grabbed Musk's attention. Speaking to schoolchildren about AI on Sept. 1, Putin declared, "Whoever becomes the leader in this area will rule the world." Musk's response emerged on Twitter: "It begins..."
Actually, Russia isn't "beginning" anything when it comes to AI. It's just that its progress in the field has been somewhat below the radar: We are used to discussing AI in the context of advances by major Silicon Valley companies or top U.S. universities, and while Russians work there, the top names are not Russian. Nor are the top thinkers and investors in the field. IBM's recent list of "AI influencers" includes only one native Russian speaker, the University of Louisville's Roman Yampolskiy -- who, like Musk, worries about a possible "AI apocalypse" -- and he's originally from Latvia, not Russia.
Russia is awful at commercializing and promoting technological advances. Prisma, an AI application that redrew photos to make them look like paintings by famous artists, took post-Soviet countries by storm last year and won some interest in the U.S. but failed to become a global phenomenon.
Other Russian AI startups are only known to experts, and while large Russian information technology companies such as Yandex and Mail.ru Group have invested a lot of resources in AI research and built products using neural networks (Yandex search, more popular than Google in Russia, is powered by proprietary neural tech), these achievements are overshadowed by those of bigger Western rivals. Even Russian venture capitalists appear to be looking for AI opportunities outside the home country.
And yet there's plenty of AI research going on in Russia. The Moscow Engineering Physics Institute's Alexei Samsonovich is engaged in a quest for "emotionally intelligent" AI, and the government is applying the technology to initiatives in electronic government and the military. Russia ranks fourth in the world, after the U.S., China and India and ahead of the U.K., in terms of the number of people who use Kaggle -- the crowdsourcing platform used by most AI researchers, which was acquired by Google this year.
The first loan the BRICS Development Bank -- a financial institution set up jointly by Brazil, Russia, India, China and South Africa -- has approved for Russia is meant to fund a project that includes the use of AI in Russian courts to automate trial records using speech recognition.
State-sponsored media reports on the potential military uses of AI have picked up in recent months. They include an AI system to help pilots fly fighter planes, a project by St. Petersburg-based Kronstadt Group to equip drones with artificial intelligence, a similar effort for missiles by the Tactical Missiles Corporation, and a Kalashnikov combat module using neural networks. The details of these efforts are not public, and the agencies may be exaggerating their importance for propaganda effect. But Russia is known to be experimenting with network-centric warfare -- including during its Syrian military operation -- so AI implementations are a logical step.
It's likely that, as in Soviet times, the military applications of AI in Russia are outpacing consumer ones. With guaranteed government financing, they face fewer constraints than Russian private companies or academic researchers do, given the Silicon-Valley-centric nature of the business.
Musk is right to suggest that China is not the only potential U.S. rival in the kind of artificial intelligence that will soon do far more sinister things than those for which we use Siri. Given its number of Kaggle users, India shouldn't be too far behind. Last month's call by Musk and a group of AI researchers for a global ban on robotic weapons is timely but probably unworkable: Ostensibly conventional but actually autonomous weaponry is far harder to detect, making a prohibition more difficult to enforce than existing bans on chemical and biological weapons -- or even than restrictions on various forms of cyberwarfare would have been. Besides, countries such as Russia, China and India will be wary of any such regulatory efforts initiated in the West.
AI is far more dangerous as part of weapons than as a potential replacement of the human brain in civilian applications. Nations will be killing with AI long before the technology can cause mass unemployment. In this sense, Musk's alarmism -- and Putin's words about AI-based global dominance -- should be taken seriously.