Artificial intelligence continues the long drive toward automation that makes our lives easier, and it is already woven into everyday life through social networks, Internet searches, Siri, Alexa, Google and other tools. Without a doubt, we are living in the Age of AI; the technology is advancing rapidly and has brought fundamental, beneficial changes to human life.

In spite of this, in some circles the discussion around AI has centered on the negative, with Elon Musk, Stephen Hawking and others warning that the technology could be the greatest danger to humankind.

There are two main fears surrounding AI: that it may be used for malicious purposes, and that it could enable robots, computers and other technology to reshape the world at humankind’s expense. These doomsday scenarios have played out in the “Terminator” and other films in which technology pushes aside – and even does harm to – the humans who designed and deployed it.

While some of these sci-fi scenarios could technically become reality, the fault cannot be placed on AI technology because it cannot be malicious in and of itself. In order for AI to be “good” or “bad,” some help is required.

In general, AI can only work toward a goal that it’s programmed or “trained” to accomplish. Narrow AI is designed to perform a specific task, such as facial recognition or motion detection. General AI, a goal many scientists are still pursuing, operates under broader, less defined parameters and can accomplish a much wider array of tasks.

While narrow AI may outperform humans at a specific task, such as playing chess or analyzing X-rays, general AI could conceivably outperform humans at virtually any cognitive task. This is the basis for much of the fear, but placing blame on AI is misguided at best.
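
To make that distinction concrete, here is a minimal sketch in Python using the scikit-learn library (the model and dataset are illustrative choices, not any particular company’s system): a model trained to recognize handwritten digits can do exactly that one task, and nothing else.

    # A minimal sketch of "narrow AI": a model trained for exactly one task.
    # Assumes Python with scikit-learn installed; details are illustrative.
    from sklearn.datasets import load_digits
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    digits = load_digits()  # 8x8 images of handwritten digits, labels 0-9
    X_train, X_test, y_train, y_test = train_test_split(
        digits.data, digits.target, test_size=0.2, random_state=0
    )

    # The model's entire "goal" is fixed here: map pixel values to digit labels.
    model = LogisticRegression(max_iter=5000)
    model.fit(X_train, y_train)

    print("Digit accuracy:", model.score(X_test, y_test))
    # The model can only ever output a digit from 0 to 9. Feed it a face,
    # a chess position or an X-ray and it will still answer with a digit;
    # it has no goal beyond the one it was trained to accomplish.

The point of the sketch is that the system’s objective is fixed by the humans who built it; the model cannot decide to pursue anything else.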

With any technology, the defining factor that makes it good or bad is the same: humans, and their motivation in deploying that technology.

For example, while the main use for cars is to move people from one place to another, thousands of people a year are killed in drunk-driving accidents caused by people exercising poor judgment in getting behind the wheel. Vehicles can also be weaponized, a worst-case scenario we’ve seen in recent months. In none of these cases is the vehicle to blame, and tellingly, no one is calling to ban cars.

The same can be said for AI. Is it the technology’s fault if someone with malicious intent builds a destructive killer robot? Decidedly not. That responsibility falls squarely on the human wielding AI for nefarious purposes. Could AI be deployed in a destructive manner? Absolutely, and there is a good chance it eventually will be. This is precisely why it’s time to place controls on AI development that address the intent of the minority of people who might use the technology for malicious purposes.

With many technologies and tools, there are rules governing their usage because, unfortunately, humans can’t always be trusted to make the best choices. As with guns, nuclear weapons and other technologies that are dangerous in the wrong hands, AI must be subject to rules governing its use. These guidelines and controls over AI development must be created and implemented on a global level to be effective.

It is helpful to put AI in perspective using the analogy of nuclear technology, which offers a number of positive uses, including generating electricity and powering submarines and aircraft carriers. But it quickly became apparent that nuclear energy could also be harnessed for destructive purposes in the form of weapons.

Scientists first harnessed this potential in the 1940s, and within a decade the U.S. and other countries were amassing arsenals of nuclear weapons. By the 1970s, however, several treaties and regulations had been put in place to curb their development, production and proliferation, essentially creating guidelines for dealing with an unspeakably destructive human creation.

In the development and evolution of AI, we are essentially living in the 1950s. Companies are increasingly turning to the technology in an effort to streamline processes, address labor shortages, protect people, places and things, and generally make people’s lives easier.

Rather than waiting two decades to begin putting curbs in place, we have an opportunity to act now, on a global level, with rules and guidelines that will at least limit the use of AI for malicious purposes. Both history and current events demonstrate that there is no shortage of bad actors in the world.

Despite the doom and gloom spouted by some, AI is an exciting technology that allows us to accomplish and create things that were not possible five to 10 years ago. Embracing AI will require us to remember that the technology is neither good nor bad; those labels are reserved for the humans who develop and use it.

There are many incredible benefits and countless positive uses for the technology, which cannot act with malice without human intervention and/or assistance. Therefore, it is up to humans to ensure AI is deployed responsibly. This should be the focus of the conversation around AI as we work to determine how the technology can be deployed to provide the greatest benefit to the greatest number of people.

The Age of AI is upon us, and it would be impossible to put the genie back into the bottle. So rather than heralding AI as humanity’s greatest risk or the biggest danger to human existence, Musk, Hawking and other thought leaders should put their energy into the more productive process of developing the rules and guidelines necessary to drive human responsibility and make their doomsday scenarios less and less likely.

Steve Reinharz

Steve Reinharz is president and chief executive officer at Robotic Assistance Devices.