© 2019 SourceMedia. All rights reserved.

Will artificial intelligence really spell the end of the human race?

Artificial intelligence – or AI – is a term that we are becoming increasingly used to hearing. From our mobile phone’s virtual assistant to our interactions with companies via chatbots, and online tools such as facial recognition, tailored news feeds and recommended products, AI, we are told, is becoming ever more prevalent in our everyday lives.

It is therefore somewhat concerning that some well-respected scientists and technologists are warning that the development of AI could have dire consequences.

As early as 2014, Stephen Hawking was warning that “the development of full artificial intelligence could spell the end of the human race.” Meanwhile technologist Elon Musk in 2017 stated that “AI is a fundamental risk to the existence of human civilization.”

Musk has also repeated such warnings more recently, stating that the development of AI “poses a greater risk than nuclear weapons.” These are concerning warnings for the future development of AI that we should not ignore.

For the majority of us, using AI tools such as our virtual assistants or allowing our email to be filtered automatically for spam does not seem to pose a risk that will result in the destruction of the human race. In reality, however, we are currently seeing only very early forms of AI and in fact what is better defined as “machine learning.”

Machine learning is a form of AI in which a computer is programmed to learn from data: algorithms allow computers to learn from, and make predictions based on, the data they receive.
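The core idea – learning a pattern from data and then predicting for new inputs – can be sketched in a few lines of Python. The example below fits a straight line to data points using least squares; the numbers are invented purely for illustration and do not come from any real system.

```python
# Minimal illustration of "learning from data": fit a straight line
# y = slope * x + intercept to example points, then predict for a new input.

def fit_line(xs, ys):
    """Return the slope and intercept that minimize squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Invented "training data": pairs of inputs and observed outputs.
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

slope, intercept = fit_line(xs, ys)
prediction = slope * 6 + intercept  # predict for an unseen input, x = 6
```

This is the simplest possible instance of the pattern the article describes: the algorithm is not told the answer for x = 6 in advance; it generalizes from the examples it has seen.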


Where the fears of the future of AI are more prevalent is where more advanced AI is being developed.

For example, driverless cars are currently being tested around the world and require more complex algorithms than some of the tools we currently use. Driverless cars work on the basis that they are programmed to react to different scenarios and situations. Human testing is then used to correct wrong actions and allow the car to learn from its mistakes.

Even though the training and testing are extensive, it is difficult to imagine that there will not be future situations in which an ethical or moral decision must be made in order to take the right course of action, something the car is unlikely to be equipped to handle. While some issues therefore remain to be resolved, the driverless car is conceivable to most people, and it is not something most of us would perceive as a threat to human existence.

What we need to consider in order to better understand the context in which Hawking and Musk have given their warnings relates to a more advanced form of AI.

For example, neuroevolution is a form of AI in which evolutionary algorithms are used to create artificial neural networks. In this type of AI, systems are able to evolve, mutate and develop themselves in much the same way as the human brain. Here AI could therefore easily evolve into something that has greater cognitive abilities than the human brain. This therefore is perhaps more concerning as the end point of this development is unknown.
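The evolutionary idea behind neuroevolution – mutate a system, keep the variants that perform better – can be shown with a toy sketch. The example below is purely illustrative (it is not how real neuroevolution systems are built): it evolves the three parameters of a single artificial neuron toward computing the logical AND function, using only random mutation and selection.

```python
import random

# Toy sketch of evolution-by-mutation: a single artificial neuron with
# two weights and a bias is mutated at random, and a mutant is kept
# whenever it scores at least as well on the task (logical AND).

random.seed(0)  # fixed seed so the run is reproducible

DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def act(genome, inputs):
    """Fire (1) if the weighted sum of inputs plus bias is positive."""
    w1, w2, b = genome
    return 1 if w1 * inputs[0] + w2 * inputs[1] + b > 0 else 0

def fitness(genome):
    """Number of the four AND cases the neuron gets right (0 to 4)."""
    return sum(act(genome, x) == y for x, y in DATA)

def mutate(genome):
    """Nudge every parameter by a small random amount."""
    return tuple(g + random.gauss(0, 0.5) for g in genome)

# Evolution loop: keep the best genome seen so far, mutate it each generation.
best = (0.0, 0.0, 0.0)
for _ in range(200):
    child = mutate(best)
    if fitness(child) >= fitness(best):
        best = child
```

Because only equal-or-better mutants are kept, the score never decreases, and the neuron typically evolves to handle all four cases. Real neuroevolution applies the same mutate-and-select principle to whole networks of such neurons, including their structure, which is what makes the end point of that process hard to predict.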

Given Elon Musk’s warnings, it appears paradoxical that the AI company in which he has invested is currently working on AI which seeks to replicate, and even surpass, the abilities of the human brain. OpenAI claims that this project will “ultimately provide everyone with the economic freedom to pursue whatever they find most fulfilling.”

While for most of us, the concept of AI which replicates and exceeds the capacity of the human brain is hard to conceptualize, it is argued that such power could be harnessed to tackle climate change, improve healthcare for all and personalize education.

These issues are important and anything that can be done to resolve them should be explored. It is, however, the unknown consequences of such powerful AI that need to be understood and controlled.

As an open letter from the Future of Life Institute, started in 2015 and signed by over 8000 technologists, scientists and others including Bill Gates and Steve Wozniak, states: “because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.”

This is perhaps the point that Stephen Hawking, Elon Musk and others like them are trying to make. As Stephen Hawking stated: “In short, the rise of powerful AI will either be the best, or the worst thing, ever to happen to humanity. (The problem is) we don’t know which yet.”
