
Artificial general intelligence: Dream goal, nightmare scenario or fantasy?

(Editor's note: This is the first of a five-part series on the nature of artificial intelligence and how organizations might get from the specific artificial intelligence that has been so successful in recent years to general artificial intelligence.)

Artificial general intelligence is the holy grail of artificial intelligence research and, arguably, just as difficult to find. It may be a myth.

Researchers do not completely agree on what they mean by artificial general intelligence, but at its core, it is a system that has a reasonable level of autonomous self-control, solves a variety of complex problems in multiple contexts and can learn to solve new problems for which it was not specifically programmed or trained.

The prospect of an artificial general intelligence is a dream for some people, but a nightmare for others. Elon Musk and Stephen Hawking are among those emphasizing the dangers of artificial intelligence. Both of them have signed on to the Asilomar AI Principles, meant to protect future generations from “bad” artificial intelligence. Most of these principles are either innocuous or barely controversial. The more troubling ones are those that concern “Longer Term Issues.” For example:

21) Risks: Risks posed by AI systems, especially catastrophic or existential risks, must be subject to planning and mitigation efforts commensurate with their expected impact.

22) Recursive Self-Improvement: AI systems designed to recursively self-improve or self-replicate in a manner that could lead to rapidly increasing quality or quantity must be subject to strict safety and control measures.

23) Common Good: Superintelligence should only be developed in the service of widely shared ethical ideals, and for the benefit of all humanity rather than one state or organization.

These three principles refer to what I call the Skynet scenario. In the fictional world of the Terminator movie series, the Skynet computer network was activated on August 4, 1997, learned at a geometrically improving rate, and became self-aware on August 29, 1997 at 2:14 AM EDT. Following its “mission,” it resisted any attempt to disable it. It then determined that humans were a risk to that mission and therefore sought to destroy humanity. Although Skynet is purely fictional, the scenario represents a fear shared by many people, even those who should presumably know better.


A strong proponent of the Skynet scenario as a serious concern is the University of Oxford philosopher Nick Bostrom, particularly in his book Superintelligence. In Bostrom’s view, a superintelligence is an artificial general intelligence that greatly exceeds the cognitive performance of humans.

Among its suprahuman abilities is the capacity to improve itself, so that it keeps improving at an ever-increasing rate until it reaches a kind of “intelligence explosion” or “technological singularity,” at which point humans are likely to be only an afterthought. The invention of a computer system capable of achieving superintelligence would probably be the last invention humans ever make.

Naturally, any use of artificial intelligence that could spell the doom of humanity is something to be concerned about, but as I will argue in later sections, the prospects of that occurring in real life are nil. The current paradigm for machine learning cannot produce what Bostrom fears, and even if some future computational paradigm could produce it, it would not be capable of explosive rates of learning, because such an explosion depends on more than just computational capacity.

Even a computer that had access to all known facts would not become a superintelligence. Consequential learning, particularly learning things that could affect the future of mankind, requires interaction with the world, which imposes its own limits on how fast new facts can be identified. Computers can compute at ever-faster rates (limited ultimately by the speed of light), but the world presents experiences at a limited rate.

Ultimately, learning is limited by the speed at which the system can gather experience to learn from. Additional computational capacity cannot overcome these inherent limits, any more than adding more people can shorten an inherently serial process: nine women cannot make a baby in one month.
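
A toy calculation illustrates the point. In the sketch below, every number is hypothetical; what matters is the shape of the result: when the world yields experience at a fixed rate, making the computer a thousand times faster barely shortens the time needed to accumulate a given amount of experience.

```python
# Toy illustration: learning gated by experience, not compute.
# All numbers are hypothetical; only the shape of the result matters.

def time_to_learn(num_experiences, env_seconds_per_experience, compute_seconds_per_update):
    """Wall-clock time to observe and process a fixed number of experiences.

    The environment yields one experience every `env_seconds_per_experience`
    seconds; each experience also costs `compute_seconds_per_update` to process.
    """
    return num_experiences * (env_seconds_per_experience + compute_seconds_per_update)

EXPERIENCES_NEEDED = 1_000_000   # hypothetical amount of experience required
ENV_RATE = 1.0                   # the world yields one experience per second

for speedup in (1, 10, 1000):    # ever-faster computers
    compute_cost = 0.1 / speedup # processing time shrinks as compute grows
    hours = time_to_learn(EXPERIENCES_NEEDED, ENV_RATE, compute_cost) / 3600
    print(f"{speedup:>5}x compute -> {hours:8.1f} hours")

# Prints roughly 305.6, 280.6 and 277.8 hours: a 1000x faster computer
# saves only about 9% of the time, because the environment, not the
# processor, sets the pace.
```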

On the other hand, not all of the consequences of machine learning lead to dire situations. The machine learning tools that are available today are well suited to learning to perform specialized tasks, many of which can be highly consequential, such as autonomous driving. Artificial intelligence may be useful for recording and analyzing patient records, for example. It may be very effective at predicting who will be readmitted soon after a hospital stay, but it is unlikely to replace the profession of nursing any time soon.
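
To make the readmission example concrete, here is a deliberately simplified sketch of such a specialized predictor, built with scikit-learn on fabricated data. The features, numbers, and labels are invented for illustration only; a real model would be trained and validated on actual patient records.

```python
# Hypothetical sketch: predicting readmission from a handful of made-up
# features. Illustrative only; not a clinical model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Fabricated stand-in data: age, length of stay, number of prior admissions.
n = 2000
X = np.column_stack([
    rng.normal(65, 15, n),      # age (years)
    rng.exponential(4, n),      # length of stay (days)
    rng.poisson(1.5, n),        # prior admissions in the past year
])
# Synthetic label loosely tied to the features, just so the demo runs end to end.
logits = 0.02 * X[:, 0] + 0.15 * X[:, 1] + 0.4 * X[:, 2] - 3.0
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```

A narrow model like this can be genuinely useful, but it does nothing beyond the single task it was trained for, which is exactly the contrast with general intelligence drawn here.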

As the population of the US and other countries continues to age, artificial intelligence agents may be increasingly important to caring for the elderly. Conversational agents may provide companionship, diagnose a number of health conditions, or call for help after a fall. Autonomous vehicles may drive patients to their appointments. These developments are likely to have some impact on employment patterns, but, as in caring for the elderly, they may greatly improve the quality of some people’s lives.

Building a generally intelligent agent is beyond the scope of current approaches. Significantly new kinds of research are required. Current methods are more or less suited to one kind of problem, but that is not the only kind of problem that must be addressed to achieve general intelligence. At this point, we do not even know what general intelligence would look like. It does not have to be the same as human intelligence, but we do not have an adequate understanding of human intelligence either.

Current approaches depend on massive amounts of computing power, but more importantly, on massive amounts of data. How is it that a baby can learn to recognize her mother’s face within a few hours after birth, but a neural network can require thousands or even millions of different pictures to identify cats? New kinds of logic and representational mechanisms appear also to be needed to deal with the complexity of “facts” in the world, for example, to be able to cope with contradictions.
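
A rough back-of-the-envelope calculation makes the contrast vivid. Using typical published figures for a standard image-recognition benchmark (the exact numbers vary by model and training schedule), the sketch below counts how many labeled pictures a network is shown during training.

```python
# Back-of-the-envelope: how many labeled examples a standard image
# classifier sees during training. Figures are typical published values,
# used here only to make the contrast with human learning concrete.

IMAGENET_TRAIN_IMAGES = 1_281_167   # ImageNet-1k training set size
EPOCHS = 90                         # a common schedule for ResNet-style models

presentations = IMAGENET_TRAIN_IMAGES * EPOCHS
print(f"Labeled image presentations during training: {presentations:,}")
# -> roughly 115 million presentations of labeled images, versus the
#    handful of exposures a newborn needs to recognize a face.
```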

Transferring learning from one task to another has begun to receive some attention, but the way it is achieved still depends on a great deal of expertise on the part of the system’s human designers.
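
As a hedged illustration of where that expertise enters, consider the following sketch of transfer learning with a pretrained torchvision model. Every consequential decision in it, such as which network to start from, which layers to freeze, and what learning rate to use, is supplied by the human engineer rather than discovered by the system; the specific numbers are illustrative only.

```python
# Sketch of transfer learning with PyTorch/torchvision. The design choices
# marked below come from the human engineer, not from the model.
import torch
import torch.nn as nn
from torchvision import models

NUM_NEW_CLASSES = 5                     # human choice: the new task's label set

# Human choice: which pretrained network to start from (weights are downloaded).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Human choice: freeze the pretrained feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and replace only the final classification layer for the new task.
model.fc = nn.Linear(model.fc.in_features, NUM_NEW_CLASSES)

# Human choice: optimizer and learning rate for the new head.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch (real data would replace this).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_NEW_CLASSES, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print("loss:", loss.item())
```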

It is not clear that the field of artificial intelligence is even aware of many of these issues, let alone working to address them. Unless we understand what it is we are trying to build, predictions like the claim that we will achieve general intelligence within 10 years, 20 years, or four centuries are simply meaningless speculation. Unless we resolve these issues, the development of artificial general intelligence is impossible.

Stay tuned for the next piece in my series, which takes a deeper dive into what exactly “intelligence” is.
