
A how-to guide to building an artificial general intelligence

(Editor’s note: This is the fifth and final installment of a five-part series on building an artificial general intelligence.)

Special-purpose artificial intelligence relies on people to structure the problem so that a machine can solve it with the tools available in the present paradigm—essentially curve fitting. Structuring problems for machines this way means that people build hidden assumptions into the problem, assumptions that may be difficult to articulate and are unlikely to be available to the machine without human help.

Even the hobbits and orcs problem, described in an earlier installment, depends on hidden assumptions for its solution. When we give the problem to people, we can be pretty confident about the assumptions most of them will make. When we give it to a machine learning system, those assumptions are made by the person designing the system. Without them, the problem would be difficult or impossible to solve. It is still unclear how a more autonomous system would know what assumptions to make.

Generally, the assumptions fall under the heading of common sense, which tells us that the boat will float and will not move from one shore to the other without someone rowing it. Common sense tells us that the color of the boat is irrelevant to solving the problem.

But other assumptions may not be so commonsensical. For example, if there is an island in the middle of the stream, the solution is entirely different from the one under the more typical assumption that there is no island. If the hobbits are too small to row the boat, the solution again changes dramatically. We assume that hobbits and orcs are immutable, but an earlier version of this problem referred to cannibals and missionaries.

One solution, under those circumstances, would be for the missionaries to convert the cannibals so that they were no longer interested in eating the missionaries. We assume that the characters cannot wade or swim across the river, but how do we know that they cannot? We assume that there is no rope long enough to pull the boat back from the opposite shore.
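To see how many of these assumptions a designer must freeze into code before a machine can solve the puzzle at all, here is a minimal sketch in Python. The encoding is illustrative, not from the series: three hobbits, three orcs, a two-seat boat, and a breadth-first search over bank states. Every commented rule is a hidden assumption the machine never chose.

```python
from collections import deque

# State: (hobbits on left bank, orcs on left bank, boat on left bank?).
# Hidden assumptions baked in: the boat floats, holds at most two, cannot
# cross by itself, nobody swims or wades, there is no island, and the
# boat's color is irrelevant (it never appears in the state at all).
START, GOAL = (3, 3, 1), (0, 0, 0)
MOVES = [(1, 0), (2, 0), (0, 1), (0, 2), (1, 1)]  # boat carries one or two

def safe(h, o):
    # Assumption: orcs eat hobbits whenever they outnumber them on a bank.
    return (h == 0 or h >= o) and 0 <= h <= 3 and 0 <= o <= 3

def successors(state):
    h, o, boat = state
    sign = -1 if boat else 1  # the boat moves people off its own bank
    for dh, do in MOVES:
        nh, no = h + sign * dh, o + sign * do
        if safe(nh, no) and safe(3 - nh, 3 - no):
            yield (nh, no, 1 - boat)

def solve():
    # Plain breadth-first search over the tiny state space.
    frontier, seen = deque([(START, [START])]), {START}
    while frontier:
        state, path = frontier.popleft()
        if state == GOAL:
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))

print(solve())
```

Change any of the commented assumptions, say, add an island or take away the hobbits' ability to row, and this encoding simply stops describing the problem; a person has to rewrite it.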


Hidden assumptions stay hidden because the people making them and the people evaluating the solutions are similar enough in background to make the same assumptions. It is only when people make different assumptions that communication breaks down.

For example, in one well-known cross-cultural study, Joe Glick asked adult members of the Kpelle tribe in Liberia to sort items into categories. He presented them with five types of food, five types of clothing, five types of tools and five types of cooking utensils. He expected them to sort the items into those four groups, but instead they sorted them into 10 functional pairs. Potato and hoe went together, as did orange and knife. They explained that a knife goes with an orange because it cuts it. They said that a wise man would do it that way.

Finally, Glick asked them how a fool would organize the items, and they sorted the items into the categories Glick had expected. There was a mismatch between the assumptions of the Kpelle and those of the American researcher. Kpelle computer scientists might build machine learning systems significantly different from those built by Americans.

An artificial general intelligence agent will have to make similar kinds of assumptions, not just in familiar contexts, but across a wide variety of contexts. We will have to find methods for selecting and representing these assumptions.

One hypothetical approach to artificial general intelligence would be to build a solution that combines all of the known special-purpose machine learning systems into one global system. With all of these approaches to choose from, such a system could select the right one to apply to any situation.

There are a number of problems with this approach, but even if they could be worked out, the system would still only apply solutions it already knew about. It could set parameter values to choose among the available methods, but it could never go beyond the approaches supplied to it. It would be hostage to the completeness of the list of alternative problem-solving strategies.
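A toy dispatcher makes the limitation plain. This sketch is entirely hypothetical; the solver names are placeholders, not real systems. However clever the selection rule gets, the registered menu is the ceiling.

```python
# A hypothetical "global system": route each problem to one of a fixed
# menu of special-purpose solvers. The lambdas stand in for real systems.
SOLVERS = {
    "image": lambda task: f"ran a convolutional net on {task!r}",
    "text": lambda task: f"ran a language model on {task!r}",
    "game": lambda task: f"ran game-tree search on {task!r}",
}

def dispatch(kind, task):
    solver = SOLVERS.get(kind)
    if solver is None:
        # The system cannot invent a new approach; it can only report
        # that its list of strategies does not cover this case.
        raise LookupError(f"no registered solver for {kind!r}")
    return solver(task)

print(dispatch("text", "translate this sentence"))

try:
    dispatch("unanticipated", "a kind of problem nobody listed")
except LookupError as err:
    print(err)  # hostage to the completeness of the menu
```

No setting of parameters inside `dispatch` can conjure a solver that was never registered, which is the completeness problem in miniature.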

We usually reserve notions of brilliance for people who create new approaches to problems. Einstein is not revered for his ability to solve equations, but for his ability to create new theories that solved previously resistant problems and made predictions about future observations. He did not just follow rules; he made new rules that let us think about the universe in different ways.

An artificial general intelligence will have to do the same kind of thing—invent new solutions to unanticipated problems. We do not know a whole lot about how people approach novel problems. Some of our best information comes from mathematicians.

Poincaré is well known, among other things, for his description of how the solutions to certain mathematical problems occurred to him when he was not actively thinking about them. On a trip, it spontaneously occurred to him that the transformations he had used to define one kind of function he had been studying were identical with those of non-Euclidean geometry. Once he had that thought, he was easily able to verify it when he eventually had the time to focus on the problem again.

If we take seriously his description of what he was thinking, then in the days before his trip he had spent time comparing mathematical objects that he thought might be relevant.

“One evening,” Poincaré said, “contrary to my custom, I drank black coffee and could not sleep. Ideas rose in crowds; I felt them collide until pairs interlocked, so to speak, making a stable combination.”

What is interesting here is that he compared what were presumably many kinds of mathematical objects, with some implicit basis in mind for assessing the comparisons, yet on the evening he drank the coffee he did not find a combination that solved the problem. Presumably, some of these connections eventually led to his famous insight.

Analogies, metaphors and similes appear to play an important role in coming up with new representations, but the mechanism by which this occurs is not entirely clear. These comparisons are not random, nor, apparently, are they deliberately ordered. Still, they provide a means for generating new representations from known ones, and the new representations provide the insight needed to solve difficult problems.
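To give a flavor of how such comparisons might be mechanized, here is a toy sketch, loosely in the spirit of structure-mapping accounts of analogy rather than any particular published system, with made-up relational data. It treats an analogy as a pairing of objects across two domains that preserves as many relations as possible.

```python
from itertools import permutations

# Made-up relational descriptions: the solar system as the "base" domain
# and the atom as the "target". Each relation is (predicate, arg1, arg2).
BASE = {("orbits", "planet", "sun"), ("attracts", "sun", "planet")}
TARGET = {("orbits", "electron", "nucleus"), ("attracts", "nucleus", "electron")}

def objects(relations):
    return sorted({arg for _, a, b in relations for arg in (a, b)})

def score(mapping, base, target):
    # Count target relations that turn into base relations under the mapping.
    translated = {(r, mapping[a], mapping[b]) for r, a, b in target}
    return len(translated & base)

def best_mapping(base, target):
    # Brute force over all object pairings; fine for toy domains.
    b_objs, t_objs = objects(base), objects(target)
    best, best_score = None, -1
    for perm in permutations(b_objs, len(t_objs)):
        mapping = dict(zip(t_objs, perm))
        s = score(mapping, base, target)
        if s > best_score:
            best, best_score = mapping, s
    return best, best_score

print(best_mapping(BASE, TARGET))
# -> ({'electron': 'planet', 'nucleus': 'sun'}, 2)
```

The sketch can only rearrange the representations it is given; it maps electron to planet because the relational structure matches, but nothing in it could invent the relational vocabulary itself, which is where the real difficulty lies.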

A hint of how this kind of analogical reasoning might be accomplished by an artificially intelligent agent also comes from mathematical thinking. Grothendieck topoi provide a sort of mathematical bridge for transferring knowledge from one mathematical theory to another. At this point, the relevance of topoi is nothing more than a vague hunch, but they have been claimed to be useful in fields beyond mathematics, including physics, computer science and linguistics. Even if topoi are found to be inadequate, something like them will be needed by an artificial general intelligence.

Artificial general intelligence depends on the creation of novel forms of representation, not just selection among given representations, which is the core of the present paradigm. That paradigm is antithetical to creation, so it is very difficult to see how artificial general intelligence could ever be accommodated within it. Instead, artificial general intelligence will require a substantial change in paradigm. It will require mechanisms that are not now being contemplated. It will require the ability to address new kinds of problems in new ways.

Transfer of learning from one task to another and the mapping of one set of representations onto another will be additional critical features. It is not that computers cannot do those things; it is that computers with the current set of tools cannot do them. We need new tools.
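For contrast, the narrow kind of transfer today's tools do support looks something like the following sketch, with toy data standing in for real tasks. A representation learned for one task is frozen and reused for another, and, crucially, a person decides what transfers and where it plugs in.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these weights were learned on task A; here they are random
# stand-ins, since this is an illustration rather than a real experiment.
task_a_features = rng.normal(size=(16, 4))  # 16 raw inputs -> 4 features

def featurize(x):
    # The frozen, reused representation: this is all that "transfers".
    return np.tanh(x @ task_a_features)

# Task B trains only a small linear readout on top of the reused features.
x_b = rng.normal(size=(100, 16))           # toy task-B inputs
y_b = (x_b.sum(axis=1) > 0).astype(float)  # toy task-B labels
feats = featurize(x_b)
readout, *_ = np.linalg.lstsq(feats, y_b, rcond=None)

acc = ((feats @ readout > 0.5) == y_b.astype(bool)).mean()
print(f"training accuracy with reused features: {acc:.2f}")
```

Everything that counts as "mapping" here (which features to reuse, where to attach the new readout) was arranged by the programmer, which is precisely the limitation being described.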

Previous installments of this series can be found below:
Part One: Artificial general intelligence: Dream goal, nightmare scenario or fantasy?
Part Two: Building an artificial general intelligence begins by asking 'what is intelligence?'
Part Three: Building an artificial general intelligence: The current paradigm of AI
Part Four: Overcoming the obstacles to achieving artificial general intelligence
