How do we teach a computer to think? Teach it to talk

At its simplest, artificial intelligence is a computer capable of thought. Through decades of research and experimentation with AI, researchers have learned that in order to teach a computer to think, we have to teach it how to talk… kind of.

For most of us, thought and language are intertwined at the most basic level. As humans, we communicate thoughts, feelings and experiences mostly through language. This means that teaching, learning and comprehension are also heavily reliant on language.

While a PhD in linguistics might not be the first credential that comes to mind when thinking about AI researchers, an understanding of language semantics may be one of the most important elements in advancing the field. Computers generate outputs by performing computations on unambiguous digits – 0s and 1s – which makes their “thinking” powerful and resistant to error, but rigid. Intelligence that approaches anything human must be flexible, and that flexibility is difficult for a rigid computer to achieve. Human language provides the richness to help close the gap.

The most forward-looking organizations working on AI are already employing computational linguists to help computers develop a better understanding of language.

Computational Linguistics

Computational linguistics is an intimidating term for a scientific and engineering discipline central to the development of AI. In a nutshell, computational linguistics is the intersection between the art of language and the science of computation.

Computational linguistics works to model human language so that the model can be translated into a computer program that “understands” real language data in a human-like way.

While the art and science of teaching a computer to think may be complex, on a consumer level, people experience the interaction between machines and language every day. Think about Google guessing your search term after a few characters, or Alexa deciphering your context and meaning based on your conversation.

The enabling technology that allows computers to understand human language is called natural language processing (NLP). It is critical to many of the most common applications of AI today.

Natural Language Processing

Predictive text on your phone, Google’s famous “did you mean XYZ”, and the customer support chatbot at your bank are all examples of NLP. However, NLP does much more than generate lists of similar words. It gives computers the ability to analyze, understand and derive meaning from human language in an intelligent and useful way.

So how does it work? Computers are excellent at analyzing and understanding structured, organized data – information formatted in consistent, predictable ways, such as spreadsheets or databases. As any elementary school grammarian would tell us, language has defined grammatical rules and structure. Computer programs and algorithms can understand these rules and apply them to data.
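To make that concrete, here is a minimal sketch of a program applying grammatical rules to text – labeling each word with its part of speech. The library (the open-source NLTK toolkit) and the example sentence are our choices for illustration:

```python
# A minimal sketch: labeling each word in a sentence with its part of
# speech, using the open-source NLTK library (pip install nltk).
import nltk

nltk.download("punkt", quiet=True)                       # tokenizer model
nltk.download("averaged_perceptron_tagger", quiet=True)  # tagger model

tokens = nltk.word_tokenize("Computers can apply grammatical rules to text.")
print(nltk.pos_tag(tokens))
# e.g. [('Computers', 'NNS'), ('can', 'MD'), ('apply', 'VB'),
#       ('grammatical', 'JJ'), ('rules', 'NNS'), ('to', 'TO'),
#       ('text', 'NN'), ('.', '.')]
```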

But language also has nuance. Sarcasm, context and slang have a huge impact on meaning. Language combines the structured data of grammar and syntax with unstructured data and nuance. For example, when sportscasters say “Golden State destroyed Portland last night,” they are referring to a basketball game, not a military conflict. Recognizing the distinction requires understanding not just the words but the context.

NLP systems use algorithms to apply grammatical rules, reduce words and sentences to their basic structure and look for patterns to establish context. With context, computers can understand that a single word can have different meanings depending on usage. For example, the word “destroy” in the context of a basketball game means to defeat through superior play, not to eliminate violently.
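One of the simplest of those reduction steps is stemming – collapsing related words to a shared base form so the system can treat them as a single concept. A minimal sketch, again assuming the NLTK library:

```python
# A minimal sketch of stemming: reducing related words to a shared base
# form, one of the normalization steps described above. Assumes NLTK.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
for word in ["destroyed", "destroys", "destruction"]:
    print(word, "->", stemmer.stem(word))
# destroyed -> destroy
# destroys -> destroy
# destruction -> destruct
```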

Teaching computers to talk

Early approaches to NLP were highly rules-based: machine learning (ML) algorithms were trained to look for specific words and phrases in text and then to give specific responses when those phrases appeared.

Think of a customer chatbot trained on all of the past chat conversations between users and representatives. The idea is that with enough data, the computer can access the largest FAQ database possible and identify the proper response to a question based on previous answers.

However, these systems experienced a range of problems and could hardly be called AI. Even an enormous FAQ database couldn’t anticipate every possible question, and simple rules-based ML programs, baffled by new topics and nuance, either failed to answer questions or gave unhelpful answers.
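A toy version of such a system makes both the approach and its brittleness easy to see. This sketch is purely illustrative – the FAQ entries and the keyword-matching rule are invented, not drawn from any real product:

```python
# A toy rules-based responder: match keywords in a question against a
# hand-built FAQ table. All entries and rules are invented for illustration.
import re

FAQ = {
    ("account", "balance"): "You can check your balance in the mobile app.",
    ("card", "lost"): "Please call us right away to report a lost card.",
}

def answer(question: str) -> str:
    words = set(re.findall(r"[a-z]+", question.lower()))
    for keywords, response in FAQ.items():
        if all(k in words for k in keywords):  # every keyword must appear
            return response
    # Anything the rules didn't anticipate falls through unanswered
    return "Sorry, I didn't understand that."

print(answer("How do I check my account balance?"))  # matches a rule
print(answer("My credit card was stolen!"))          # "stolen" isn't "lost" -> fails
```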

New generations of NLP models are based on deep learning, a subset of ML that looks for trends and patterns within unstructured data using neural networks – algorithms modeled after the human brain – to improve a computer’s understanding. Like the human brain, these models analyze data at a base level and then “learn” by recognizing patterns and relationships.

Researchers are a long way from creating an artificial brain, and this form of NLP requires massive amounts of information to train a system to identify the correlations and context within language. But the approach can produce highly accurate results, and it continually improves over time as more data becomes available.
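To give a sense of how accessible these models have become, here is a minimal sketch that runs a pretrained neural network over a sentence – assuming the open-source Hugging Face `transformers` library, which is our choice for illustration:

```python
# A minimal sketch of the deep-learning approach: a pretrained neural
# network scores a sentence with no hand-written rules. Assumes the
# Hugging Face `transformers` library (pip install transformers).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default model
print(classifier("The new translation feature is remarkably accurate."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99}] - learned from data, not rules
```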

As any Google Translate user knows, the service was not always as accurate as it is today. However, the latest version based on deep learning is impressive.

And this isn’t the only advance happening in computational linguistics; the field is producing compelling results elsewhere, too.

A new study released by the Medical University of South Carolina and published in BMC Medical Informatics and Decision Making shows promising results in using NLP to proactively identify social isolation among cancer patients. Social isolation is an increasingly important determinant of health outcomes, but one that historically has not been recorded in electronic health records (EHRs). The team used NLP to comb through patients’ EHRs and examine contextual language to identify those who may be socially isolated, so that treatment plans can be amended accordingly.

As researchers continue to improve on computers’ ability to process and understand language, the benefits for consumers, enterprises and technology are clear. What is more, as these technologies become more advanced and readily available, they also become more democratized.

Better customer service, improved document and information management, even ideas once thought science fiction – think universal translation – may one day be possible.
