Understanding the trends driving natural language processing adoption

Artificial intelligence innovation is advancing along two separate trajectories. The first involves computer vision and is not yet readily accessible to most organizations. The second revolves around natural language processing (NLP), which has quietly become embedded in everything from text analytics to speech recognition systems, and is one of the most broadly useful tools for the enterprise today.

Natural language technologies (which also include natural language understanding and natural language generation) typify AI’s promise by combining the virtues of its statistical and knowledge sides. It’s therefore significant that NLP’s “primary trend over the last 10 years has been moving along the spectrum of rules based models to now learned models,” reflected Jared Peterson, SAS director of analytics R&D.

Contemporary developments in NLP include not only deploying neural networks with smaller quantities of training data than before, but also running those models alongside conventional rules-based ones for more accurate text analytics, conversational AI, sentiment analysis, and a host of other use cases demonstrating the overarching worth of these technologies.

NLP and Text Analytics

NLP is still considered the umbrella term for the aforementioned natural language technologies, natural language interaction and what many refer to as conversational AI. However, it also refers to a number of distinct steps that, while traditionally related expressly to text analytics, are also applicable to speech recognition. These include:

  • Parsing: This process is arguably the foundation of NLP and involves three components: tokenization, tagging and lemmatization (closely related to stemming). Tokenization requires splitting text or words “into their own spaces,” Peterson explains. Tagging is the process by which parts of speech are assigned to different words. Lemmatization is similar in concept to taxonomies, in that “you want to be able to map a word back to its root form so you can catch variation across terms, or you also see that in spelling corrections,” Peterson says.
  • Tokenization: Gleaning which letters, characters, or syllables form individual words—or which words are part of the same sentence—is not always straightforward. For instance, it may seem sensible to split sentences between periods, but abbreviations and dates can make this practice inaccurate. Tokenization expressly denotes intelligently splitting content so it’s appropriate for words or sentences; tokens are the individual terms or sentences.
  • Entity Extraction: Entity extraction is also termed concept extraction or named entity recognition. Common examples of entities include names, places and things. For example, NLP systems can extract entities to determine whether Cary denotes a person’s name or a town in North Carolina. Entity extraction requires assigning tokens to entities.
  • Classifications: Sometimes referenced as categorizations, classifications involve the process of labeling documents according to type. For example, certain documents may be labeled as relevant to regulatory compliance. According to Peterson, classification directly pertains to sentiment analysis in that it’s “just a very specific classification problem, where you’re trying to say something is positive, negative, or neutral.”
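The parsing steps above can be sketched in a few lines. This is a toy illustration only: the lookup tables and function names are hypothetical, and a production system would use a trained pipeline rather than hand-written dictionaries.

```python
import re

# Hypothetical part-of-speech and lemma lookup tables, for illustration only.
POS_TABLE = {"the": "DET", "dogs": "NOUN", "were": "VERB", "running": "VERB"}
LEMMA_TABLE = {"dogs": "dog", "were": "be", "running": "run"}

def tokenize(text):
    """Split text into word tokens (deliberately simplistic)."""
    return re.findall(r"[A-Za-z]+", text.lower())

def parse(text):
    """Tokenize, then tag and lemmatize each token via lookup."""
    return [
        (tok, POS_TABLE.get(tok, "UNK"), LEMMA_TABLE.get(tok, tok))
        for tok in tokenize(text)
    ]

print(parse("The dogs were running"))
# [('the', 'DET', 'the'), ('dogs', 'NOUN', 'dog'),
#  ('were', 'VERB', 'be'), ('running', 'VERB', 'run')]
```

Note how lemmatization maps “running” back to its root form “run,” which is exactly the variation-catching behavior Peterson describes.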

Natural Language Understanding

NLU can be considered a specific type of classification focused on the actual semantic understanding of speech or text. There are numerous ways of asking the same question, each of which involves different words, phrasing, and sentence structure.

NLU is responsible for comprehending the underlying intention of such a question, in part based on the fundamentals provided by NLP, and “expressing that into a kind of canonical form,” Peterson explains. “The reason you want to be able to do that is because if you want to build some kind of programmatic flow around how you’re interacting with a user, you don’t want to have to codify all that variation once you get beyond the question.”

Although NLU maps such variation into a single intent, its reliance on NLP for doing so is readily apparent since some form of parsing is necessary. In this respect, NLU is rightfully considered a subset of NLP.
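The mapping of varied phrasings to a single canonical intent can be sketched as a simple keyword-overlap matcher. The intent names and keyword sets here are hypothetical; real NLU systems learn these mappings rather than enumerating them.

```python
# Toy intent matcher: maps many phrasings of a question to one canonical intent.
# Intent names and keyword sets are hypothetical illustrations.
INTENT_KEYWORDS = {
    "get_revenue": {"revenue", "sales", "earnings"},
    "get_headcount": {"employees", "headcount", "staff"},
}

def detect_intent(question):
    """Return the canonical intent whose keywords best overlap the question."""
    words = set(question.lower().replace("?", "").split())
    best, best_overlap = "unknown", 0
    for intent, keywords in INTENT_KEYWORDS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = intent, overlap
    return best

print(detect_intent("What were our sales last quarter?"))  # get_revenue
print(detect_intent("How much revenue did we earn?"))      # get_revenue
print(detect_intent("How many employees do we have?"))     # get_headcount
```

Two differently worded questions resolve to the same canonical form, which is what lets downstream programmatic flow avoid codifying every phrasing.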

Natural Language Generation

Although NLG is just as reliant on NLP as NLU is, NLG is antipodal to NLU in that instead of being used to understand what’s written or said, it’s used to produce natural language responses. When combined with NLU and intelligent, responsive systems (which may involve speech recognition), NLG is the critical element underpinning conversational AI or natural language interaction.

“Once you understand [a user’s question], you have to go do something [to answer it],” Peterson explains. “[Usually] that’s analytics. When I want to send the data back to the user I’ve got to create some text to answer back to them. That’s where Natural Language Generation comes into play.”

Cogent NLG use cases include not only summaries of statistical analytics and business intelligence results, but also explanations of machine learning model results otherwise deemed ‘black box.’ In this case, NLG bridges interpretability (the statistical meaning of the results of analytics models and how they were mathematically derived) and explainability.
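The simplest form of this pattern is template-based NLG, where a structured analytics result is rendered as a sentence. The field names and template below are hypothetical illustrations; richer NLG systems generate text with learned language models rather than fixed templates.

```python
# Toy template-based NLG: turn a structured analytics result into a sentence.
# Field names and the template are hypothetical illustrations.
def generate_summary(result):
    direction = "rose" if result["change_pct"] >= 0 else "fell"
    return (
        f"{result['metric'].capitalize()} {direction} "
        f"{abs(result['change_pct']):.1f}% in {result['period']}, "
        f"driven mainly by {result['top_driver']}."
    )

summary = generate_summary({
    "metric": "revenue",
    "change_pct": 4.2,
    "period": "Q3",
    "top_driver": "new subscriptions",
})
print(summary)
# Revenue rose 4.2% in Q3, driven mainly by new subscriptions.
```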

“To give somebody who’s not a data scientist or an expert in whatever the product is an approachable explanation is a huge win,” Peterson noted. NLG is becoming more commonly accessed through cloud services.

Neural Networks, Rules Hybrids

As Peterson mentioned, the most prominent development within NLP is the movement toward dynamic cognitive computing models instead of rules-based ones. Often viewed as labor-intensive and time-consuming, rules-based approaches stem from AI’s knowledge base and necessitate exhaustive lists of vocabularies, entities, attributes, and their permutations—all defined upfront before text analytics are possible.

The chief benefit of supplanting this methodology with that of deep neural networks is the flexibility of the ensuing text analytics platform and, when properly implemented, the reduced preparation time required.

An important trend gaining credence throughout the data sphere, however, is to utilize both neural networks and rules-based models. The hybrid approach is advantageous when, for a particular domain, there’s not enough available training data, so modelers start with rules before they “flip over,” as Peterson termed it, to using learned models. It’s also viable to begin with neural networks to train text analytics solutions before implementing rules for fringe use cases on which models don’t perform well.

Peterson provided a cursory overview of the positives and negatives of each approach, implying how they counterbalance one another for improved results: “Historically, the performance [scoring time] of rules based models was faster. Another positive was they don’t require upfront data. As you get on the learned side you no longer have to write rules, but you now need training data.”
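One common shape for the hybrid arrangement Peterson describes is to let high-precision rules decide when they fire and defer everything else to a learned model. The rule sets below are hypothetical, and the `model_classify` stub stands in for a trained neural scorer.

```python
# Hybrid classifier: high-precision rules first, learned model as fallback.
# Rule word lists are hypothetical illustrations.
NEGATIVE_RULES = {"refund", "broken", "terrible"}
POSITIVE_RULES = {"excellent", "love", "great"}

def rules_classify(text):
    """Return a label if a rule fires, or None to abstain."""
    words = set(text.lower().split())
    if words & NEGATIVE_RULES:
        return "negative"
    if words & POSITIVE_RULES:
        return "positive"
    return None

def model_classify(text):
    # Stand-in for a trained model; in practice this would be a neural scorer.
    return "neutral"

def classify(text):
    """Apply rules when they fire; otherwise defer to the learned model."""
    return rules_classify(text) or model_classify(text)

print(classify("This product is broken"))  # negative (rule fired)
print(classify("Arrived on Tuesday"))      # neutral (model fallback)
```

Either component can be improved independently: rules can be added for fringe cases the model mishandles, or the model can be retrained as domain data accumulates.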

Training Data Necessities

If the overall shift in NLP is incorporating cognitive computing models alongside what was once exclusively the realm of rules-based ones, the most compelling development within that transition is the bevy of creative approaches for overcoming training data shortages. The sheer quantity and specificity of annotated training data required is still the principal shortfall of leveraging machine learning, and deep neural networks in particular. Peterson described the ability to find training data sets for domain-specific problems in text and speech as a “non-trivial problem.”

Nonetheless, a number of approaches have recently emerged—some just within the past year—specifically designed to alleviate this difficulty for NLP. According to Peterson, one of the points of commonality between these techniques is they involve “not having to have as huge of a volume of training data, but being able to kind of refine them using domain specific [information].”

Various methods include:

  • BERT: Bidirectional encoder representations from transformers (BERT) relies on transformers—which pertain to the notion of attention—to pre-train models on text while looking at it in multiple directions, as opposed to sequentially from left to right. The result is not only a reduction in the quantity of training data required, but also a far greater understanding of the context and meaning of text.
  • ULMFiT: Universal language model fine-tuning (ULMFiT) is a transfer learning technique specifically designed to drastically decrease training data quantities for accurate NLP. In general, transfer learning applies knowledge from a general model or corpus of information to a specific one. Although applicable to most NLP tasks, ULMFiT has demonstrated particular efficacy on classification problems.
  • ELMo: Embeddings from language models (ELMo) specializes in understanding the context of text for various NLP problems. It relies on elements of long short-term memory (LSTM) networks, is also bidirectional, and is pre-trained on a large text corpus yet applicable to highly domain-specific tasks.
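What these techniques share is the “pretrain generally, then fine-tune on a little domain data” pattern. The toy below illustrates that idea with simple word-score dictionaries; all scores and examples are hypothetical, and real systems fine-tune neural network weights, not lookup tables.

```python
# Toy transfer learning: start from "pretrained" general sentiment scores,
# then fine-tune with a handful of domain-specific labeled examples.
# All scores and examples are hypothetical illustrations.
pretrained_scores = {"good": 1.0, "bad": -1.0, "sick": -1.0}

def fine_tune(scores, labeled_examples, lr=0.5):
    """Nudge word scores toward a small set of domain labels (+1 or -1)."""
    scores = dict(scores)
    for text, label in labeled_examples:
        for word in text.lower().split():
            scores[word] = scores.get(word, 0.0) + lr * label
    return scores

def predict(scores, text):
    """Classify by summing per-word scores."""
    total = sum(scores.get(w, 0.0) for w in text.lower().split())
    return "positive" if total >= 0 else "negative"

# Domain quirk: in this hypothetical domain, "sick" is positive slang.
domain = fine_tune(pretrained_scores,
                   [("sick beat", 1), ("sick track", 1), ("so sick", 1)])

print(predict(pretrained_scores, "sick track"))  # negative (general model)
print(predict(domain, "sick track"))             # positive (after fine-tuning)
```

A few domain examples flip the prediction without retraining from scratch, which is the economy that BERT, ULMFiT, and ELMo deliver at far greater scale.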

Practical Utility

The pragmatic utility of these different NLP vectors is considerable. These developments enable organizations with large amounts of unstructured text or spoken data to overcome dark data issues and effectively mine them for insight. They’re also key to enabling BI users to ask questions and get answers in natural language, a development that increasingly involves speech.

Collectively, these trends make text analytics more accessible. What’s truly notable about them, however, is they involve multiple dimensions of AI (its statistical and knowledge base), hinting at the overall effect these techniques will have on the coming decade when used together.
