© 2019 SourceMedia. All rights reserved.

Where is this online lender using AI? Everywhere

The online lender Enova, which has been using artificial intelligence to make credit decisions for years, has lately been expanding its use of AI to handle additional tasks: spotting fraud, determining who should receive product offers, and projecting possible losses once a loan is booked.

Most recently, it has also been having AI scour paper documents to uncover false information, verify income and employment, and conduct know-your-customer checks.

It also sponsored a survey, conducted by Harvard Business Review and set to be released Thursday, that benchmarks how businesses are using AI. Generally speaking, it finds companies are adopting AI slowly: Though 68% of executives say AI will be a competitive differentiator within the next year and 64% are investigating or piloting AI projects, only 15% of organizations have AI-powered analytics in place, owing to technical and cultural challenges.

AI in the back office

In its back office, Enova has been training AI engines to review documents such as bank statements and pay stubs and automating decisions based on the information in those documents.

“There tends to be sort of a blind spot with low-hanging fruit, operational tasks,” said Joe DeCosmo, chief analytics officer at Enova. A lot of companies focus on customer-facing AI-based virtual assistants, he said, but Enova has not built a chatbot yet.


“What we're doing first is using AI to standardize and automate a lot of the back-office work that customer service reps do,” DeCosmo said. “If we can verify employment and income 30% of the time using our AI, that frees up our customer service teams to work on more valuable tasks in the customer journey. If more companies took that approach focused on back office first for these operational tasks, they would realize value from AI more quickly.”

This use case is less likely to make headlines than chatbots, he acknowledged.

First the AI assesses the documents — bank statements, identity documents, pay stubs, utility bills and the like — for validity, looking for signs of fraud or counterfeiting. If the document is found to be valid, data will be extracted from it and used in decision-making.

DeCosmo is confident that a machine can validate documents better than a human can.

“The fraudsters have gotten very sophisticated, so it's becoming increasingly difficult for the human eye to tell the difference between a counterfeit or a valid bank statement without spending an inordinate amount of time with that bank statement,” he said.

Enova has tested cases where two statements from the same bank looked identical, but on one the transactions do not add up to the sum at the top. Some of these were three- or four-page bank statements.
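The check DeCosmo describes boils down to a simple reconciliation: do the listed transactions actually produce the balance printed at the top of the statement? A minimal sketch of that idea, with made-up field names and values (not Enova's actual system), might look like this:

```python
# Hypothetical sketch: flag a bank statement whose listed transactions
# do not reconcile with the closing balance printed on the statement.
def reconciles(opening_balance, transactions, stated_closing_balance, tolerance=0.005):
    """Return True if opening balance plus all transactions matches the stated balance."""
    computed = opening_balance + sum(transactions)
    return abs(computed - stated_closing_balance) <= tolerance

# A statement whose line items don't sum to the printed total is a fraud signal.
print(reconciles(1000.00, [-250.00, 75.50], 825.50))  # consistent statement
print(reconciles(1000.00, [-250.00, 75.50], 900.00))  # inconsistent: flag for review
```

A human reviewer would have to total every line item by hand to catch this; automated, the check runs in well under a second even on a multi-page statement.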

“To ask people in a contact center to go through that level of rigor on an individual bank statement, you're expecting a lot of those individuals and the workload that they have,” DeCosmo said. “Yet you can design AI to do that in sub-seconds. We are very bullish on the ability of AI to outperform human interrogation and human review.”

Enova typically builds some of its own technology and works with partners for specific components, such as performing optical character recognition on documents.

“That allows us to constantly improve our AI but also manage costs more effectively because we can waterfall things,” DeCosmo said. Internal technology might handle about 70% of the work, and the remaining most difficult cases might be shifted to third-party software.
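The "waterfall" DeCosmo describes can be sketched as a simple routing rule: try the in-house model first, and escalate to third-party software only when the in-house model is not confident enough. The function name and threshold below are illustrative assumptions, not details from Enova:

```python
# Hypothetical sketch of a document-processing waterfall: in-house AI
# handles the bulk of cases; low-confidence cases fall through to a
# (more expensive) third-party service.
def route_document(internal_confidence, threshold=0.70):
    """Return which system should process the document."""
    if internal_confidence >= threshold:
        return "internal"
    return "third_party"

print(route_document(0.92))  # in-house model is confident: keep it internal
print(route_document(0.40))  # low confidence: escalate to the third party
```

The cost logic is the point of the design: the internal model absorbs the routine volume, so the per-document fee for outside software is paid only on the hardest cases.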

AI in lending decisions

Though many in the financial industry question the use of AI in lending decisions, wondering if bias could creep in, if certain people might fall between the cracks, or if it is a black box, Enova has put the technology to the test and found that it works.

“It’s pretty clear that machine-learning algorithms are unambiguously better than traditional scoring models,” DeCosmo said.

Using AI in lending requires changing the way loan decisions are handled, including creating adverse action and reason codes. The models are not as simple to interpret as a traditional scorecard.

“But it's worth the investment for sure,” he said.

To the question of bias, DeCosmo says it is easier to control bias with an automated model than with a manual process.

“You can control what factors go into that model,” he said. “And when you build it, you can test it for bias on a current portfolio of loans. And then you can monitor it aggressively going forward and monitor your outcomes. If you do those three things, you are not going to get into a situation where bias gets baked into the process because you're going to identify and remediate.”

The analysts who build Enova’s models receive alerts if outcomes begin to show bias, and they fix the models accordingly.
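The monitoring step could be as simple as comparing approval rates across groups in recent decisions and alerting the modeling team when the gap exceeds a tolerance. The sketch below is a hypothetical illustration of that idea; the group labels, threshold, and data are invented, not Enova's:

```python
# Hypothetical sketch of outcome monitoring: alert when the approval-rate
# gap between two groups of applicants exceeds a set tolerance.
def approval_rate(decisions):
    """Fraction of decisions that were approvals (1 = approved, 0 = denied)."""
    return sum(decisions) / len(decisions)

def bias_alert(group_a_decisions, group_b_decisions, max_gap=0.05):
    """Return True if the approval-rate gap between the groups exceeds max_gap."""
    gap = abs(approval_rate(group_a_decisions) - approval_rate(group_b_decisions))
    return gap > max_gap

# An 80% vs. 40% approval rate is a 40-point gap, well past the tolerance.
print(bias_alert([1, 1, 1, 1, 0], [1, 1, 0, 0, 0]))  # trips the alert
```

In practice a lender would use a regulator-accepted fairness test rather than a raw rate gap, but the "monitor aggressively and remediate" loop DeCosmo describes has this shape.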

“The misperception in the general press and public is that this is set it and forget it,” DeCosmo said. “If you do that, you're certainly at the risk of getting some bias baked in and scaled up. That risk is real. But you can mitigate that through your monitoring and management of the models.”

Enova offers some of this AI-based loan underwriting technology, packaged as Enova Decisions, to other companies.

Why are banks slow to adopt AI?

The Harvard Business Review study cites several obstacles to the adoption of AI in companies, such as the difficulty of creating a culture in which data and analytics are routinely used when making business decisions, hardships in aligning technology with the overall business strategy, insufficient funding, outdated technology and cybersecurity concerns.

"While senior executives have big aspirations for AI, it's not surprising adoption has been slow, especially in the financial services industry, where organizations face regulation and compliance concerns larger than other industries," said Alex Clemente, managing director of Harvard Business Review Analytic Services. "As banks undergo digital transformation, operational decisions are a smart place to start because applications like fraud detection and assessing credit risk may offer fast and impactful returns."

In DeCosmo’s view, a large hurdle in financial services is that there’s legitimate risk to implementing AI.

“It's not like Amazon making product recommendations, where there's almost no risk to using AI for that decision,” he said. “With a financial institution, there's customer impact risk if your AI is wrong, and there's compliance and supervisory risk. I think that's why it's been adopted more slowly in the more traditional financial services space.”

Technology hurdles are also a factor, he said.

“Adopting AI and being able to operate it at scale is a hard technological challenge,” DeCosmo said. “And many of these companies’ systems aren’t up to it.”

Julien Courbe, lead partner for the Financial Services Advisory practice at PwC, also sees technology and talent shortages getting in the way of financial institutions that want to implement AI.

"It is not just about the technology or training the workforce on how to use the technology — this is the starting point," he said. "It is more about a mindset change where the workforce is engaged and excited to think through and unlock how the technology can transform the business."

The most common remedy for the challenge of culture change is rank-and-file success with the technology, he said.

"We most often see success when the workforce, on their own, begins to rethink how processes and the business can be transformed," Courbe said. "Whether it is automation of routine tasks, reporting, or advanced uses of AI to shape business decisions, the transformation of these processes should come from the workforce that knows these functions the best."
