© 2019 SourceMedia. All rights reserved.

How to overcome artificial intelligence bias in lending decisions

Beyond all the hype, AI applications in lending do seem to speed up and automate decision-making.

Indeed, a couple of months ago Upstart, an AI-leveraging fintech startup, announced that it had raised a total of $160 million since inception. It also inked deals with the First National Bank of Omaha and the First Federal Bank of Kansas City.

Upstart won recognition for its innovative approach to lending. The platform uses AI trained on so-called ‘alternative data’ to identify who should get a loan and in what amount. Such alternative data can include information on an applicant’s purchases, type of phone, favorite games, and the average credit score of their social media friends.

However, the use of alternative data in lending is still far from making the process faster, fairer, and wholly GDPR-compliant. Nor is the idea entirely new.

Early credit agencies hired specialists to dig into local gossip about their customers, while back in 1935 neighborhoods in the U.S. were classified according to their collective creditworthiness. In a more recent case from 2002, a Canadian Tire executive analyzed a year’s worth of transaction data and discovered that customers who bought roof-cleaning tools were more financially reliable than those who bought cheap motor oil.


There is one significant difference between past and present, however. Earlier, it was a human who collected and processed both alternative and traditional data, including debt-to-income, loan-to-value, and individual credit history. Now an algorithm does the work, and many believe it to be both faster and more objective.

What gives cause for concern, though, is that AI can turn out to be no less biased than humans. Worse: if we don’t control how the algorithm learns, it can become even more one-sided than the people who trained it.

Where AI bias creeps in

Generally, AI bias doesn’t happen by accident: the people who train the algorithm make it subjective. Influenced by personal, cultural, educational, and location-specific factors, even the best algorithm trainers may feed it inherently prejudiced input data.

If not detected in time, that prejudice produces biased decisions, which only get worse with time, because the algorithm bases its new decisions on its previous ones. Evolving on its own, it ends up far more complex than when it started operating (the classic snowball effect). In plain words, it continuously teaches itself, whether the educational material is correct or not.
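
A toy simulation can illustrate that snowball effect. All numbers below are made up: two groups are equally creditworthy, but one carries historically depressed scores, and the model ‘retrains’ each round only on the applicants it previously approved.

```python
import random

random.seed(0)

def make_applicants(n):
    """Hypothetical applicant pool: two equally able groups, but group 'b'
    carries a historical score penalty (the prejudice baked into the data)."""
    pool = []
    for _ in range(n):
        group = random.choice(["a", "b"])
        ability = random.gauss(0.6, 0.1)        # identical distribution for both
        penalty = -0.15 if group == "b" else 0.0
        pool.append((group, ability + penalty))
    return pool

def fit_cutoff(training_pool):
    """'Retrain' by setting the approval cutoff at the pool's mean score."""
    scores = [s for _, s in training_pool]
    return sum(scores) / len(scores)

training = make_applicants(4000)
shares = []                                     # group 'b' share of approvals
for _ in range(3):
    cutoff = fit_cutoff(training)
    approved = [(g, s) for g, s in make_applicants(4000) if s >= cutoff]
    shares.append(sum(1 for g, _ in approved if g == "b") / len(approved))
    training = approved                         # next round learns only from its own approvals

print(shares)   # group 'b' share of approvals shrinks round over round
```

Each retraining raises the cutoff toward the scores of those already approved, so the disadvantaged group’s share of approvals keeps shrinking without anyone touching the code.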

Now, let’s look at how exactly AI might discriminate in the lending decisions it makes. The examples below share one key idea: AI bias usually traces back to human prejudice.

AI can discriminate based on gender

While there are traditionally more men in senior and higher-paid positions, women continue to face the so-called ‘glass ceiling’ and the pay gap. As a result, even though women on average tend to be better savers and payers, female entrepreneurs still receive fewer and smaller business loans than men.

The use of AI might only worsen this tendency, since sexist input data can lead to a spate of loan denials among women. Relying on misrepresentative statistics, an AI algorithm may favor a male applicant over a female one even when all other parameters are similar.
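
The mechanism is easy to reproduce with a deliberately simplified, hypothetical scorer: if the historical approval labels were sexist, a model that learns approval rates from those labels reproduces the gap even for financially identical applicants.

```python
from collections import defaultdict

# Fabricated historical decisions: identical income bucket, but loan
# officers approved women less often. The prejudice now sits in the labels.
history = [
    ("high_income", "m", 1), ("high_income", "m", 1),
    ("high_income", "m", 1), ("high_income", "m", 0),
    ("high_income", "f", 1), ("high_income", "f", 0),
    ("high_income", "f", 0), ("high_income", "f", 0),
]

def train(history):
    """Naive model: predicted approval = past approval rate per (income, gender)."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for income, gender, approved in history:
        totals[(income, gender)] += 1
        approvals[(income, gender)] += approved
    return {key: approvals[key] / totals[key] for key in totals}

model = train(history)
print(model[("high_income", "m")])   # 0.75
print(model[("high_income", "f")])   # 0.25
```

Both applicants look the same on paper, yet the model scores the woman at a third of the man’s predicted approval rate, simply because it memorized past behavior.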

AI can discriminate based on race

It sounds harsh, but black applicants are twice as likely to be refused a mortgage as white ones. If the input data used to train the algorithm reflects such a racial disparity, the algorithm can put it into practice quickly and begin generating more and more denials.

Alternative data can also become a source of AI ‘racism’. Consider an algorithm that uses seemingly neutral information about an applicant’s prior fines and arrests. The truth is, such information is not neutral: according to The Washington Post, African-Americans become policing targets far more often than the white population, and in many cases baselessly.

The same goes for other types of data. Racial minorities face inequality in income, occupation, and the neighborhoods they live in, and any of these metrics can give AI a seemingly solid reason to say ‘no’ to a non-white applicant.

AI can discriminate based on age

The longer a credit history, the more we know about a particular person’s creditworthiness. Older people typically have longer credit histories, as they have more financial transactions behind them.

Younger applicants, by contrast, have generated less data, which can become an unfair reason for a credit denial.

AI can discriminate based on education

Consider an AI lending algorithm that analyzes an applicant’s grammar and spelling when making credit decisions. It might ‘learn’ that poor spelling or constant typos point to a poor education and, consequently, poor creditworthiness.

In the long run, the algorithm may stop qualifying individuals with writing difficulties or disorders, even when those have nothing to do with their ability to pay bills.

Tackling prejudice in lending

Overall, to make AI-run loan processes free of bias, it is crucial to clean the input data of any possible human prejudice, from misogyny and racism to ageism.

To make training data more neutral, organizations should form more diverse AI development teams of both lenders and data scientists, where the former can inform engineers about the specifics of their job. Such organizations should also train everyone involved in AI-assisted decision-making to adhere to and enforce fair, non-discriminatory practices. Otherwise, without measures to ensure diversity and inclusivity, lending businesses risk generating AI algorithms that severely violate anti-discrimination and fair-lending laws.
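
One concrete check such teams can run is the ‘four-fifths’ disparate-impact ratio used in U.S. fair-lending and employment analysis: compare each group’s approval rate against the highest group’s rate and flag anything below 0.8. A minimal sketch, with fabricated decision data:

```python
def approval_rates(decisions):
    """decisions: iterable of (group, approved_bool) pairs."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group approval rate divided by the highest; < 0.8 warrants review."""
    rates = approval_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit sample: men approved at 80%, women at 55%.
sample = ([("m", True)] * 80 + [("m", False)] * 20
          + [("f", True)] * 55 + [("f", False)] * 45)

ratio = disparate_impact_ratio(sample)
print(ratio)   # below the 0.8 'four-fifths' threshold, so this model needs review
```

A failing ratio does not prove illegal discrimination on its own, but it is a cheap, automatable tripwire that tells the team where to dig deeper.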

Another step toward fairer AI is to make sure no lending decision is made solely by the algorithm; a human supervisor should assess each decision before it has a real-life impact. Article 22 of the GDPR supports this, stating that people should not be subject to purely automated decision-making, particularly where it produces legal effects.
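
In practice, that supervision can be as simple as a routing gate: only confident approvals go out automatically, while borderline cases and every would-be denial land in a human reviewer’s queue. A hypothetical sketch (the cutoff and band values are made up):

```python
def route(score, cutoff=0.6, band=0.05):
    """Route a model score to an outcome.

    Only scores comfortably above the cutoff are auto-approved; borderline
    cases and all would-be denials go to a human reviewer, so no adverse
    decision is ever purely automated.
    """
    if score >= cutoff + band:
        return "auto_approve"
    return "human_review"

print(route(0.90))   # auto_approve
print(route(0.62))   # human_review (borderline)
print(route(0.30))   # human_review (would-be denial)
```

The exact thresholds would come from the lender’s risk policy; the point is that the algorithm’s output is a recommendation, not a final verdict.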

The truth is, this is easier said than done. But if left unaddressed, unintentional AI bias can put lending businesses in as tough a spot as any intentional act of bias, and only a collective effort by data scientists and lending professionals can avert those risks.
