
Mitigating the artifacts of machine learning bias

Implementing artificial intelligence to improve business is now standard advice that IT providers share with their enterprise customers. But as more business departments adopt machine learning, we’re finding that ethical decision-making is at risk due to bias within the algorithms involved.

Additionally, AI models are sometimes plugged into programmatic functions, which can automate the bias itself, compounding the problem and affecting more people. To mitigate the growing ripples of biased machine learning, we must understand the sources of bias and why some bias must be addressed by business culture rather than by the algorithm itself.

A different way to think about bias in machine learning is in terms of fact versus artifact. In essence, this comes down to subtle yet mechanistic problems of algorithm and procedure versus problems that reflect a very real condition of society.

Namely, artifacts are “noise” observations that arise from the procedures themselves rather than being inherent in the data we intend to observe. These are outputs that data scientists need to identify and mitigate before they affect machine learning pipelines. If artifacts aren’t proactively sought out, then at best the results may be misleading.
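One practical defense is a statistical sanity check on incoming data. The sketch below is purely illustrative, assuming reference base rates are already known from a trusted source such as census figures; the group names and numbers are hypothetical. It flags a batch whose demographic mix diverges sharply from the reference, which points to the collection procedure, not reality, as the source of the skew.

```python
# A minimal artifact check: compare a batch's group mix against trusted
# reference shares before the data enters the training pipeline.
# REFERENCE_SHARES and the example batch are hypothetical.
from collections import Counter
from scipy.stats import chisquare

REFERENCE_SHARES = {"group_a": 0.51, "group_b": 0.49}  # assumed known base rates

def flag_sampling_artifact(batch_labels, alpha=0.01):
    """Return True if the batch's group mix deviates significantly from
    the reference shares -- a sign of a possible procedural artifact."""
    counts = Counter(batch_labels)
    groups = sorted(REFERENCE_SHARES)
    observed = [counts.get(g, 0) for g in groups]
    total = sum(observed)
    expected = [REFERENCE_SHARES[g] * total for g in groups]
    _, p_value = chisquare(observed, f_exp=expected)
    return p_value < alpha  # significant deviation -> audit the collection step

# A batch that heavily over-represents group_a relative to the reference:
batch = ["group_a"] * 900 + ["group_b"] * 100
print(flag_sampling_artifact(batch))  # True -- investigate how the data was gathered
```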

At worst, results may actively create or reinforce harmful, dangerous stereotypes. Artifacts are failures of the system. For safety and success, we should all agree that we must eliminate or mitigate artifactual errors. While we assume that artifacts are unintentional, the deliberate introduction of misrepresentative data is a real possibility and should also be guarded against. Fortunately, some of the same techniques can mitigate intentional misrepresentation as well.

Facts are observations that are actually represented in the data, whether we like them or not. Factual biases can reflect reality in all its unfairness. Recently reported issues with COMPAS and PredPol involved AI using real data but reinforcing a societal bias that attributes a higher incidence of crime to minority offenders. On the other end of the spectrum, AI-powered searches for CEO pictures return 89 percent male images. This is not because the AI is gender biased, but because society itself is.
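Factual skew of this kind can at least be measured directly, before any model is trained. Below is a minimal sketch; the labels are hypothetical, not drawn from a real image corpus.

```python
# Measure how groups are represented in a labeled dataset -- a first
# look at factual skew. The corpus below is hypothetical.
from collections import Counter

def group_shares(labels):
    """Fraction of examples belonging to each group."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

ceo_image_labels = ["male"] * 89 + ["female"] * 11
print(group_shares(ceo_image_labels))  # {'male': 0.89, 'female': 0.11}
```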

Factual bias is a stickier subject than artifactual bias because it not only reflects a reality that we find unpleasant, but reinforces it. As the creators of the AIs, and the creators – intentional or not – of the biased conditions sometimes found in reality, it is incumbent upon us to balance the realities of what is, the aspirational state of what should be, and the future state of what might be.

Guarding against bias and deciding on mitigations is often considered a data science problem, but that simply shifts our responsibility onto the data science community.

Bias in AI is only a new and very visible manifestation of a societal problem. Therefore, while data science should be involved, it is critical that diverse leadership representation take an active role in making decisions about bias.

These groups must work together to decide what bias represents, how best to mitigate it, and how to monitor the biases we find. We have an opportunity to do more than acknowledge our problems: we can monitor, communicate and correct them.

The four focus areas to help enterprises mitigate bias

People – Many sources of bias require at least some data science knowledge to address, but many others can be understood well enough for laypersons to advise on whether the outcomes make sense. Diversity of observers can be a strong mitigation for bias and can help identify its source. Even artifactual bias is often mitigated by diversity of thought.

Model – Not every model works for every data set. This is very much a data science problem. Many existing models are well documented, and the documentation can be understood. As non-data science participants, we have a responsibility to ask questions of the data scientists and ensure the best-fit models and algorithms are in use.

Training set – Generating the training set is often the first place where bias creeps in. From a social viewpoint, mitigation methods can be as bad as the bias itself, and some alternatives are bad both socially and computationally.
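To make that trade-off concrete, here is a minimal sketch of one common training-set mitigation: reweighting examples so that each group contributes equally during training. The technique and numbers are illustrative only; as noted above, naive rebalancing can itself distort outcomes and needs human review.

```python
# Inverse-frequency reweighting: up-weight examples from under-represented
# groups so every group carries equal total weight in training.
# Groups and data are hypothetical.
from collections import Counter

def inverse_frequency_weights(groups):
    """Weight each example by n / (k * count of its group), so the
    average weight is 1 and each group's total weight is equal."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

groups = ["a", "a", "a", "b"]
print(inverse_frequency_weights(groups))  # [0.667, 0.667, 0.667, 2.0] (approx.)
```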

Monitoring – Implementing a protocol that monitors and provides feedback on machine learning outcomes is essential. In addition to monitoring performance from functional and financial perspectives, it is also important to monitor for fairness.
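As one illustration of what such monitoring might look like, the sketch below computes a disparate impact ratio over logged model decisions, comparing the rate of favorable outcomes across groups. The 0.8 alert threshold echoes the well-known “four-fifths rule” from U.S. employment guidance, but the threshold, group names and data here are all illustrative assumptions.

```python
# Monitor fairness of logged decisions: ratio of the lowest group's
# favorable-outcome rate to the highest group's. Data is hypothetical.
from collections import defaultdict

def disparate_impact_ratio(outcomes):
    """outcomes: iterable of (group, favorable: bool) pairs."""
    favorable = defaultdict(int)
    totals = defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favorable[group] += ok
    rates = [favorable[g] / totals[g] for g in totals]
    return min(rates) / max(rates)

decisions = ([("a", True)] * 80 + [("a", False)] * 20 +
             [("b", True)] * 55 + [("b", False)] * 45)
print(f"{disparate_impact_ratio(decisions):.2f}")  # 0.69 -- below 0.8, review the model
```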

If a machine learning initiative has all of the above working in conjunction with one another, the risk of bias is lower and the outcomes are better.

The outcomes we seek may be different for different applications, but we should be aware of bias and understand its sources and consequences. We cannot make the blanket assertion that all bias is bad, but we should be very careful when we assume there is no bias.

As AI is now considered vital to business, we must consider how bias impacts outcomes. We can think about bias by distinguishing between artifactual bias (bias arising from algorithms or procedures) and factual bias (bias reflecting the way things are). By implementing a proactive program of the right people, right model, right training set and right monitoring, we can ultimately begin to achieve the right outcomes.
