5 principles to eliminate bias in artificial intelligence programs

Today, algorithms that are used to make predictions are everywhere. Netflix recommends new movies and shows based on what you’ve previously watched, Spotify creates playlists with songs similar to what you’ve been playing on repeat, and retail brands spam your email with items that can be worn with the new jacket you just ordered.

The assumption today is, “with enough data, anything is predictable.” However, over the last few years, news headlines have repeatedly illustrated the influence that such algorithms can have on human well-being, along with the inherent biases that affect many of them.

With this in mind, before rushing to embrace artificial intelligence (AI) algorithms, it's crucial that we ensure they promote justice and fairness rather than reinforce existing inequalities. This is why organizations such as IBM and the IEEE have recently helped to codify ethical approaches to AI.

To help those struggling to eliminate such biases, below are five principles that bear special attention as we seek to use technology wisely with an identity-centric mindset.

1. Well-Being

The first principle of an ethical approach is that AI must put human well-being first. While this sounds straightforward, well-being is hard to define: some aspects of it are universal, but much of it is organizationally and culturally dependent. On top of this, difficult trade-offs arise when a decision that protects one group might negatively impact another.

To work through these difficulties, try leveraging a tool known as an ethics canvas, which lets these choices and their outcomes be thoroughly explored and documented. This gives organizations a clear view of the moral implications of their decisions before systems are built.

2. Accountability

After ethical choices have been laid out using the ethics canvas as noted above, organizations must ensure that they are accountable for meeting the documented ethical standard. This is a two-step process.

First, all ethical decisions must be documented as solutions are designed and architected. This encourages designers, architects and implementers to make ethical choices, and it leaves a record of why each choice was made.

Next, a feedback loop must be built into the solution so that end users have a method for holding organizations accountable as well.
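As an illustration, here is a minimal sketch of what such a loop might look like. The structure, function names and fields are hypothetical rather than drawn from any particular product; the idea is simply that every automated decision gets a stable ID and a recorded rationale, and end users can attach feedback to that ID.

```python
import uuid
from datetime import datetime, timezone

# Hypothetical in-memory decision log; a real system would persist this.
decision_log = {}

def record_decision(inputs: dict, outcome: str, rationale: str) -> str:
    """Log an automated decision along with the documented reason
    behind it, and return an ID the end user can reference."""
    decision_id = str(uuid.uuid4())
    decision_log[decision_id] = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,
        "outcome": outcome,
        "rationale": rationale,  # the documented "why" for this choice
        "feedback": [],
    }
    return decision_id

def record_feedback(decision_id: str, comment: str) -> None:
    """Let an end user contest or comment on a specific decision."""
    decision_log[decision_id]["feedback"].append(comment)

# Usage: log a decision, share the ID with the user, accept feedback.
decision_id = record_decision(
    {"credit_score": 612}, "declined", "score below the 650 threshold"
)
record_feedback(decision_id, "My score reflects an error I have disputed.")
```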

3. Transparency

It’s essential that the reasons for AI decisions are transparent to end users so that they can hold organizations accountable. This entails not only answering the question of “why,” but breaking down the “how” in a simple way.

Keeping in mind that AI is the topic at hand, it's critical to use language that is easy for the end user to understand, as technical jargon quickly gets in the way. One way to do this is to use open source tools such as LIME (Local Interpretable Model-agnostic Explanations), which can identify the essential factors behind why a particular decision was made.
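For illustration, here is a minimal sketch of how LIME can surface those essential factors for a single prediction. It assumes the open source Python lime and scikit-learn packages; the model and dataset are stand-ins, not part of the original discussion.

```python
from sklearn.datasets import load_breast_cancer  # stand-in dataset
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Train a simple model on a stand-in public dataset.
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# LIME explains one prediction at a time by fitting a simple,
# interpretable model in the neighborhood of that single instance.
explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=data.target_names,
    discretize_continuous=True,
)

# Explain the model's decision for one individual record.
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=5
)

# Print the five features that most influenced this prediction,
# in plain "feature condition: weight" form.
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```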

This transparency is more than disclosure for disclosure's sake. By clearly communicating the reasoning behind each decision, it builds trust in AI itself.

4. Fairness

Although a tricky prospect in the best of circumstances, for AI to promote fairness it must expose its own biases. For example, hiring and salary-determination algorithms rely on historical data that tends to undervalue women in the marketplace. These data sets reflect past cultural trends in which fewer women advanced as far in their careers as their male colleagues, owing to stereotypes and assumed familial roles.

In this example, the data becomes a self-fulfilling prophecy that locks women into old patterns. When disenfranchised groups lack a voice in the formation of AI systems, biases in the data remain uncorrected and the result is likely to perpetuate the bias.
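To make the idea of exposing bias concrete, here is a minimal sketch of one widely used check, the disparate impact ratio (the "four-fifths rule"). The data and column names are hypothetical; the point is simply that selection rates can be compared across groups before an algorithm's output is trusted.

```python
import pandas as pd

# Hypothetical hiring outcomes: one row per applicant, with the
# model's decision and the applicant's gender. Illustrative only.
df = pd.DataFrame({
    "gender": ["F", "F", "F", "M", "M", "M", "M", "F"],
    "hired":  [0,   1,   0,   1,   1,   0,   1,   1],
})

# Selection rate per group: the share of each group the model "hires".
rates = df.groupby("gender")["hired"].mean()

# Disparate impact ratio: the lowest group's rate over the highest.
# Values below ~0.8 (the "four-fifths rule") are a common red flag
# that the model has inherited bias from its historical data.
ratio = rates.min() / rates.max()
print(rates)
print(f"disparate impact ratio: {ratio:.2f}")
```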

5. User Data Rights

The preceding principles all rest on user data rights. Far from a nice-to-have, control over the data that makes up your identity is a fundamental human right. Technology that supports user data rights – user-managed access and consent management, data anonymization and pseudonymization – should be second nature for identity professionals. These techniques and tactics provide the firm grounding on which the ethical standards above can be fulfilled.
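As a small illustration of one of these techniques, the sketch below pseudonymizes a direct identifier with a keyed hash using only the Python standard library. The key and identifier are hypothetical; in practice the key would be held in a managed secret store, never in source code.

```python
import hmac
import hashlib

# Hypothetical secret key; in practice, load this from a key vault.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. an email address) with a
    keyed hash. The same input always maps to the same pseudonym,
    so records can still be linked for analysis, but the original
    identity cannot be recovered without the key."""
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"),
                    hashlib.sha256).hexdigest()

print(pseudonymize("jane.doe@example.com"))
```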

Ultimately, for an ethics standard to be practical it must have some form of measurement. While no organization or individual will ever be one hundred percent ethical, it is crucial to conduct self-evaluations regularly to measure progress according to the ethical standard.

By continuously improving – becoming more aware of AI's impact on human well-being, building accountability both internally and externally, increasing transparency into our decision making, seeking out voices that might otherwise go unheard, and using standards to strengthen how we protect user data rights – we can eliminate bias and ensure that AI operates ethically, and that ultimately, we do too.
