
5 steps to incorporate ethics into your artificial intelligence strategy

By 2022, nearly a third of consumers will rely on artificial intelligence to decide what they eat, what they wear or where they live. To keep up with consumer demands, enterprises are adopting AI at a rapid pace, with industries from finance to healthcare embracing the transformational nature of this technology.

Yet as virtual assistants become smarter and robots sound more like humans, IT leaders will become responsible for drawing ethical boundaries around the use of this technology. While many frameworks exist for creating an ethical AI strategy, their principles are not set in stone. Rather, formulating an ethical strategy requires a more individualized, question-based approach.

Asking the right questions will help IT leaders determine their own set of principles for using AI solutions ethically. In the context of the five most common AI ethics guidelines, here are five lines of questioning that IT leaders should consider when shaping an AI ethics strategy.

No.1: Ensure AI is Human-Centric and Socially Beneficial

AI systems often have the ability to make decisions and take action, but ultimately, a human must be in charge. However, it’s impossible for a human to validate every action. IT leaders should therefore examine how human centricity can scale: In which contexts must a human approve every action? When is it sufficient to provide a control that lets humans override AI decisions?
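
One common way to scale human oversight is a confidence threshold: high-confidence decisions execute automatically, while everything else is escalated to a person. The following Python sketch illustrates the idea; the Decision class, field names and threshold value are illustrative assumptions, not prescriptions from any particular framework.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str
    confidence: float  # model confidence score between 0.0 and 1.0

REVIEW_THRESHOLD = 0.85  # assumed policy value, chosen for illustration

def route(decision: Decision) -> str:
    """Auto-execute confident decisions; escalate the rest to a human."""
    if decision.confidence >= REVIEW_THRESHOLD:
        return f"auto-executed: {decision.action}"
    return f"queued for human review: {decision.action}"

print(route(Decision("approve_claim", 0.93)))  # auto-executed
print(route(Decision("approve_claim", 0.62)))  # queued for human review
```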

Along the same lines, AI systems should contribute positively to the greater good of human society. Some applications of this principle are obvious -- AI shouldn’t be used to generate fake news or build autonomous killer drones, for instance. But in other cases the distinction is less clear: If AI replaces people’s jobs, is that harming humans or enabling societal progress? IT leaders must weigh such questions to determine where their ethical boundaries lie.

No.2: Train AI Systems to be Fair and Unbiased

The age-old question of fairness persists: Does fairness mean treating everyone the same, or treating people according to their specific situation? To be fair, AI systems should not discriminate; yet nondiscrimination can be at odds with tailoring decisions to individual circumstances.


AI systems don’t make the same decision every time; they are trained to make choices based on situational context. The fairness principle therefore requires IT leaders to ask which information the system may legitimately use and which information constitutes undesirable bias.
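
One simple way to draw that line is to restrict a system to an explicit allowlist of features, excluding protected attributes before a model ever sees them. The column names in this sketch are hypothetical, and in practice proxy variables can reintroduce bias even after protected fields are removed.

```python
# Hypothetical policy: the only fields a model may use. Protected
# attributes (e.g., gender, age) are excluded as undesirable bias.
# Caveat: proxies such as postal code can still leak protected data.
ALLOWED_FEATURES = {"income", "employment_years", "loan_amount"}

def filter_features(record: dict) -> dict:
    """Keep only the fields the ethics policy permits the model to see."""
    return {k: v for k, v in record.items() if k in ALLOWED_FEATURES}

applicant = {"income": 54000, "gender": "f", "age": 41, "loan_amount": 12000}
print(filter_features(applicant))  # {'income': 54000, 'loan_amount': 12000}
```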

No.3: Rely on a Transparent and Explainable AI Strategy

The Turing Test was originally designed to determine whether a computer’s behavior could be indistinguishable from a human’s. Now the test has been reversed: How can we be sure we are talking to a machine? A commonly agreed-upon ethical principle is that users should always know when they are interacting with an AI-enabled system. After all, legislation will require all conversational assistant applications, whether speech- or text-based, to identify themselves as nonhuman entities by 2021.
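
In practice, disclosure can be as simple as making a nonhuman identification the mandatory first message of every session, as in this sketch of a hypothetical assistant; the wording and session structure are illustrative only.

```python
DISCLOSURE = "You are chatting with an automated assistant, not a human."

def start_session() -> list[str]:
    # The disclosure is emitted before any other output, so a user can
    # never interact with the assistant without having seen it.
    return [DISCLOSURE, "Hello! How can I help you today?"]

for message in start_session():
    print(message)
```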

A more challenging application of this principle is the idea that users should be able to understand how an AI system reached a decision. This is known as explainable AI (XAI), and its goal is to create trust. Yet there are limits to transparency: Algorithms represent intellectual property (IP), and that IP needs to be protected. Although XAI is important for showing why an algorithm made a particular decision, full transparency into the algorithm itself may not be necessary. IT leaders need to decide where IP protection outweighs the need for explainability.
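
One lightweight pattern, sketched below, reports each feature’s contribution to a decision without publishing the model itself -- one possible balance between explainability and IP protection. The weights and feature names are hypothetical.

```python
# Hypothetical linear scoring model: the explanation is simply each
# feature's weighted contribution to the final score.
WEIGHTS = {"income": 0.4, "debt_ratio": -0.7, "payment_history": 0.5}

def score_with_explanation(features: dict) -> tuple[float, dict]:
    contributions = {k: WEIGHTS[k] * v for k, v in features.items()}
    return sum(contributions.values()), contributions

score, why = score_with_explanation(
    {"income": 0.8, "debt_ratio": 0.3, "payment_history": 0.9}
)
print(f"score = {score:.2f}")
# List the drivers of the decision, largest impact first.
for feature, impact in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {impact:+.2f}")
```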

No.4: Prioritize Data Security and Safe Operation

AI systems require data for operation, and in turn, they collect data through the machine learning process. IT leaders are responsible for ensuring that this data is stored securely. This requires more than just anonymization. By adopting the principles of “privacy by design,” IT leaders can ensure that data collected is masked, encrypted and used only as intended.
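
As a minimal sketch of that principle, personally identifying fields can be replaced with salted hashes before storage, so the stored record supports analysis without exposing identities. The field names and salt handling here are illustrative; a production system would need proper key management.

```python
import hashlib

PII_FIELDS = {"name", "email"}           # assumed set of identifying fields
SALT = b"replace-with-a-managed-secret"  # illustrative only; store securely

def mask_record(record: dict) -> dict:
    """Replace PII values with short pseudonymous tokens before storage."""
    masked = {}
    for key, value in record.items():
        if key in PII_FIELDS:
            token = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            masked[key] = token[:12]  # pseudonym, not the raw identity
        else:
            masked[key] = value
    return masked

print(mask_record({"name": "Ada", "email": "ada@example.com", "visits": 3}))
```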

Data should also be used proportionally. While AI technology may have the capacity to collect vast amounts of data, IT leaders need to ask what is necessary and what is useful. For example, if your system is being used to count the number of people in a certain space, it’s not necessary to deploy facial recognition software -- simpler data will suffice. Yet, IT leaders should consider where to leave the door open for data collection that will allow the technology to progress.
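
The people-counting example can be made concrete: the pipeline keeps only an aggregate count per interval and discards everything else at the sensor, as in this hypothetical sketch.

```python
def record_occupancy(detections_per_frame: list[int]) -> dict:
    # Store only the peak simultaneous count; raw frames, images and any
    # identifying detail are discarded before anything is persisted.
    return {"max_occupancy": max(detections_per_frame, default=0)}

print(record_occupancy([3, 5, 4]))  # {'max_occupancy': 5}
```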

No.5: Hold Leadership Accountable for AI Ethics

There is not yet an effective way to build ethics directly into AI solutions; for now, responsibility rests with developers and their leadership. Even if an AI-enabled system’s behavior departs from their original intentions, they remain ethically accountable for it.

As much as this sounds like a clear guideline, it becomes more complex when AI-enabled systems begin to interact with one another. In those cases, accountability must shift to shared responsibility: Who in the network of systems will take on the job of monitoring and coordinating the behavior of the combined systems?

This accountability, and even liability, should be taken seriously; otherwise, organizations risk real legal consequences arising from the behavior of AI-enabled systems. An AI governance program can help manage this accountability and determine who owns the responsibility for monitoring these systems.
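
A small piece of such a governance program can be expressed in code: every automated decision is written to an audit log together with a named owner, which makes gaps in accountability immediately visible. The owner registry and log format below are assumptions for illustration.

```python
import json
import time

# Hypothetical registry mapping each AI-enabled system to an owning team.
SYSTEM_OWNERS = {"claims-model-v2": "risk-engineering"}

def log_decision(system: str, decision: str, inputs: dict) -> str:
    """Serialize an auditable record of one automated decision."""
    entry = {
        "timestamp": time.time(),
        "system": system,
        "owner": SYSTEM_OWNERS.get(system, "UNASSIGNED"),  # flags ownership gaps
        "decision": decision,
        "inputs": inputs,
    }
    return json.dumps(entry)

print(log_decision("claims-model-v2", "declined", {"loan_amount": 12000}))
```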

As the capabilities and potential implementations of AI grow, ethics will continue to be a key concern for business leaders, regulators and consumers. By prioritizing ethical principles in the early stages of AI implementation and asking the right questions around ethical boundaries, IT leaders can ensure that their organization is prepared for the future of AI in the enterprise.
