As enterprises use technology to become more agile and competitive, artificial intelligence is quickly gaining a multitude of use cases. Artificial intelligence can take on a range of tasks – from automating routine business processes to handling complex customer service queries – freeing us to tackle more thoughtful work.

Accenture research suggests that AI technologies could increase labor productivity by up to 40 percent and enable people to make more efficient use of their time. Soon, AI applications will be far-reaching, affecting every aspect of our lives.

With the newest industrial and social revolution underway, it’s essential that we work to ensure that AI is deployed responsibly. Responsible AI is both an opportunity and a duty for business, government and technology leaders around the world. It places ethics at the heart of AI development to ensure that AI solutions deliver growth in a way that empowers and benefits humans, are transparent, and fully comply with the rules and regulations of the jurisdictions in which they are deployed.

The effective training and validation of AI systems will become increasingly important in achieving these goals. We have already seen what happens when AI applications are not sufficiently trained or tested: self-driving cars cause accidents, AI bots develop gender and racial bias and insensitivity, and rogue algorithms create chaos. As AI deployments become widespread, flaws such as these can adversely affect not just business performance, but also reputation, compliance and human life itself.

AI requires a new testing paradigm

To ensure valid, reliable, safe and ethical AI decision-making, we therefore need to develop robust approaches to teaching and training AI applications. This calls for a new testing regime, tailor-made for AI applications, that makes the decision-making mechanism transparent enough for users to understand, and that provides assurance of fairness and non-discrimination in the decision process.

The key challenge in developing such a testing regime is that AI software has many moving parts. When testing AI applications, engineers must account for many variables, including:
  • processing unstructured data and managing its variety and veracity
  • the choice of algorithms
  • evaluating the accuracy and performance of the learning models
  • ensuring ethical, unbiased decision-making, along with regulatory and compliance adherence
New testing and monitoring processes that account for the data-dependent nature of these systems also need to be developed.

The Teach and Test framework

One way to break down the development and validation requirements for AI is to divide the work into two stages. The first stage is the ‘Teach’ stage, where the system is trained to produce a set of outputs by learning patterns in training data (usually historic data) through various algorithms. The best performing model is then selected to be deployed in production.

Second is the ‘Test’ stage. This comprises validation of the system outputs both on test data and in production to check accuracy and performance.
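The two stages described above can be sketched in a few lines. This is a minimal illustration, not the framework itself: the article names no libraries or models, so the use of scikit-learn, synthetic data, and these two candidate algorithms are all assumptions for the sake of the example.

```python
# Minimal sketch of the Teach/Test stages (illustrative assumptions:
# scikit-learn, synthetic data, two arbitrary candidate models).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Stand-in for historic data the system learns patterns from.
X, y = make_classification(n_samples=1000, random_state=0)

# Hold out data that the models never see during the Teach stage.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# Teach: train candidate models on the historic (training) data.
candidates = [LogisticRegression(max_iter=1000), DecisionTreeClassifier()]
for model in candidates:
    model.fit(X_train, y_train)

# Select the best-performing model for deployment to production.
best = max(candidates,
           key=lambda m: accuracy_score(y_test, m.predict(X_test)))

# Test: validate the selected model's outputs on held-out test data.
test_accuracy = accuracy_score(y_test, best.predict(X_test))
```

In a production setting the validation step would continue after deployment, re-checking accuracy and performance as live data drifts away from the historic data the model was taught on.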

The benefit of a teach and test framework, developed in the right way, is that it not only helps ensure AI systems are trained to be effective in meeting a business goal – whether that’s creating a compelling customer experience or improving business process operations – but also that the data used to train and retrain the system is unbiased. In addition, it enables the system to explain its outcomes based on logic.

For example, our own teach and test framework has been applied to a sentiment analysis solution that analyzes real-time sentiment from social media, news and other such sources to capture how a brand is performing on different aspects of its service. The framework helped us develop unbiased training data, so the solution has the right balance of different sentiments in the given domain. This also accelerated model training by 50 percent.
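The balance check described above can be illustrated with a short sketch. The article does not describe how the framework measures balance, so the function, its tolerance threshold, and the sample labels below are all hypothetical.

```python
# Hypothetical sketch: before the Teach stage, verify that no sentiment
# class dominates the training data. The tolerance value is an assumption.
from collections import Counter

def class_balance(labels, tolerance=0.15):
    """Return (is_balanced, shares): balanced when every class's share of
    the data is within `tolerance` of a perfectly even split."""
    counts = Counter(labels)
    even_share = 1 / len(counts)
    shares = {label: n / len(labels) for label, n in counts.items()}
    ok = all(abs(s - even_share) <= tolerance for s in shares.values())
    return ok, shares

# Skewed sample: far more positive examples than negative or neutral,
# so the check fails and the data should be rebalanced before training.
labels = ["positive"] * 70 + ["negative"] * 20 + ["neutral"] * 10
balanced, shares = class_balance(labels)
```

A model taught on the skewed sample above would tend to over-predict positive sentiment; rebalancing the classes before training is one simple way to reduce that bias.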

Artificial intelligence will fundamentally and profoundly change the way we work and live. At Accenture, we view AI as the most important trend we’ve seen to date in the information age. But if AI is to be successful it needs to work for everyone: businesses, governments and the public.

Taking a Responsible AI approach translates into architecting and implementing solutions that put people at the center, and that in turn involves building a rigorous Teach and Test approach to systems development. AI will be what we make it; so, let’s ensure it delivers for everyone from the outset.
