Five shifting strategies that will impact AI and RPA project success
For artificial intelligence and robotic process automation users, 2020 offers tremendous potential to move their process automation efforts forward and make a big impact in their organizations. But it won’t all be smooth sailing.
I believe there are some fundamental shifts happening in the market that will have a big influence (positive and negative) on users’ success—from the risk of AI-only projects and RPA’s adoption problem to the growing requirement for explainability in AI models and the rise of Automation Centers of Excellence.
Here are five big shifts I am watching in 2020.
1. AI-Only Projects Will Continue to Stall
Too many AI initiatives are being implemented without clear business goals/outcomes and as a result they will continue to stall in the enterprise. According to Gartner, only 15-20% of enterprise AI projects are ever designated for production use, and only about half of those projects actually make it into production.
The reasons vary as projects drift and get pushed aside by changing priorities, but the most common issue is that enterprise users underestimate the business ecosystem in which AI has to run. There are many considerations (process, infrastructure, organizational structure, business sponsorship) that affect whether and how AI will run, but the reality is that too many AI projects start as data science projects, entirely disconnected from the business, with no KPIs in mind.
Experimentation is important, but without a defined business outcome, projects are doomed. In 2020, users must ask themselves: Why are we using AI? What are we trying to accomplish? What business outcome will this AI project achieve?
2. RPA at an Inflection Point
RPA has undoubtedly been one of the hottest areas of tech in the last few years. Its process automation and cost efficiency benefits are impossible to ignore for any company looking for productivity gains. And it will continue to grow rapidly as it gains new users in small and mid-size enterprises. In fact, most of the inquiry calls Gartner takes from users are what they refer to as "RPA 101" calls from new users just getting started. That speaks to RPA's continued growth potential.
On the other hand, existing users trying to expand their use of RPA are discovering real limits, both in their own organizations and in the technology. According to Gartner, only a tiny percentage of RPA users have effectively expanded their implementations across the enterprise. Scaling RPA requires business process re-engineering, and that typically takes more time than users expect.
RPA also has some built-in technological limitations. It's great with repetitive, deterministic business processes involving structured data, where there is no judgment involved. Tell it exactly what you need it to do and it can do it better, faster, and cheaper than a human. However, it cannot make judgments about information or learn and improve with experience.
As a result, RPA is ineffective with workflows that require some level of cognitive ability. And these types of workflows make up the majority in many enterprises today: contract analytics, audit planning and reporting, RFP analysis and composition, sales opportunity workflow automation, customer support analysis and automation, appraisal and claims analysis, and so on.
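The boundary between the two is easy to see in a toy example. Below, a deterministic invoice-routing rule (classic RPA territory) sits next to a free-text request that no pre-wired rule can handle. This is purely illustrative; the function name and fields are hypothetical, not drawn from any RPA product.

```python
# Deterministic, structured-data decision: ideal RPA territory.
# Every input field and every outcome is fully pre-specified.
def route_invoice(invoice: dict) -> str:
    if invoice["amount"] <= 1000 and invoice["po_matched"]:
        return "auto-approve"
    return "send-to-manager"

print(route_invoice({"amount": 750, "po_matched": True}))   # → auto-approve
print(route_invoice({"amount": 5000, "po_matched": True}))  # → send-to-manager

# By contrast, this free-text email has no structured fields to test.
# Deciding what the customer actually wants (a refund? a corrected bill?)
# requires judgment that a pre-wired rule engine cannot express.
email = "Hi, we were billed twice for order 4431. Please advise."
```

Everything above the email is a lookup on known fields; everything the email requires is interpretation, which is exactly where rules-only automation stops.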
The ability of RPA providers to add these cognitive capabilities (understanding the text, images, documents, and other unstructured data that are fundamental to these business processes), whether through development, partnership, or acquisition, may determine RPA's real potential to transform the enterprise.
3. Explainability Becomes a New Standard for AI-Powered Initiatives
In theory, RPA, AI, Natural Language Processing (NLP) and Intelligent Process Automation (IPA) technologies can all add tremendous value by automating mundane tasks and allowing people to focus on more important things.
Unfortunately, there have been a few problems realizing those benefits in practice:
- Many view (and have experienced) AI as so complex, and NLP capabilities as so cutting-edge, that non-technical people will never be able to understand them.
- Most solutions have been integrated behind the scenes so that users don't have to worry about how they work. But that also means they can't change anything.
If the AI algorithm makes a decision to do something or to take some action, business users (and in some cases, regulatory bodies) want to know how it reached that conclusion.
It’s less about how an algorithm works and more about why it makes decisions the way it does. The business wants an audit trail.
In our work with big enterprises, we find that about 80% of AI errors can be tied back directly to bad training data. The challenge is finding that bad training data. So one key element is to tie every prediction your algorithm makes back to the training data behind it. That way you can figure out why it's doing what it's doing.
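One common way to build that kind of traceability is to keep a vector for every training example and, at prediction time, retrieve the training examples closest to the input. Here is a minimal sketch of the idea using toy bag-of-words vectors as a stand-in for a real embedding model; the `find_influential_examples` helper and the sample data are hypothetical, not a reference to any specific product.

```python
import numpy as np

# Toy labeled "training data" (hypothetical examples)
train_texts = [
    "invoice overdue payment required",
    "payment received thank you",
    "invoice past due final notice",
    "thanks for your payment confirmation",
]
train_labels = ["overdue", "paid", "overdue", "paid"]

def embed(text, vocab):
    # Bag-of-words vector over a fixed vocabulary
    # (stand-in for a real embedding model)
    return np.array([text.split().count(w) for w in vocab], dtype=float)

vocab = sorted({w for t in train_texts for w in t.split()})
train_vecs = np.array([embed(t, vocab) for t in train_texts])

def find_influential_examples(query, k=2):
    """Return the k training examples nearest to the query: an 'audit trail'
    of which training data most resembles the input being predicted on."""
    q = embed(query, vocab)
    # Cosine similarity against every training vector
    sims = train_vecs @ q / (
        np.linalg.norm(train_vecs, axis=1) * (np.linalg.norm(q) + 1e-9)
    )
    top = np.argsort(sims)[::-1][:k]
    return [(train_texts[i], train_labels[i], round(float(sims[i]), 2)) for i in top]

for text, label, score in find_influential_examples("final notice invoice overdue"):
    print(label, score, "-", text)
```

If a prediction looks wrong, inspecting its nearest training examples this way often surfaces the mislabeled or unrepresentative records behind it.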
The other driver for explainability is to prevent unintended bias. Most AI models are trained on some historical behavior or reference. The ability to understand how an algorithm is making decisions can surface potential problems.
I expect explainability to become a minimum requirement built into most AI RFPs in 2020.
4. Scalable Decisioning – A New Imperative
One of the significant opportunities in process automation is the ability to make accurate decisions faster and shorten cycle time, a common thread of most digital transformation efforts. But to date, this has been an area where business process management (BPM) and RPA have stalled, due to the limitations of hard-coded rules engines and taxonomies.
If the decision is not a pre-wired, black-and-white one defined by pre-existing conditions, these technologies cannot make it.
New applications of AI and deep learning are changing this though. By adding the cognitive ability of deep learning to process automation, users are gaining the ability to automate a much larger percentage of business process decisions and minimize the manual intervention (human-in-the-loop) required in those processes.
We are already seeing these capabilities applied to a number of enterprise use cases such as contract analytics, regulatory compliance, customer support analysis, and insurance claims analysis. In legal departments, for example, contracts that are out of compliance can be automatically flagged for manual review, instead of having a highly trained legal resource comb through volumes of pages to determine which contracts are in and out of compliance.
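The pattern behind these use cases is usually the same: let a model score each item, auto-resolve the confident cases, and route only the uncertain ones to a person. A minimal sketch of that routing logic, where `score_contract` is a hypothetical stand-in for a real trained model:

```python
def score_contract(contract):
    # Hypothetical stand-in for a trained model's compliance score
    # (0.0 = compliant, 1.0 = out of compliance); here it simply
    # flags contracts missing a liability cap clause.
    return 0.95 if "liability cap" not in contract["clauses"] else 0.05

def route(contracts, flag_threshold=0.9, clear_threshold=0.1):
    """Auto-flag confident violations, auto-clear confident passes,
    and send everything in between to a human reviewer."""
    flagged, cleared, needs_review = [], [], []
    for c in contracts:
        s = score_contract(c)
        if s >= flag_threshold:
            flagged.append(c["id"])
        elif s <= clear_threshold:
            cleared.append(c["id"])
        else:
            needs_review.append(c["id"])  # human-in-the-loop only here
    return flagged, cleared, needs_review

contracts = [
    {"id": "C-101", "clauses": ["term", "liability cap"]},
    {"id": "C-102", "clauses": ["term"]},  # missing liability cap
]
print(route(contracts))  # → (['C-102'], ['C-101'], [])
```

The thresholds are the business lever: tighten them and more cases go to humans; loosen them and more decisions scale automatically.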
The ability to scale decision-making like this frees up important resources to focus their time on higher-value activities while accelerating cycle time and reducing operational costs. Expect to see a lot of focus on it in 2020.
5. Automation Centers of Excellence Become Best Practice
As more and more enterprises look beyond the hype of digital transformation and focus on putting it into practice via business process automation, we're seeing the emergence of Automation Centers of Excellence: teams tasked with identifying the best opportunities for automation in an organization and finding the right solution to accomplish that goal. Sometimes that solution is RPA; in other cases it may be AI or IPA.
These groups are responsible for defining clear KPIs and business outcomes for each use case, and connecting data science and technology with lines of business so best practices can be applied more quickly across an organization. Today, we see this most commonly in financial services, but we expect to see more Automation CoEs in manufacturing and healthcare in 2020.