Enterprise AI adoption is a marathon, not a sprint
The artificial intelligence arms race is on. AI has the potential to improve major portions of our lives and work. Everything from medical diagnosis to product recommendations to automated market research is ripe for disruption.
In the rush to adopt, enterprises often implement new AI use cases with a feverish hunger for the benefits, but without a coherent strategy, sound technical and methodological analysis, or a solid go-to-market plan. As a result, enterprises find themselves stuck at the “first mile” after implementation.
The AI that sounded so promising just months earlier produces little to no useful output for the company — an embarrassment for those who pushed its implementation, a cause of organizational resentment and, at worst, a blow to one’s professional credibility.
Preventing unintended consequences means understanding AI for what it is: intelligent in the way of machines, requiring interpretation and productization to be smart and relevant in what it delivers for humans. By thinking strategically about where AI fits, how it can be implemented within a human workflow, what kind of data it needs and who will manage and grow it, teams can stave off post-implementation confusion — and set themselves up for an even more intelligent future.
Here are a few questions to ask:
1. Do you have a strategy?
A well-fed AI is one that runs on large amounts of clean data. This requires a comprehensive, cross-functional strategy for acquiring and cleansing data. Where is your internal data coming from? How often is it updated? Which external data are you using to supplement your internal data lake? How are you ensuring a quality data feed every day, week, month and year? Data is your AI’s fuel, and the AI won’t run effectively without a steady, quality supply — securing that supply is your data strategy.
2. Who owns your strategy?
Computer science departments have long driven AI strategy. Now that AI-powered solutions are proliferating at every level of the organization, however, more stakeholders must get involved.
The first step in finding these stakeholders is to ask yourself: Where does the AI initiative reside in my organization? Who is responsible for translating AI findings into meaningful insights? How can we ensure that our initiatives solve real-world problems with high value propositions that justify the investment?
For example, take two common types of machine learning: supervised and unsupervised. In supervised learning, the machine works off a labeled training data set to generate a desired output. People must determine what type of output is desired, what data should be given and how it should be categorized.
Contrast that with unsupervised machine learning, in which an algorithm analyzes large amounts of unstructured data for patterns or commonalities, then sorts and categorizes accordingly. There is no training data set. People interpret the results and turn them into usable insights.
In the first example, a product manager might determine the ideal output. R&D might decide what data it should be given, and IT how the data should be categorized. In the latter example, the consumer insights department might interpret results, and product management turn them into meaningful product enhancements.
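The division of labor above can be made concrete with a minimal sketch. The article names no specific tooling, so scikit-learn and the toy one-dimensional data set here are assumptions chosen purely for illustration: the supervised model learns from human-chosen labels, while the unsupervised model groups unlabeled data and leaves interpretation to people.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Six toy data points (hypothetical values, for illustration only).
X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])

# Supervised: humans decide the desired output and supply labels.
y = np.array([0, 0, 0, 1, 1, 1])  # labels categorized by people
clf = LogisticRegression().fit(X, y)
print(clf.predict([[2.5], [11.5]]))  # predicts the human-defined classes

# Unsupervised: no labels; the algorithm finds groupings on its own.
# The cluster numbers carry no meaning until people interpret them.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)
```

The contrast mirrors the stakeholder split: in the supervised case the work happens before training (choosing outputs, data and categories), while in the unsupervised case it happens after (turning raw clusters into usable insights).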
Indeed, key stakeholders from IT, R&D, product management and consumer insights departments tend to comprise the dedicated team required to make AI solutions work. Configure your AI team and put governance processes in place so they can successfully design, validate, implement and launch solutions.
3. Who oversees your results?
Once you’ve orchestrated your cross-functional AI team, it is crucial to add an overseer, a sort of strategic AI auditor. This person analyzes team findings in depth and ensures your team’s techniques are valid, optimally implemented and reliable. In the rush to find pots of AI gold, firms tend to forget the limitations of the techniques that underlie AI. Your AI auditor applies the appropriate rigor to confirm that the insights delivered really measure what you think they are measuring and will prove sustainable over time.
4. Is your process invisible to the end user?
Once you’ve pinned down the details above, make it a point to embed AI into the human workflow in a way that hides complexity from the end user. AI by itself is interesting, but AI connected to a workflow changes behavior. Proper integration into the human workflow is the last step toward truly realizing the benefits of AI.
Take the automotive industry as an example. Analytics for vehicle profitability, values and market share have existed for 40 years. Those analytics sat fallow until someone figured out how they could tell organizations which cars to buy, sell and reprice daily according to market supply and demand. Once that use case was in place, utilization of analytics took off.
Many of those same historical metrics are now embedded in an easy-to-use workflow, hiding the complexity of the analytics and the difficulty of the decision process from the end user. We benefit, but we don’t quite know how. That is the goal to aspire to in AI as well.
Take the long view
As tempting as it is to “move fast and break things” with AI implementations, that philosophy only leads to confusion. Today’s applications — chatbots, security threat analysis, automated accounting, marketing research and the like — reflect AI in its infancy.
Bombing the organization with implementations, and hoping the confusion goes away by itself, will ill-prepare enterprises for a future in which smarter AI applications, such as medical-assistant robots and self-driving delivery vehicles, become the ultimate competitive advantage. Put strategic and organizational chips in place now, while applications are relatively primitive, and you will better prepare your organization for a successful future in which both humans and machines will thrive.