AI needs human collaboration for maximum effectiveness
A few years ago, it was widely predicted that artificial intelligence and deep learning would quickly take over industries such as healthcare, where AI-powered analytics would handle tasks like certain forms of radiology detection. In reality, much work remains to make AI and deep learning safe and effective.
Significant progress is also needed in regulating AI, which remains most effective when coupled with human collaboration and insight. Consider one recent study: radiologists working alone detected problems with 92% accuracy, while AI-assisted reading raised detection accuracy to 95%. With full human–AI collaboration, accuracy reached 99.1%. AI and deep learning still need to evolve a great deal.
There will be progress with AI governance and regulation in 2020, but we have a long way to go
In 2020 we will see continued progress in governance and regulation, but for enterprises it will be mainly moderate. As enterprises move AI from the lab to the real world, industry will continue to lead efforts to govern itself.
In the long term, we can expect real stakes in the ground from governments, which will come to lead regulation while grappling with the tension between AI and the free market. Heavily regulated industries will see more concrete declarations from the SEC, FDA and other agencies on the use of AI in trading, drug discovery and autonomous weapons, in particular.
We will see advancements in “explainable AI” and more trust in AI across the board
Advancements in explainable AI will continue in 2020 and beyond as new standards are developed around the technical definition of explainability, slowly followed by new technologies that address the explainability problem for business leaders and other non-technical audiences. In real estate, for example, offering a compelling explanation for why a mortgage application was rejected by an AI-driven platform will eventually become a necessity as AI adoption continues.
Although we'll see evolving technical tools and standards, progress on layperson-facing tools will be slower, with narrow, domain-specific solutions (e.g., non-technical explainability for finance) emerging first. Like the general public's understanding of 'the web' in the 90s, awareness, understanding and trust in AI will gradually increase as the capabilities and use of the technology spread.
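As a minimal sketch of what a domain-specific, non-technical explanation layer might look like, consider a toy linear credit-scoring model that translates its weighted factors into plain-language reasons for a mortgage decision. All feature names, weights, thresholds and wording below are hypothetical illustrations, not a real lending model:

```python
# Hypothetical sketch: turning a simple linear scoring model's weighted
# inputs into plain-language rejection reasons a loan officer could relay.
# Weights, baselines and thresholds are illustrative only.

WEIGHTS = {
    "credit_score": 0.5,       # reward per point above the baseline score
    "debt_to_income": -40.0,   # penalty per unit of debt-to-income ratio
    "years_employed": 2.0,     # reward for employment stability
}
BASELINES = {"credit_score": 680, "debt_to_income": 0.35, "years_employed": 2}
APPROVAL_THRESHOLD = 0.0

REASON_TEXT = {
    "credit_score": "credit score below the typical approval range",
    "debt_to_income": "debt-to-income ratio above the typical approval range",
    "years_employed": "shorter employment history than typically approved",
}

def score(applicant):
    """Linear score: weighted deviation of each feature from its baseline."""
    return sum(WEIGHTS[f] * (applicant[f] - BASELINES[f]) for f in WEIGHTS)

def explain_rejection(applicant):
    """List the factors that pulled the score down, worst first."""
    contributions = {f: WEIGHTS[f] * (applicant[f] - BASELINES[f]) for f in WEIGHTS}
    negatives = sorted((c, f) for f, c in contributions.items() if c < 0)
    return [REASON_TEXT[f] for c, f in negatives]

applicant = {"credit_score": 640, "debt_to_income": 0.45, "years_employed": 1}
if score(applicant) < APPROVAL_THRESHOLD:
    for reason in explain_rejection(applicant):
        print("-", reason)
```

Because the model is linear, each feature's contribution to the score is directly attributable, which is exactly the property that makes such explanations easy; the harder, still-open problem is producing equally clear reasons for complex, non-linear models.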
Real AI vs fake AI in 2020
Using sophisticated tooling to automate what we would call human creativity is now commonly referred to as AI. However, the term has become almost meaningless, as "AI" now covers everything from predictive analytics to Amazon Echo speakers. The industry needs to get its arms around real AI.
'Real AI' applies sophisticated tooling and machine learning in a substantial way that pushes a scenario forward and effects change for a particular vertical. It either takes ordinary AI tools and makes them extraordinary, or applies ordinary tools in an extraordinary way to that vertical.
That said, there will always be disagreement from experts as to what constitutes ‘real AI.’
The evolving expectations for the discipline were cleverly summarized by Kathryn Hume, one of my Canadian colleagues, who said that artificial intelligence is "anything computers can't do...until they can." For some observers, 'real AI' won't be achieved until we succeed in creating genuine thinking machines (i.e., artificial consciousness), which might take several decades, if not longer.