
Insufficient data quality stymies most deep learning efforts

The adoption of deep learning techniques in enterprises will be stymied by a shortage of high-quality, labeled training data, according to Chirag Dekate, senior director and analyst at research firm Gartner Inc.

“Most enterprises will not have the capacity to develop deep learning initiatives from scratch,” and as a result these organizations will rely on application programming interfaces (APIs) or cloud-based services to deliver deep learning to business users, Dekate said.
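To illustrate that consumption model, a minimal sketch follows of an application sending an image to a hosted deep learning API instead of training a model in-house. The endpoint URL, credential, and response schema here are hypothetical placeholders, not any particular vendor's interface.

    import requests

    # Hypothetical hosted inference endpoint; real cloud providers expose
    # comparable REST interfaces for vision, language, and speech models.
    API_URL = "https://api.example.com/v1/vision/classify"
    API_KEY = "YOUR_API_KEY"  # placeholder credential

    # Upload an image and let the provider's pretrained model classify it.
    with open("invoice_scan.jpg", "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
        )

    # The response format is assumed for illustration: a list of labels
    # with confidence scores.
    for prediction in response.json().get("predictions", []):
        print(prediction["label"], prediction["score"])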

Some enterprises will also turn to reinforcement learning and transfer learning to circumvent these challenges where feasible.
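Transfer learning in particular sidesteps the labeled-data bottleneck by reusing a network pretrained on a large public dataset and training only a small task-specific layer on top. A minimal sketch using PyTorch and torchvision 0.13+ (the framework choice and the five-class output size are assumptions for illustration):

    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a ResNet-18 pretrained on ImageNet; its convolutional layers
    # already encode general-purpose visual features.
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

    # Freeze the pretrained backbone so training touches only the new head.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final classifier with one sized for the enterprise task,
    # e.g. five product-defect categories (a hypothetical label count).
    model.fc = nn.Linear(model.fc.in_features, 5)

    # Only the new layer's parameters are handed to the optimizer, so a
    # few hundred labeled examples can suffice instead of millions.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)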


“Traditional machine learning techniques including gradient boosted trees, support vector machines, and clustering will dominate early adoption phases” before companies explore complex deep learning, Dekate said.
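Those traditional techniques are well supported by mature libraries and tolerate far smaller datasets than deep learning. A brief sketch using scikit-learn, with synthetic data standing in for labeled enterprise records:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Synthetic tabular data standing in for real labeled records.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # Gradient boosted trees and a support vector machine: two of the
    # techniques Dekate expects to dominate early adoption phases.
    for model in (GradientBoostingClassifier(), SVC()):
        model.fit(X_train, y_train)
        print(type(model).__name__, model.score(X_test, y_test))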

Cloud service providers will continue to improve their deep learning service offerings, Dekate said. In addition, independent software vendors are actively working to improve their deep learning capabilities and to minimize complexity for end users.

With machine learning more broadly, Gartner expects to see several trends, said Alexander Linden, vice president and analyst. These include more automation; better integration of components across the machine learning pipeline; better handling of bias in data and analytics; better data ingest capabilities; and more deep reinforcement learning.
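The pipeline-integration trend Linden describes is already visible in current tooling. A small sketch of how scikit-learn chains data cleanup, preprocessing, and a model into a single object (the particular steps are illustrative):

    from sklearn.datasets import make_classification
    from sklearn.impute import SimpleImputer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler

    X, y = make_classification(n_samples=500, n_features=10, random_state=0)

    # One object bundles ingest-style cleanup, scaling, and the estimator,
    # so the identical steps run in both training and production scoring.
    pipeline = Pipeline([
        ("impute", SimpleImputer(strategy="median")),
        ("scale", StandardScaler()),
        ("model", LogisticRegression(max_iter=1000)),
    ])
    pipeline.fit(X, y)
    print(pipeline.score(X, y))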
