Top trends to expect in cloud computing, data science and AI


Last year could be called the year of the cloud. In 2020, cloud deployments will become even more widespread as organizations look to reap the benefits of hybrid cloud models.

I also expect more data warehouse modernization with containerized environments to help organizations achieve smooth deployment and management experiences.

Enterprises will look to migrate from legacy on-premises data platforms to cloud data warehouses with cost efficiency top of mind. As retail continues to move online, retailers will concentrate on solidifying customer privacy tactics. And AI adoption and training will only continue to advance in the coming decade, with a growing demand for data increasingly satisfied by synthetic datasets.

So, how will the next 10 years and beyond advance technology even further? These predictions will explain what we’ll see in 2020 and why.

CLOUD COMPUTING

On-premises investment by the public cloud providers cements hybrid cloud’s dominance

In 2020, hybrid cloud deployments will increasingly become the focus of the hyperscale public cloud providers.

In 2019 alone, the three largest public cloud providers—AWS, Microsoft and Google—invested in on-premises capabilities. AWS, for instance, announced an on-premises version of its IaaS and PaaS services called AWS Outposts, planned to ship in late 2019; Microsoft announced an expanded catalogue of on-premises Azure Stack services, including software-only and converged hardware packages.

Similarly, Google added the ability to centrally maintain and manage Kubernetes clusters as a primary feature of its Anthos microservices platform, and Salesforce acquired Tableau, a predominantly on-premises self-service analytics vendor.

Migrating to the cloud can be difficult, so a hybrid approach is a good option for those enterprises that are taking longer to make a full cloud transition. With hybrid cloud deployments, companies can transition to the cloud at their own pace, with less risk and at a lower cost, while still making the most of efficiency and effectiveness improvements.

Data warehouse modernization projects utilizing containers will expand rapidly

To date, cloud has primarily been used to build new apps and rehost infrastructure. In 2020, enterprises will leverage cloud to modernize existing business apps, processes and data environments.

In 2020, more data warehouse modernization programs will be deployed in a containerized hybrid/multi-cloud environment, helping organizations become more agile and deliver a frictionless deployment and management experience. This investment will be driven by the need to speed up data accessibility, improve the timeliness of insights, lessen support and maintenance costs and future proof existing data environments.

A container-based approach enables organizations to reap the benefits of “cloud-native” as quickly as possible. Containers can help these companies manage data in a hybrid cloud setup in ways other approaches cannot. Moving data to the public cloud can be extremely risky and expensive because of data gravity, the tendency of data to stay where it currently resides.

Containers are often equated with agility, but they also increase portability. By building out services and data stores within containers, businesses can more easily move them to the public cloud, either all at once or a few at a time as part of a migration strategy.
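To make the portability point concrete, here is a minimal sketch using the docker Python SDK; the image name and configuration values are hypothetical placeholders, not a real deployment. The same container definition runs unchanged on an on-premises host or on a managed cloud service, which is what makes piecemeal migration possible.

    import docker  # docker Python SDK (pip install docker)

    client = docker.from_env()

    # The same image runs on an on-premises host today and on a managed
    # cloud container service tomorrow; only the endpoint changes.
    container = client.containers.run(
        "registry.example.com/warehouse-etl:1.4",           # hypothetical image
        environment={"WAREHOUSE_URL": "db.internal:5439"},  # hypothetical config
        detach=True,
    )
    print(container.short_id, container.status)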

Containers also provide the flexibility to maintain a consistent architecture across all of a business's on-premises and cloud applications, while still allowing rollouts to be customized by geographic region.

Cost optimization for cloud data warehouses becomes a growing priority

As organizations continue their journey to the cloud, CIOs are finding that a growing portion of their tech budgets are going to cloud subscriptions. While cloud brings business agility and on-demand computing services, it can create wasted or inefficient cloud spend. It will be critical, therefore, for organizations in 2020 to better plan, budget and forecast spending requirements for cloud consumption.

Cloud data warehouse costs are often based on resource usage patterns, including how much data is stored, queried and possibly inserted. In cloud terms, storage is relatively cheap, but costs can climb quickly as the complexity of analytics workloads increases, with operations such as aggregating data or running raw fact-table queries pushing up compute resource consumption.

Certain providers also offer pricing models that cap costs at a set level, which in turn caps the amount of resources applied to processing workloads.
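To see how these usage patterns translate into a bill, consider the toy cost model below. It is a rough Python sketch; the per-terabyte rates and the cap are hypothetical placeholders, not any vendor's actual pricing.

    def estimate_monthly_cost(storage_tb: float,
                              tb_scanned: float,
                              storage_rate: float = 23.0,  # $/TB-month, hypothetical
                              scan_rate: float = 5.0,      # $/TB scanned, hypothetical
                              cost_cap: float | None = None) -> float:
        """Toy warehouse cost model: storage is cheap, but heavy analytics
        (aggregations, raw fact-table scans) drive up the scan bill."""
        cost = storage_tb * storage_rate + tb_scanned * scan_rate
        # A capped plan limits spend, and with it the resources applied.
        return min(cost, cost_cap) if cost_cap is not None else cost

    # 50 TB stored but 400 TB scanned by complex queries: compute dominates.
    print(estimate_monthly_cost(storage_tb=50, tb_scanned=400))                 # 3150.0
    print(estimate_monthly_cost(storage_tb=50, tb_scanned=400, cost_cap=2500))  # 2500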

In 2020, cost efficiency will be important in successfully migrating from legacy on-premises data platforms to cloud data warehouses.

RETAIL CUSTOMER EXPERIENCE

Retailers will increase investments in personalization to improve conversion rates and customer lifetime value

Personalization doesn’t work if businesses don’t have the means to understand the needs of high-value customers on a continuous basis. This requires organizations to continue to invest in data and analytics that can pool and analyze structured and unstructured data, as well as algorithms that can identify behavioral patterns, intent signals and customer propensity. It also requires advanced analytics capabilities to deliver recommendations or personalized content to retail systems and front-line staff.

Most personalization efforts to date have focused on digital channels. In 2020, however, leading retailers will incorporate more in-store personalization, such as using insights from advanced analytics to generate personalized product recommendations for customers in real time.
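One common pattern behind such real-time recommendations is scoring the product catalog against a profile vector built from a customer's recent behavior. Below is a minimal NumPy sketch; the embeddings are random placeholders standing in for vectors a real model would learn from behavioral data.

    import numpy as np

    # Hypothetical product embeddings: one 32-dimensional vector per item.
    rng = np.random.default_rng(0)
    product_vectors = rng.normal(size=(1000, 32))

    # Build a customer profile from recently viewed items (indices are fake).
    customer_vector = product_vectors[[12, 87, 430]].mean(axis=0)

    # Score every product against the profile and surface the top five,
    # e.g. on a store associate's device in real time.
    scores = product_vectors @ customer_vector
    top_products = np.argsort(scores)[::-1][:5]
    print(top_products)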

Privacy will also take center stage in 2020, with retailers giving customers more meaningful options about the data they share through preference management and being more proactive in illustrating the value this will bring them in return.

DATA SCIENCE & AI

The demand for data will be satisfied by a growing availability of synthetic datasets

Synthetic data is data that is generated programmatically. The two most common strategies for synthetic data usage we will see, both sketched in code below, are:

1. Taking observations from real statistical distributions and reproducing fake data according to those patterns.

2. Creating a model that explains observed behavior, and then generating random data from that model. This strategy is strong when trying to understand the effect that interactions between agents have on the system as a whole.
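Both strategies can be illustrated in a few lines of Python. The sketch below uses NumPy; the sample data, distribution parameters and the toy simulate_market model are hypothetical illustrations, not drawn from any real dataset.

    import numpy as np

    rng = np.random.default_rng(seed=42)

    # Strategy 1: fit simple statistical distributions to real observations,
    # then draw fake records that follow the same patterns.
    real_ages = np.array([23, 35, 31, 52, 44, 29, 61, 38])  # hypothetical sample
    synthetic_ages = rng.normal(loc=real_ages.mean(),
                                scale=real_ages.std(),
                                size=1_000)

    # Strategy 2: build a model of observed behavior (here, a toy agent-based
    # market) and generate random data from the model, preserving the effect
    # of agent interactions on the system as a whole.
    def simulate_market(n_agents: int, n_steps: int) -> np.ndarray:
        """Each agent nudges a shared price up or down; record the path."""
        price, path = 100.0, np.empty(n_steps)
        for t in range(n_steps):
            moves = rng.choice([-0.5, 0.5], size=n_agents)  # each agent's action
            price += moves.sum() / n_agents
            path[t] = price
        return path

    synthetic_prices = simulate_market(n_agents=50, n_steps=250)
    print(synthetic_ages[:5], synthetic_prices[-1])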

Organizations that considered their data storage needs minimal will realize they need a more sophisticated solution to house their synthetic data if they are to compete on the hard-hitting elements of machine learning.

Data science teams will increase use of GPUs to handle massively parallel analytical workloads

Deep learning can run on regular CPUs, but for serious enterprise projects, more data science teams in 2020 will need to explore specialized chips such as GPUs, which can handle massively parallel workloads to train and retrain models more quickly.

Without these specialized chips, deep learning would not be practical or economical since the discipline requires significant amounts of compute capacity to process high volumes of data.
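As a concrete illustration, frameworks such as PyTorch let a team move the same training code between CPU and GPU with a one-line device change. Below is a minimal sketch, assuming PyTorch is installed; the model and the training batch are random placeholders.

    import torch
    import torch.nn as nn

    # Use a GPU when one is available; otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Placeholder model and random batch, just to show the pattern.
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
    inputs = torch.randn(256, 128, device=device)
    targets = torch.randint(0, 10, (256,), device=device)

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # One training step; on a GPU the matrix multiplications inside the
    # model run as massively parallel kernels, which is the speed-up the
    # prediction above describes.
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
    print(f"device={device}, loss={loss.item():.4f}")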
