Going beyond analytics: The trends enabling the data mosaic in 2020
While we’re more connected than ever, that connection has only made the world more fragmented and siloed. There seems to be little common ground, with social media dividing us on a variety of crucial topics, whether global politics, elections, or trade wars.
There’s now real concern that the internet is being split along geographic lines, and many local laws, regulations and privacy protections are at odds with each other. And after years of being told by business schools, strategy heads and activist investors that core competencies are the only road forward, organizations that focus only on specialization are now starting to see diminishing returns.
What we are starting to see is that the organizations breaking away to lead their sectors excel at merging data and analytics from different disciplines. Cross-vertical solutions, such as connecting payments with messaging, are just one example of innovation that successfully leverages distributed data together with analytics.
Turning a fragmented landscape into a real opportunity will mean using both synthesis and analysis, with active metadata as the fabric. We call this the data mosaic, and leaders will apply this approach to both business models and data in 2020.
There are five key trends that will support the evolution of the data mosaic.
Going beyond big data to wide data
Big data as a term has lost its meaning in many ways; today it’s best to think of big data as whatever is achievable with your current technology. With highly scalable cloud data warehouses now common, the mysticism around analytics on massive data volumes is gone.
The next focus will be on “wide data” or distributed data. Data formats are becoming more varied, resulting in a corresponding fragmentation of databases. Synthesizing these different sources together again will be essential in seeing the larger data picture in 2020.
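Synthesizing wide data in practice means joining sources that live in different formats. A minimal sketch of that idea, using only the Python standard library and hypothetical inline data (the field names and values are illustrative, not from any real system):

```python
import csv
import io
import json

# Two "wide data" sources in different formats: CSV order records and a
# JSON customer feed, sharing a customer_id key.
csv_orders = "order_id,customer_id,amount\n1,c1,120.50\n2,c2,80.00\n"
json_customers = (
    '[{"customer_id": "c1", "region": "EMEA"},'
    ' {"customer_id": "c2", "region": "APAC"}]'
)

# Parse each source with its native reader.
orders = list(csv.DictReader(io.StringIO(csv_orders)))
customers = {c["customer_id"]: c for c in json.loads(json_customers)}

# Synthesize: enrich each order with attributes from the other source.
merged = [{**o, "region": customers[o["customer_id"]]["region"]} for o in orders]
print(merged[0])
# {'order_id': '1', 'customer_id': 'c1', 'amount': '120.50', 'region': 'EMEA'}
```

At real-world scale the joins happen in a warehouse or virtualization layer rather than in application code, but the shape of the problem is the same: varied formats, one shared key, one combined picture.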
DataOps and self-service spread across the information value chain
Modern BI brings business users much closer to the insights they need through self-service analytics. However, until now that same model has eluded the data management side. The emergence of DataOps, a set of agile practices, processes and technologies for building and enhancing data and analytics pipelines, will help improve quality and reduce the data management cycle time.
With DataOps on the management side and self-service analytics for business users, organizations can expect greater fluidity in how data is used and analyzed, and in how outcomes are achieved.
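A core DataOps practice is treating pipeline output like code under test: automated quality checks run on every batch before data moves downstream. A minimal sketch of such a check, where the function name and the specific rules are illustrative assumptions rather than any standard API:

```python
def validate_batch(rows):
    """Return a list of quality violations found in a batch of records."""
    errors = []
    seen_ids = set()
    for i, row in enumerate(rows):
        # Rule 1: every record needs a unique, non-null id.
        if row.get("id") is None:
            errors.append(f"row {i}: missing id")
        elif row["id"] in seen_ids:
            errors.append(f"row {i}: duplicate id {row['id']}")
        else:
            seen_ids.add(row["id"])
        # Rule 2: amounts must be non-negative numbers.
        if not isinstance(row.get("amount"), (int, float)) or row["amount"] < 0:
            errors.append(f"row {i}: bad amount {row.get('amount')!r}")
    return errors

batch = [{"id": 1, "amount": 9.99}, {"id": 1, "amount": -5}]
print(validate_batch(batch))
# ['row 1: duplicate id 1', 'row 1: bad amount -5']
```

In a real DataOps setup these checks would live in a CI pipeline or a tool built for the purpose; the point is that data quality becomes a repeatable, automated gate rather than a manual inspection.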
Connecting data and analytics through active data catalogs
Data sets are increasingly wide and distributed, which creates a significant challenge in inventorying and synthesizing them for company-wide use. Data catalogs can help keep data from going stale, and demand for them is skyrocketing.
There is a rise now in machine learning–augmented metadata catalogs, which help move data from passive to active while keeping up with incremental changes in the data across every environment, even hybrid/multi-cloud ecosystems. IT will start using metadata catalogs as the data connective tissue and governance layer needed to match DataOps and self-service agility. These catalogs also enable data personalization, essential in generating tailored and relevant insights. IT must remember to utilize a catalog that works beyond their analytics tool of choice to incorporate fragmented, distributed data.
Data literacy as a service
No matter how well designed, no data and analytics technology or process can function without the people who use it. It’s no longer enough for IT to drop tools on users and hope for the best. Organizations now aim to drive adoption of data-driven decision-making to 100 percent.
To reach this goal, the best place to start is by diagnosing where an organization falls on the data literacy spectrum, then working holistically toward making the necessary improvements. As scaling data expertise becomes more critical, businesses will expect to partner with vendors on this journey, which will drive the emergence of “DLaaS” (data literacy as a service): a combination of software, education and support, delivered with outcomes in mind.
“Shazam” for your data
Shazam, the now-famous music service that identifies songs through your device’s microphone, kicked off a category of discovery beyond traditional search. Google and Amazon have gone further, with deep learning and visual analysis now able to identify plants and animals, read and translate text, or find clothes by analyzing a photo. Can we “Shazam” our data?
In 2020, AI embedded across the whole information value chain will enable algorithms in analytic systems to get better at fingerprinting our data, finding anomalies, and suggesting new data to be analyzed. We’ll be able to point to a data source and see where it came from, who is using it, how much it’s changed, whether its quality is good, and more. That will let us reach more insights from data, no matter its size, combining synthesis with analysis.
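The "fingerprinting" idea can be made concrete without any AI at all: summarize a column into a compact statistical profile, then compare profiles across snapshots to spot drift or anomalies. A minimal sketch, where the function names, tolerance value, and sample figures are all illustrative assumptions:

```python
import statistics

def fingerprint(column):
    """Summarize a numeric column as a compact statistical profile."""
    values = [v for v in column if v is not None]
    return {
        "count": len(column),
        "null_rate": 1 - len(values) / len(column),
        "mean": statistics.mean(values),
        "stdev": statistics.pstdev(values),
    }

def drift(old, new, tolerance=0.5):
    """Flag profile fields whose relative change exceeds the tolerance."""
    flags = []
    for key in old:
        base = abs(old[key]) or 1e-9  # avoid division by zero
        if abs(new[key] - old[key]) / base > tolerance:
            flags.append(key)
    return flags

yesterday = fingerprint([10, 12, 11, 9, None])
today = fingerprint([10, 55, 60, 58, 61])
print(drift(yesterday, today))
# ['null_rate', 'mean', 'stdev']
```

Production systems fingerprint far richer signals (value distributions, schemas, lineage, usage patterns), but the principle is the same: a cheap, comparable signature per data source that machines can watch continuously on our behalf.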
In an increasingly fragmented world full of data, analytics will suffer without synthesis. Organizations and analytics professionals must expand their toolkits with methodologies and technology that support both synthesis and analysis. This is especially important when using analytics for strategy and innovation, and for navigating the complex regulatory maze. When we are able to bring data together and empower people to use it in game-changing ways—laying the data mosaic—we can foster true change in business, for people, and in the world.