You’ll hear more about this in our upcoming 25 Top Information Manager list for 2012, though the idea has been around a long time and marching down a few paths.
On one path, more enterprises are dead serious about creating and using data they can trust and verify. It’s a simple equation. Data that isn’t properly owned and operated can’t be used for regulatory work, won’t be trusted for significant business decisions and will never have the value organizations keep wanting to ascribe to it on the balance sheet. We now know instinctively that with correct and thorough information, we can jump on opportunities, unite our understanding and steer the business better than before.
On a similar path, we embrace tested data in the marketplace (see Experian, D&B, etc.) that is trusted for a use case even if it does not conform to internal standards. Nothing wrong with that either.
And on yet another path (and in the areas between), it’s exploration and discovery, which may engage huge, general samples of data with imprecise value.
It’s clear that we cannot and won’t have the same governance standards for all the different data now facing an enterprise.
For starters, crowdsourced and third-party data bring a new dimension, because “fitness for purpose” is by definition a relative term. You don’t need or want the same standard for counting how many thousands or millions of visitors used a website feature or clicked on a bundle that you apply to maintaining your customer or financial information.
That’s how we already work, but it doesn’t ensure we understand what we’re doing with our data or use it as confidently as we could.
I was talking about this a few weeks ago at a conference with people smarter than me, including Gartner analyst Merv Adrian. Merv pointed out that “we’ve had logs and weblogs around forever, but in practice, we’re using more of the data now than ever before and trying to make decisions with it.”
He’s right. We fixate on the growth in the creation of new data, but from another angle it’s just as much about our compulsion to analytically parse data, structured or unstructured, fine and coarse, and mash it up for reasons superficial and deep.
We’re freelancing because we sense opportunities in data to serve the demand for business growth. It’s possible, even likely, that the business analyst is going about his business, the data scientist her business and I’m doing my business without much collective awareness. We are not governing the range of data we are holding and trusting for a purpose. As the user base grows and more of us behave like data whiz kids, it’s a discussion that will eventually come up.
We could decide to share and chronicle our data sources and use cases if only to understand better how we are already conducting business. This is heresy where IT’s charter has been to banish “spreadmarts.” The whack-a-mole reaction of many cultures is the very reason “shadow IT” and spreadmarts exist in the first place.
I heard a good flipside argument yesterday from Travis Greene of NetIQ on our weekly DM Radio show, where we happened to be discussing cloud computing and business process management. Travis pointed out that IT is by degrees migrating to the role of service broker. Though the pace may be fast or slow, that’s something a lot of people would agree with. Travis’s point was, since governance falls to those who manage assets, why not involve IT in a cultural setting where it can advise and govern centrally with a library of understood services and resources?
How about that? People would work with contextual license from areas of expertise, providing what Forrester BPM analyst Clay Richardson called on our show “the guardrails” that keep the business on course while also revealing the fences ungoverned users choose to leap, to our benefit or peril. He talked about finding a fabric to connect on-premise and off-premise processes.
Sooner or later, it all comes out in the wash. Better to understand governance in context now and lessen our chances of looking stupid on the front page of the New York Times.