OCDQ Blog
JUL 9, 2013 3:15pm ET

Data, Metadata and the Assumption of Quality

In my post “The Wisdom of Crowds, Friends, and Experts,” I used Amazon, Facebook and Pandora respectively as examples of three techniques used by the recommendation engines increasingly provided by websites, social networks and mobile apps.

Comments (2)
Jim, you raise a good point that brings to mind the concept of stratified stability. The idea comes from Jacob Bronowski, the polymath and author of The Ascent of Man, which was also an enjoyable PBS series many years ago.

His premise was that "the stable units that compose one level or stratum are the raw material for...higher configurations which will chance to be stable," leading to even higher levels of configuration. Basically, it is like "a ladder from simple to complex by steps, each of which is stable in itself."

His thinking applies if we assume the common, if not mistaken, notion that data linearly progresses to more complex wisdom: Data -> Information -> Knowledge -> Wisdom. That is, wisdom derives from knowledge, which derives from stable information, and so on back to stable data.

But I believe some a priori wisdom is needed to have knowledge, which in turn is needed for information and then for data. The vector does not, like time, point in only one direction.

What that means is that the data does not have to be highly stable for us to have stable information and, subsequently, stable knowledge and wisdom. We can have an "acceptable" level of crummy, i.e., unstable, data but still have relatively stable knowledge and wisdom.

We can even have completely bad data, because at some future point we may gain the wisdom to question that prior data and determine that it was wrong to start with! Unlike time, I think data, information, knowledge and wisdom let us go forwards and backwards, starting at any point to do so.

Posted by Peter P | Wednesday, July 10 2013 at 2:51PM ET
A recommendation engine's primary purpose is maintaining communications with customers and increasing the lifetime value of a customer. It's an advertising technique. Its appeal is that it appears to the customer like a personal recommendation. Whether the recommendations are accurate or appropriate is beside the point.

In spite of the hype, these recommendation engines are simply looking for correlations: random connections between inanimate objects (people are inanimate within the context of a database). They know nothing of the people, their behaviors, personal tastes, likes and dislikes. They are looking at purchasing patterns. This is not knowledge; these are simply correlations.
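To make that concrete, here is a minimal sketch, with made-up purchase histories and item names, of the kind of correlation such an engine computes: count which items co-occur in baskets and recommend whatever was most often bought alongside a given item. Nothing about the people themselves enters the calculation.

    from collections import defaultdict
    from itertools import combinations

    # Hypothetical purchase histories: all the engine "sees" is which items
    # appeared together in a basket -- nothing about the people themselves.
    baskets = [
        {"novel", "reading lamp"},
        {"novel", "bookmark", "reading lamp"},
        {"bookmark", "novel"},
        {"headphones", "novel"},
    ]

    # Count how often each pair of items co-occurs across baskets.
    co_occurrence = defaultdict(int)
    for basket in baskets:
        for a, b in combinations(sorted(basket), 2):
            co_occurrence[(a, b)] += 1
            co_occurrence[(b, a)] += 1

    def recommend(item, top_n=2):
        """Recommend the items most often bought alongside `item`."""
        scores = {b: n for (a, b), n in co_occurrence.items() if a == item}
        return sorted(scores, key=scores.get, reverse=True)[:top_n]

    print(recommend("novel"))  # e.g. ['reading lamp', 'bookmark'] (ties may reorder)

That is the whole "intelligence": a co-occurrence count dressed up as a personal recommendation.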

I shudder when I see terms like sentiment analysis, as if these engines can derive our feelings, biases, experiences and other random human behaviors. Some on Wall Street have been trying to predict investment opportunities based on sentiment analysis, mining text for certain words. I suggest their time would be better spent understanding the psychology of investing; A Random Walk Down Wall Street is a good start.
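The "mining text for certain words" can be as crude as the sketch below, which uses hypothetical word lists and an invented headline: it counts positive minus negative words and has no grasp of sarcasm, context, or intent.

    # A deliberately crude word-list sentiment score, in the spirit of
    # "mining text for certain words." The word lists and the headline
    # below are invented for illustration only.
    POSITIVE = {"gain", "growth", "beat", "strong", "rally"}
    NEGATIVE = {"loss", "miss", "weak", "decline", "selloff"}

    def sentiment_score(text: str) -> int:
        """Count positive words minus negative words; context is ignored."""
        words = text.lower().split()
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    print(sentiment_score("Strong earnings beat expectations, shares rally"))  # 3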

When do recommendations become an annoyance to the customer? That is the primary quality measure, and it seems to be lacking in recommendation engines. When are there too many recommendations? This is the same problem with any data mining or correlation-based analysis: when do you have too many correlations?

I suspect we are reaching some "tipping point" with Amazon, Netflix, Facebook, etc. in the use of recommendation engines. Too many "likes" and purchase correlations could result in a backlash from customers, something like the backlash against annoying telephone marketing. Perhaps we will arrive at a point of requiring a "do not recommend" list.

I suggest that those providing recommendations consider the volume of recommendations as the primary quality measure. Too many recommendations may drive people away. An example of this is too many LinkedIn endorsements.
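As a rough illustration of treating volume as a quality gate (the daily threshold below is an assumed number, not an established practice), a recommender could simply stop surfacing suggestions once a customer's budget for the day is spent:

    from collections import Counter

    MAX_RECS_PER_DAY = 3  # assumed threshold; the "right" number is the open question
    shown_today = Counter()  # customer_id -> recommendations already shown today

    def maybe_recommend(customer_id, candidates):
        """Return at most the remaining daily budget of recommendations."""
        remaining = MAX_RECS_PER_DAY - shown_today[customer_id]
        if remaining <= 0:
            return []  # stay silent rather than become an annoyance
        picked = candidates[:remaining]
        shown_today[customer_id] += len(picked)
        return picked

    print(maybe_recommend("c42", ["item A", "item B", "item C", "item D"]))  # first 3
    print(maybe_recommend("c42", ["item E"]))  # daily budget spent -> []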

Recommendation engines are dumb and blind. They have no human sensory capability, and the data used to construct recommendations is limited. But it gives these companies something to advertise: they "know" you; they "know" what you want. Now that I think about it, the NSA knows you too! But remember, it's only a series of correlations. It's really not that smart, and it certainly isn't knowledge or wisdom.

Posted by Richard O | Monday, July 22 2013 at 1:28PM ET