
Sometimes, Less is More

  • May 1, 2004

When Quantity Does Not Equate to Quality

CIO magazine recently interviewed Maurice Schweitzer, a professor specializing in behavioral research at the Wharton School of the University of Pennsylvania. It was a fascinating piece, debunking the assumption that the more time and effort we put into a project, the better the outcome will be. In the interview, Dr. Schweitzer uses the example of a beer commercial in which the brewer proudly touts the beer's slow brewing process. Schweitzer states, "I'm not drinking a beer because of how long it was in a vat. I drink it because of how it tastes."1

We have long associated the input of quantity (length of time to perform a task, make a product, etc.) with the output of quality (how well it performs or is perceived). Dr. Schweitzer calls this "input bias" - we are hardwired to automatically associate input quantity with output quality.

We know from experience that input bias is not necessarily a bad rule of thumb. However, others can prey on this bias and cause you to make bad decisions. IT analysts, vendors and business users can exploit these natural biases to influence IT decisions. For a business intelligence (BI) project, this could be the kiss of death. There are many times when quantity does not relate to quality in these types of projects.

For example, IT analysts can get very caught up in gathering a "complete" set of requirements for an application, spending countless hours picking through every possible question the business community might ask. The 80/20 rule is certainly in play here. We must quickly get a functioning application into the hands of the business community, knowing that it will evolve over time. The key, however, is to produce a prioritized set of requirements - the ones that have the most value to the organization. You may never deliver anything if you insist on polishing the requirements to perfect the last 5 or 10 percent.

The same may be said of choosing the perfect BI access tool. Quantity may not improve quality. Many companies spend significant time and effort searching for the one tool that will be perfect for every business analyst. However, how do they judge the worth of the tool? By the vendor's R&D dollars spent? By the lines of code in the product? Or by whether the business community likes it and can create its reports easily and quickly? Business users may ask that your new application reproduce every report they had in the past (quantity), without regard to what data is actually being used. They fail to consider whether the reports actually generate value for the corporation (quality). In fairness, they may be afraid of losing something they value or may need in the future.

To minimize the effects of input bias, judge the quality of your BI environment on whether it is effective - not just on how long it took to develop or how much it cost. This judgment must rely on things that can actually be measured. This means that you (and your business counterparts) must take the time to be deliberate and careful in determining the real ROI and the quality metrics you will use, so that the evaluation is unbiased. Think in terms of having to justify your decisions. This emphasis will cause you to rely more on measurable outputs than on the more biased input cues. Here are some examples:

  • Calculate the effect of the new BI application on the company's bottom line. If you are developing an application to determine customer churn, you must tie the analysis to the actions taken to retain customers likely to churn. How many were retained, how many were lost and what were their revenues? These are hard dollars that can be attributed directly to the analytic application (a back-of-the-envelope sketch follows this list).
  • Measure the speed at which a query is resolved, an algorithm solves a problem or a calculation is performed. A quality environment gives business users fast response times for known queries and reports, and reasonable response times for more ad hoc queries. Problems with inefficient code, poorly designed databases and ineffective or inappropriate access tools cannot be overcome by more time and effort (a simple timing sketch also follows this list).
  • Determine the satisfaction of the ultimate users of the environment. Satisfaction surveys are a proven way to understand and evaluate software products, and you can enlist new users to gauge the ease of use and effectiveness of each BI application.
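
As a back-of-the-envelope illustration of the first bullet, the retained-versus-lost revenue arithmetic might look like the following sketch (written in Python; every figure, name and attribution assumption here is hypothetical, not a prescribed method):

    # Hypothetical hard-dollar calculation for a churn-analysis BI application.
    # All figures and names are illustrative assumptions, not real data.

    def churn_roi(retained_revenue, lost_revenue, application_cost):
        """Rough net benefit attributed to the churn application.

        retained_revenue: annual revenue of at-risk customers retained after
                          a retention action driven by the application.
        lost_revenue:     annual revenue of at-risk customers who churned anyway.
        application_cost: cost to build and run the analytic application.
        """
        net_benefit = retained_revenue - application_cost
        roi_pct = 100.0 * net_benefit / application_cost
        revenue_saved_share = retained_revenue / (retained_revenue + lost_revenue)
        return net_benefit, roi_pct, revenue_saved_share

    if __name__ == "__main__":
        # Assumed numbers, purely for illustration.
        net, roi, share = churn_roi(retained_revenue=750_000,
                                    lost_revenue=250_000,
                                    application_cost=300_000)
        print(f"Net benefit: ${net:,.0f}  ROI: {roi:.0f}%  "
              f"At-risk revenue retained: {share:.0%}")

The point of the sketch is only that each term is something you can actually count after the retention campaign runs - output measures, not input measures.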
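
The second bullet is just as measurable. Here is a minimal timing sketch, assuming a Python DB-API 2.0 connection (the driver, connection details, query and table names are placeholders):

    import statistics
    import time

    def median_query_seconds(cursor, sql, runs=5):
        """Run a known query several times and return the median wall-clock time."""
        timings = []
        for _ in range(runs):
            start = time.perf_counter()
            cursor.execute(sql)
            cursor.fetchall()  # force the full result set back to the client
            timings.append(time.perf_counter() - start)
        return statistics.median(timings)

    # Usage sketch (connection and query are hypothetical):
    # conn = some_dbapi_driver.connect(...)
    # secs = median_query_seconds(conn.cursor(),
    #                             "SELECT region, SUM(sales) FROM fact_sales GROUP BY region")
    # print(f"Median response time: {secs:.2f}s")

Tracked over time for a handful of known reports, numbers like these make response-time complaints and improvements equally concrete.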

You may want to consider having a panel of business users review the various BI applications. The panel members won't be afraid to objectively assess the value of each application, and the IT folks can then use the panel's conclusions as justification. There is nothing more compelling than a good report card from a satisfied customer. The analogy is the grade you get on a test, not how long you studied for it.

The bottom line is to focus on the merits of the various BI applications. How effective was each BI application in performing its promised actions? How well do the metrics demonstrate this effectiveness? What is its effect on the bottom line - your revenues?

1. "Bias Beware," CIO, March 1, 2004.
