The first part of this series showed why the traditional approach of comparing software products against each other is less useful than evaluating how they complement existing company systems. It also proposed a new measure for the value of analytical systems: the number of user groups a question must pass through before reaching someone who is able to provide an answer.

The example from last month presented five types of analytical questions, each requiring a different level of technical skill and resources to answer. The base case showed the percentage of questions that could be answered by four types of users with existing systems; the new case showed the percentages with a new system added. The change in score totals showed that the new system lets managers and business analysts answer more questions while shifting work away from statisticians and IT. Figure 1 illustrates the same changes even more clearly.
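To make the mechanics concrete, here is a minimal sketch of that comparison in Python. All of the percentages are hypothetical stand-ins for the figures an evaluation team would actually collect; only the shape of the calculation matters.

```python
# Hypothetical percentages of questions each user type can answer,
# before ("base") and after ("new") adding the proposed system.
base = {"manager": 20, "analyst": 35, "statistician": 30, "it": 15}
new  = {"manager": 35, "analyst": 45, "statistician": 15, "it": 5}

# A positive shift means that group now answers more questions itself;
# a negative shift means work has moved away from that group.
for user in base:
    print(f"{user:>12}: {new[user] - base[user]:+d}")
```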

This approach gives a good picture of how a new system will affect the organization. If several new systems are being considered, comparing the charts for each would be enlightening, but charts alone won't tell you which one to buy. Picking a single system requires combining the workload figures into a single number that can be used to rank the alternatives. To do this, the detailed figures must be weighted on two dimensions: question type and user type. Question types are ultimately the same as user requirements, and because weighting based on user requirements is part of any evaluation methodology, this poses no new challenge. (The example simply added the five requirement scores, which implicitly weights them equally.)
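A small sketch shows how the two dimensions combine into one rankable number. The grid of percentages and the user weights here are illustrative assumptions (the 4/3/2/1 user weights anticipate the scheme discussed below); the equal question weights reproduce the simple sum used in the example.

```python
# Hypothetical grid: for each user type, the percentage of each of the
# five question types that user can answer with a given product in place.
pct = {
    "manager":      [60, 40, 20, 10, 5],
    "analyst":      [30, 40, 50, 30, 20],
    "statistician": [5, 10, 20, 40, 50],
    "it":           [5, 10, 10, 20, 25],
}

# Equal question weights reproduce the simple sum from the example;
# unequal weights would emphasize the most important requirements.
question_weights = [1, 1, 1, 1, 1]

# Illustrative user-type weights: answers from users "closer" to the
# manager count for more.
user_weights = {"manager": 4, "analyst": 3, "statistician": 2, "it": 1}

def composite_score(pct, question_weights, user_weights):
    """Collapse the user-by-question grid into one rankable number."""
    return sum(
        user_weights[user] * sum(w * p for w, p in zip(question_weights, row))
        for user, row in pct.items()
    )

print(composite_score(pct, question_weights, user_weights))
```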

Weights for user types are something else entirely. They reflect the relative value of having different users answer the same question. Our original premise is that answers from users “closer” to the manager are worth more because they arrive sooner. The closer answers are probably cheaper, too. So we know the value weight is highest for answers from managers and lowest for answers from IT.

It is possible to calculate precise ratios between these weights based on factors such as turnaround time, labor cost, accuracy and opportunity cost, but in most cases an intuitive estimate will suffice. In Figure 2, weights of 4, 3, 2 and 1 have been applied to the original product, New, and to an alternative business intelligence (BI) product, New 2.

Examining the user-level scores, we see that New 2 gives managers slightly more capability than New (800 versus 760), but business analysts gain much less (420 versus 690). The combined value for New is higher than that for New 2 (1,580 versus 1,510), making New the better choice.
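The arithmetic is easy to verify. With the 4/3/2/1 weights, the published scores imply unweighted totals of 190 and 230 for New's managers and analysts (4 × 190 = 760, 3 × 230 = 690), and 200 and 140 for New 2. The statistician and IT figures below are hypothetical fill-ins chosen only to be consistent with the published combined totals, not values taken from Figure 2.

```python
# User-type weights from Figure 2.
weights = {"manager": 4, "analyst": 3, "statistician": 2, "it": 1}

# Unweighted totals implied by the published weighted scores; the
# statistician and IT entries are hypothetical fill-ins consistent
# with the combined values of 1,580 and 1,510.
new_raw  = {"manager": 190, "analyst": 230, "statistician": 50,  "it": 30}
new2_raw = {"manager": 200, "analyst": 140, "statistician": 120, "it": 50}

def weighted_total(raw):
    return sum(weights[user] * raw[user] for user in weights)

print(weighted_total(new_raw))   # 1580 -> New
print(weighted_total(new2_raw))  # 1510 -> New 2
```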

Where does this leave us? In a better place, I think, than the traditional selection process. Instead of a horserace between product features, this approach puts the focus where it should be: on value to your business. It recognizes that the value of a new tool depends on the other tools already available, and it forces evaluation teams to explicitly study the impact of different tools on different users. By creating a clearer picture of how each new tool will impact the way work actually gets done within the company, it leads to more realistic product assessments and ultimately to more productive selection choices.
