To make a truly sound decision, buyers must know how software will affect their company. The software must be assessed in the context of the company’s other systems. A requirements list by itself can be misleading because some requirements may already be met by existing resources. But those requirements can’t simply be ignored: if a new product meets them better than the existing tools do, that improvement still has value. To place the new product in context, the evaluation team must first describe how the company functions today and then see what would change if the new product were added.

The mechanics of this approach are not so different from conventional assessments. It still starts with a requirements list, preferably built by analyzing company processes through scenarios. What’s different is the next step: preparing a set of scores for how well each requirement is met today. This is the “base case.” Then the team creates an additional set of scores for each alternative: the base case plus the new system, minus any existing systems that system would replace.

The practical impact is that a new system gets credit if it meets a requirement better than the existing systems but is not penalized if it doesn’t. Instead of comparing the new systems in isolation, you’re measuring the value they add to your business.
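To make the arithmetic concrete, here is a minimal Python sketch of that credit-without-penalty rule. The requirement names, importance weights, and 0-to-10 scores are all invented for illustration; only the scoring logic follows the approach described above.

```python
# A minimal sketch of base-case scoring. Requirement names, weights, and
# the 0-10 scores are hypothetical illustrations, not real evaluation data.
requirements = {
    # weight = importance; base = how well today's systems meet the
    # requirement; with_new = the score after adding the candidate product
    "ad_hoc_reporting":  {"weight": 3, "base": 7, "with_new": 9},
    "predictive_models": {"weight": 5, "base": 2, "with_new": 8},
    "data_export":       {"weight": 1, "base": 8, "with_new": 6},  # candidate is weaker here
}

def value_added(reqs):
    """Credit improvements over the base case, but never penalize
    shortfalls: the existing systems still cover those requirements."""
    return sum(
        r["weight"] * max(r["with_new"] - r["base"], 0)
        for r in reqs.values()
    )

print(value_added(requirements))  # 3*2 + 5*6 + 1*0 = 36
```

Note that the weaker data_export score contributes zero rather than subtracting value: the existing tools would keep doing that job.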

Converting “requirements met” to “business value added” is not easy. With analytical systems, the business value often comes from making better decisions, not from processing transactions more quickly or at lower cost. How do you measure the contribution that different analytical systems will make to better decisions?

One approach is to look at who does the actual work. The logic is this: in most analytical systems, the primary goal is to answer managers’ questions, so the best system is the one that gets managers their answers fastest. How quickly that happens depends on how far each question must travel from the manager before it reaches someone who can answer it.

Each step in this chain adds time. Therefore, the value added by an analytical system can be measured by how many answers it moves closer to the manager. For example, automated predictive modeling systems shift the ability to build models from statisticians to business analysts, speeding up the process and lowering the cost dramatically.
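One way to put a number on “answers moved closer” is to order the roles by distance from the manager and compute the average number of hand-offs per question, weighted by how often each question type occurs. The sketch below does this in Python with an assumed chain of roles and an invented question mix:

```python
# Roles ordered from nearest to farthest from the manager. The chain and
# the question mix below are assumptions for illustration only.
CHAIN = ["manager", "business analyst", "statistician", "IT"]

def avg_hand_offs(question_mix):
    """question_mix maps question type -> (share of all questions,
    nearest role able to answer it). Returns the weighted average
    number of hand-offs away from the manager."""
    return sum(share * CHAIN.index(role) for share, role in question_mix.values())

before = {"build_model": (0.2, "statistician"), "ad_hoc_query": (0.8, "business analyst")}
after  = {"build_model": (0.2, "business analyst"), "ad_hoc_query": (0.8, "business analyst")}

# Automated modeling moves model-building one step closer to the manager:
print(round(avg_hand_offs(before) - avg_hand_offs(after), 2))  # 0.2 fewer hand-offs per question
```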

Figure 1 provides a concrete illustration. It defines five levels of effort needed to answer a question: read an existing report; drill down in a business intelligence (BI) system; analyze data extracted from an existing BI or reporting system; analyze data that sits in the data warehouse but is not exposed to end users in a BI system; and add data not already in the warehouse. The numbers show the percentage of questions of each type that each user role can answer (each row adds to 100 percent). Column totals show the capabilities and workload of each group.

In Figure 1, “Base” shows the current situation and “New” shows capabilities after adding a new BI tool. As the “Change” row illustrates, the new tool slightly empowers managers and greatly increases the capabilities of business analysts. The workloads of statisticians and IT are reduced accordingly.
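Since the figure itself isn’t reproduced here, the sketch below rebuilds the Figure 1 arithmetic with invented percentages that merely follow the pattern just described: rows are the five effort levels, columns are the user roles, each row sums to 100, and the “Change” row is the difference between the “New” and “Base” column totals.

```python
# Stand-in for Figure 1 with invented percentages (each row sums to 100).
ROLES = ["manager", "analyst", "statistician", "IT"]
base = [
    [80, 20,  0,  0],  # read an existing report
    [40, 50, 10,  0],  # drill down in the BI system
    [10, 60, 20, 10],  # analyze extracted data
    [ 0, 20, 50, 30],  # analyze warehouse data not in the BI system
    [ 0,  0, 30, 70],  # add data not already in the warehouse
]
new = [
    [85, 15,  0,  0],
    [45, 50,  5,  0],
    [10, 80,  5,  5],
    [ 0, 55, 30, 15],
    [ 0, 20, 40, 40],
]

def totals(matrix):
    """Column totals: each role's share of the overall question workload."""
    return [sum(col) for col in zip(*matrix)]

change = [n - b for b, n in zip(totals(base), totals(new))]
for role, delta in zip(ROLES, change):
    print(f"{role:>12}: {delta:+d}")
# manager: +10, analyst: +70, statistician: -30, IT: -50 (illustrative only)
```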

So far, so good. But how do you find the best choice among several systems? I’ll answer that question next month.
