Drilling Into OLAP Benefits

March 1, 2004

You probably see information technology (IT) surveys reported in the press every week, but how often do you think about their origins? The big question to ask is: Who is behind the survey? In particular, was it sponsored by a vendor, as most are?

It is difficult and expensive to undertake an in-depth survey of a large sample without vendor sponsorship. For that reason, you must always consider the sample size and whether the questions were designed to tell a particular story. You must also consider how the results were interpreted: were inconvenient results suppressed, for example?

The OLAP Surveys are different. They are run entirely independently, with no sponsorship. Vendors may subsequently selectively quote some of the findings, but they have no idea about or influence over who is surveyed, what questions are asked and how the results are analyzed. This means that to conduct an OLAP survey, we need to find a significant, well-distributed sample of OLAP users, without access to vendors' customer lists. That's where you come in. DM Review's readers were, once again, the largest single source of people to be surveyed in The OLAP Survey 3. If you were one of the 578 DM Review OLAP users whose results were analyzed, thank you!

A record total of 2,897 people from 63 countries provided data for The OLAP Survey 3, compared to 2,236 in 2002. Of these, approximately 40 percent had not yet considered OLAP, and approximately 10 percent were considering but had not yet purchased OLAP technology. Of the remainder, data from some respondents was unusable for a variety of reasons, leaving data from 1,116 user sites, which, between them, were using more than 50 products. Ten products had enough users to be analyzed in depth.

Not all questions were answered by every respondent, but most of the 54 (in some cases multipart) questions were answered by more than 1,000 sites, which means that the results have solid statistical validity. The key point is that this survey includes data on a wide range of products; therefore, it's a good way of comparing products. Individual vendors' surveys of their own customer bases may sometimes include larger samples of users of a single product, but they lack the essential data to compare one product with another.

What Benefits?

As in the previous survey, we asked respondents about the extent to which they had achieved eight separate potential business benefits. The possible levels varied from "Proven and Quantified" to "Got Worse/More Expensive." We used these responses to calculate weighted scores, plotted in Figure 1.

Figure 1: Weighted Benefit Scores

As can be seen, two "soft" benefits, improved reporting and better business decisions, were much more likely to be achieved than "hard" benefits such as reducing costs and head count. Perhaps this is inevitable, but I can't help wondering if more of the hard benefits would be achieved if organizations set explicit goals for them when undertaking business intelligence (BI) projects and ensured that these goals were monitored.
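The survey does not publish its exact weighting formula, but the underlying idea is straightforward: map each achievement level to a numeric weight and average across respondents. Here is a minimal sketch in Python; the intermediate level names and all the weight values are my own assumptions, not the survey's.

```python
from statistics import mean

# The survey's published endpoints are "Proven and Quantified" and
# "Got Worse/More Expensive"; the middle levels and all weights below
# are assumptions for illustration only.
LEVEL_WEIGHTS = {
    "Proven and Quantified": 1.0,
    "Probably achieved": 0.5,
    "No change": 0.0,
    "Got Worse/More Expensive": -1.0,
}

def weighted_benefit_score(responses):
    """Mean weight of one benefit's responses across all sites."""
    return mean(LEVEL_WEIGHTS[r] for r in responses)

# Toy responses for two of the eight potential benefits.
survey = {
    "Improved reporting": [
        "Proven and Quantified", "Probably achieved", "Probably achieved",
    ],
    "Reduced head count": [
        "No change", "Got Worse/More Expensive", "Probably achieved",
    ],
}

for benefit, responses in survey.items():
    print(f"{benefit}: {weighted_benefit_score(responses):+.2f}")
```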

These weighted benefit scores provide a cunning way to calibrate nearly every other factor in projects, including the products used, amounts spent, data volumes, platforms, lead implementers, Web deployment rates and even the criteria used in product selection. The rationale, of course, is that the best measure of project success is the level of business benefits achieved (as opposed to particular technical achievements). If you know which factors correlate positively with benefit achievement, you can go out of your way to favor them in your own projects. Being feature-independent, this method allows the overall success rates of very different products to be compared.
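As an illustration of that calibration step, the sketch below groups sites by one factor (product, in this invented example) and ranks the factor's values by the mean benefit score of their sites; the same grouping works for spend bands, platforms or lead implementers. The product names and scores are hypothetical.

```python
from collections import defaultdict
from statistics import mean

# Toy records: (factor value, site's overall weighted benefit score).
# Both the product names and the scores are invented.
sites = [
    ("Product A", 0.72), ("Product A", 0.65),
    ("Product B", 0.31), ("Product B", 0.28),
    ("Product C", 0.55), ("Product C", 0.49),
]

by_factor = defaultdict(list)
for product, score in sites:
    by_factor[product].append(score)

# Rank the factor's values by the mean score of their sites.
for product, scores in sorted(
        by_factor.items(), key=lambda kv: mean(kv[1]), reverse=True):
    print(f"{product}: mean benefit score {mean(scores):.2f}")
```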

What Affects Business Benefit Achievement Levels?

There are so many factors that affect project success that it's quite daunting at first even to speculate about which has the greatest effect. However, The OLAP Survey 3 provides some answers.

Perhaps surprisingly, by far the most significant factor seems to be the choice of product. The users of some products report markedly more achievement of business benefits than the users of others. The product with the highest level of business achievement was the former Brio Intelligence (now Hyperion Intelligence), while the product with the lowest levels of reported business success by far was SAP BW. Remarkably, the same two products occupied the top and bottom slots the previous year, though the gap widened significantly in the latest survey, with the best products doing better than ever and the worst slipping further behind.

In The OLAP Survey 3, we also asked about client tools used with Microsoft Analysis Services and Essbase, the two leading OLAP servers. The Analysis Services sample was much larger than the Essbase sample, and a wider range of client tools was in use; therefore, more analysis was possible. For Analysis Services sites, the front-end tools with the highest benefit ratings were the group of full-function, third-party Excel add-ins, which scored higher than individual proprietary client tools. Sites using only Microsoft's own client tools enjoyed less success than those using most third-party tools, though Crystal Analysis users had the lowest rating of all. Among Essbase sites, Crystal also came out at the bottom, while Brio (not yet owned by Hyperion) came out on top.

Another area where there was a clear difference was the time taken for initial rollout to users: projects with fast rollouts have much higher business success rates than those that take longer.

Reassuringly, those who spent a meaty $250,000 to $500,000 on license fees enjoyed much more success than those who spent less than $5,000. However, the curve of business success versus license fees wasn't clear-cut: those spending more than $500,000 actually reported lower levels of business benefit. The correlation between spending on external implementation fees and business success was weaker still.

Though the differences were smaller than in the past, for the third year running, implementations led by specialist BI consultants were the most successful, while those led by large, general-purpose consulting firms were the least successful. The latter took longer, cost more and hit more technical, data and people-related problems than other projects.

The least significant factor seems to be the choice of platform. Differences were slight, but UNIX sites had marginally lower levels of business success than others.

What's the Best Way to Choose Products?

We provided a list of 17 possible criteria that organizations might use for choosing products, and respondents could select as many as three that they had used. If you are a typical buyer, you would probably use criteria ranked high in this top-ten list (based on what respondents said had been key factors in their product selection):

1. Functionality
2. Ease of use for end users
3. Integrates with other products already in use
4. Price
5. Performance
6. Corporate standard
7. Ease of use for application builders
8. Bundled with another product
9. Ability to support large numbers of concurrent users
10. Large data handling capacity

However, is this the best set of criteria to use? The OLAP Survey 3 also ranked the selection criteria weighted by levels of business benefit achievement, and the resulting results-driven top-ten list is rather different (see the sketch after the list):

1. Performance
2. Completed "proof of concept" faster
3. Concurrent user scalability
4. Ease of use for application builders
5. Ease of use for end users
6. Web architecture
7. Functionality
8. Server platform support
9. Large data handling capacity
10. Vendor relationship/reputation
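A minimal sketch of how such a results-driven ranking could be produced: for each criterion, average the weighted benefit scores of the sites that selected it, then sort. All data here is invented, and the survey's actual computation may differ in detail.

```python
from statistics import mean

# Toy data: each site's chosen criteria (up to three) and its score.
sites = [
    ({"Functionality", "Price"}, 0.40),
    ({"Performance", "Functionality"}, 0.75),
    ({"Performance", "Concurrent user scalability"}, 0.80),
    ({"Price", "Bundled with another product"}, 0.25),
]

criteria = {c for chosen, _ in sites for c in chosen}

# For each criterion, average the scores of the sites that chose it.
ranking = sorted(
    ((mean(score for chosen, score in sites if c in chosen), c)
     for c in criteria),
    reverse=True,
)

for score, criterion in ranking:
    print(f"{criterion}: {score:.2f}")
```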

In other words, you should pay more attention to performance, an effective proof of concept and concurrent user scalability, and less to functionality. In fact, query performance in general is very important: it was the most frequently cited product-related serious problem, occurring almost twice as often as the next most common complaint, unreliable software.

Performance at the Speed of Thought?

Having had the importance of performance confirmed again, which products did well and which disappointed?

Oracle Express sites were the least likely to complain of poor query performance, with only 7.7 percent of sites mentioning the problem; the newer Oracle9i OLAP option had too few active sites to measure. Microsoft Analysis Services and Hyperion Essbase sites also had relatively few performance complaints, with 10.5 percent and 11 percent, respectively. At the other extreme, 28.1 percent of Brio and 27.3 percent of SAP BW sites complained of poor query performance.

Many people think that query times should be below five seconds, but only two products had median query times that beat this target: Applix TM1 (4.2 seconds) and Microsoft Analysis Services (4.7 seconds). The worst was Brio, with a median of 33 seconds. The overall median was 8.8 seconds, a half-second degradation from the 8.3 seconds recorded the previous year.

The Perils of Forecasting

The OLAP Surveys concentrate on asking respondents what their organizations currently do or have done in the past. They deliberately ask very few questions about future plans because these surveys are designed to measure what is actually happening, not what people think might happen. However, one of the few areas in which The OLAP Surveys do ask about the future concerns Web and extranet deployments. One such question is, "What percentage of the users are expected to access (product name) via a Web browser within 12 months?" Figure 2 shows the percentage of respondents who expected at least half of their users to be accessing OLAP via the Web within 12 months, compared to the actual numbers measured at that time.

Figure 2: Actual and Expected Web Deployment Rates

As is shown, 67 percent of the 2001 sample expected to be at least 50-percent Web-deployed by 2002, but the actual rate in 2002 was just 37 percent. The gap narrowed the following year: 59 percent of the 2002 sample expected at least 50-percent Web deployment within 12 months, whereas the actual rate in 2003 (41 percent) was only four percentage points higher than in 2002. The 2003 sample was slightly less bullish for 2004 than their predecessors had been for 2002 and 2003; even so, experience suggests that the true rate in 2004 is likely to be in the mid-forties, not the high-fifties.

Exactly the same trend was found with extranet deployments. In 2002, 12.6 percent of sites surveyed were deploying OLAP to outsiders over an extranet; however, they forecast a dramatic increase to 31.1 percent within a year. There was indeed an increase in 2003, but only to 14.8 percent. Therefore, a forecast growth of 18.5 percentage points turned into an actual growth of just 2.2 points.

Again, 29.7 percent of the 2003 sample expected to be deploying OLAP over an extranet by mid-2004; however, if the previous trend continues, the real number is likely to be between 15 and 20 percent. The sites with the highest extranet deployments in 2003 were using MicroStrategy (25.7 percent), while SAP BW sites had the fewest (just 6.4 percent).

Why are the forecasts so over-optimistic? I'm sure respondents were honestly answering with the best information they had at the time, but perhaps many of the Web projects subsequently hit technical or economic snags. There may also be a positive feedback loop, with users responding to media and vendor hype about Web deployment, and their optimistic plans then feeding back into more hype. Certainly, the survey confirms that Web and extranet usage is much lower than many people think, and growing a lot more slowly.

The OLAP Survey 3 showed many positive trends. The total sample showed a much higher rate of OLAP usage, rising goal achievement rates, increasing data volumes and projects that generally seem to be going better. Even data quality seems to be improving.

However, one striking observation is just how many of the vendors that were independent when the survey was conducted have now been acquired: Brio, Cartesis, Comshare, Crystal Decisions and MIS. Several others changed hands in the year preceding the survey. This may be part of an inevitable trend, but it's not necessarily good news for users. In general, users of products that have previously changed hands report lower levels of success and inferior product support, and products from some of the larger vendors score lower than those from smaller, independent vendors.
