I was recently given the opportunity to take a preview look at BI Survey 9 – The Customer Verdict, produced by the Business Application Research Center (BARC). BI Survey 9 is the latest in an evolution of research studies that includes eight previous editions of BI and OLAP surveys. The BI Survey has built a reputation for excellence – "The Holy Grail of Business Intelligence" – that helps justify its $4,995 price tag. The 300-page document is distributed as a PDF file available for download from the bi-survey.com website.

There's much to digest in this very comprehensive work. In contrast to other BI surveys I've read recently, BI Survey 9 goes deeper, addressing more substantive issues surrounding BI deployments. And in today's saturated survey environment, a response base that includes a final cleansed list of 1,853 users, 317 consultants and 495 vendors from around the world is impressive indeed.

Even as I was impressed by the sample size, I was somewhat frustrated in my attempts to determine where the responses came from. The methodology section in Chapter 2 doesn't give enough information on how the survey underlying the report was conducted. In a follow-up with a knowledgeable BI Survey product manager, I was able to learn that the entirely online survey ran from October 2009 to February 2010. BARC promoted the research to its own database through email and the websites bi-survey.com, bi-verdict.com and barc.de.

In addition, BARC solicited the support of leading BI tool vendors, who in turn promoted the survey to their customer bases via their own websites and newsletters. Various thought-leading BI media outlets and communities also publicized the survey via editorial comments, email and newsletters. Finally, BARC asked several BI bloggers to encourage their readers to complete the survey. It would be nice if this information were available in the published report; readers could then interpret the findings in light of their own assessment of the data collection process.

The research obsesses over its vendor-neutral positioning, certainly a good thing. The study also seems especially attentive to the potential “bias” of vendors stacking the survey deck in their favor to skew findings, and it eliminated many suspicious responses in the data cleansing phase, which is also very impressive. The authors have convinced me the work addresses bias in the sense of prejudice.

That vigilance doesn't protect the study against survey bias in the statistical sense, however, since it is driven by voluntary, self-selecting web respondents. Indeed, though I'm hardly a xenophobe, I have to question the “representativeness” of a BI survey with twice as many European respondents as North American. Does this geographic imbalance unduly influence findings? To some extent, yes.

The authors acknowledge special efforts to widen the European response base. “The three largest BI markets are in the US, Germany and the UK, so The BI Survey 9 was produced as a collaboration between organizations in each of these countries, and in partnership with publishers and vendors in these and other countries. It features not just the well-known US products, but also regional European products.”

The results of those efforts show clearly in the products that make the review list. Among the 23 platforms that qualified for inclusion with 40 or more survey responses are the big-BI usual suspects from Microsoft, Oracle, SAP/BO, IBM/Cognos, Information Builders, SAS and MicroStrategy. Popular in-memory QlikTech and fast-growing open source Pentaho also made the cut this year. More surprising was the inclusion of many German-based products little seen in the U.S. – Bissantz, arcplan, Jedox Palo, Cubeware, MIK – along with Danish Targit and Swiss Board. These smaller product inclusions reflect the survey's success in attracting respondents in specific European markets. I'm not sure, however, how representative these product usage figures are of actual worldwide adoption. At a minimum, report readers should not extrapolate the survey's usage percentages to the broader population.

Aside from the minor, fixable transparency and representativeness challenges, there's a lot to like about BI Survey 9. The more than 2,000 valid responses dwarf what's generally seen in the industry today. And even if the sample is not random and over-represents small, niche vendors, there's comfort in the crowd wisdom of several thousand.

The Survey presents a wealth of information about respondents, allowing readers to assess the potential bias of the sample. Aside from the geographic breakdown, the organizational size categorizations of respondents are quite informative. Not surprisingly given the product focus, organizations of all sizes are represented in the study. Companies using big-BI tools such as Oracle, SAP/BO and IBM/Cognos have significantly higher revenues than those deploying products from smaller vendors; indeed, it might make sense to analyze the large and the small separately. A busy graphic cross-classifying respondent usage by industry vertical confirms OpenBI's experience that financial services goes for big-BI products and that Pentaho rates highly in IT services and software. The product-by-geography chart clearly shows the European dominance of the small vendors mentioned above.

Perhaps the most significant strength of BI Survey 9 is its depth of inquiry into pertinent BI concerns – both business and technical. I'm particularly intrigued by the business benefit index (BBI), a weighted indicator summarizing such BI positives as better business decisions, reduced headcount and increased revenue. The Survey tracks the BBI over time as well as by product, vendor and other variables such as breadth of deployment, existence of a competitive evaluation process, support quality and time to production. Tellingly, implementations that follow on the heels of a competitive evaluation and feature broad, quick and well-supported deployments are highly rated. I'd love to see this analysis done separately for large and small companies.
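To make the BBI concept concrete, here's a minimal sketch of how a weighted composite index along these lines might be computed. The benefit categories echo the report's description, but the weights, the 0-10 scoring scale and the function itself are purely illustrative assumptions; BARC's actual weighting scheme isn't spelled out in the review copy.

```python
# Illustrative sketch of a weighted composite index in the spirit of the BBI.
# The benefit categories echo the report; the weights and the 0-10 scores
# below are hypothetical assumptions, not BARC's published methodology.

BENEFIT_WEIGHTS = {
    "better_business_decisions": 0.4,  # assumed weight
    "reduced_headcount": 0.3,          # assumed weight
    "increased_revenue": 0.3,          # assumed weight
}

def business_benefit_index(scores):
    """Weighted average of per-benefit achievement scores on a 0-10 scale."""
    return sum(weight * scores.get(benefit, 0.0)
               for benefit, weight in BENEFIT_WEIGHTS.items())

# One hypothetical respondent's ratings
example = {"better_business_decisions": 8, "reduced_headcount": 4, "increased_revenue": 6}
print(round(business_benefit_index(example), 2))  # 6.2
```

An index built this way can then be averaged by product, vendor, deployment breadth or any other grouping variable, which is essentially how the report slices the BBI.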

A second KPI used by BI Survey 9 is the goal achievement index (GAI), whose scale ranges from “Exceeded goals” to “Not met goals at all”. A GAI-by-product graphic clearly shows that respondents using smaller vendors' products are more satisfied than those using big-BI platforms. Companies that conducted more significant, multi-vendor pre-deployment evaluations reported noticeably higher GAI scores than those that didn't. Might it be that companies using the smaller vendors' products are newer to BI and hence not yet jaundiced by over-expectation and under-delivery?

There are many other analyses and breakdowns available in BI Survey 9. Additional metrics analyzed include total cost of ownership (TCO); licensing fees; types of applications, from standard reporting to data mining and scorecards; and data size, ranging from 200 MB to more than 1 TB. Not surprisingly, each of these varies markedly by company size and hence product.

The report also presents an exhaustive delineation of problems, issues and deterrents to development and deployment with the individual products. Independent of the between-vendor comparisons, there's much wisdom to be found in the customers' assessments of business and technical risks to successful BI deployment. The authors eventually turn the interview table, probing the BI vendors for the sellers' insights. I must admit I was suffering from analysis fatigue by the time I got through this section – with a third of the report remaining! I don't think readers can complain about skimpy content.

For those organizations with adequate budget support committed to building a BI capability, I'd recommend consideration of BI Survey 9 – The Customer Verdict. The wisdom gleaned from companies that did things right, as well as those that went astray, might make the Survey an important pre-deployment investment.
