I just came back from an exciting week in Orlando, FL, shuttling between the SAP SAPPHIRE and IBM Cognos Forum conferences. Thank you, my friends at SAP and IBM, for putting the two conferences right next to each other (in both time and location) and saving me an extra trip! Both conferences showed new and exciting products, and both vendors are making great progress toward my vision of next-generation BI: automated, pervasive, unified and limitless.

I track about 20 different trends under these four categories, but there's one in particular that is especially catching my attention these days. It went largely under the covers at both conferences, and I was struggling with how to verbalize it until my good friend and peer, Mark Albala of info-sight-partners.com, put it in excellent terms for me in an email earlier today: it's all about pre-discovery vs. post-discovery of data.

We can debate endlessly the pros and cons of traditional row-oriented RDBMSs vs. newer DBMS architectures specifically designed for BI and OLAP (columnar Sybase IQ, Vertica and ParAccel; inverted-index Microsoft FAST Search, Endeca and Attivio; tokenized illuminate; and in-memory analytical DBMSs TIBCO Spotfire and QlikTech), and I had lots of fun doing just that on the DM Radio show last Thursday! One thing, however, remains undisputed: traditional RDBMS and OLAP architectures require pre-discovery of data, a.k.a. data integration and data modeling. No matter how much flexibility and richness we think we built into our relational or multidimensional data models, they are still only as good as our initial design. If we did not anticipate the types of questions that would be asked of our application in the future, no fixed relational or multidimensional data model will be able to help us. But the world moves way too fast on us.
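To make the pre-discovery limitation concrete, here is a minimal sketch (the table and columns are hypothetical, and SQLite stands in for any traditional RDBMS): a fact table modeled up front answers only the questions its designers anticipated, and a question about an attribute nobody modeled requires a schema change and a fresh ETL pass before it can even be asked.

```python
import sqlite3

# A fact table whose dimensions were fixed at design time (hypothetical schema).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE sales (region TEXT, product TEXT, amount REAL)")
con.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [("EMEA", "widget", 100.0), ("APAC", "gadget", 50.0)],
)

# Questions anticipated at design time work fine:
print(con.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"
).fetchall())

# But a new question -- "revenue by sales rep" -- fails outright: the rep
# attribute was never modeled, so answering it means remodeling and reloading.
try:
    con.execute("SELECT rep, SUM(amount) FROM sales GROUP BY rep")
except sqlite3.OperationalError as e:
    print(e)  # no such column: rep
```

The point is not SQL itself but the sequencing: with pre-discovery architectures, the set of answerable questions is frozen at modeling time.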
For example, the methodology behind economic capital calculation in the financial services industry, under Basel II requirements, may change on a weekly, sometimes even a daily, basis due to regulatory and competitive pressures. No traditional data models and BI tools can keep up with such furiously changing requirements. As a result, one of our recent surveys found that more than half of the respondents did not have most of the information they were looking for in their BI applications, and close to two-thirds relied on IT for new BI requests.

What's the answer? There are many, but one partial answer is post-discovery, rather than pre-discovery, of data. For example, an inverted-index DBMS from Attivio, a tokenized data store from illuminate, or the in-memory models from TIBCO Spotfire and QlikTech just need you to index or load data "as is," without really requiring any modeling up front. And because all of these technologies can indeed cross-reference every attribute with every other attribute (it's an index!), a virtual data model is created on the fly simply by virtue of asking a question. Gone are the days of having to analyze your requirements, document your requests and work with IT to make it happen - a process that often takes weeks or months.

Sounds good? It does, but this is obviously not a BI panacea. Yet. I do not think we will see a mass conversion to these analytical engines that allow for post-discovery of information and data models in the short term. Why?
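The post-discovery idea can be sketched in a few lines of Python (the records and attributes here are hypothetical, and this is only a toy analogue of what products like Attivio or illuminate do at scale): load rows "as is," invert every attribute/value pair into an index, and let the question itself define the model.

```python
from collections import defaultdict

# Records loaded "as is" -- no schema or dimensional model designed up front.
records = [
    {"customer": "Acme", "region": "EMEA", "product": "widget", "rep": "Kim"},
    {"customer": "Globex", "region": "APAC", "product": "gadget", "rep": "Kim"},
    {"customer": "Initech", "region": "EMEA", "product": "gadget", "rep": "Lee"},
]

# Inverted index: every (attribute, value) pair points at the records that
# contain it, so every attribute is cross-referenced with every other one.
index = defaultdict(set)
for i, rec in enumerate(records):
    for attr, val in rec.items():
        index[(attr, val)].add(i)

def query(**criteria):
    """Intersect posting lists; the 'data model' emerges from the question."""
    hits = None
    for attr, val in criteria.items():
        postings = index[(attr, val)]
        hits = postings if hits is None else hits & postings
    return [records[i] for i in sorted(hits or [])]

# An ad hoc cross-attribute question nobody modeled for in advance:
print(query(region="EMEA", rep="Kim"))  # only the Acme record matches
```

Nothing about the question was decided before load time, which is exactly the contrast with the fixed relational and multidimensional models above.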
How's all this related to SAP and IBM, you ask? Simple.