FEB 5, 2013 7:27am ET


Why Most BI Programs Under-Deliver Value


Sourcing data from operational systems and integrating data from multiple disparate source systems is a formidable challenge, often laden with technical hurdles. Business users have struggled with it for years, and the natural reaction is to include as much source data in the BI project as possible. BI implementers have learned, however, that you cannot deliver enterprise BI in one large project: users are not willing to wait that long, and by the time the solution arrives it no longer meets their requirements. In response, enterprise BI projects are now divided into smaller iterations, each delivering a subset of user requirements in a shorter timeframe. While this initially solves the problem of long delivery cycles, most organizations discover that each subsequent iteration adds maintenance overhead. This increased overhead is caused by the factors below (a brief illustrative sketch follows the list):

  • Non-scalable data warehouse and application architectures.
  • Variation in DW data models, data integration mapping designs and semantic layers.
  • Metadata duplicated in multiple software repositories.
  • Disparate scheduling.
  • Non-reusable components.
  • Multiple security architectures.
  • “Plug and play” stovepipe solutions.
  • Non-existent data/systems governance.
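As a rough illustration of the "non-reusable components" and "metadata duplicated in multiple software repositories" points, the sketch below (hypothetical table and column names, not taken from the article) shows a mapping specification defined once and exported as plain metadata that scheduling, lineage and semantic-layer tools could all share, rather than re-keying the same rules in each iteration:

```python
from dataclasses import dataclass, field


@dataclass
class ColumnMapping:
    """One source-to-warehouse column rule, defined once and reused."""
    source_column: str
    target_column: str
    transform: str = "none"   # e.g. "trim", "upper", "to_date"
    description: str = ""     # business meaning documented in one place


@dataclass
class SourceMapping:
    """A reusable mapping spec shared by every iteration that loads this source."""
    source_system: str
    target_table: str
    columns: list[ColumnMapping] = field(default_factory=list)

    def to_metadata(self) -> dict:
        """Emit the spec as plain metadata so downstream tools read the same
        definition instead of re-creating it in separate repositories."""
        return {
            "source_system": self.source_system,
            "target_table": self.target_table,
            "columns": [vars(c) for c in self.columns],
        }


# Hypothetical example: the customer feed is mapped once; later iterations
# import this spec rather than re-implementing it.
customer_mapping = SourceMapping(
    source_system="crm_prod",
    target_table="dw.dim_customer",
    columns=[
        ColumnMapping("CUST_NM", "customer_name", "trim", "Legal customer name"),
        ColumnMapping("CRT_DT", "created_date", "to_date", "Date the account was opened"),
    ],
)

print(customer_mapping.to_metadata())
```

The point of the sketch is not the specific structure but the single definition: when the same rules live only in one shared specification, adding an iteration does not multiply the metadata that has to be maintained.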


Comments (4)
The article is interesting, but the following statement makes no sense: "Since most of the data emanates from a foreign source, the BI software doesn't inherently know the meaning of the data. The result is data warehouse data models and BI application semantic layers that are just as challenging to understand as accessing the data from the source systems." The whole point of mapping the sources and adding rules to integrate them is to make the data understandable and consistent. A bad semantic layer reflects a bad job, not an inherent problem of the DW solution.
Posted by Roberto M | Saturday, February 09 2013 at 10:31AM ET
Roberto, I think you are making assumptions the statement doesn't share. "...the BI software doesn't *inherently* know the meaning of the data." That is, some user has to understand the mapping layer, just as some user had to understand the sources to create the mapping.

Undocumented or poorly documented mapping rules are as difficult to understand as a foreign schema on first encounter.

Or are you saying a "good" semantic layer of necessity has documentation that enables understanding on the first encounter? If so, how common would you estimate "good" semantic layers to be?

Posted by Patrick D | Sunday, February 10 2013 at 1:30PM ET
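The exchange above turns on whether documentation travels with the mapping. As a minimal sketch of the kind of self-describing semantic-layer entry Patrick asks about (all table, column and role names here are hypothetical, used only for illustration):

```python
# A sketch of a "documented" semantic-layer entry: the business meaning and
# lineage travel with the mapping, so it can be understood on first encounter.
semantic_layer = {
    "net_revenue": {
        "expression": "SUM(fact_sales.gross_amount - fact_sales.discount_amount)",
        "description": "Revenue after discounts, before tax; excludes cancelled orders.",
        "lineage": ["erp_prod.SALES_HDR.GRS_AMT", "erp_prod.SALES_HDR.DISC_AMT"],
        "owner": "finance_data_steward",
    },
}


def describe(term: str) -> str:
    """Return a first-encounter explanation of a business term."""
    entry = semantic_layer[term]
    return f"{term}: {entry['description']} (sources: {', '.join(entry['lineage'])})"


print(describe("net_revenue"))
```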