In my September column, I discussed the technical and business challenges that corporations are experiencing. These challenges include:
- Rapidly changing business requirements
- Poorly integrated and inflexible current systems
- Unfulfilled business user needs
- Poor data quality
- Data redundancy
This final installment on meta data return on investment (ROI) examines the solutions that a meta data repository provides to these challenges. While the challenges are indeed daunting, corporate executives are discovering answers to them in the form of meta data. The functionality a meta data repository provides to address them includes:
Impact Analysis. A meta data repository significantly reduces development costs and time frames by allowing the IT (information technology) development team to run technical impact analysis reports across all corporate systems stored in the meta data repository. These reports significantly aid the design analysts as they examine the impact of proposed changes to the DSS (decision support system) environment. For example, suppose a company is reorganizing its geographic regions. Currently, Mexico and Canada sales are grouped with U.S. sales, and management decides to separate these sales numbers. The meta data repository would allow the development team to generate an impact analysis report showing all systems and programs that use geographic regions. This minimizes the cost of the system enhancement and helps reduce the likelihood of introducing new development errors.
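The kind of query described above can be sketched as follows. This is a minimal, hypothetical illustration: the repository structure, system names, and program names are invented for the example and do not reflect any particular repository product's schema.

```python
# Hypothetical meta data repository: each data element maps to the
# (system, program) pairs that use it. Names here are illustrative only.
REPOSITORY = {
    "geographic_region": [
        ("SalesReporting", "RGNSUM01"),
        ("OrderEntry", "OE_LOAD"),
        ("ExecDashboard", "REGION_ROLLUP"),
    ],
    "customer_id": [
        ("OrderEntry", "OE_LOAD"),
    ],
}

def impact_analysis(element):
    """Return every (system, program) pair affected by changing `element`."""
    return sorted(set(REPOSITORY.get(element, [])))

# A proposed change to geographic regions touches these programs:
for system, program in impact_analysis("geographic_region"):
    print(f"{system}: {program}")
```

In practice the repository would be populated by scanning source code, DDL, and ETL definitions rather than hand-entered, but the query pattern is the same: ask which systems touch a data element before changing it.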
Anyone who doubts the value of this analysis need look no further than the Y2K situation. What is Y2K? In the early years of software development, programmers stored the year portion of a date in a field that could only hold a two-digit number (e.g., 90) instead of a field that could hold a four-digit number (e.g., 1990). As a result, many information technology systems in businesses today will not function properly with dates beyond December 31, 1999. If we stop and think about it, Y2K is merely a date field designed to hold a two-digit year instead of a four-digit year; that is, it is just a change in a field's length. Considered in those terms, the problem does not appear highly significant. Yet Federal Reserve Chairman Alan Greenspan estimates that several hundred billion dollars will be spent trying to fix the glitch before the turn of the century. It would have been interesting to see the ramifications for companies with a meta data repository that allowed them to run impact analysis reports. It certainly would have drastically reduced the impact of the Y2K system change by making the affected systems far easier to locate and far more flexible to change.
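The defect itself is easy to demonstrate. The sketch below (purely illustrative; no real system's code) shows why a two-digit year breaks date comparisons once the century rolls over:

```python
# Y2K in miniature: with only two digits stored for the year,
# a date in 2000 compares as *earlier* than a date in 1999.
yy_1999 = 99   # stored two-digit year for 1999
yy_2000 = 0    # stored two-digit year for 2000 ("00")

print(yy_2000 > yy_1999)   # False -- 2000 wrongly sorts before 1999

# Widening the field to four digits restores correct ordering:
yyyy_1999 = 1999
yyyy_2000 = 2000
print(yyyy_2000 > yyyy_1999)   # True
```

The fix is trivial for any one field; the several-hundred-billion-dollar cost came from finding every such field across every system, which is exactly the question an impact analysis report answers.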
Transforming Data into Knowledge. The reason we exist as IT professionals is to meet the informational needs of our business users. As discussed in the September column, our current systems are falling well short of meeting these needs. A recent survey asked CEOs, "Do you feel your IT systems meet the needs of your business?" Eighty-four percent said they did not. This number is staggering and sends a clear message to all CIOs and IT directors. We, as IT professionals, must do a significantly better job of fulfilling the needs of our business executives, because they are demanding that we pay greater attention to the value IT brings to the business.
One of the reasons this is occurring is that instead of designing systems that speak to our business users in the business terms with which they are familiar, we have built systems that communicate in IT terms. Meta data addresses this situation by providing a semantic layer between IT systems and business users. In simple terms, meta data translates technical terminology into business terms that are familiar to the users. For example, when a business user is looking at profitability numbers, it is very valuable for them to understand the formula used to calculate the numbers and which systems supplied the data for that formula.
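A semantic layer can be sketched as a simple lookup from a technical column name to its business name, calculation formula, and source systems. The column name, formula, and system names below are hypothetical examples, not a real schema:

```python
# Hypothetical semantic layer: meta data translating a technical column
# name into business terms. All names here are illustrative.
SEMANTIC_LAYER = {
    "PRFT_MTD_AMT": {
        "business_name": "Month-to-Date Profit",
        "formula": "gross revenue - cost of goods sold - operating expenses",
        "sources": ["General Ledger", "Order Entry"],
    },
}

def describe(column):
    """Render a technical column in terms a business user understands."""
    meta = SEMANTIC_LAYER[column]
    return (f"{meta['business_name']} = {meta['formula']} "
            f"(sourced from: {', '.join(meta['sources'])})")

print(describe("PRFT_MTD_AMT"))
```

The value is in the lookup itself: a user staring at a profitability figure can trace, without calling IT, both how the number was computed and where the inputs came from.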
Improving Data Quality. Many legacy systems contain redundant or inaccurate data. This lack of data quality in the operational systems has caused more than one decision support effort to fail. As a result, this operational data must be cleaned before it is loaded into the data warehouse to ensure its usability. Meta data is critical for monitoring and improving the quality of the data coming from the operational systems. Meta data tracks the number of errors that occurred during each data warehouse load run and can report when certain error thresholds are reached. In addition, the DSS data quality metrics should be stored over the history of the DSS system. This allows corporations to monitor their data quality over time.
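The load-run tracking described above can be sketched in a few lines. This is an assumed, minimal design (the threshold value, field names, and alert mechanism are all illustrative), not a description of any specific repository product:

```python
# Hypothetical load-run quality tracking: record error counts per data
# warehouse load and flag runs that breach an error threshold.
ERROR_THRESHOLD = 100   # illustrative threshold, set per warehouse

load_history = []   # one entry per load run, kept for the life of the DSS

def record_load(run_date, rows_loaded, errors):
    """Store this run's quality metrics and alert on threshold breaches."""
    load_history.append(
        {"date": run_date, "rows": rows_loaded, "errors": errors}
    )
    if errors >= ERROR_THRESHOLD:
        print(f"ALERT {run_date}: {errors} errors exceeds threshold")

record_load("1999-03-01", 50_000, 12)
record_load("1999-03-02", 52_000, 240)   # this run triggers the alert

# Because history is retained, quality can be monitored over time:
print([run["errors"] for run in load_history])
```

Retaining the full history, rather than only the latest run, is what enables the trend monitoring the column recommends: a corporation can see whether cleansing efforts are actually driving the error counts down.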