Drowning in Data


As IT becomes ever more prevalent in nearly every aspect of our lives, the amount of data generated and stored continues to grow at an astounding rate. According to IBM, worldwide data volumes are currently doubling every two years. IDC estimates that 45GB of data currently exists for each person on the planet: that’s a mind-blowing 281 billion gigabytes in total. While a mere 5 percent of that data will end up on enterprise data servers, it is forecast to grow at a staggering 60 percent per year, resulting in 14 exabytes of corporate data by 2011.
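As a quick sanity check, the two IDC figures are consistent with each other, assuming the population of a little over six billion implied by the estimate:

    45 GB/person × ~6.25 billion people ≈ 281 billion GB, i.e. roughly 281 exabytes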

Industry Trends


A major trend over the last few years has seen many organizations implementing enterprise resource planning (ERP) and customer relationship management (CRM) solutions. This, in turn, has caused a dramatic increase in the amount of data we store about our customers, prospects, partners and suppliers.
Companies are also investing in ever more sophisticated business intelligence and analytics.  In an increasingly competitive marketplace, the ability to base business decisions on solid, reliable and timely management information is becoming a key differentiator, but trend analysis can require very large amounts of historical data to be stored and managed. 
The trend toward company consolidation is not a new one, but the current economic situation has inevitably resulted in a significant increase in the number of mergers and acquisitions. This is creating a huge increase in data volumes, with associated data duplication and application retirement issues. Organizations are faced with not only managing all of their own data, both historic and current, but also this influx of additional data from other parties.  Imagine the data headache of combining all of the ERP, CRM, BI and analytic systems from different organizations into one manageable enterprise system.

Legislation


Corporate compliance legislation has had a major effect on how we use, store and maintain our data. The requirements placed on organizations by HIPAA, Sarbanes-Oxley, Basel II and others mean that many companies are having to retain more data for longer periods. Just as importantly, that retained data rapidly transforms from a corporate asset into a liability once the legal minimum retention period has expired, making it essential that such data can be accurately identified and deleted. Organizations must adhere to this legislation in order to avoid the cost of court appearances, heavy fines and the resultant damage to the brand.
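To make the asset-to-liability point concrete, here is a minimal sketch of the kind of retention check an archiving or purge process might apply. The seven-year period, the function name and the dates are illustrative assumptions, not a reference to any specific regulation or product.

    from datetime import date, timedelta

    # Illustrative only: a seven-year period stands in for whatever HIPAA,
    # Sarbanes-Oxley, Basel II or local law actually mandates for this data.
    LEGAL_RETENTION = timedelta(days=7 * 365)

    def retention_status(record_date, today=None):
        """Classify a record as still retained (asset) or purge-eligible (liability)."""
        today = today or date.today()
        if today - record_date <= LEGAL_RETENTION:
            return "retain"  # still within the legal minimum retention period
        return "purge"       # past the minimum period: a liability to identify and delete

    # Example: a record dated 15 March 2001 checked against a mid-2009 run date
    print(retention_status(date(2001, 3, 15), today=date(2009, 6, 1)))  # -> purge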

Technical Trends


New capabilities within the databases used to store corporate information are another major driver of data growth.  For example, DB2 now supports XML and LOBs (“large objects” such as audio, video, images, etc).  The ability to store this kind of data alongside more traditional structured information can be very useful but can also have a huge impact on the overall size of the database. Other technical trends that are contributing to database growth include storage of data in Unicode format (which can often expand overall database size by 10 to 50 percent depending on the data), and duplication of databases due to replication requirements and/or backup strategies.
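To illustrate how this kind of content ends up inside the database itself, the sketch below shows DDL for a table that mixes traditional columns with XML and LOB columns. The table and column names are invented for the example, and exact type limits and clause syntax vary by DB2 version; the statement would be executed through whatever DB2 client or driver is in use.

    # Illustrative DDL only: names are invented and sizes are arbitrary.
    CREATE_CLAIMS = """
    CREATE TABLE insurance_claims (
        claim_id     INTEGER NOT NULL PRIMARY KEY,
        customer_id  INTEGER NOT NULL,
        filed_on     DATE,
        claim_form   XML,            -- native XML column
        claim_photo  BLOB(50M),      -- binary large object, e.g. a scanned image
        adjuster_log CLOB(2M)        -- character large object, free-form notes
    )
    """
    # Each XML or LOB value can dwarf the structured columns alongside it,
    # which is exactly why these features drive overall database growth.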
Finally, there’s the perennial problem of removing old or obsolete data once it has reached the end of its useful life. Application data archiving is often treated as an optional extra, and even when it is included in the initial project plan, it is often the first item to be postponed to a later release.

Effects of Rapid Data Growth


This unprecedented growth in data volumes is having a significant effect on many organizations. Perhaps the most obvious impact is on operational costs. More staff time is required for routine maintenance and data-related exception handling such as out-of-space conditions and repartitioning. As the database grows, so too does the CPU cost of running batch operations and routine housekeeping. Ongoing running costs also increase because of the additional disk space required, and storage and processing capacity upgrades may be needed even though they often haven’t been budgeted for.
Painful though they may be, increases in operational costs aren’t the end of the story. What price can you place on customer satisfaction? Performance for critical application processes can degrade as data volumes increase, resulting in missed service level objectives. Teams across the whole organization may be affected, with call center staff unable to access the information they need quickly enough to satisfy customer demand.  

Coping with the Data Explosion


Various coping strategies are available to address the issues associated with rapid data growth. Measures such as implementing database partitioning and data compression, or purchasing extra CPU and direct access storage device (DASD) capacity, can help. However, these measures have their own costs, and many issues still remain, including:

  • Disaster recovery times,
  • Legal risk of exceeding minimum data retention periods (data as a liability, not an asset),
  • DBA effort to manage/tune workloads and databases and
  • Cost of spending IT budget on maintaining current capacity, not innovating.

So, what are the alternatives? Implement a data archiving strategy!  
According to a recent Gartner report, database archiving significantly lowers primary storage costs by moving older data to less costly storage. The report goes on to say that archiving reduces the size of primary storage, resulting in improved application performance and lower storage requirements for copies of the database used for testing, backup and other purposes.
Also, you may think that archiving is only applicable to the largest of applications, but in the same report, Gartner states that performance and cost improvements can be sizeable, even with applications that have less than 200GB of data.
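As a rough sketch of the pattern Gartner describes (copy older data to a cheaper tier, then trim the primary database), consider the following. The table names, the three-year cut-off and the use of two local SQLite files to stand in for the primary and archive tiers are illustrative assumptions, not a description of any particular archiving product; it assumes an orders table already exists on the primary side and an orders_history table on the archive side.

    import sqlite3
    from datetime import date, timedelta

    # Illustrative three-year cut-off; real policies come from the retention rules.
    cutoff = (date.today() - timedelta(days=3 * 365)).isoformat()

    # Two databases stand in for the primary and less costly storage tiers.
    primary = sqlite3.connect("orders_primary.db")
    archive = sqlite3.connect("orders_archive.db")

    # 1. Copy rows older than the cut-off into the archive tier.
    old_rows = primary.execute(
        "SELECT order_id, customer_id, order_date, total FROM orders WHERE order_date < ?",
        (cutoff,),
    ).fetchall()
    archive.executemany("INSERT INTO orders_history VALUES (?, ?, ?, ?)", old_rows)
    archive.commit()

    # 2. Only after the copy is safely committed, delete the rows from primary storage.
    primary.execute("DELETE FROM orders WHERE order_date < ?", (cutoff,))
    primary.commit()

The copy is committed before anything is deleted, so a failure part-way through the job never loses data.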
So, it would appear that a data archiving strategy, delivering cost savings and improved application performance, is the best way for organizations to cope with growing data. However, once the need to archive has been agreed, many new questions arise:

  • Build versus buy?
  • Flexibility versus speed?
  • Software expenditure versus staff time costs?

These tough decisions need to be made before a data archiving strategy can be put into place. While the temptation to build in-house may be strong, is there really justification for doing so? Can staff be spared to work on this project? Although the up-front cost may be lower, what about the long-term cost? What about the need to implement the strategy across multiple platforms within the same organization? Can project staff be spared from each area of the organization to develop a bespoke solution for their operating platform?
The answer is potentially a bought-in solution that works across multiple platforms, bringing scalability to the enterprise without diverting precious staff time into separate, long-term development and test projects to build a bespoke solution for each platform.
So, it seems that there are ways to control data growth before it controls us.  By implementing a thorough archiving policy and an intelligent archiving system, we can manage data throughout its lifecycle.
