Do you know if you can recover your critical data quickly in the event of loss due to human error, theft or natural disaster – or in response to a regulatory mandate or e-discovery request?
Whether it’s as small as accidentally overwriting a document or as massive as a flood or earthquake impacting your data center, data loss is inevitable. By the same token, anytime data is lost, the key to recovering it successfully is a consistent, reliable and comprehensive backup process.
Despite these realities, many small-to-midsized businesses (those with up to 1,000 employees) do not have adequate data backup/recovery programs in place. According to the “2012 Disaster Preparedness Survey” by Symantec, 74 percent of SMBs have no disaster recovery plan in place today, and only 21 percent are “extremely confident” they can restore data from their backups.
It is therefore wise to test your data recovery capabilities on a regular basis. This article offers guidance on how to implement a straightforward, cost-effective data protection policy that will enable your business to recover quickly when data loss occurs.
Why Data Loss Occurs
Many businesses recognize too late that there are gaps in their data protection strategies and that lost data cannot be recovered. Some of the most common (and often avoidable) reasons why data loss and downtime occur include:
The data was never backed up to begin with: Perhaps the single most common reason that data is irrecoverably lost is that it was never backed up. According to the 2012 Symantec survey, “When Good Backups Go Bad: Data Recovery Failures and What to Do About Them,” half of SMBs back up less than 60 percent of their data, leaving the rest vulnerable to loss at any time. Sixteen percent of SMB organizations never back up PCs and laptops. Many more fail to cover remote offices or mobile employees, while others choose not to regularly back up email or other critical applications and databases.
The data recovery process fails: Unless you test it regularly, you may not know whether your data protection process would succeed. All too often, businesses are unaware that their backups are failing or incomplete, perhaps for months, until the time comes to attempt a restore operation under pressure. An Acronis and Ponemon Institute survey, “The Acronis Global Disaster Recovery Index: 2012,” found that 23 percent of SMBs lack a remote backup strategy, leaving their entire corporate data store vulnerable to disaster. Similarly, 42 percent of SMBs have no automated backup program and instead still rely on failure-prone, manual backup technologies, such as tape or disk.
Backup media are damaged or corrupted: Unreliable backup processes and unreliable media lead to many data recovery failures. Tape and disk both have limited “shelf life” and can fail if improperly stored or dropped, as well as being vulnerable to misplacement, loss or theft. Recovery from tape backups is notoriously failure-prone, with various sources estimating failure rates from 10 percent to over 50 percent.
Backup process complexities lead to problems: A common cause of data recovery problems is failure to keep pace with changes in the IT infrastructure (e.g., leaving backup scripts pointing to devices whose addresses have changed). System-level restores are doomed to failure if the server’s system state data is not also backed up and restorable. In general, complex backup systems that rely on manual methods, legacy technology and/or components from multiple vendors are much more likely to fail than an up-to-date system from a single, reputable source.
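One way to guard against the stale-script problem described above is to have the backup job validate its own source and destination, and then verify the resulting archive, instead of assuming every path is still correct. The sketch below illustrates the idea; the paths and archive name are hypothetical, not taken from any particular product.

```python
import tarfile
from pathlib import Path

def backup_with_check(src: str, dest: str) -> Path:
    """Archive src into dest, failing loudly if either path has moved.

    Illustrative sketch: the point is that the job validates its targets
    and verifies the archive, rather than silently writing to (or from)
    a stale device path after an infrastructure change.
    """
    src_dir, dest_dir = Path(src), Path(dest)
    if not src_dir.is_dir():
        raise FileNotFoundError(f"backup source missing: {src}")
    if not dest_dir.is_dir():
        raise FileNotFoundError(f"backup destination missing: {dest}")

    archive = dest_dir / "backup.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(src_dir, arcname=".")

    # Re-open and list the archive to confirm it is readable,
    # rather than assuming the write succeeded.
    with tarfile.open(archive, "r:gz") as tar:
        tar.getmembers()
    return archive
```

A job built this way fails immediately with a clear error when a device address changes, instead of running for months against a path that no longer holds the data.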
Natural and man-made disasters occur: While natural disasters (10 percent) like floods and earthquakes are perhaps the most dramatic and potentially devastating cause of data loss and business system downtime, human error (60 percent) causes more downtime than any other factor, according to the Acronis and Ponemon index. The vast majority of organizations worldwide (86 percent) experience one or more instances of system downtime annually, lasting on average 2.2 days.
SMBs fall short on data protection: Due to the growing number of internal and external risks to data, combined with stronger regulatory enforcement, data security continues to be the top priority for IT organizations year after year. However, despite awareness of the risks, a major cause of data recovery failures is not testing the viability of backup and recovery processes in the first place.
According to the latest research from Storage Strategies NOW, 7.9 percent of SMBs test their backup procedures less than once a year. Further, 4.5 percent do so only once annually, while 10.1 percent test only “after a problem” (see Figure 1).
Figure 1: How Often Do SMBs Test Their Backup Procedures?
Confidence in data recovery capabilities is also low among SMBs. Almost 25 percent are only “somewhat confident” (the lowest confidence level in the survey) in their ability to restore lost data. Shockingly, 20.2 percent have no disaster recovery plan, and 25 percent of those who do never test it.
How quickly can your company restore the mission-critical applications and data necessary to run the business after a major loss caused by a disaster? According to the survey, 29 percent of peer organizations report that it would take them one day or more (see Figure 2).
Figure 2: How Quickly can SMBs Restore Mission-Critical Applications and Data After a Major Loss Caused by Disaster?
Improving the Odds
Almost 25 percent of SMBs worldwide are leveraging cloud backup services as a second tier of data protection (by storing data offsite) – and that number is growing rapidly. A big reason is faster recovery time. Organizations that had moved at least part of their data storage to the cloud recovered from downtime events almost four times faster (2.1 hours versus 8 hours on average) than those with no cloud storage program. Many SMBs that adopt cloud backup services continue to protect data locally as well, because recovering data from local storage is often faster, especially when the volume of data that needs to be recovered is large. However, offsite backup remains key to a data protection strategy for the subset of applications, files and databases that demand extra protection against deletion or corruption.
Testing Your Data Recovery Capabilities
To know your data is secure and identify potential data protection problems before they impact business activities, your company must regularly test its data recovery capabilities. Testing is at the core of a data protection policy that encompasses comprehensive backup and timely, reliable recovery.
Try recovering a critical application and its data (e.g., Microsoft Exchange) or an entire server. Were you successful? And, if so, how long did the process take?
Testing should be performed at least quarterly, especially for the most important types of data. Each business should also define recovery time objectives for each class of critical data and then ensure, through testing, that they can be met.
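A recovery drill of the kind described above can be as simple as restoring a backup archive to a scratch location, confirming the data came back, and comparing the elapsed time against the recovery time objective for that data class. The sketch below assumes a gzip tar archive and an illustrative RTO value; your actual objectives and tooling will differ.

```python
import tarfile
import time
from pathlib import Path

def restore_drill(archive: str, scratch: str, rto_seconds: float) -> bool:
    """Restore an archive to a scratch directory and time it against an RTO.

    Illustrative sketch: the RTO value and paths are assumptions, and each
    class of critical data would have its own objective defined by the
    business. Returns True if the restore completed within the objective.
    """
    start = time.monotonic()
    dest = Path(scratch)
    dest.mkdir(parents=True, exist_ok=True)
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)
    elapsed = time.monotonic() - start
    print(f"restore took {elapsed:.1f}s (objective: {rto_seconds}s)")
    return elapsed <= rto_seconds
```

Running a drill like this quarterly, per data class, turns the recovery time objective from a number on paper into something the business has actually observed being met.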
Data loss is inevitable, but it doesn’t have to derail your business. To minimize downtime and other impacts from data loss, your business needs a data protection strategy you know is reliable and meets your recovery objectives. Regular testing is the only way to ensure your critical systems and data will be there when you need them. As an SMB, make sure you identify and address the gaps in your data protection policy before a loss exposes them.