Do you know if you can recover your critical data quickly in the event of loss due to human error, theft or natural disaster – or in response to a regulatory mandate or e-discovery request?

Whether it’s as small as an accidentally overwritten document or as massive as a flood or earthquake taking out your data center, data loss is inevitable. And whatever the cause, the key to recovering lost data successfully is a consistent, reliable and comprehensive backup process.

Despite these realities, many small-to-midsized businesses (those with up to 1,000 employees) do not have adequate data backup/recovery programs in place. According to a 2012 Symantec Disaster Preparedness Survey, 74 percent of SMBs have no disaster recovery plan in place today, and only 21 percent are “extremely confident” they can restore data from the backups they have.

This article discusses an approach for testing your data recovery capabilities, and offers guidance on how to implement a straightforward data protection policy that will enable your business to recover quickly when data loss occurs.

Why Data Loss Occurs

Many businesses recognize too late that there are gaps in their data protection strategies and that lost data cannot be recovered. Here are some of the most common (and often avoidable) reasons why data loss and downtime occur:

The data was never backed up to begin with. Perhaps the single largest reason that data is irrecoverably lost is that it was never backed up. According to the 2012 Symantec survey, “When Good Backups Go Bad: Data Recovery Failures and What to Do About Them,” half of SMBs back up less than 60 percent of their data, leaving the rest vulnerable to loss at any time. Sixteen percent of SMB organizations never back up PCs and laptops. Many more fail to cover remote offices or mobile employees, while others choose not to regularly back up email or other critical applications and databases.

The data recovery process fails. Unless you test it regularly, you may not know whether your data recovery process will actually succeed. All too often, businesses are unaware that their backups are failing or incomplete, perhaps for months, until the time comes to attempt a restore operation under pressure. The “Acronis Global Disaster Recovery Index: 2012” from Acronis and Ponemon Institute revealed that 23 percent of SMBs lack a remote backup strategy, leaving their entire corporate data store vulnerable to disaster. Similarly, 42 percent of SMBs have no automated backup program and instead still rely on failure-prone manual backup technologies, such as tape or disk.
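The cheapest insurance against silently failing backups is an automated check that the most recent backup actually exists and is fresh enough. The sketch below is a minimal illustration in Python, assuming (hypothetically) that backup jobs write archive files into a single directory; run it from a scheduler so a stale backup raises an alert instead of going unnoticed for months.

    #!/usr/bin/env python3
    """Alert when the newest backup file is older than the schedule allows.

    Assumes backups land as files in BACKUP_DIR (a hypothetical layout);
    adjust the path and age threshold to match your environment.
    """
    import sys
    import time
    from pathlib import Path

    BACKUP_DIR = Path("/var/backups/nightly")  # assumed backup output location
    MAX_AGE_HOURS = 26                         # nightly job plus two hours of slack

    def newest_backup_age_hours(directory: Path) -> float:
        if not directory.is_dir():
            return float("inf")                # directory missing: treat as no backup
        files = [p for p in directory.iterdir() if p.is_file()]
        if not files:
            return float("inf")                # no backups at all
        newest = max(p.stat().st_mtime for p in files)
        return (time.time() - newest) / 3600

    if __name__ == "__main__":
        age = newest_backup_age_hours(BACKUP_DIR)
        if age > MAX_AGE_HOURS:
            print(f"WARNING: newest backup is {age:.1f} hours old (limit {MAX_AGE_HOURS}h)")
            sys.exit(1)                        # nonzero exit lets the scheduler alert
        print(f"OK: newest backup is {age:.1f} hours old")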

Backup media are damaged or corrupted. Unreliable backup processes and unreliable media lead to many data recovery failures. Tape and disk both have a limited “shelf life” and can fail if improperly stored or dropped; they are also vulnerable to misplacement, loss and theft. Recovery from tape backups is notoriously failure-prone, with various sources estimating failure rates from 10 percent to over 50 percent.
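Media corruption, unlike a missed backup, can lurk undetected until a restore is attempted. One common countermeasure, sketched below purely as an illustration, is to record a checksum when each archive is written and re-verify it on a schedule; this assumes a hypothetical convention of a sidecar .sha256 file stored next to each archive.

    #!/usr/bin/env python3
    """Re-verify backup archives against sidecar .sha256 files to detect corruption."""
    import hashlib
    from pathlib import Path

    BACKUP_DIR = Path("/var/backups/nightly")  # assumed archive location

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):  # stream in 1 MB chunks
                h.update(chunk)
        return h.hexdigest()

    for archive in BACKUP_DIR.glob("*.tar.gz"):
        sidecar = archive.parent / (archive.name + ".sha256")
        if not sidecar.exists():
            print(f"MISSING CHECKSUM: {archive.name}")
            continue
        expected = sidecar.read_text().split()[0]  # "<hex>  <filename>" format
        status = "OK" if sha256_of(archive) == expected else "CORRUPT"
        print(f"{status}: {archive.name}")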

Backup process complexities lead to problems. A common cause of data recovery problems is failure to keep pace with changes in the IT infrastructure (e.g., leaving backup scripts pointing to devices whose addresses have changed). System-level restores are doomed to failure if the server’s system state data is not also backed up and restorable. In general, complex backup systems that rely on manual methods, legacy technology and/or components from multiple vendors are much more likely to fail than an up-to-date system from a single, reputable source.
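A lightweight defense against this kind of configuration drift is a pre-flight check that confirms every configured backup destination still resolves and accepts connections before the backup job runs. The Python sketch below illustrates the idea; the hostnames and ports are placeholders, not a prescription.

    #!/usr/bin/env python3
    """Pre-flight check: confirm each configured backup target still resolves
    and accepts connections before the backup job runs."""
    import socket
    import sys

    # Hypothetical backup destinations: (hostname, port) pairs from your backup config.
    TARGETS = [
        ("backup-nas.example.local", 22),   # e.g., rsync over SSH
        ("backup02.example.local", 445),    # e.g., an SMB share
    ]

    def reachable(host: str, port: int, timeout: float = 5.0) -> bool:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:                     # covers DNS failure, refusal, timeout
            return False

    if __name__ == "__main__":
        failed = [(h, p) for h, p in TARGETS if not reachable(h, p)]
        for host, port in failed:
            print(f"UNREACHABLE: {host}:{port} -- check for address or config drift")
        sys.exit(1 if failed else 0)        # abort the backup run on any failure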

Natural and man-made disasters occur. While natural disasters like floods and earthquakes are perhaps the most dramatic and potentially devastating cause of data loss and business system downtime, they account for only 10 percent of downtime; human error, at 60 percent, causes more downtime than any other factor, according to the Acronis and Ponemon Index. The vast majority of organizations worldwide (86 percent) experience one or more instances of system downtime annually, lasting 2.2 days on average.

SMBs Fall Short on Data Protection

Due to the growing number of internal and external risks to data, combined with stronger regulatory enforcement, data security remains the top priority for IT organizations year after year. Yet despite this awareness, a major cause of data recovery failures is simply the failure to test whether backup and recovery processes work at all.

According to research from Storage Strategies NOW, 7.9 percent of SMBs test their backup procedures less than once a year, 4.5 percent do so only once annually, and 10.1 percent test only “after a problem” (see Figure 1).


Figure 1: How often do SMBs test their backup procedures?

Confidence in data recovery capabilities is also low among SMBs. Almost 25 percent are only “somewhat confident” (the lowest confidence level in the survey) in their ability to restore lost data. Shockingly, 20.2 percent have no disaster recovery plan, and 25 percent of those who do never test it.

How quickly could your company restore the mission-critical applications and data necessary to run the business after a major loss caused by a disaster? Twenty-nine percent of surveyed organizations report that it would take them a day or more (see Figure 2).


Figure 2: How quickly can SMBs restore the “mission critical” applications and the data necessary to run the business after a major loss caused by a disaster?

Improving the Odds

Almost 25 percent of SMBs worldwide are leveraging cloud backup services to reduce the business risk of data loss. A big reason is faster recovery time. Organizations that had moved at least part of their data storage to the cloud recovered from downtime events almost four times faster (2.1 hours versus 8 hours on average) than those with no cloud storage program. Many SMBs that adopt cloud backup services continue to protect data locally as well, because recovering data from local storage is often faster, especially when the volume of data that needs to be recovered is large. However, offsite backup remains key to a best-practice data protection strategy for that subset of applications, files and databases that demand extra protection against deletion or corruption.
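To make the hybrid approach concrete, here is a minimal sketch of a job that keeps a local copy for fast restores and also pushes the archive offsite. It assumes, purely for illustration, an AWS S3 bucket and the boto3 library; the bucket name, archive and paths are hypothetical.

    #!/usr/bin/env python3
    """Hybrid backup sketch: a local copy for fast restores plus an offsite
    copy in cloud storage. Bucket and paths are hypothetical; requires boto3."""
    import shutil
    from pathlib import Path

    import boto3

    ARCHIVE = Path("/var/backups/nightly/exchange-backup.tar.gz")  # assumed archive
    LOCAL_COPY_DIR = Path("/mnt/nas/backups")   # fast local restore target
    BUCKET = "example-offsite-backups"          # hypothetical S3 bucket

    # 1. Local copy: large restores pull from here first, which is usually faster.
    LOCAL_COPY_DIR.mkdir(parents=True, exist_ok=True)
    shutil.copy2(ARCHIVE, LOCAL_COPY_DIR / ARCHIVE.name)

    # 2. Offsite copy: protects against site-wide loss, deletion or corruption.
    s3 = boto3.client("s3")
    s3.upload_file(str(ARCHIVE), BUCKET, f"nightly/{ARCHIVE.name}")

    print(f"Backed up {ARCHIVE.name} locally and to s3://{BUCKET}/nightly/")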

Testing Your Data Recovery Capabilities

To know your data is secure and identify potential data protection problems before they impact business activities, your company must regularly test its data recovery capabilities. Testing is at the core of a data protection policy that encompasses comprehensive backup and timely, reliable recovery.

Try recovering a critical application and its data (e.g., Microsoft Exchange) or an entire server. Were you successful? And, if so, how long did the process take?

Testing should be performed at least quarterly, especially for the most important types of data. Each business should also define recovery time objectives for each class of critical data and then ensure, through testing, that they can be met.
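A scripted restore drill turns that testing discipline into something measurable: run the restore, verify it actually produced data, and compare the elapsed time against the recovery time objective defined for that class of data. The sketch below is illustrative only; the restore command, paths and RTO values are placeholders for your own tooling and policy.

    #!/usr/bin/env python3
    """Timed restore drill: run a restore command, verify it produced data, and
    compare elapsed time to the recovery time objective (RTO) for that data class.
    The restore command, paths and RTO values below are placeholders."""
    import subprocess
    import time
    from pathlib import Path

    # Hypothetical RTOs per data class, in minutes; set these from your own policy.
    RTO_MINUTES = {"email": 60, "file-shares": 240}

    def restore_drill(data_class: str, command: list, check_path: Path) -> bool:
        start = time.monotonic()
        subprocess.run(command, check=True)        # fail loudly if the restore errors
        elapsed_min = (time.monotonic() - start) / 60
        ok = any(check_path.iterdir()) and elapsed_min <= RTO_MINUTES[data_class]
        print(f"{data_class}: restored in {elapsed_min:.1f} min "
              f"(RTO {RTO_MINUTES[data_class]} min) -> {'PASS' if ok else 'FAIL'}")
        return ok

    if __name__ == "__main__":
        # Placeholder restore: extract last night's mail archive to a scratch area.
        target = Path("/tmp/restore-test")
        target.mkdir(exist_ok=True)
        restore_drill(
            "email",
            ["tar", "-xzf", "/var/backups/nightly/mail.tar.gz", "-C", str(target)],
            target,
        )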

Data loss is inevitable, but it doesn’t have to derail your business. To minimize downtime and other impacts from data loss, your business needs a data protection strategy you know is reliable and meets your recovery objectives. Regular testing is the only way to ensure your critical systems and data will be there when you need them. As an SMB, make sure that you identify and address gaps in your data protection policy.
