In today’s constantly changing IT world, companies are struggling with the increasing amount of data being stored on their networks. As storage costs continue to drop and IT administrators are consumed with more pressing issues, they are becoming more lenient about what they allow their users to store on the corporate network. The networks of small and medium-sized businesses are full of personal user data, including their music, video and photo collections.

Moreover, duplicate data is increasingly prevalent. Even as network storage grows tremendously, users are becoming ever more dependent on having access to their information anywhere. With Internet access virtually ever-present today, users want to be able to reach their data wherever they are and from a variety of devices, including smartphones, tablets and laptops.

As hardware and software evolve, organizations have become less and less tolerant of downtime. Virtualization is continuing to take a larger share of the market, offering several effective solutions to minimize downtime. Various virtualization products on the market offer high-availability and fault tolerance options. With such solutions in place, entire physical servers can fail without end users even noticing. New versions of Microsoft servers, such as Exchange 2010, have introduced easy-to-use and easy-to-deploy high availability. These are ideal solutions for companies, but they set an expectation of 100 percent availability. They also require special attention when planning your data backup and recovery procedure. Backing up and restoring a virtual machine requires specific knowledge and procedures, so read up on all the information available to ensure you are following the vendor's best practices.

Creating a formal document is crucial to any IT department’s data backup and recovery strategy. Interview your team members and company executives to determine the most critical data within your organization. If your office were to disappear tomorrow, what are the first things employees would need to access? Is it your email, database or a particular folder on your network? Whatever it is, there needs to be a plan in place to get the most business critical systems up and running as fast as possible (recovery time objective) with as little data loss as possible (recovery point objective).

Consider your recovery time objective. Many CEOs believe they need everything to be up and running immediately in the event of a disaster, but with limited budgets and out-of-control data, that is not feasible. Determining your RTO is not just a technical conversation; it involves all aspects of your organization. What type of solution can you afford? How much downtime is actually acceptable? Once you determine how long your company can go without access to business-critical applications and how long it can go without the rest, you can start to see what your data backup and recovery solution is going to look like. The lower the RTO, the more costly your solution is going to be. In today's disaster recovery-oriented world, there are a lot of options and solutions out there; every business should be able to find something that fits its requirements.

Another important consideration is how much data you can afford to lose in the event of a disaster. Your recovery point objective, or RPO, is vital to know when creating a data backup and recovery strategy. For example, if you back up your Exchange server at 10:00 p.m. on Tuesday night and it fails at 5:00 p.m. on Wednesday, you are going to lose 19 hours' worth of email. Is that acceptable? It depends on your organization and its needs. If it isn't acceptable, you need to make sure that you are backing it up more frequently or considering alternative solutions such as replication or clustering in order to ensure that you can meet your RPO in the event of a disaster.
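The data-loss window in the Exchange example above is simple clock arithmetic; a minimal sketch, using illustrative dates and a hypothetical four-hour RPO target, shows how to check it:

```python
from datetime import datetime

def data_loss_hours(last_backup: datetime, failure: datetime) -> float:
    """Hours of data at risk between the last backup and a failure."""
    return (failure - last_backup).total_seconds() / 3600

# The scenario from the text: backup Tuesday 10:00 p.m., failure
# Wednesday 5:00 p.m. (dates are illustrative).
last_backup = datetime(2024, 1, 2, 22, 0)
failure = datetime(2024, 1, 3, 17, 0)

loss = data_loss_hours(last_backup, failure)
rpo_hours = 4  # hypothetical RPO target for this data set
print(f"Potential data loss: {loss:.0f} hours")  # → 19 hours
print("RPO met" if loss <= rpo_hours else "RPO violated")
```

Running this confirms a nightly backup leaves 19 hours of email exposed, far beyond a four-hour RPO, which is exactly the gap that more frequent backups or replication would close.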

Your data backup and recovery plan needs to clearly state all aspects of the data backup and recovery process. What technologies are you using to protect the various components of your environment? How often do your backups run for specific data sets? Do certain applications need to be real time replicated in order to meet your RTO and RPO?
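The questions above can be answered in a simple per-dataset matrix in your written plan, and that matrix can be sanity-checked programmatically. This is a sketch only; the data set names, methods and numbers are hypothetical:

```python
# Hypothetical protection matrix from a written backup plan:
# what method protects each data set, how often it runs, and the stated RPO.
PLAN = {
    "Exchange mailboxes": {"method": "replication",  "backup_interval_h": 1,  "rpo_h": 1},
    "SQL databases":      {"method": "nightly full", "backup_interval_h": 24, "rpo_h": 24},
    "File shares":        {"method": "cloud backup", "backup_interval_h": 12, "rpo_h": 4},
}

def rpo_gaps(plan: dict) -> list[str]:
    """Data sets whose backup interval is too long to meet the stated RPO."""
    return [name for name, p in plan.items()
            if p["backup_interval_h"] > p["rpo_h"]]

print(rpo_gaps(PLAN))  # → ['File shares']: a 12-hour backup can't meet a 4-hour RPO
```

A mismatch like the file-share row is the signal that a data set needs a faster schedule or a real-time technique such as replication.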

One common pitfall among organizations is that new data is added or a folder is moved, and it is then no longer being protected. If you are using a tape-based backup solution or if you are backing up to the cloud, it is likely you selected specific folders when setting up your backup jobs. What happens when a new folder or server is added? It is very important to frequently review your backup selections to ensure all your critical data is being protected.
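That review can be partly automated. A minimal sketch, assuming a hypothetical file-server path and a hand-maintained list of the folders your backup jobs actually select, flags anything new that no job covers:

```python
from pathlib import Path

# Hypothetical backup selection, mirroring what is configured in your backup jobs.
PROTECTED = {"Finance", "HR", "Shared"}

def unprotected_folders(root: Path, protected: set[str]) -> list[str]:
    """Top-level folders under `root` that appear in no backup selection."""
    return sorted(p.name for p in root.iterdir()
                  if p.is_dir() and p.name not in protected)

# Example usage (path is illustrative):
# for name in unprotected_folders(Path(r"\\fileserver\shares"), PROTECTED):
#     print(f"WARNING: {name} is not covered by any backup job")
```

Scheduling a check like this alongside your backup jobs turns the periodic review into a routine alert rather than something discovered after a loss.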

Having a data backup plan is one important piece of the puzzle, but even more important is your plan to recover that data. If you work for an SMB, it is likely that you are not replicating, in real time, your entire server environment to a secondary data center that can fail over seamlessly at the push of a button. Therefore, it is critical to have a plan in place to recover that data and make it available to your end users. So how are your users going to access that data in the event of a disaster? If your office is destroyed, where are the employees going to go, and how are you going to ensure that they can access your servers with minimal technical skills? How are you going to communicate with them if your phones are down? This part of the plan is going to be determined by a variety of influences and factors.

A lot of SMBs have their data backed up offsite but have nowhere to recover if they were to lose their office. In this scenario, hardware would have to be ordered, operating systems and applications installed, and then, finally, their data put back on top of it. This can be a very lengthy and painful process. One option that has emerged in the last few years is virtual disaster recovery.

The most important part of any data backup and recovery plan is testing. Running a disaster recovery drill allows you and your team to familiarize yourself with the process and work out any kinks involved. What if one of your applications or servers doesn’t fail over properly? It is much better to find this out and work out a solution during a calm, scheduled test than in the frenzy of an actual disaster.

Tests should be performed at least semiannually, but of course the more tests you can run, the better prepared you will be. Make sure that every step of the process is documented, including the bugs you encounter and the resolutions you discover. This document needs to be available somewhere outside your network so that if your network goes down, you can still find this information. Once your environment is failed over, have one of your users walk through the process of connecting to your recovery environment remotely and reviewing the data. This will show you what the process is like for someone not as technical as you. This will also help ensure that your data is accurate and up to date, as they are likely more familiar with the data than the IT department is.

Having a written data backup and recovery plan is great (and better than what most companies have), but you need to know how to execute the plan. You have to make sure people in the IT department are constantly monitoring your methods of data protection. If one person is responsible for this and he or she is out of the office, there must be someone to step up and ensure that the system continues to function seamlessly. If you have a disaster and lose data because one of your team members was out sick and no one else was monitoring the solution, your boss is not going to be very sympathetic. Constantly review your backup selections, monitor the progress of the jobs and be sure to test frequently.
