
Choosing a data disaster recovery strategy

Nick Mueller provides some advice for companies considering the best route to take for data disaster recovery.

Having a data disaster recovery strategy in place is undeniably critical to ensure business continuity in the event of unexpected disruptions. But implementing such a strategy can often be delayed for two reasons: one, it’s complicated to evaluate business operations to find critical data that needs to be made available immediately after a disaster, and two, many believe that disaster recovery is just too expensive, particularly for small and medium-size businesses.

Both of these issues create friction that slows the adoption of disaster recovery strategies and technologies, but being able to recover quickly from a data disaster is more important than ever: the average yearly cost of downtime for mid-sized businesses is $880,000, according to a report by the Aberdeen Group.

The three main options that a small or medium-sized company has when building a disaster recovery strategy are:

  • Physically moving tapes or drives offsite.
  • Replicating data between offices or to an offsite data center/centre.
  • DR-as-a-Service from the cloud.

This article will address the best situations for each solution, the disadvantages of each, and the always-crucial costs.

1. Physically moving tapes or drives offsite.

Tapes are holding their position as an affordable backup solution that can also be used for disaster recovery. So, if you are using tape for backup, here’s how to set up for recovery after a data disaster.

There are two main ways to use tapes for disaster recovery. The most common is to run a full overnight backup onto a separate tape each weeknight (Monday through Thursday) and take each day’s tape offsite. The second option, a full backup over the weekend plus daily incremental backups to capture updates, is generally more efficient: the granularity of recovery is the same at a lower cost, and less time is spent managing the tapes.
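The efficiency gap between the two schemes comes down to simple arithmetic. The sketch below compares weekly tape capacity consumed by each approach; the 2TB full-backup size matches the sample environment mentioned later in this article, while the 5 percent daily change rate is an illustrative assumption.

```python
# Rough weekly-capacity comparison of two tape rotation schemes.
# Figures are illustrative: a 2 TB full backup (per the article's sample
# environment) and an assumed ~5% of data changing each day.
FULL_GB = 2000          # size of one full backup, in GB
DAILY_CHANGE = 0.05     # assumed fraction of data changing per day

# Scheme 1: a full backup every night, Monday through Friday
daily_fulls = 5 * FULL_GB

# Scheme 2: one weekend full plus four weeknight incrementals
full_plus_incrementals = FULL_GB + 4 * (FULL_GB * DAILY_CHANGE)

print(f"Daily fulls:         {daily_fulls:,.0f} GB/week")
print(f"Full + incrementals: {full_plus_incrementals:,.0f} GB/week")
```

Under these assumptions the incremental scheme writes roughly a quarter of the data each week, which is where the tape-handling and cost savings come from.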

Then, depending on the needs of your business, keep another backup tape onsite in a fireproof safe (for example, one made each Friday) for recovering an accidentally deleted folder. Note that most safes aren’t melt-proof, so the tapes will still be vulnerable if a fire burns for long enough.

Then, for archival purposes, make monthly tapes (three, six, or nine of them) that can go to a bank safety deposit box. These are the last line of protection and should definitely be offsite.

The best situation for a tape-based disaster recovery strategy is when a company’s recovery time objective (RTO) can be comfortably in the two-to-five-day range. A retail business or a school that will be closed after a natural disaster, fire, or major theft is an example of an organization that can comfortably leverage offsite tape backups for disaster recovery.

The main disadvantage of tape-based disaster recovery is in day-to-day operations. The amount of effort it takes to replace a single accidentally deleted file or folder means that some user files just go unrecovered.

The cost of maintaining a tape disaster recovery plan is low when you consider that the hardware has already been paid for, but it also creeps up over time. For example, it costs about $5,225 a year for weekly tape pick-up and drop-off according to an analysis that my company did using a sample environment with 2TB and 20 endpoints.

Using tape backups for disaster recovery can be fairly complex, but it is a good way to leverage existing equipment to put a DR strategy in place.

2. Replicating data between offices or to an offsite data center/centre.

This is a popular choice for IT directors who have multiple offices with IT assets in each. There’s a lot of flexibility for ‘roll-your-own’ solutions with this approach, and it’s possible to use existing hardware this way as well. There are plenty of backup software options that will let you use old fileservers as a backup target and then replicate the backup server in the main office to one in a remote office.

Companies with data sizes above 5TB may also consider the ‘rep and break’ tactic, by replicating data over the LAN between two arrays, then moving one of them to a remote location. This effectively ‘seeds’ the remote copy.
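The reason to seed the remote copy over the LAN rather than the WAN is raw transfer time. The sketch below estimates how long a 5TB seed would take over each path; the link speeds (a 1Gbps office LAN and a 50Mbps WAN connection) are illustrative assumptions, not figures from the article.

```python
# Why 'rep and break' helps: estimate the time to seed a 5 TB replica
# over the LAN vs. pushing it across a WAN link. Link speeds are
# illustrative assumptions (1 Gbps LAN, 50 Mbps WAN).
DATA_TB = 5
data_bits = DATA_TB * 8 * 10**12   # 5 TB expressed in bits (decimal TB)

lan_bps = 1 * 10**9                # assumed 1 Gbps office LAN
wan_bps = 50 * 10**6               # assumed 50 Mbps WAN link

lan_hours = data_bits / lan_bps / 3600
wan_days = data_bits / wan_bps / 86400

print(f"LAN seed: ~{lan_hours:.1f} hours")
print(f"WAN seed: ~{wan_days:.1f} days")
```

Roughly half a day on the LAN versus more than a week over the WAN, which is why physically moving a pre-seeded array is often the practical choice above a few TB.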

Regardless of the method used, the key benefit is replication. In case of a disaster in the main office, you can VPN into the remote backup server and start to recover. This works just as well for recovering from production server crashes in the main office and for helping users with their accidentally deleted folders.

The costs of this solution vary widely. If you happen to have multiple offices with existing IT assets then, just like above, it’s possible to get disaster recovery ready with just a few extra licenses for the backup software. However, if you don’t have a remote office, renting rack space in a local data center can get really expensive with the space, power, and management fees.

Alternatively, buying multiple backup appliances for different offices to replicate between can be a way to get strong data protection. Over a few TB, the price of multiple appliances starts to get pretty high, but some CFOs prefer a big up-front cost that they can amortize, rather than a regular monthly bill.

3. DR-as-a-Service from the cloud.

A cloud, or online, disaster recovery strategy has the most advantages for small IT teams with over 1TB of backup data that don’t have time to manage multiple data centers or tape rotation schedules. Double that for companies that have remote offices without their own IT assets. The cost for a solution like this starts around $500 per TB per month, and does not require buying any new appliance hardware.

To really use the cloud for disaster recovery the data has to be a replica of your live file system in its native format. Having a replica file system lets end users recover their own individual files over the web, and lets IT recover to dissimilar hardware, and have complete control over which files and databases are restored first.

One of the biggest advantages of cloud disaster recovery is the ability to instantly recover files, folders, and databases. However, the term ‘instantly’ has two meanings in this context and it’s important to be aware of each.

Fast data transfer technology makes recovery feel instantaneous.
Recovering an individual file from the cloud can seem to happen instantly, if the file is small and it’s being recovered over an Internet connection with 10MBps of bandwidth, for example. Recovering a whole server or entire office worth of data is a different story. Consumer-grade cloud backup services aren’t designed for high-speed recovery, and will significantly extend recovery time.
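The gap between recovering one file and recovering a whole office is easy to quantify. The sketch below uses a sustained rate of 10MB per second (one reading of the article’s “10MBps” example) and an assumed 5MB file size against the 2TB sample environment mentioned earlier; both specific sizes are illustrative.

```python
# The difference between recovering one file and a whole server,
# assuming a connection that sustains 10 MB/s (one reading of the
# article's "10MBps" example). File and server sizes are assumptions.
RATE_MB_S = 10

def hours_to_recover(size_mb: float) -> float:
    """Transfer time in hours at the assumed sustained rate."""
    return size_mb / RATE_MB_S / 3600

one_file_s = 5 / RATE_MB_S                             # a 5 MB document
whole_server_days = hours_to_recover(2 * 10**6) / 24   # a 2 TB server

print(f"Single 5 MB file: ~{one_file_s:.1f} s")
print(f"2 TB server:      ~{whole_server_days:.1f} days")
```

A half-second file recovery feels instant; a multi-day server recovery does not, which is why the transfer technology behind the service matters so much.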

A cloud disaster recovery service that uses the WebDAV protocol to transfer data over HTTP, just like the Internet itself, will deliver significantly faster data transfer speeds. Using the WebDAV protocol allows communications between customer equipment and the data center to be done in a rapid multi-threaded mode, fully utilizing available bandwidth and shrinking backup and recovery time.
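The multi-threaded idea described above can be sketched in miniature: split an object into byte ranges and fetch the ranges concurrently, then reassemble them in order. This is a toy illustration only; `fetch_range` is a hypothetical stand-in for an HTTP/WebDAV request with a `Range` header against a real DR service, not any vendor’s actual API.

```python
# Toy illustration of multi-threaded range transfer: split a payload
# into byte ranges, fetch them concurrently, and reassemble in order.
# fetch_range() is a hypothetical stand-in for an HTTP/WebDAV GET with
# a 'Range: bytes=start-end' header; no real network I/O happens here.
from concurrent.futures import ThreadPoolExecutor

PAYLOAD = bytes(range(256)) * 64          # pretend remote object (16 KB)
CHUNK = 4096                              # bytes per range request

def fetch_range(start: int, end: int) -> bytes:
    """Stand-in for one ranged HTTP request against the DR service."""
    return PAYLOAD[start:end]

ranges = [(i, min(i + CHUNK, len(PAYLOAD)))
          for i in range(0, len(PAYLOAD), CHUNK)]

# pool.map preserves input order, so the chunks reassemble correctly
with ThreadPoolExecutor(max_workers=4) as pool:
    chunks = list(pool.map(lambda r: fetch_range(*r), ranges))

recovered = b"".join(chunks)
assert recovered == PAYLOAD
print(f"Fetched {len(recovered)} bytes in {len(ranges)} parallel ranges")
```

With real network latency in each request, running the ranges in parallel is what lets a client keep the pipe full instead of waiting on one request at a time.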

Replicated file systems can be recovered instantly because data isn’t kept in a proprietary backup format.
Disaster recovery solutions that use a ‘snapshot & replication’ strategy can be instantly recovered because data doesn’t have to be unpacked, and IT can choose just what to recover, instead of having to download big chunks of data then pull the critical files out.

A note about security and compliance
Businesses that have data with sensitive IP or compliance requirements like FINRA or HIPAA and want to use a cloud DR strategy need to pay particular attention to the security level provided by vendors on their shortlist:

1. Ask about SSAE-16 audited policies
SSAE-16 is an auditing standard that verifies that service providers (like cloud DR vendors) are following their published data integrity policies. This becomes very important for compliance standards or legal situations where chain-of-custody has to be established for data.

2. Ask about technical penetration testing
Security is always important, but having a cloud DR vendor with regular third party penetration testing means it won’t be your company’s data splashed on the front page of TechCrunch.

Author: Nick Mueller is the corporate reporter at Zetta.net, a 3-in-1 online backup, disaster recovery, and archiving provider.

• Date: 9th Oct 2012 • US/World • Type: Article • Topic: ICT continuity
