
The big virtualization win: backup, restore and disaster recovery

By Roger Richardson.

Virtualization means running production operating systems and applications on top of a ‘virtual machine’ layer, which presents a hardware-neutral environment. The technology is now spreading from the largest computer users with the biggest budgets through to small and medium sized companies. These companies are attracted by the flexibility of virtualization but often lack the skills to exploit the technology fully.

So, what are the real benefits of virtualization? Typically, when you ask this question, answers will include maximizing hardware investment, enhanced management capabilities, standardization, and reduction of operational costs. Yet virtualization platforms also offer add-on disaster recovery and high availability tools that are often overlooked and may well be the most significant benefit of all.

Some companies have already adopted virtualization internally, yet maintain old techniques of disaster recovery. Most of these legacy mechanisms, such as tape backups, copying data to inexpensive disk, offsite storage and ‘spare’ servers, create as many challenges in the event of a disaster as they address. As an industry we have battled long restore times, and achieved only marginal success rates, for decades.

Current challenges with the old recovery standards
“I have my backup tape(s), now what?” How often have we heard this? Enough times to realize that this is not recovery from failure so much as an uphill battle to resume normal operations. It is a recognised reality that a significant percentage of such failures never recover completely.

Many backups are data-only and are disconnected from the rest of the system: application data is separated from the operating system and the applications themselves, and many companies do not back up operating systems at all. This adds time and complexity to the restore process, especially in cases of total system failure. Nine times out of ten this is not a serious issue – for example, when restoring a corrupted file or one deleted in error. The major concern here is the restore time, which can be mitigated quite easily and is standard practice in most businesses today.

However, in the one case out of ten when a total system or location failure occurs, this disconnection of data and system backups costs companies both time and money. Restoring data from various media, and then reassembling the systems that present that data to end users, can be a lengthy process. In addition, operating systems, patches, applications, and various system configurations must be replicated, and the restored data integrated, before a system can resume normal operation. Depending on the systems’ complexity this takes a minimum of a few hours, sometimes stretching to days of downtime.
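The way these rebuild steps accumulate into days of downtime can be sketched with a simple back-of-the-envelope calculation. The step names and durations below are illustrative assumptions, not figures from any particular recovery:

```python
# Illustrative (assumed) step durations, in hours, for rebuilding a
# failed system from disconnected data-only backups.
steps = {
    "provision replacement hardware": 4.0,
    "reinstall OS and patches": 3.0,
    "reinstall and configure applications": 4.0,
    "restore data from tape": 6.0,
    "integrate data and verify operation": 2.0,
}

total_hours = sum(steps.values())
print(f"Estimated downtime: {total_hours:.1f} hours")
```

Even with each individual step looking modest, the serial nature of the rebuild pushes the total well past a working day.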

‘Bare-metal’ restores
Bare-metal restores can bridge the gap between disconnected backups and a full system restore. Many offer real-time replication of live and cold systems, yet there are caveats. For a smooth restore the bare-metal machine needs hardware very similar to the original; otherwise an extended post-restore clean-up will be required, or the result will be, as it often is, a failed recovery. These restore procedures can take anything from one to six hours. There are many moving pieces: data restoration, image creation, presentation of the image to a clean system, installation, and finally clean-up of the restore. Matching hardware eliminates much of the clean-up, yet requires the company to duplicate all production systems ready for the recovery process. This is a significant investment. Virtualization removes this problem.

Recovery potential of virtualization
‘Disaster recovery testing.’ This string of words causes wide-scale indigestion in the IT community. Real testing of a disaster event can consume days of human resources in the planning and execution, often with little expectation of success.

‘Only trust after you test.’ The biggest weakness of traditional backups becomes one of the biggest advantages in the virtual realm. ‘Only trust a tape backup after a full restore has been tested’ was a common mantra in past decades, yet such testing requires extensive time and human resources. Even bare-metal restores to physical disk require hours of time and human interaction for full testing. This is where a virtual environment proves highly powerful: a clone of a production server can be powered on in minutes, even thousands of miles away from the production systems, offering huge scope for testing and troubleshooting.
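Once a clone is powered on, the test itself can be as simple as probing whether each service on the clone accepts connections. A minimal sketch of such a smoke test, assuming a hypothetical list of (host, port) endpoints that would in practice name the clone’s real services:

```python
import socket

def smoke_test(endpoints, timeout=2.0):
    """Probe each (host, port) on a restored clone and report reachability.

    A failed connection suggests the cloned service did not come up cleanly.
    The endpoint list is an assumption; a real DR test would also verify
    application-level behaviour, not just open ports.
    """
    results = {}
    for host, port in endpoints:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                results[(host, port)] = True
        except OSError:
            results[(host, port)] = False
    return results
```

Running this against a freshly started clone turns a day-long manual verification into a repeatable check that can be scheduled after every replication cycle.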

Virtualization and bare-metal restores
Virtual host systems are natively bare-metal, allowing restores to diverse hardware on the fly. The target systems can be of different hardware generations, or use different processors such as Intel or AMD, because the guest is insulated from the underlying hardware.

The key lesson learned in disaster recovery planning is that real-life disasters, and the plans designed to cope with them, seldom go as anticipated. Virtualization provides a standardized, hardware-independent environment that is hard to replicate with physical servers, and it is this built-in flexibility that gives the company a much better chance of a swift recovery. Testing is much easier and therefore may get greater attention. The essential building block in disaster planning is flexibility: freedom in location, physical hardware, scale of hardware, and presentation options all maximize the success rate of disaster plans.

The Big Win
Virtualization should not be thought of as an all-inclusive backup product; it is much more than that. Certainly, backup and restore are made easier. However, full replication and snapshots of data are also easier to engineer and are now affordable for smaller companies. Using replication makes the mean time to recovery much shorter, and in a real-time, transaction-based application both data loss and recovery time are minimized. Granular backups are viable, and many of the robust virtualization replication tools offer them. Add to this the fact that a replicated system can be turned into a production system in a few minutes, and ‘the big win’ is a reality. The company keeps working; the off-site engineers create a second virtual machine to replicate the now-production system, while the on-site engineers do whatever is required to bring the original system back into production.
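The snapshot-and-rollback idea at the heart of this can be illustrated in miniature. The toy class below is not a hypervisor API; real platforms snapshot whole disk and memory state, but the restore semantics are the same:

```python
class SnapshotStore:
    """Toy key-value store with named point-in-time snapshots.

    Illustrative only: it mimics the 'snapshot before a risky change,
    roll back in minutes' workflow that virtualization platforms offer
    for entire machines.
    """

    def __init__(self):
        self._data = {}
        self._snapshots = {}

    def put(self, key, value):
        self._data[key] = value

    def get(self, key):
        return self._data.get(key)

    def snapshot(self, name):
        # Capture the current state under a name; a copy is taken so
        # later writes cannot disturb the saved state.
        self._snapshots[name] = dict(self._data)

    def restore(self, name):
        # Roll the live state back to the named snapshot.
        self._data = dict(self._snapshots[name])
```

For example: take a ‘pre-upgrade’ snapshot, make a change that goes wrong, then restore the snapshot and the earlier state is back immediately, with no tape, no rebuild, and no matching hardware required.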

Author: Roger Richardson, CEO, Nexus Management Plc.

• Date: 17th Feb 2011 • Region: World • Type: Article • Topic: IT continuity
