
Insourcing after outsourcing: how to handle the business continuity issues associated with IT migrations

By Ian Masters.

Outsourcing IT has been a popular option for companies looking to save money. However, over the past six months, a trend has arisen for public- and private-sector organizations alike to bring their IT services back in-house. For these organizations, the quest for efficiency and the growth of new computing environments, including the cloud, have been the catalyst for change. The question is: how should these IT migrations be handled to reduce potential downtime and maintain business continuity?

Coventry City Council and Cumbria County Council have both announced plans to migrate IT back in-house because councillors believe that managing IT services internally will be more cost-effective than continuing with their outsourcing strategies. In the private sector, General Motors has decided to consolidate its IT environments as well: instead of consuming services from multiple outsourcing partners, the company is bringing IT into three major data centres over the next two years.

By bringing IT back in-house, these organizations and others like them gain greater control of their IT operations. The main business benefits are reduced costs and greater agility compared with their current outsourcing arrangements. However, each organization has to develop its own data centre environment to achieve these objectives. In turn, this requires a complete business continuity strategy to be devised and implemented, covering the initial move as well as protecting systems in the longer term.

If organizations opt to bring IT back in-house, they must make sure they plan ahead and work with their existing provider to transition IT services before the outsourcing contract ends. This effort can be broken down into two stages: set-up and migration.

The set-up phase includes estimating all the physical IT equipment required to host the organization’s applications and data, including servers, storage and networking assets. For most organizations, this will include virtualized environments such as VMware or Microsoft Hyper-V on the server side, as well as a storage area network (SAN). Some applications or services may have to remain on physical servers, due to workload demands or vendor licensing restrictions.
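To make the sizing step concrete, here is a minimal sketch of the kind of back-of-the-envelope calculation involved. All figures and names (VM specifications, per-host capacity, the `hosts_needed` helper) are illustrative assumptions, not taken from any particular organization or product:

```python
# Hypothetical sizing sketch: estimate how many physical hosts are
# needed to bring a set of virtualized workloads back in-house.
from math import ceil

def hosts_needed(vms, host_cpu_cores, host_ram_gb, headroom=0.25):
    """Return the number of hosts required, reserving 'headroom'
    spare capacity for failover and growth."""
    total_cores = sum(vm["cores"] for vm in vms)
    total_ram = sum(vm["ram_gb"] for vm in vms)
    usable_cores = host_cpu_cores * (1 - headroom)
    usable_ram = host_ram_gb * (1 - headroom)
    # Size for whichever resource runs out first.
    return max(ceil(total_cores / usable_cores),
               ceil(total_ram / usable_ram))

# Illustrative workload inventory gathered during scoping.
workloads = [
    {"name": "mail", "cores": 8, "ram_gb": 32},
    {"name": "erp", "cores": 16, "ram_gb": 96},
    {"name": "web", "cores": 4, "ram_gb": 16},
]
print(hosts_needed(workloads, host_cpu_cores=24, host_ram_gb=128))  # → 2
```

In practice the same inventory would also drive storage (SAN capacity) and network sizing, but the principle is the same: total the demands, add headroom, and size for the most constrained resource.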

Depending on the approach, the organization may choose simply to virtualize its servers and shift applications and assets back in-house. The alternative is to create a full private cloud environment: this is still based on a virtualized data centre, but adds greater flexibility in provisioning resources on demand, as well as tracking of workloads for charge-back. The cloud route delivers more flexibility, but there are additional management and complexity overheads to consider.

Once this initial round of scoping and set-up is completed, the migration can begin. Typically, this will involve taking copies of the existing systems as snapshots and then moving them onto the new data centre or cloud instance. Using snapshots in this way can make it faster to migrate, but there is a downside: downtime.

A snapshot is exactly that: a machine image from a specific point in time that can then be moved across. Snapshots are most often used for disaster recovery purposes, as a snapshot image can be brought up in the event of a problem. However, because it is static, using a snapshot for migration leads to downtime, since the application and data can’t be updated while the move is being carried out.

As soon as the application is used again, the snapshot is ‘out of date’ and new data has been created. Each new item, from simple email messages to full customer orders or requests, pushes the snapshot further from the current state, leading to lost data when the new systems are switched on. While turning off some applications during the migration might be acceptable, leaving revenue-generating systems off for any length of time carries a considerable opportunity cost. Moving the snapshot across and then applying the incremental changes certainly reduces the amount of downtime, but it does not always provide a ‘test’ mechanism for practising the migration.
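The data-loss risk described above can be shown with a toy illustration. This is not a real hypervisor or snapshot API; it simply models a point-in-time copy of a live data set to show why anything written after the capture point is missing at cutover:

```python
# Toy illustration of snapshot drift: a static point-in-time copy
# falls behind a live system that keeps accepting writes.
import copy

live_system = {"orders": ["A-100", "A-101"]}

# Take the point-in-time snapshot and begin 'migrating' it.
snapshot = copy.deepcopy(live_system)

# The application stays live while the copy is in transit...
live_system["orders"].append("A-102")

# ...so when the new site is switched on from the snapshot,
# data created after the capture point is missing.
missing = [o for o in live_system["orders"] if o not in snapshot["orders"]]
print(missing)  # → ['A-102']
```

The only way to avoid this gap with a pure snapshot approach is to freeze the application for the duration of the copy, which is exactly the downtime the article describes.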

Instead, replication technologies can be used to copy a company’s servers in real time without incurring downtime. Business operations are protected because a copy of these servers can be migrated to any location with no interruption to the data being replicated and no perceived downtime for applications. With no downtime, business operations carry on as normal and revenue-generating applications remain in use. It also reduces the administrative workload of data centre professionals, as IT migrations can take place during business hours rather than over weekends or late nights. An additional benefit of these technologies is the ability to test the migration process without having to start all over again after the practice run.
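A common way such products work is journal-based replication: every write on the source is also recorded in a change journal that the target replays continuously, so the replica never falls behind and cutover needs no freeze. The sketch below is a simplified model of that idea; the class and method names are illustrative, not any specific vendor’s API:

```python
# Minimal sketch of journal-based replication: writes are captured
# in a change journal and replayed on the target, keeping the
# replica current while the application stays live.
class ReplicatedStore:
    def __init__(self):
        self.source = {}
        self.target = {}
        self.journal = []  # pending changes not yet applied to target

    def write(self, key, value):
        """Application write: applied to the source and journalled."""
        self.source[key] = value
        self.journal.append((key, value))

    def replicate(self):
        """Replay pending journal entries on the target. In a real
        product this runs continuously in the background."""
        while self.journal:
            key, value = self.journal.pop(0)
            self.target[key] = value

store = ReplicatedStore()
store.write("order:1", "A-100")
store.replicate()
store.write("order:2", "A-101")   # the application keeps running...
store.replicate()                 # ...and the target stays current
print(store.target == store.source)  # → True: no data gap at cutover
```

Because the target is continuously current, it can also be brought up temporarily as a rehearsal of the migration and then discarded, without disturbing the replication stream, which is the ‘test’ capability mentioned above.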

Once the migration of workloads has been completed, the organization will be responsible for its own IT destiny. In the long term, this means taking responsibility back for business continuity planning and strategy. Replication technologies can continue to play a role here for keeping workloads up to date across multiple sites. As a change is made in one location, this can be replicated back across to a secondary site for recovery and continuity purposes.

One issue to consider as part of bringing IT back in-house is how to manage business continuity for data centre assets after the transition. From a recovery perspective, this means keeping a second site with up-to-date copies of critical data, applications and IT hardware. This can be owned by the organization, or it can be a service provided by an outsourcing company.

Regardless of how this secondary site is implemented, keeping its data current is a critical part of a successful continuity strategy. As companies consider moving IT strategies and assets back in-house, continuity planning is also a crucial requirement for the long-term success of the project. The replication approach used for the migration can be re-used for continuity as well, creating more value from the investment and supporting additional IT assets in the future.

The growth of cloud computing and virtualization has reduced the cost of IT, as well as improving the agility of services that can be delivered by IT professionals. This has led to serious considerations for outsourcing deals, as customers seek to get the best value for their money, whether this is delivered by internal teams or an outside resource. Whatever decision is made, the emphasis should be on keeping those systems available and meeting end-user requirements so the benefits of IT can be delivered.

Author: Ian Masters is sales director Northern Europe, Vision Solutions.

• Date: 8th Nov 2012 • Region: UK/World • Type: Article • Topic: ICT continuity
