The cloud’s scalability and pay-as-you-go model have made it attractive to companies looking for cost-effective ways to ensure their IT can withstand a disaster. However, says Udistra Dandaraj, there are certain issues that companies need to think through:
Don’t confuse convenience with certainty. One of the characteristics of the cloud revolution has been the emergence of ‘shadow IT’: the growing tendency of corporate IT users to access services through the public cloud, often out of frustration with the slow pace of internal IT projects. A good example is the use of free services like Dropbox, Google Drive or Microsoft OneDrive to share and store documents.
Using these free services in the course of normal business raises issues of data security, especially in the light of the growing emphasis on data privacy in legislation such as the Protection of Personal Information (POPI) Act. Public cloud providers do not reveal where data is kept or how it is protected—nor do they offer any warranty.
Many people see these services as a viable backup in the event of a disaster. This is simply not so: these free services offered via the public cloud come with no guarantees or service-level agreements. If the data turns out not to be available, there is no recourse.
Consider the private cloud
The cloud model is extremely well suited to providing the extra infrastructure and connectivity needed for IT disaster recovery. Creating a private cloud removes the security issues inherent in the public cloud. For example, a product like ownCloud provides the same functionality as Dropbox but with the data hosting controlled by the client.
Scope bandwidth needs properly
Cloud computing is highly reliant on good connectivity, so it makes sense to spend time upfront to ensure you understand your bandwidth needs. This is particularly true for disaster recovery, where replication is replacing unreliable tape backups. For replication to work, the production and recovery locations must have a suitable link; this means considering not only the rate of change within the production environment but also what bandwidth would be required after restoring systems in the event of a disaster. It’s also vital to consider where the service provider’s data centres are located. A data centre outside the country will increase latency and affect the user experience on certain applications. Location will also affect what you pay for the link to the recovery site, and in turn the cost of migrating to the cloud.
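As a rough illustration of the sizing exercise described above, the sketch below converts a daily data-change rate and a replication window into a minimum link speed. The figures (50 GB of change, an 8-hour window, a 1.3× overhead factor) are assumptions for the example, not recommendations—real scoping should use measured change rates from the production environment.

```python
# Illustrative replication-bandwidth estimate (assumed figures, not vendor guidance).

def required_mbps(daily_change_gb: float, window_hours: float,
                  overhead: float = 1.3) -> float:
    """Link speed (Mbit/s) needed to replicate daily_change_gb of changed
    data within window_hours, with a safety factor for protocol overhead
    and retransmissions."""
    megabits = daily_change_gb * 1000 * 8   # GB -> Mbit (decimal units)
    seconds = window_hours * 3600
    return megabits * overhead / seconds

# Example: 50 GB of daily change replicated over an 8-hour overnight window.
print(f"{required_mbps(50, 8):.1f} Mbit/s")  # ~18.1 Mbit/s
```

The same arithmetic run in reverse—given the link you can afford, how much change can you replicate per day—is a useful sanity check before signing a contract for the recovery site.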
Assess the options
Enterprises should consider the following options when taking business continuity into the cloud. All of these rely on replication of data and systems to an offsite location.
The first option is co-location or rack hosting at the recovery location, a model in which companies still own the hardware located at an offsite data centre. Drivers for such a move would include the size of the current environment and its requirements. Quite often this is not the most cost-effective route, and it requires the organisation to have the skills and time to manage the recovery site. However, for organisations dealing with sensitive information, this may be the only option due to legislation.
The next option would be managed services delivered by a third party within a private cloud. This model is particularly well suited to companies that do not wish to own their hardware and do not have the requisite skills to manage such a site. Service-level agreements govern this type of environment.
The final option is the move to the public cloud. Moving to this model warrants an in-depth assessment of the service provider: its levels of security and responsiveness, its location, and what the service-level agreement covers.