The 3-2-1 rule has been at the heart of successful data protection strategies for many years. Christopher Rogers looks at whether it still has relevance in today’s rapidly changing technology and threat environment.
For organizations operating in the ransomware era, having effective solutions in place to safeguard and recover data is no longer optional; it's a necessity. In many cases, the foundation for data protection is the tried and tested 3-2-1 rule – an approach developed by photographer Peter Krogh nearly 20 years ago and presented in his book on digital asset management.
Since then, it has become an industry standard for data protection and disaster readiness. It states that organizations should:
- Keep three copies of data, including all production data and two backup copies.
- Store those backup copies on two different types of storage. This should include any combination of on-premises, cloud or offline infrastructure or services.
- Finally, make sure that one backup copy is stored at an off-site location, such as a public cloud server.
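The three checks above can be expressed as a short validation function. The following is an illustrative sketch, not part of any real backup product; the `BackupCopy` model and field names are assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    """One stored copy of the data (hypothetical model for illustration)."""
    medium: str    # e.g. "disk", "tape", "cloud-object-storage"
    offsite: bool  # stored away from the production site?

def satisfies_3_2_1(production: bool, copies: list[BackupCopy]) -> bool:
    """Check a backup plan against the 3-2-1 rule described above:
    three total copies, two media types, at least one off-site."""
    total_copies = (1 if production else 0) + len(copies)
    media_types = {c.medium for c in copies}
    offsite_copies = [c for c in copies if c.offsite]
    return total_copies >= 3 and len(media_types) >= 2 and len(offsite_copies) >= 1

plan = [
    BackupCopy(medium="disk", offsite=False),                 # on-premises backup
    BackupCopy(medium="cloud-object-storage", offsite=True),  # off-site copy
]
print(satisfies_3_2_1(production=True, copies=plan))  # True: 3 copies, 2 media, 1 off-site
```

A plan with two backups on the same medium, for instance, would fail the second check even though it has three copies in total.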
Sitting at the heart of the 3-2-1 rule is redundancy. By creating multiple copies of production data and storing them in locations that don't share the same failure modes, organizations can significantly improve resilience and their ability to recover should disaster strike.
In practical terms, should copy one be damaged or lost, copy two remains secure in its own storage medium, and still enables fast and effective recovery. And since an organization’s data is the backbone of its operations, this enhanced resilience and recoverability is crucial to business sustainability and success.
Is the 3-2-1 rule still fit for purpose?
The 3-2-1 rule remains a relevant and valuable method of ensuring effective data protection and recovery. But since it was formalized in 2005, times and threats have changed, and many organizations have yet to move beyond this solid but aging data protection paradigm.
Today, there are a huge variety of risks and vulnerabilities that simply didn't exist when the rule was created. As a result, there are many contemporary backup use cases, such as ransomware recovery, that aren't being adequately addressed by traditional approaches that rely on periodic backup.
There's no doubt that traditional backup strategies, where periodic snapshots are used to protect data, have served organizations everywhere extremely well, and in some cases, may still offer an acceptable and effective way of protecting data. The problem is, as data volume continues to grow – exponentially in many cases – the backup job run time lengthens. In many of these growing environments, the time now required to complete just an incremental backup can easily exceed the available backup window.
Clearly, this can result in major retention and compliance problems, not least because backups in these environments can be skipped, fail to complete, or be cancelled by administrators if they start to degrade production performance, particularly if the backup process is still running during working hours.
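The shrinking-window problem is easy to see with some back-of-the-envelope arithmetic. The figures below – daily change sets, sustained throughput, and a six-hour nightly window – are illustrative assumptions, not benchmarks:

```python
# Illustrative arithmetic only; throughput and change-set sizes are assumptions.
def backup_runtime_hours(changed_data_gb: float, effective_throughput_mb_s: float) -> float:
    """Time to copy the changed data at a sustained effective throughput."""
    seconds = (changed_data_gb * 1024) / effective_throughput_mb_s
    return seconds / 3600

WINDOW_HOURS = 6.0  # assumed nightly backup window

# As data volume grows, so does the daily incremental change set.
for changed_gb in (500, 2_000, 8_000):
    hours = backup_runtime_hours(changed_gb, effective_throughput_mb_s=150)
    verdict = "fits" if hours <= WINDOW_HOURS else "exceeds window"
    print(f"{changed_gb:>5} GB changed -> {hours:4.1f} h ({verdict})")
```

Under these assumptions, a 500 GB incremental finishes in under an hour, but an 8 TB change set takes roughly 15 hours – well past any overnight window, forcing the job into working hours or a skipped run.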
Increasingly, the answer to this potentially serious bottleneck comes in the form of disaster-recovery-as-a-service (DRaaS). A product of the outsourced ‘as-a-Service’ revolution, DRaaS providers host infrastructure resources and disaster recovery software to which their customers replicate data and workloads.
Here's how a DRaaS-based strategy can play out. In the event of a disaster at a primary site, the user can immediately fail over to their disaster recovery site with the help of their DRaaS provider. In these circumstances, the DRaaS provider has already done all the heavy infrastructure work. For example, they operate the data center, keep hardware up to date, and perform all the essential daily troubleshooting tasks. And when an organization needs to call on its backed-up data, it also has access to people with the experience and focus to make the process as effective as possible.
For organizations looking at DRaaS to build on a traditional 3-2-1 strategy, there are a series of important questions to consider when selecting a potential provider, such as:
- Can they deliver a non-disruptive solution, so operations are not halted during setup and subsequent testing?
- Can they provide fast, near-synchronous replication, and is their solution hardware- and software-agnostic?
- Do they allow workloads to move to the cloud with the fewest possible barriers?
- Does their service consist of a software-only solution that can scale to the client’s needs?
- Can they ensure a clear cost advantage over developing an on-premises private cloud?
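When comparing several providers against these questions, it can help to track the answers in a simple structure. The criterion names and the example answers below are entirely hypothetical, made up for illustration:

```python
# Hypothetical evaluation sketch for the DRaaS selection questions above;
# criterion names are illustrative assumptions, not a standard taxonomy.
CRITERIA = [
    "non_disruptive_setup_and_testing",
    "near_synchronous_replication",
    "hardware_and_software_agnostic",
    "low_barrier_cloud_mobility",
    "software_only_and_scalable",
    "cost_advantage_over_private_cloud",
]

def evaluate(provider: dict[str, bool]) -> tuple[int, list[str]]:
    """Return the number of criteria met and the list of remaining gaps."""
    gaps = [c for c in CRITERIA if not provider.get(c, False)]
    return len(CRITERIA) - len(gaps), gaps

# Example: a provider that meets everything except hardware/software agnosticism.
score, gaps = evaluate({
    "non_disruptive_setup_and_testing": True,
    "near_synchronous_replication": True,
    "hardware_and_software_agnostic": False,
    "low_barrier_cloud_mobility": True,
    "software_only_and_scalable": True,
    "cost_advantage_over_private_cloud": True,
})
print(score, gaps)  # 5 ['hardware_and_software_agnostic']
```

Listing the gaps explicitly, rather than just a score, keeps the evaluation tied back to the specific questions a provider still needs to answer.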
Applying these requirements when developing a disaster recovery strategy based on DRaaS can help ensure that organizations make the right decisions to suit their unique circumstances and priorities. In doing so, they can enhance their existing 3-2-1 approach to make sure they deliver the flexibility and performance that is required to address today’s data protection challenges head-on.
Christopher Rogers is Technology Evangelist at Zerto, a Hewlett Packard Enterprise company.