Continuous data availability: the focus should be on recovery, not backup
- Published: Monday, 12 December 2022 09:51
Ever-increasing organizational data growth is far outpacing existing backup systems, says Jason Lohrey. In this article he looks at the issue and at what organizations can do to ensure their data protection systems are future-proof.
Data underpins every human endeavor. We are now in the Data Age, where file counts quickly grow into the billions and trillions, and terabytes of data rapidly become petabytes, exabytes and beyond. According to IDC, the amount of new data created, captured, replicated and consumed is expected to more than double in size from 2022 to 2026 (1), and Statista Research predicts the world’s collective data creation will reach more than 180 zettabytes by 2025 (2).
Data-intensive organizations are greatly challenged by exploding data volumes that far outpace existing backup systems. Keeping large-scale data sets secure and resilient is difficult, and traditional backup often proves unviable.
At the same time, vulnerabilities scale with data growth: corruption, malware, accidental deletion, and data that simply goes missing. And the time it takes to find lost data with traditional backup systems increases with the amount of backup data stored. IT departments are constantly pulled into the task of data recovery. Data resilience for trillions of files, together with instant, self-serve data recovery, is not possible with backup as we know it.
Traditional backup works by scanning a file system to find and create copies of new and changed files. However, scanning takes longer as the number of files grows – so much so that it is becoming impossible to complete scans within a reasonable time frame. Scans usually run overnight, when systems are likely to be less volatile.
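To see why this scales poorly, consider a minimal sketch of a scan-based incremental backup (the paths and the modification-time threshold below are hypothetical, and real backup tools track far more state). The walk must visit every file on every run, so the scan cost grows with the total file count even when only a handful of files have changed.

```python
import os
import shutil
import time

def incremental_backup(source_root: str, backup_root: str, last_backup_time: float) -> int:
    """Walk the whole source tree and copy files modified since the last run."""
    copied = 0
    # os.walk() touches every directory and file, so this pass gets slower
    # as the file count grows -- regardless of how little has changed.
    for dirpath, _dirnames, filenames in os.walk(source_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) > last_backup_time:
                dest = os.path.join(backup_root, os.path.relpath(src, source_root))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.copy2(src, dest)
                copied += 1
    return copied

# Hypothetical nightly run: copy anything modified in the last 24 hours.
# incremental_backup("/data", "/backups/2022-12-12", time.time() - 86400)
```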
In addition, the process occurs at set intervals, which means any change made before the next scan will be lost if there is a system failure. Traditional backup cannot and does not meet the objective of zero data loss. Recovering data from petabyte-sized repositories is time-consuming, and the process of recovery is not what it should be – it is tedious and slow. When someone wants to recover data, they will typically ask an IT administrator for help.
The administrator will then ask for the path and names of the missing files, along with the date and time they existed – details many people will not remember exactly – and so begins a process where different backup sets are restored one after another and inspected until the missing or damaged files are found. Recovery can take hours, days, or longer, and it is inefficient and costly.
Deficiencies in securing, locating, and restoring data impede the goal of continuous data availability, forcing business leaders to accept data loss. Traditional backup solutions were not designed to handle the scale and complexity of today’s data, and there are significant limitations, including:
- Backup is a discrete process; data created or changed between backups can be lost.
- Backups and snapshots that run during working hours negatively impact end-user productivity.
- Organizations must continually add storage for backups.
- Data recovery is an IT-intensive process that uses up valuable time and resources.
- Backup is particularly broken at scale.
- Recovering data is often a costly, clumsy procedure that involves back-and-forth activity between users and IT teams and can take days or weeks.
Furthermore, enormous data growth makes organizations more vulnerable to cyber attacks such as ransomware. Cybercrime can have a devastating impact on a business when it strikes, and for years after the initial attack.
Continuous data availability has been out of reach given the weaknesses of traditional backup. Business leaders have been forced to accept levels of data loss, measured by recovery point objectives (RPO), and of downtime, measured by recovery time objectives (RTO). Achieving data resilience at scale is increasingly critical in today’s data-driven world. Organizations need the ability to spring back seamlessly, reduce the risk of data loss, and minimize the impact of downtime, outages, data breaches, and natural disasters.
A fresh approach is needed to address the scale and complexity of modern data demands, one that maximizes data resilience at scale and revolutionizes today’s broken backup paradigm. IT leaders must shift their focus from successful backups to successful recoveries.
Traditional backup is independent of the file system, but a new approach would make the file system and backup one and the same. Every change in the file system would be recorded as it happens, end users could recover lost data without the assistance of IT, and finding files would be easy regardless of when they existed, across the entire time continuum. Such an approach would redefine enterprise storage by converging storage and data resilience in one system, capturing every change that occurs in the data path.
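As a rough illustration of the idea – a toy sketch, not any vendor’s actual implementation – the write path itself can append every mutation to a time-ordered journal, so there is nothing left for a periodic scan to discover. The class and field names below are hypothetical.

```python
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ChangeRecord:
    """One journal entry: what changed, when, and the new content."""
    timestamp: float
    path: str
    operation: str            # "write" or "delete"
    content: Optional[bytes]  # None for deletes

@dataclass
class JournaledStore:
    """Toy file store that captures every change inline, as it happens."""
    journal: list = field(default_factory=list)

    def write(self, path: str, content: bytes) -> None:
        # The change is recorded in the data path itself; no separate
        # backup scan is ever needed to find out what changed.
        self.journal.append(ChangeRecord(time.time(), path, "write", content))

    def delete(self, path: str) -> None:
        self.journal.append(ChangeRecord(time.time(), path, "delete", None))
```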
This model would not only increase data resilience, but would also provide a strong first line of defense against ransomware, enabling organizations to recover compromised data easily and swiftly. It would allow users or IT administrators to go back to any point in time to recover the files they need – even after a cyber attack in which files have been encrypted.
An analogy is insuring an at-risk house against wildfire. One strategy involves risk mitigation – removing trees around the house, making sure there are fire breaks, cleaning roofs and gutters of dead leaves, debris, and pine needles, and removing any flammable material from the wall exterior. On the other hand, a person could passively take no measures and just wait for the house to burn down, hoping the insurance is adequate to recover the loss. The first approach is proactive – avoid the disaster in the first place. The second is reactive – a bad thing happened, now we will spend a lot of time, effort, and money and hope we can recover to where we were before the event.
This is the difference between recovery from a state of continuity (continuous data access) and discontinuity (a disaster that strikes). A proactive approach – one that involves continuous inline data protection – would eliminate the cost and business impact of lost data and allow for the following benefits:
- Data security and resilience at scale, with extremely fast data recovery and zero data loss, accomplished by uniting the file system and the data fabric.
- Continuous data protection that makes continuity of service possible at scale, with the ability to instantly unwind the file system to the way it appeared at any selected point in time before a data corruption, hardware failure, or malicious event (see the sketch after this list).
- The ability to roll back ransomware attacks in, at most, minutes rather than days or weeks, providing a first line of defense against corporate loss and strong protection against criminals holding a business and its data hostage.
- Expedited data recovery that enables users to interactively find and recover what they need – a ‘do it yourself’ data search and recovery process that eliminates the need for IT intervention.
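Building on the hypothetical journal sketched earlier, the following illustrates the last two items in the list: unwinding the store to a chosen moment and letting a user search for a lost file without IT intervention. Function names and structures are assumptions for illustration, not a real product’s API.

```python
# Assumes the JournaledStore and ChangeRecord classes from the earlier sketch.

def state_at(store, point_in_time):
    """Replay the journal up to a chosen moment to rebuild the file system view."""
    snapshot = {}
    for record in store.journal:
        if record.timestamp > point_in_time:
            break                          # ignore everything after that moment
        if record.operation == "write":
            snapshot[record.path] = record.content
        else:                              # "delete"
            snapshot.pop(record.path, None)
    return snapshot

def find_versions(store, name_fragment):
    """Self-serve search: every recorded version of files matching a name fragment."""
    return [r for r in store.journal if name_fragment in r.path]

# Hypothetical usage: recover files as they existed just before a ransomware
# event at time t_attack, and list every captured version of a report.
# before_attack = state_at(store, t_attack - 1)
# versions = find_versions(store, "quarterly_report")
```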
This type of fresh thinking could provide unprecedented data protection, making it possible to approach the ideal of zero RTO and zero RPO, and business leaders should push technology vendors to achieve those objectives. Anything less is a compromise that exposes organizations to increased risk of data and financial loss.
The author
Jason Lohrey is Founder and CTO, Arcitecta.
References
(1) IDC, Global Datasphere 2022-2026 Forecast, Doc #US49018922, May 2022
(2) Statista Research Department, Amount of data created, consumed, and stored 2010-2020, with forecasts to 2025, September 2022