Using software-defined storage in hyper-scale environments
- Published: Thursday, 04 February 2016 08:16
Storage workloads in modern data centers increasingly require scale-out environments to run demanding enterprise applications. These hyper-scale architectures can benefit greatly from software-defined storage (SDS) in terms of economic value, flexibility, and operational efficiency, according to experts at FalconStor Software Inc.
Scale-out workloads such as NoSQL databases, online transaction processing (OLTP), cloud, and big data analytics are hungry for performance and capacity to provide appropriate service levels to end users and applications. The architecture of a hyper-scale data center that must grow to meet compute, memory, and storage requirements on demand often depends on the nature of the applications and business priorities, such as flexible capacity, security, and uptime. Projects are typically driven by total cost of ownership, particularly as requirements reach hundreds of petabytes.
“Many modern applications that need these hyper-scale scale-out environments offer built-in resiliency, protect themselves from hardware failures, and can self-heal, which eliminates the need to build in high-availability at the storage layer,” said Farid Yavari, vice president of technology at FalconStor. “That opens the door to using consumer-grade, commodity hardware that can fail without impact on service availability. On the other hand, the relatively smaller footprint of revenue-generating scale-up applications may justify paying a premium for name brand storage with HA and data protection features, because it’s unwise to test radical new technologies in that environment.”
Properly architected SDS platforms enable the use of heterogeneous commodity hardware to drive the lowest possible cost, orchestrate data services such as replication, and create policy-driven, heat-map-based tiers, so data is placed on the appropriate storage media. An SDS approach eliminates the reliance on expensive, proprietary hardware and vendor ‘lock-in’.
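The heat-map-based tiering described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: it assumes two tiers ("ssd" for hot data, "hdd" for cold), a simple access-count heat metric, and an illustrative promotion threshold.

```python
from collections import Counter

class HeatMapTiering:
    """Toy policy engine: place frequently accessed blocks on fast media."""

    def __init__(self, hot_threshold=100):
        self.hot_threshold = hot_threshold  # accesses before promotion (illustrative)
        self.heat = Counter()               # per-block access counts (the "heat map")

    def record_access(self, block_id):
        self.heat[block_id] += 1

    def tier_for(self, block_id):
        # Policy: blocks at or above the threshold live on the fast tier.
        return "ssd" if self.heat[block_id] >= self.hot_threshold else "hdd"

# A block accessed 150 times is placed on the fast tier;
# an untouched block stays on the capacity tier.
t = HeatMapTiering()
for _ in range(150):
    t.record_access("block-42")
print(t.tier_for("block-42"))  # ssd
print(t.tier_for("block-7"))   # hdd
```

Real SDS platforms would track heat over time windows and migrate data asynchronously, but the policy-driven placement idea is the same.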
The two most common models for hyper-scale scale-out storage are direct-attached storage (DAS) and a disaggregated model based on protocols such as Internet Small Computer Systems Interface (iSCSI) or Non-Volatile Memory Express (NVMe). Some very large custom data center installations, at companies with the protocol-level engineering staff to support them, run on homegrown, workload-specific protocols developed to optimize storage traffic for their use cases. Because the DAS model is constrained by the available drive slots in a server, its scale is limited and often quickly outgrown; compute and storage resources also cannot be scaled independently. Enterprises therefore either start with, or ultimately move to, disaggregated storage models built on commodity hardware.
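The DAS slot constraint is easy to see with back-of-the-envelope arithmetic. The drive-slot count and drive size below are illustrative assumptions, not vendor specifications.

```python
def das_max_capacity_tb(drive_slots, drive_size_tb):
    """Raw capacity of a single DAS server is capped by its drive slots."""
    return drive_slots * drive_size_tb

def servers_needed(target_tb, drive_slots, drive_size_tb):
    """With DAS, growing storage means adding whole servers,
    whether or not their compute is needed."""
    per_server = das_max_capacity_tb(drive_slots, drive_size_tb)
    return -(-target_tb // per_server)  # ceiling division

# Example: 24 slots of 8 TB drives cap a server at 192 TB raw,
# so reaching 10 PB (10,000 TB) takes 53 such servers.
print(das_max_capacity_tb(24, 8))     # 192
print(servers_needed(10_000, 24, 8))  # 53
```

The coupled growth of compute with capacity in this example is exactly what the disaggregated model avoids.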
SDS adds intelligent orchestration and management to the disaggregated data center via an abstraction layer separating heterogeneous storage hardware from applications. The result is a more resilient, efficient, and cost-effective infrastructure. Because SDS is hardware-agnostic, enterprises can introduce new storage technologies into a brownfield implementation, eliminating the need to deploy greenfield infrastructure when first migrating to newer storage models. Using SDS capabilities, the migration from legacy to modern technologies can happen over time, maximizing return on investment (ROI) in an already established storage infrastructure. SDS provides flexibility in data migration, seamless tech refresh cycles, and independent scaling of storage and server resources. Even where data protection and high availability (HA) capabilities aren’t necessary, SDS can provide other valuable features such as actionable predictive analytics, wide area network (WAN) optimization, application-aware snapshots, clones, quality of service (QoS), deduplication, and data compression.
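The abstraction-layer idea can be sketched as follows. This is a simplified illustration under stated assumptions: the backend names, the in-memory block store, and the hash-based placement rule are all hypothetical, standing in for real heterogeneous arrays and real placement policies.

```python
class Backend:
    """Stand-in for one storage device behind the SDS layer."""

    def __init__(self, name):
        self.name = name
        self.blocks = {}  # in-memory stand-in for physical media

    def write(self, key, data):
        self.blocks[key] = data

    def read(self, key):
        return self.blocks[key]

class VirtualVolume:
    """Hardware-agnostic front end: applications see one volume,
    while heterogeneous backends can be added or retired
    (tech refresh) without changing this interface."""

    def __init__(self, backends):
        self.backends = backends

    def _route(self, key):
        # Trivial placement policy: hash the key across backends.
        # A real SDS layer would apply tiering, replication, QoS, etc.
        return self.backends[hash(key) % len(self.backends)]

    def write(self, key, data):
        self._route(key).write(key, data)

    def read(self, key):
        return self._route(key).read(key)

# Legacy and commodity hardware sit behind the same interface.
vol = VirtualVolume([Backend("legacy-array"), Backend("commodity-jbod")])
vol.write("obj-1", b"payload")
print(vol.read("obj-1"))  # b'payload'
```

Because the application only ever touches `VirtualVolume`, swapping the backend list is invisible to it, which is the property that enables gradual legacy-to-modern migration.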