Making a case for a disaggregated storage architecture

The amount of data we create as a society is rising exponentially. To respond to customers’ growing needs, storage original equipment manufacturers and data center architects are looking for strategies to improve and optimize their storage systems. With the emergence of new technologies, many have chosen to replace their traditional Hard Disk Drives (HDDs) with higher-performing and denser Solid State Drives (SSDs).

To meet the additional compute and higher-speed requirements of SSDs, new standards and server architectures have been introduced into data centers. One of the most important evolutions has been the switch to the higher-performing NVMe protocol. NVMe has become the storage enclosure’s new de facto standard, eliminating the bottlenecks associated with older protocols designed for HDDs. In fact, the NVMe storage server market is expected to grow exponentially by 2020, becoming a multi-billion-dollar market.

NVMe solutions: issues and bottlenecks

The fundamental weakness of NVMe SSD-based storage solutions is that compute and storage resources cannot be scaled independently. This shortcoming is compounded by the need for ever more capable and numerous CPU cores, since overall data center performance depends strongly on how quickly applications can access their data within centralized storage.

What is more, the improving performance of the network fabric and SSDs keeps driving up the demand for DRAM performance. The problem here is twofold:

  • As network and SSD latency come down to a few microseconds, CPU cores with higher core counts and clock rates are needed to keep the available compute per IO from degrading. It only takes a few CPU cache misses to blow the IO-cycle budget, and the compute available per core is under constant pressure from network and SSD latency improvements (see the back-of-envelope sketch after this list). This is the CPU bottleneck.
  • NVMe and network performance improve at an exponential rate, doubling roughly every 18 months, whereas DRAM performance doubles only every 26 months. With this development mismatch, storage architects never have enough DDR channels, and it is not getting any better. This is the DDR bottleneck.
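
To make the CPU bottleneck concrete, here is a minimal back-of-envelope sketch; the clock rate, per-core IOPS target and cache-miss penalty are illustrative assumptions, not figures from any specific system:

```python
# Back-of-envelope IO-cycle budget (illustrative numbers, not measured values).
CLOCK_HZ = 3.0e9          # assumed 3 GHz core
IOPS_PER_CORE = 1.0e6     # assumed 1 million IOs serviced per core per second
CACHE_MISS_CYCLES = 300   # assumed cost of one cache miss to DRAM, in cycles

cycles_per_io = CLOCK_HZ / IOPS_PER_CORE
print(f"Cycle budget per IO: {cycles_per_io:.0f} cycles")

# A handful of cache misses already consumes most of that budget.
for misses in (2, 5, 10):
    spent = misses * CACHE_MISS_CYCLES
    print(f"{misses} cache misses -> {spent} cycles "
          f"({100 * spent / cycles_per_io:.0f}% of the budget)")
```

Under these assumptions a core has roughly 3,000 cycles per IO, and ten cache misses consume the entire budget, leaving nothing for the storage service itself.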

These two bottlenecks create near-constant pressure to upgrade the servers running the storage services (parallel file system, RAID, volume management, …), either in number or in performance, which makes it all the more important to be able to scale compute capacity independently from storage capacity.

An alternative: the NVMe-oF protocol

Soon after the creation of the NVMe protocol, an Ethernet-capable version was defined: NVMe over Fabrics (NVMe-oF). Together, the NVMe and NVMe-oF protocols give data centers a significant opportunity to define new storage architectures that are more flexible, efficient and better optimized.

Since enterprise storage customers require a “pay-as-you-grow” model, in which storage capacity increases without a matching increase in compute capacity, being able to scale capacity independently from compute is paramount.

Let us look at how the NVMe-oF standard addresses this scaling need. NVMe-oF defines a standard protocol designed to transport the PCIe NVMe storage protocol efficiently over a “fabric” (network). This protocol scales out to large numbers of NVMe devices and extends the distance over which NVMe devices and NVMe subsystems can be accessed within a data center, whilst retaining low latency and high IOPS.
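
To make this more concrete, the sketch below configures a software NVMe-oF target through the Linux kernel nvmet configfs interface, exposing a local NVMe namespace over an RDMA fabric. It is a minimal illustration only: the NQN, device path, IP address and port are hypothetical, and it must run as root on a host with the nvmet and nvmet-rdma modules loaded.

```python
# Minimal NVMe-oF target setup via the Linux nvmet configfs interface.
# Illustrative values only; requires root and the nvmet/nvmet-rdma kernel modules.
from pathlib import Path

NVMET = Path("/sys/kernel/config/nvmet")
NQN = "nqn.2014-08.org.example:jbof-demo"   # hypothetical subsystem NQN
DEVICE = "/dev/nvme0n1"                     # hypothetical local NVMe namespace
TRADDR, TRSVCID = "192.168.1.10", "4420"    # hypothetical fabric address / port

# 1. Create the subsystem and (for this demo) allow any host to connect.
subsys = NVMET / "subsystems" / NQN
subsys.mkdir(parents=True)
(subsys / "attr_allow_any_host").write_text("1\n")

# 2. Attach the local NVMe device as namespace 1 and enable it.
ns = subsys / "namespaces" / "1"
ns.mkdir(parents=True)
(ns / "device_path").write_text(f"{DEVICE}\n")
(ns / "enable").write_text("1\n")

# 3. Create an RDMA port on the back-end fabric and expose the subsystem on it.
port = NVMET / "ports" / "1"
port.mkdir(parents=True)
(port / "addr_trtype").write_text("rdma\n")
(port / "addr_adrfam").write_text("ipv4\n")
(port / "addr_traddr").write_text(f"{TRADDR}\n")
(port / "addr_trsvcid").write_text(f"{TRSVCID}\n")
(port / "subsystems" / NQN).symlink_to(subsys)

print(f"Subsystem {NQN} exported over RDMA at {TRADDR}:{TRSVCID}")
```

A remote host would then discover and connect to the exported namespace with standard tooling (for example nvme-cli’s “nvme connect”), after which it appears as a local NVMe block device.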

Adoption of NVMe-oF as the back-end fabric (BE fabric) allows the disaggregation of Storage Head Nodes (SHNs, the CPUs running the storage services) from the storage capacity, making these two elements independently scalable. A customer can then build a wide range of storage solutions merely by choosing the number of SHNs and JBOFs (Just a Bunch Of Flash), without incurring further latency or other performance impact on the storage solution, as the sizing sketch below illustrates.
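
The sizing sketch below shows how the two counts can be derived from independent requirements; every figure in it (capacity and IOPS targets, JBOF capacity, per-SHN throughput) is an illustrative assumption rather than a specification of any particular product.

```python
import math

# Illustrative sizing of a disaggregated cluster; all figures are assumptions.
required_capacity_tb = 2000   # total flash capacity wanted, in TB
required_miops = 20           # total throughput wanted, in millions of IOPS

jbof_capacity_tb = 368        # e.g. 24 x 15.36 TB NVMe SSDs per JBOF
shn_miops = 4                 # millions of IOPS one Storage Head Node can service

jbofs = math.ceil(required_capacity_tb / jbof_capacity_tb)   # capacity-driven
shns = math.ceil(required_miops / shn_miops)                 # compute-driven

print(f"JBOFs needed: {jbofs}")
print(f"SHNs needed:  {shns}")
# Growing capacity later only adds JBOFs; the SHN count is untouched, and vice versa.
```

In a converged design, by contrast, adding capacity also forces the purchase of the compute attached to it, which is exactly what the pay-as-you-grow model is meant to avoid.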

To avoid vendor lock-in, companies now build their own storage solutions using off-the-shelf components and Software Defined Storage. Riding this trend, several companies (Supermicro, Newisys, …) now provide NVMe-oF-ready Just a Bunch of Flash enclosures, on the assumption that new products can combine the functionality of a CPU and an RDMA NIC in a PCIe card form factor.

The next step: Kalray’s NVMe-oF Target Controller (KTC) 

The Kalray NVMe-oF Target Controller (KTC) does precisely this.

The KTC product offering is a family of PCIe cards integrating CPU, DDR and RNIC functionality, delivered with all the optimized software required to implement the NVMe-oF target controller function. KTC eases the transition to a significantly more efficient NVMe-oF JBOF.

The key operational benefits that KTC offers are:

  • Achievement of NVMe SSDs’ performance potential in a disaggregated architecture
  • Higher density whilst reducing power consumption
  • Significant increase in performance and performance per Watt
  • Simple PCIe JBOF upgrade

Moreover, by leveraging Kalray’s Massively Parallel Processor Array (MPPA®) technology, KTC’s programmability provides the following benefits:

  • Become the universal NVMe-oF solution with a simple software upgrade to support iWARP
  • Keep pace with the evolution of the NVMe-oF protocol and support emerging variants such as NVMe-TCP
  • Offer the full flexibility to support in-band or out-of-band board management, taking advantage of the Linux programming environment
  • Realize a smarter JBOF by using KTC’s spare processing capacity for inline user processing such as AI, Erasure Coding, Dedup, RAID, Encryption and compression (a toy illustration follows this list)
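
As a flavor of what such inline processing could look like, here is a deliberately simple sketch of RAID-style XOR parity, one of the features listed above. It is purely illustrative and not Kalray code; a real target controller would use optimized, hardware-assisted erasure coding rather than this byte-by-byte loop.

```python
# Toy RAID-5-style XOR parity, illustrating inline data processing on a target.
def xor_parity(blocks):
    """Compute the parity block of equally sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(surviving, parity):
    """Recover one lost data block from the surviving blocks and the parity."""
    return xor_parity(surviving + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(data)
# Lose the middle block, then rebuild it from the survivors and the parity.
assert rebuild([data[0], data[2]], parity) == data[1]
print("lost block recovered:", rebuild([data[0], data[2]], parity))
```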

The unique contribution of the Kalray Target Controller toward supporting a disaggregated storage architecture is that it delivers highly optimized, low-latency, high-MIOPS performance together with inline data processing on a programmable platform that can track evolving storage standards.

Coming up soon: “3 things to know before NVMe-oF deployment”