Rack Scale Design (RSD), pushed by Intel, isn’t new. In fact, the first international workshop on the topic appeared back in 2014. And the Ericsson Hyperscale Datacenter System 8000, the first commercial implementation, dates back to 2016. But it seems 2018 may be the year RSD finally gains traction.
Why RSD? Why Now?
The idea behind RSD is to free data center administrators to scale and control their assets not as individual servers and storage components, but as full racks of resources, or even pools of several racks known as pods. This promises substantial gains in efficiency and automation, just in time for the 5G roll-out, Big Data advancements, and other demands bearing down on telecom carriers, cloud services providers (CSPs), and business enterprises.
The folks over at Cloud Foundry are among the RSD believers. First of all, they cite the various OEM products now available. The Ericsson system was a start: a set of highly standardized x86 servers inside a specialized rack with shared Ethernet switching and power/cooling. Firmware layers allow administration of the rack for asset discovery and management of switches, server nodes, storage, and more, with open APIs providing control. But now there are a variety of options on the market, including the Dell EMC DSS 9000, the HPE Cloudline, and the Huawei FusionServer E9000 and X6800.
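Those open APIs generally follow the Redfish model of exposing rack assets as a RESTful resource tree, where discovery means walking a collection and following per-resource links. As a minimal sketch (the endpoint path and JSON shape here are illustrative of the Redfish style, not taken from any specific product):

```python
import json

# Hypothetical response a management controller might return from
# GET /redfish/v1/Systems (field names illustrative).
SAMPLE_SYSTEMS = json.loads("""
{
  "@odata.id": "/redfish/v1/Systems",
  "Members@odata.count": 2,
  "Members": [
    {"@odata.id": "/redfish/v1/Systems/node1"},
    {"@odata.id": "/redfish/v1/Systems/node2"}
  ]
}
""")

def member_links(collection: dict) -> list:
    """Extract the per-resource links from a Redfish-style collection."""
    return [m["@odata.id"] for m in collection.get("Members", [])]

print(member_links(SAMPLE_SYSTEMS))
# A real client would then GET each link to inspect that node's
# processors, memory, drives, and network interfaces.
```

The same pattern applies to switches, storage, and chassis collections, which is what makes rack-wide asset discovery scriptable.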
And there is evidence of adoption. CenturyLink, for example, is using the DSS 9000 to offer build-to-order public and private cloud infrastructure for customers.
To spur greater uptake, Intel has been opening resource centers for customers to test rack scale systems, and Dell EMC has the DSS 9000 up for evaluation in its Amsterdam Solution Center. Supermicro in San Jose also permits test drives, and more labs are expected to open soon.
New RSD Version and New Capabilities
Perhaps as important as the current product ecosystem are the signs of rapid progress in RSD. Intel’s next version is anticipated, and the company is also expected to release reference code for NVMe (non-volatile memory express) over Fabrics. NVMe-oF will enable fast, easily pooled storage across multiple racks, as Intel and Supermicro demonstrated at a conference in December. In other news, Canonical, the company behind Ubuntu, has integrated Intel RSD APIs into its MAAS, Juju, and OpenStack distributions, while Red Hat will support Redfish APIs in OpenStack Platform 12. And demonstrations at KubeCon 2017 showed how to use Intel RSD with container environments like the SUSE CaaS Platform.
With all these pieces coming together, even skeptics are starting to open their minds to RSD, meaning 2018 could be the time to look seriously at the hardware solutions coming off the line, as well as the new technologies just around the corner. Ask Intel about Rack Scale Design and they’ll describe it as “a revolutionary, new architecture that disaggregates compute, storage, and network resources, and introduces the ability to more efficiently pool and utilize these resources.”
But what does that really mean? For many data center operators and cloud services providers, it may simply mean the ability to compete better with hyperscale providers like Google. That’s because RSD promises benefits similar to those the supersized CSPs get from their purpose-built hardware and proprietary, software-based automation. The goal is to improve utilization rates, which rarely exceed 40% in data centers today and often fall far below that, according to Data Center Dynamics, even with increasing virtualization.
The means to that end is to “break up” traditional IT systems and their fixed ratios of compute, memory, storage, and so on. Once disaggregated, these resources can be composed into logical systems tailored to specific workloads. When a workload is complete, the system can be disassembled and its resources returned to the pool for reuse. These temporary, purpose-built systems are fully optimized to the application, thus avoiding overprovisioning. And because it all happens in software, automation will become increasingly important with RSD, which already works with various cloud and virtualization environments.
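The compose-use-disassemble lifecycle described above boils down to a few API calls against a pod manager. The sketch below shows what an allocation request might look like; the field names, endpoint paths, and the `allocation_request` helper are all illustrative assumptions, since a real pod manager defines its own schema:

```python
def allocation_request(cpu_cores: int, memory_gib: int, storage_gib: int) -> dict:
    """Build a hypothetical resource-allocation payload for composing
    a logical node from disaggregated pools (field names illustrative)."""
    return {
        "Name": "analytics-node",
        "Processors": [{"TotalCores": cpu_cores}],
        "Memory": [{"CapacityMiB": memory_gib * 1024}],
        "LocalDrives": [{"CapacityGiB": storage_gib}],
    }

# Lifecycle, as the paragraph describes (paths illustrative):
#   POST   /redfish/v1/Nodes/Actions/Allocate  with this payload -> compose
#   ...run the workload on the composed node...
#   DELETE /redfish/v1/Nodes/<id>  -> disassemble; resources return to the pool
payload = allocation_request(cpu_cores=16, memory_gib=64, storage_gib=500)
print(payload["Memory"][0]["CapacityMiB"])
```

Because the request is just data, an orchestrator can size each composed node to the workload at hand, which is where the utilization gains come from.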
A New Era of Reusability
RSD proponents talk about dynamic system configuration, API-based manageability, and the technology’s interoperability and flexibility. As we’re in the business of helping clients make the most of their IT investments, we have to admit to being highly interested in RSD’s potential to change upgradability and end-of-life for IT systems. How does that work? Remember that with RSD, components are no longer dedicated to specific servers or preconfigured systems. They are disaggregated, so each can be replaced independently: a new CPU could be added this month and more HDD storage next year. Keeping the power/cooling, chassis, and other components in place could reduce refresh costs by 40% or more, and Intel is reporting savings of 77% in technician time.
In many ways, this aspect of RSD builds on the upgrade strategies Park Place Technologies already employs to get maximum life out of IT hardware. As such, we’re interested to see how RSD evolves from carriers to CSPs to the enterprise and how it will fill out the IT professional’s cost-saving toolbox.