Moore’s Law continues to drive innovation in the speed of computing. In particular, processors and memory continue to push the boundaries of latency and performance. Persistent storage is always playing catch-up in a game where a single I/O to disk takes aeons compared to a load/store from DRAM. Flash storage improved latency and throughput at the media level, while NVMe is starting to address the optimisation needed in the storage protocol itself. NVMe and local storage highlight the benefits of moving persistent media closer to the CPU: the improvements in NVMe shorten the I/O path and move storage onto a bus with direct memory access. Storage Class Memory (SCM) and Persistent Memory (PM) offer the ability to bring persistent data even closer to the CPU and to extend the capacity of volatile memory using flash storage.
In a desire for ever-faster systems, Persistent Memory is emerging as a technology that combines DRAM and NAND to accelerate I/O on the memory bus. Put simply, PM can be thought of as persistent storage in a DIMM slot; hence we see the name NVDIMM, or Non-Volatile DIMM, being used. The memory bus is as close to the CPU as you can get (without being part of the CPU itself), which offers the ability to deliver storage with extremely low latency. However, there are some technical challenges.
DRAM is a volatile medium. We expect the contents to be lost when a server is turned off or rebooted. At boot time, a server scans the memory slots and determines what capacity of volatile memory is available. This information is then passed to the operating system. Servers with PM installed require changes to both the server BIOS and operating system to use the technology effectively. Although NVDIMM devices are electrically identical to standard DIMMs, the operating system needs to know that an NVDIMM device is present in order to treat it as persistent. Otherwise, an NVDIMM would simply look like a small DIMM card. In addition, the BIOS needs to know that the NVDIMM contents should be preserved across a reboot, which means a different process for checking and clearing memory. It wouldn’t be desirable to wipe an NVDIMM at boot time.
NVDIMM standards have been established by JEDEC, an organisation whose roots can be traced back to the 1920s and the earliest days of radio development. JEDEC stands for Joint Electron Device Engineering Council, which when named in 1958, probably had more relevance than it does today. JEDEC standards include DRAM and flash components. It makes sense for this to be extended to cover PM and SCM solutions, which essentially combine both of these technologies.
JEDEC has established three NVDIMM standards.
NVDIMM-N

NVDIMM-N is currently the most popular format and the only one available as a product today, with capacities of 8, 16 and 32GB. This format uses DRAM as the primary I/O area, with an equivalent capacity of NAND used for vaulting a copy of the data on power loss. NVDIMM-N uses super-capacitors to power the data save process when main power is lost. An NVDIMM-N device can be written to as either a block or byte-addressable device. Byte addressability means individual memory addresses can be written with load/store instructions, whereas block addressability means using read/write instructions that access an entire block of data at a time.
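The contrast between the two access modes can be sketched with a short illustrative example. Real byte-addressable PM is accessed with CPU load/store instructions against a memory-mapped region (for example via Linux DAX); here an ordinary memory-mapped file (the hypothetical `nvdimm.img`) stands in for the NVDIMM, purely to show the difference in granularity.

```python
import mmap
import os

BLOCK_SIZE = 4096  # a typical block size for block-mode access

# A small file standing in for the NVDIMM capacity (illustrative only).
fd = os.open("nvdimm.img", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, BLOCK_SIZE * 4)

# Byte-addressable: map the region and update a single byte in place,
# the way a load/store to a DAX-mapped NVDIMM-N region would.
with mmap.mmap(fd, BLOCK_SIZE * 4) as pm:
    pm[123] = 0xFF            # store to one address
    value = pm[123]           # load from one address

# Block-addressable: read and rewrite a whole 4 KiB block,
# as a block driver over the same media would have to.
os.lseek(fd, 0, os.SEEK_SET)
block = bytearray(os.read(fd, BLOCK_SIZE))
block[123] = 0xFF
os.lseek(fd, 0, os.SEEK_SET)
os.write(fd, block)

os.close(fd)
```

Both paths end up changing the same byte, but the block path must move an entire block through the I/O stack to do it, which is where byte addressability wins on latency for small updates.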
NVDIMM-F

NVDIMM-F uses flash storage as the only storage medium. It provides block-level addressability to flash storage, with an onboard controller translating system memory requests into operations against the NAND.
NVDIMM-P

NVDIMM-P, the third variation, uses both DRAM and NAND, with significantly more NAND than DRAM. Although the NVDIMM-P standard hasn’t been fully released, the implication is that the DRAM acts as a cache that either optimises writes to/from the NAND or is used to improve latency.
What purposes do the three products serve? NVDIMM-N delivers DRAM-level performance and simply adds persistence, but will be more expensive than DRAM because of the additional NAND and controller; the extra cost here buys persistence. NVDIMM-F provides greater capacity and persistence at the expense of performance, but at lower pricing than DRAM. Bridging the gap is NVDIMM-P, which should be cheaper than DRAM (due to the ratio of NAND to DRAM onboard) while offering persistence and near-DRAM performance.
Storage Class Memory
SCM products offer a slightly different approach to the use of NAND in that they extend the apparent capacity of a standard DIMM by using NAND as a backing store and DRAM as a cache for active data. This gives the ability to deploy servers with very high DRAM capacities and at a lower cost than using traditional DRAM alone. SCM solutions look and operate like standard DRAM and shouldn’t require any BIOS or O/S changes to use.
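The DRAM-as-cache arrangement can be modelled with a toy sketch. The class below is purely illustrative (the name `ScmModel`, the capacities and the LRU policy are all assumptions, not taken from any real product): a small "DRAM" cache sits in front of a larger "NAND" backing store, so the apparent capacity is the sum of the two while hot data is served from the fast tier.

```python
from collections import OrderedDict

class ScmModel:
    """Toy model of an SCM module: a small, fast 'DRAM' cache in front
    of a larger, slower 'NAND' backing store. Capacities and the LRU
    write-back policy are illustrative assumptions only."""

    def __init__(self, dram_lines=4):
        self.dram = OrderedDict()   # cache: address -> data
        self.nand = {}              # backing store: address -> data
        self.dram_lines = dram_lines

    def write(self, addr, data):
        self.dram[addr] = data
        self.dram.move_to_end(addr)   # mark as most recently used
        self._evict()

    def read(self, addr):
        if addr in self.dram:         # hit: served at DRAM speed
            self.dram.move_to_end(addr)
            return self.dram[addr]
        data = self.nand[addr]        # miss: fetched from slower NAND
        self.write(addr, data)        # populate the cache on the way back
        return data

    def _evict(self):
        # Write back least-recently-used lines once the cache is full.
        while len(self.dram) > self.dram_lines:
            addr, data = self.dram.popitem(last=False)
            self.nand[addr] = data
```

The host sees one large memory device; the onboard logic decides which lines live in DRAM at any moment, which is why no BIOS or O/S change is needed.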
Netlist Inc has a number of products in the SCM and PM spaces. HybriDIMM is an SCM product that utilises DRAM and NAND to act as a memory device, a block storage device or persistent memory. As a memory device, HybriDIMM offers up to 4 times the capacity of traditional DDR4 DRAM (512GB), with 1024GB models planned. NVvault DDR4 is an NVDIMM-N device in either 8GB or 16GB capacities. Netlist also has a DDR3 version (NVvault DDR3 NVDIMM). Prior to the company’s demise, Diablo Technologies had developed Memory1, an SCM product that offered up to 256GB of DDR4 DIMM capacity. Diablo also developed an NVDIMM product back in 2013 called TeraDIMM that was marketed with SMART Storage as ULLtraDIMM. SMART Storage was acquired by SanDisk in 2013.
For SCM products that extend the apparent DRAM capacity of a server, there are potential performance improvements to be gained by not using the multi-processor interconnect in NUMA architectures. This was discussed by Diablo Technologies at Tech Field Day 10 (check out the Performance Benchmarking presentation). Diablo also showed benchmarking against use cases such as I/O-intensive analytics.
For NVDIMM, there are benefits to any application needing to maintain persistence with memory-like performance. Microsoft has two informative videos that show the implementation of NVDIMM as block and byte-addressable devices that can be formatted as disks or used with Storage Spaces Direct (video 1, video 2). At an application level, Microsoft has demonstrated accelerated performance of SQL Server 2016 using NVDIMM technology. For general adoption of the technology, we can expect applications to be re-written or updated to be aware of any persistent storage installed in a server.
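What "persistence-aware" means for an application can be sketched as an ordering discipline: persist the data first, then persist the metadata that makes it valid. On real persistent memory the flush would be a CPU cache-line flush (for example via PMDK's pmem_persist); here `mmap.flush()` on an ordinary file (the hypothetical `pmlog.dat`) stands in for it, purely to show the pattern.

```python
import mmap
import os
import struct

# A minimal persistence-aware append log. The first 8 bytes hold the
# count of valid payload bytes; records are appended after the header.
fd = os.open("pmlog.dat", os.O_RDWR | os.O_CREAT)
os.ftruncate(fd, 0)      # start from an empty, zero-filled region
os.ftruncate(fd, 4096)
pm = mmap.mmap(fd, 4096)

def append_record(payload: bytes):
    used = struct.unpack_from("<Q", pm, 0)[0]   # current valid length
    offset = 8 + used
    pm[offset:offset + len(payload)] = payload
    pm.flush()                                  # 1. persist the data itself
    struct.pack_into("<Q", pm, 0, used + len(payload))
    pm.flush()                                  # 2. then persist the header
    # If power fails between the two flushes, the header still records
    # the old length, so a torn record is never considered valid.

append_record(b"hello")
append_record(b"world")
```

The two-step flush is the crux: a byte-addressable device gives no block-level atomicity, so the application itself must order its stores, which is why existing software needs updating to exploit PM safely.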
The Architect’s View
SCM and PM offer potentially huge performance gains by further reducing the I/O path from processor to persistent storage. In this article, we’ve defined SCM as volatile memory whose capacity is extended with NAND, and PM as “persistent DRAM”; however, the terms are generally interchangeable. As the market develops, a combination of SCM and NVDIMM-P could provide individual servers with much more horsepower and significantly improve TCO. From a storage perspective, I believe vendors are already using NVDIMM technology in storage appliances, although I’ve yet to see any SDS products using the technology.
Last week on the Storage Unpacked podcast, we talked to Rob Peglar, SVP and CTO at Symbolic IO. Rob is also on the board of SNIA and has an interest in persistent memory. You can listen and subscribe to the podcast here. The discussion gives some further insight into the technology and where we might see it headed.
- Performance Analysis of SAS/SATA and NVMe SSDs
- Has NVMe Killed off NVDIMM?
- Tech Field Day 10 Preview: Diablo Technologies
- The Persistence of Memory with Rob Peglar (Storage Unpacked, retrieved 6 February 2018)
- SMART Storage: Super DIMM sum adds up to tasty flash soup (The Register, retrieved 6 February 2018)
- JEDEC Announces Support for NVDIMM Hybrid Memory Modules (JEDEC Press Release, retrieved 6 February 2018)
- JEDEC DDR5 and NVDIMM-P Standards Under Development (JEDEC Press Release, retrieved 6 February 2018)
- Guess who’s developing storage class memory kit? And cooking the chips on the side… (The Register, retrieved 6 February 2018)
- SNIA Persistent Memory Summit 2018 (SNIA website, retrieved 6 February 2018)
- Diablo Technologies Presents at Tech Field Day 10 (Tech Field Day website, retrieved 6 February 2018)
Copyright (c) 2009-2018 – Post #9A48 – Chris M Evans, first published on https://blog.architecting.it, do not reproduce without permission. Photo credit iStock.