At Pure Storage Accelerate 2019 in Austin, TX, Pure announced the availability of DirectMemory. What is DirectMemory?
An Optane-Powered Module For the FlashArray //X
DirectMemory is a read cache memory module developed for the FlashArray //X, powered by Storage Class Memory: specifically, Intel Optane DC NVMe SSD drives (Optane is Intel's commercial name for 3D XPoint, pronounced "three-D crosspoint", memory).
Two advantages of 3D XPoint over TLC 3D NAND are its very low latency (in the 10 μs range) and its durability. These come, however, at a much higher $/GB price. Until now, this has confined Optane to two roles: supporting very low latency applications, or acting as a cache tier for storage systems, a role ideally suited to its low latency and very high media endurance.
Caching For The Caching Gods
With DirectMemory, Pure Storage packages several Intel Optane DC NVMe SSD drives on trays compatible with its FlashArray //X modules. Two configurations are available: 3 TB (4 × 750 GB drives) or 6 TB (8 × 750 GB drives).
“DirectMemory Cache is a high-speed caching system that reduces read latency for high-locality, performance-critical applications”
This statement by Pure Storage needs to be put into context. According to their definition, performance-critical applications are latency-sensitive, high throughput applications, and high-locality means workloads that often reuse the same dataset.
And the key phrase in the above statement is "reduces read latency". Indeed, we stated at the beginning of this article that DirectMemory is a read cache module, but it's worth stating again: this will not accelerate writes.
The diagrams below describe the cache operation of DirectMemory, and how read operations are different if DirectMemory is present or not.
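The general mechanics can also be sketched in code. The toy Python below is purely illustrative and not Pure's actual caching logic: the LRU policy, the `ReadCache` class, and the latency figures (10 μs for the Optane-class cache tier, 200 μs assumed for the backing NAND flash) are all assumptions for demonstration. A read that hits the cache is served at cache-tier latency; a miss falls through to the slower flash tier and populates the cache for future reads.

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read cache: hits are served from the fast cache tier,
    misses fall through to the slower flash tier and populate the cache.
    Latency figures are illustrative, not measured values."""

    def __init__(self, capacity, cache_latency_us=10, flash_latency_us=200):
        self.capacity = capacity
        self.cache_latency_us = cache_latency_us   # assumed Optane-class latency
        self.flash_latency_us = flash_latency_us   # assumed NAND-class latency
        self._store = OrderedDict()

    def read(self, block_id):
        """Return the simulated latency (in us) of reading one block."""
        if block_id in self._store:
            self._store.move_to_end(block_id)      # LRU: refresh recency on a hit
            return self.cache_latency_us
        if len(self._store) >= self.capacity:
            self._store.popitem(last=False)        # evict least recently used block
        self._store[block_id] = True               # cache the block after the miss
        return self.flash_latency_us

# A high-locality workload ("a" is re-read) benefits; cold reads do not.
cache = ReadCache(capacity=2)
latencies = [cache.read(b) for b in ["a", "b", "a", "a", "c", "b"]]
print(latencies)  # → [200, 200, 10, 10, 200, 200]
```

Note how only the repeated reads of "a" land in the fast tier, which is exactly why Pure scopes the benefit to high-locality workloads.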
Benefits of DirectMemory
Compared to a regular FlashArray //X with no DirectMemory Cache module, latency improvements of at least 20% were observed in up to 80% of the storage arrays analyzed by Pure Storage (the data comes from Pure1 Meta, their AI-backed analytics engine).
A smaller but still substantial subset of arrays (40% of the analyzed base) saw latency reductions of 30% to 50% with DirectMemory enabled, compared to "vanilla" //X arrays.
DirectMemory Cache is a "ready, set, go" kind of solution: it's enabled directly in Purity FA, it requires no configuration, and enabling it is non-disruptive.
DirectMemory is interesting not only because it’s a read cache for the FlashArray //X. The value is that it improves performance out of the box, without having to go through a complicated upgrade path. In that, it stays true to Pure Storage’s philosophy of ‘No Forklift Upgrades’ and allows Pure customers to get more value out of their investments.
The other value is that of the cache size. With 3 TB and 6 TB modules and 10 μs latencies, 3D XPoint provides a massive hot cache tier (can we call that a tier?) that would be unthinkable if we only had NVRAM at hand.
Of course, DirectMemory needs to be evaluated for its capabilities. Though it may be obvious, it will provide no value for write-intensive workloads; it will deliver the best value for read-intensive workloads that often reuse the same datasets, i.e. where the same data is read repeatedly and thus ideally resides in the cache.
Customers should therefore talk with their Pure Storage sales folks and ideally perform an evaluation of DirectMemory.