On 6th March 2019, NGD Systems, Inc. announced the general availability of their Newport computational storage platform. While the Newport platform looks like a traditional NVMe drive, there is much more to it.
The Newport drive pairs flash memory modules with an SSD controller ASIC developed on a 14nm process technology (NGD Systems claims this is an industry first). The initial specifications for the Newport platform are as follows:
- Form factor: U.2 15mm (additional form factors coming soon)
- Max Raw Capacity: 16TB
- Max Active Power: 12W
- Interface / Protocol: PCIe Gen3 x4; NVMe 1.3
NGD Systems also claims that the product suffers no performance loss from power throttling. NGD Systems customers were previously able to leverage the Catalina-1 and Catalina-2 computational storage devices; the company announced that applications developed for the Catalina-2 platform will be fully compatible with the Newport platform.
The reader should know that computational storage devices aren’t just storage capacity: they also include processing capabilities and a development environment, so computational workloads can be executed on the data directly in-situ (on the drive itself, without moving the data out to the system CPU). Hence the terminology “in-situ processing” and “computational storage”.
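To make the concept concrete, here is a minimal sketch contrasting conventional host-side processing with in-situ processing. The class and method names are hypothetical illustrations and do not reflect NGD Systems’ actual API; the point is simply that when the filter runs on the drive, only matching records cross the interface.

```python
class ConventionalDrive:
    """Host must read every record over the bus, then filter on the CPU."""
    def __init__(self, records):
        self.records = records
        self.bytes_transferred = 0

    def read_all(self):
        # Every byte travels over the PCIe/NVMe interface to the host.
        self.bytes_transferred += sum(len(r) for r in self.records)
        return list(self.records)


class ComputationalDrive:
    """The predicate executes on the drive; only matches cross the bus."""
    def __init__(self, records):
        self.records = records
        self.bytes_transferred = 0

    def query(self, predicate):
        matches = [r for r in self.records if predicate(r)]  # runs in-situ
        self.bytes_transferred += sum(len(r) for r in matches)
        return matches


records = ["error: disk", "ok", "error: net", "ok", "ok"]

conventional = ConventionalDrive(records)
host_side = [r for r in conventional.read_all() if r.startswith("error")]

computational = ComputationalDrive(records)
in_situ = computational.query(lambda r: r.startswith("error"))

# Same answer either way, but far less data moved over the interface.
assert host_side == in_situ
print(conventional.bytes_transferred, computational.bytes_transferred)  # 27 21
```

The saving scales with selectivity: the more data the query discards, the more bus traffic and host CPU time in-situ processing avoids.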
What is “In-Situ Processing” and why does it matter
Storing data on flash media is now widespread and the entire community of end users has come to appreciate the huge gains in latency and performance offered by flash, but there are still use cases where there is significant room for improvement.
One of the challenges that in-situ processing addresses is the steady growth of data that must be processed in real-time. Whether we are talking about telemetry, optical recognition systems, positioning, flight data or more, there are vast amounts of data that are or will be generated locally at the edge of an organization’s network and that cannot be sent to the core / cloud networks for processing.
A different challenge, faced by hyperscalers, is the need to scale compute capabilities as the amount of data they store grows: as more data is generated, more compute power is needed to process it. While this may not be a pressing challenge for smaller data centers, hyperscalers (think Amazon, Google, and Facebook, but also many lesser-known companies) cannot afford to grow racks and racks of compute nodes indefinitely.
There ought to be a more efficient way to process data, and computational storage provides one by moving (or delegating) a large portion of data processing and analysis directly to the storage device.
Considering how many U.2 storage devices a rackmount server can currently host, the potential is clear: fewer physical servers with less expensive CPUs can deliver very dense storage alongside substantial compute power that is efficient from both a performance and a power consumption perspective.
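Some back-of-the-envelope math illustrates the density argument. The 16TB capacity and 12W max active power come from the specifications above; the 24-bay 2U chassis is an assumption for illustration (a common U.2 server configuration), not a vendor figure.

```python
# Assumed chassis: 2U server with 24 U.2 15mm bays (illustrative, not a
# vendor spec). Drive figures are from the Newport specifications above.
bays = 24
capacity_tb_per_drive = 16    # max raw capacity per Newport drive
max_active_power_w = 12       # max active power per drive

total_capacity_tb = bays * capacity_tb_per_drive     # raw TB per server
total_drive_power_w = bays * max_active_power_w      # worst-case drive power

print(f"{total_capacity_tb} TB raw, {total_drive_power_w} W max drive power")
# -> 384 TB raw, 288 W max drive power
```

Under these assumptions, a single 2U server holds 384TB raw while every one of the 24 drives also contributes its own processing capability, all within a few hundred watts of drive power.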
We have covered the topic of computational storage extensively, releasing a research paper on this topic in October 2018, then covering the emergence of an industry-wide technical working group within the SNIA.
TECHunplugged’s Take
TECHunplugged was first exposed to computational storage in September 2018 and has been enthusiastic ever since about this technology and the promises it brings.
With Newport, NGD Systems (a company built by many semiconductor and flash industry veterans) introduces its third product generation. We expect computational storage’s momentum to continue growing in its two strongest areas: hyperscalers and edge computing.
This is not only a great achievement for NGD Systems, but also for the entire ecosystem of companies developing computational storage technologies as well as their customers. It could very well be that innovations created in this space will benefit the broader storage industry in the years to come.