Last week I attended Cloud Field Day 11, where one company, Pliops, delivered their second presentation in six months. I was also a delegate at Storage Field Day 21, where Pliops made their debut, and I covered their product, the Pliops Storage Processor (PSP), in a dedicated blog post.
While I've already covered the Pliops Storage Processor in that post, not everyone was at Storage Field Day 21 (the Cloud Field Day audience is quite different), and six months have passed, so a short recap is in order.
In a nutshell, the PSP is a PCIe card that performs data reduction and optimization: it reduces CPU usage on the hosts and significantly decreases latency while also increasing read/write IOPS. It inserts itself between the application layer and the hardware storage layer. From a deployment perspective, it can run on a compute node (for local operations on direct-attached storage), on a storage node (accessed through a storage engine API), or in the cloud, again through a storage engine API, but delivered as a cloud-based service.
Niche Use Cases and Adoption Risk
The product has its merits, and as Pliops explained it can help improve performance for databases such as MySQL, MongoDB, MariaDB and PostgreSQL, where Pliops substitutes for storage engines such as LevelDB, InnoDB or RocksDB (pardon my ignorance, I am not an expert in this area). Pliops also mentions use cases such as Ceph, Redis and Cassandra (key-value based APIs), as well as BI / analytics platforms such as Elasticsearch, Spark and SAP HANA.
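To make the storage-engine substitution idea a bit more concrete, here is a minimal, purely illustrative sketch of the kind of key-value interface that engines such as RocksDB or LevelDB expose. The point is that an application talks to the engine contract, not the backend, so a hardware accelerator could slot in behind the same interface. All names here are hypothetical and not the actual Pliops API:

```python
class KeyValueEngine:
    """Illustrative key-value storage-engine interface (hypothetical).

    Engines like RocksDB or LevelDB expose put/get/delete semantics of
    this shape; an accelerator card could implement the same contract
    while offloading indexing, compression and compaction to hardware.
    """

    def __init__(self):
        self._data = {}  # in-memory dict stands in for the real backend

    def put(self, key: bytes, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: bytes):
        # Returns the stored value, or None when the key is absent.
        return self._data.get(key)

    def delete(self, key: bytes) -> None:
        self._data.pop(key, None)


# The application (e.g. a database) only sees the engine API, so the
# backend implementation can be swapped without touching application code.
engine = KeyValueEngine()
engine.put(b"user:42", b'{"name": "Ada"}')
```

Because the application-facing contract stays identical, swapping the dict-backed sketch for a hardware-backed engine would be transparent to the caller; that interchangeability is what makes the substitution story plausible.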
Organizations running those solutions usually rely on them to support critical business processes, which makes them incredibly important; performance and cost optimizations are therefore topics worth addressing. But looking at the sheer scale of enterprise IT solutions available globally, performance improvement and data reduction for a small subset of applications remains a niche use case.
The point here is not that optimizing SAP HANA or Redis or anything else at scale makes no sense. Of course it does. But in general, an organization is not going to run hundreds or thousands of instances of the same application: they will usually build dedicated infrastructure to support some of those critical workloads, and if they do things well, they will have sized that infrastructure to sustain existing workloads and additional demand, with some headroom to cover organic growth.
From the perspective of a single customer, the narrow focus of a solution such as Pliops has to be carefully weighed. Is the benefit higher than the risk? What happens if the company ceases to exist? Will a lack of support impact my costs? Removing the accelerator means either more hardware (i.e. more CAPEX/OPEX) to sustain performance levels, or degraded performance if no investment is made to compensate for the loss of capabilities.
Product or Technology?
So far, you may think that what I wrote doesn't bode well for Pliops, or is some really bad FUD, or makes no sense at all. But please stick around for a couple more paragraphs.
All of the workloads supported by Pliops can also run in the cloud. And while an organization would have a lot of trouble getting a Pliops card installed into a physical server at a cloud provider (say Azure or AWS), getting those acceleration and data reduction capabilities directly on a cloud compute instance (such as AWS EC2) makes a lot of sense.
They may not necessarily make a lot of sense for the user running their MongoDB, MySQL or SAP HANA Cloud instance (I'm taking shortcuts), but the public cloud provider gets improved performance, significant CPU core usage reduction, improved IOPS / latency, and data reduction on the backend as well. Now imagine doing this at scale on dozens or hundreds of physical servers supporting those instances. Or imagine a private hyperscaler using Pliops across their thousands of servers (think Facebook, Google, Apple, etc.). In fact, if you scroll down to the “Additional Resources” section of this blog post, you will find a demonstration of “Pliops Value for the Cloud”, which clearly indicates the direction.
Suddenly, there’s a lot of value to be found in a solution such as Pliops. So what matters here: the product, or the technology? I’ve come to appreciate Pliops not just as a product I can order and install in a server to deliver outcomes now. Perhaps we IT folks are so accustomed to the enterprise IT context that we fail to see the bigger picture in which hyperscalers operate, because in many cases we fall into the trap of equating “installs into a physical server” with “meant for on-premises infrastructure”.
I haven’t yet answered my own question about where the value of Pliops lies: in the product, or in the technology? I’m convinced that the value resides in the technology, although having a commercially available product is by no means a bad thing. In fact, the product helps demonstrate the value of the technology and intellectual property developed by the Pliops folks. Enterprises, as well as hyperscalers, can test those cards at whatever scale they deem necessary and draw their own conclusions.
While I do not have a crystal ball, my take is that although Pliops can be used in the enterprise world, the use cases remain niche, and I do not believe that selling the product directly to end customers (enterprises) is a financially viable option. I believe instead that the path to success for Pliops is an acquisition by one of the hyperscalers, because that is the best way to unlock the potential of the intellectual property, make the technology available at scale, and achieve significant cost savings and performance improvements.
Whatever the outcome, TECHunplugged wishes Pliops a lot of success. If you want to understand Pliops’ value better, make sure you check out the Cloud Field Day 11 videos in the next section.
The following videos are recordings of the live-streamed Pliops sessions at Cloud Field Day 11: