Containers and cloud-native deployment models are not exactly new; community pundits and experts were already debating them several years ago, well before enterprises adopted them. At the time, containers were seen primarily as stateless, whereas nowadays we see more and more stateful containerized deployments requiring specific storage solutions (see, for example, the GigaOm Key Criteria for Evaluating Kubernetes Data Storage, co-authored by Enrico Signoretti, Arjan Timmerman and Max Mortillaro).
But one thing has remained constant: containers or not, what matters is the data. And data needs to be protected, of course.
Specifics of Cloud-Native Data Protection
In the “legacy world”, data protection primarily consists of backing up filesystems or sets of virtual machines. With containers, this becomes somewhat more complex, because data protection no longer revolves around single components (such as monolithic VMs).
The cloud-native world is application-centric, and several components need to be backed up to restore an application: the containers that constitute the application's executable part, the configuration files that describe the application's properties and tell the container orchestrator how to handle it, and of course the data itself, which more than likely resides on persistent volumes, along with any further dependencies.
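To make the idea concrete, the components above can be thought of as a single application-centric backup unit. The following Python sketch is purely illustrative: the class and field names are our own invention, not any product's API.

```python
from dataclasses import dataclass, field

# Illustrative model of what a cloud-native backup must capture for one
# application (names and fields are hypothetical, not a real product API).
@dataclass
class AppBackup:
    name: str
    workload_manifests: list = field(default_factory=list)  # Deployments, StatefulSets...
    config_objects: list = field(default_factory=list)      # ConfigMaps, Secrets
    volume_snapshots: list = field(default_factory=list)    # snapshots of PersistentVolumeClaims

    def is_restorable(self) -> bool:
        """An application can only be restored if both its executable part
        and its configuration were captured; data volumes are optional for
        purely stateless apps."""
        return bool(self.workload_manifests) and bool(self.config_objects)

backup = AppBackup(
    name="webshop",
    workload_manifests=["deployment/webshop-frontend", "statefulset/webshop-db"],
    config_objects=["configmap/webshop-settings", "secret/webshop-db-creds"],
    volume_snapshots=["snapshot/webshop-db-data-2024-01-01"],
)
print(backup.is_restorable())  # True
```

The point of the model is simply that a restore fails if any one of these component classes is missing, which is exactly why backing up volumes alone is not enough.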
The objective of cloud-native data protection is to protect an application and its state in their entirety, so that in case of failure, operations can be resumed without delay, and without having to fiddle around trying to understand which part is missing. In theory, it should be easy… but on the infrastructure side, cloud-native deployments consist of many components, and a data protection solution should be able to handle that level of complexity.
The massive adoption of cloud-native workloads by the enterprise also means that organizations do not only expect leaner operations and shorter time-to-release cycles from their applications. They also rightfully expect enterprise-grade capabilities, notably in terms of data protection, including disaster recovery and data mobility.
Disaster Recovery and Mobility
Backups are a great way to recover from a failure or an application error, but they do not fully protect against a disaster. The goal of the Disaster Recovery discipline is to recover from a disaster by orchestrating all recovery operations (i.e. restoring all applications and services from backups, but also making sure all the components communicate properly) to resume service, most often from a location that is not affected by the disaster.
Let us take a pause here to reinforce one point: organizations always need to look at the broader context before implementing Disaster Recovery – this broader context is called Business Continuity and should be the highest level in the decision-making pyramid. Disaster Recovery consists of the technical steps to resume business operations, but Business Continuity is all about the business perspective, which should dictate when operations recovery makes sense and when it doesn't.
So, assuming you have a BC plan and several disaster scenarios in place, you need to think about tools that will make disaster recovery possible, i.e. orchestrate the full recovery of the environment (applications and their dependencies) into a DR location, which in many cases is located somewhere in a public cloud.
The same mechanisms used for Disaster Recovery can also apply to application mobility use cases. Sometimes, applications have to be moved around for a variety of reasons: an organization may decide that it no longer wants to run apps on-premises, or it may want to change from one public cloud provider to another, or even repatriate its applications. Seamless, application-centric handling (application, data and dependencies) is therefore essential to the success of data mobility operations.
Kasten K10 – A Brief Overview
We’re thrilled to have joined Cloud Field Day 11 as delegates, where Kasten by Veeam gave a thorough presentation of Kasten K10, their product dedicated to enterprise-grade data protection for Kubernetes, and explained why data protection for Kubernetes requires a different approach, while still staying true to the good old 3-2-1 backup methodology: 3 copies of primary data, on 2 different media types, of which 1 must be kept off-site.
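The 3-2-1 rule is simple enough to express as a check. Here is a minimal sketch (a helper of our own, not part of any product):

```python
def satisfies_3_2_1(copies: int, media_types: int, offsite_copies: int) -> bool:
    """Check the classic 3-2-1 rule: at least 3 copies of the data,
    on at least 2 different media types, with at least 1 copy off-site."""
    return copies >= 3 and media_types >= 2 and offsite_copies >= 1

# Primary data + local snapshot + cloud object storage:
print(satisfies_3_2_1(copies=3, media_types=2, offsite_copies=1))  # True
# Primary data plus a single local copy falls short:
print(satisfies_3_2_1(copies=2, media_types=1, offsite_copies=0))  # False
```

Note that the primary copy counts toward the three, which is why a single local backup plus a cloud replica already satisfies the rule.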
Kubernetes environments are often complex and unique, due to the very rich Kubernetes ecosystem. Kasten customers want to reap the benefits of data protection without having to abandon their freedom of choice, i.e. which distributions they use and how they deploy applications. Kasten K10 supports a broad set of Kubernetes distributions, data services and storage platforms, providing ample coverage of diverse environments by abstracting infrastructure complexity.
Most importantly, customers want their entire application state to be protected so that applications can be restored seamlessly at any time, even in case of a disaster. They also want to be able to migrate their apps anywhere, using simple and proven mechanisms. Finally, they also want a data protection solution that embraces DevOps principles and allows operations to be automated. Kasten delivers on all of those requirements, for example by providing a GUI that also generates CLI code for automation. Of course, security is also key, and Kasten provides ample options there too with RBAC support, encryption, and various IAM integrations.
Kasten K10 enumerates applications and their components through the Kubernetes API, then communicates with infrastructure-level APIs to gather the data and create snapshots where necessary. The data can then be replicated to other sites or to the cloud, providing a true hybrid / multi-cloud experience. The solution also supports disaster recovery across availability zones and regions, making near-zero RTO a reality.
Two other key features of Kasten K10 are the policy-driven data management engine and the solution’s observability capabilities. The policy-driven engine allows custom and default policies to be created and automatically enforced, helping organizations achieve a higher level of consistency with their data protection SLAs. The GUI, which we already mentioned, not only provides a clear and aesthetically pleasing interface, but also a broad range of metrics, helping administrators understand at a glance whether all applications are properly protected; it also supports multi-cluster operations.
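To give a feel for what policy-driven retention means in practice, here is a rough sketch of a tiered retention pass. This is our own simplified illustration, not Kasten's actual implementation: it keeps the newest backup per calendar day up to a daily count, and the newest per ISO week up to a weekly count.

```python
from datetime import datetime

# Illustrative sketch of a policy-driven retention engine (simplified,
# not any vendor's actual logic): keep the newest backup per day and
# per week, up to the counts the policy specifies.
def apply_retention(backups: list, policy: dict) -> set:
    """Return the set of backup timestamps the policy retains."""
    ordered = sorted(backups, reverse=True)  # newest first
    keep, seen_days, seen_weeks = set(), set(), set()
    for b in ordered:  # daily tier: one backup per calendar day
        day = b.date()
        if day not in seen_days and len(seen_days) < policy.get("daily", 0):
            seen_days.add(day)
            keep.add(b)
    for b in ordered:  # weekly tier: one backup per ISO week
        week = tuple(b.isocalendar())[:2]
        if week not in seen_weeks and len(seen_weeks) < policy.get("weekly", 0):
            seen_weeks.add(week)
            keep.add(b)
    return keep

# Ten nightly backups, policy of 3 dailies and 2 weeklies:
backups = [datetime(2024, 1, day) for day in range(1, 11)]
kept = apply_retention(backups, {"daily": 3, "weekly": 2})
print(sorted(kept))
```

With this data the engine retains Jan 8–10 for the daily tier and Jan 7 as the representative of the previous ISO week (Jan 10 already covers the current week), so four backups survive the pass. Real engines layer monthly and yearly tiers on top of the same idea.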
Last important point: ransomware protection is no longer optional, and we strongly encourage enterprises to primarily adopt data protection solutions that incorporate ransomware protection, or at least immutable snapshots. Kasten can store backups on immutable object storage, with the ability to set the retention period for those backups, providing protection not only against ransomware, but also against human errors, as we all know that humans love ruining everything (by mistake or by choice). Kasten supports an open ecosystem and therefore allows immutable data to be stored on a growing list of partner platforms that support immutable object storage, such as AWS or MinIO, among others.
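The essence of immutability is that deletion is refused until the retention window expires, no matter who asks. The sketch below is modeled loosely on object-lock semantics as found in immutable object stores; the function and its parameters are hypothetical, for illustration only.

```python
from datetime import datetime, timedelta

# Hypothetical object-lock style check (illustrative only): a backup may
# be deleted only once its retention window has fully elapsed.
def can_delete(created: datetime, retention_days: int, now: datetime) -> bool:
    return now >= created + timedelta(days=retention_days)

created = datetime(2024, 1, 1)
# Inside the 30-day window: deletion refused, even for an admin (or ransomware).
print(can_delete(created, retention_days=30, now=datetime(2024, 1, 15)))  # False
# After the window elapses, normal lifecycle management resumes.
print(can_delete(created, retention_days=30, now=datetime(2024, 2, 5)))   # True
```

Because the check depends only on the clock and not on the caller's privileges, a compromised credential cannot shorten the window, which is precisely what makes immutable backups a ransomware defense.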
Data protection is always about the data. Changing application deployment models, no matter how lean they are, does not give organizations a pass on protecting their data. Data protection is an always-on, no-opt-out matter. If it’s in production, it should be backed up. I’d even go a step further and say that whatever supports business operations should be covered by a Business Continuity plan, which in turn drives disaster scenarios and disaster recovery plans, as well as data protection mechanisms and requirements.
At Cloud Field Day 11, Kasten demonstrated that they have what it takes to masterfully handle data protection, disaster recovery and mobility in the complex world of cloud-native workloads. Elaborating on all of the solution’s capabilities and merits would turn this blog post into a white paper; instead, we recommend watching the related Cloud Field Day 11 videos in the next section.
The following sessions were recorded at Cloud Field Day 11: