GCP – Introducing Hyperdisk Balanced, a new storage option for stateful Kubernetes workloads
Google Kubernetes Engine (GKE) has emerged as a leading container orchestration platform for deploying and managing containerized applications. But GKE is not limited to running stateless microservices: its flexible design also supports workloads that need to persist data, through tight integrations with storage products such as Persistent Disk, Cloud Storage, and Filestore.
And for organizations that need even stronger throughput and performance, there’s Hyperdisk, Google Cloud’s next-generation block storage service that allows you to modify your capacity, throughput, and IOPS-related performance and tailor it to your workloads, without having to re-deploy your entire stack.
Today, we’re excited to introduce support for Hyperdisk Balanced storage volumes on GKE. Hyperdisk Balanced joins the Hyperdisk Extreme and Hyperdisk Throughput options, and is a good fit for workloads that typically rely on persistent SSDs, such as line-of-business applications, web applications, and databases. Hyperdisk Balanced is supported on third-generation and newer instance types; for instance type compatibility, please refer to this page.
Understanding Hyperdisk
First, what does it mean to fine-tune your throughput with Hyperdisk? And what about tuning IOPS and capacity?
Tuning IOPS means refining the input/output operations per second (IOPS) performance of a storage device. Hyperdisk allows you to provision only the IOPS you need, and does not share them with other volumes on the same node.
Tuning throughput means enhancing the amount of data or information that can be transferred or processed in a specified amount of time. Hyperdisk allows you to specify exactly how much throughput a given storage volume should have without limitations imposed by other volumes on the same node.
Expanding capacity means you can increase the size of your storage volume. Hyperdisk can be provisioned for the exact capacity you need and extended as your storage needs grow.
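As a concrete illustration, each of these knobs can be adjusted on a live Hyperdisk volume with gcloud. This is a sketch rather than an excerpt from the product docs: the disk name and zone are placeholders, and the exact flag limits depend on the disk type.

```shell
# Raise the provisioned IOPS on an existing Hyperdisk volume
# (my-hyperdisk and us-central1-a are placeholder values).
gcloud compute disks update my-hyperdisk --zone=us-central1-a \
    --provisioned-iops=5000

# Raise provisioned throughput (in MBps), independently of IOPS.
gcloud compute disks update my-hyperdisk --zone=us-central1-a \
    --provisioned-throughput=250

# Grow capacity as your storage needs grow.
gcloud compute disks resize my-hyperdisk --zone=us-central1-a \
    --size=600GB
```

None of these changes requires redeploying your stack; the volume's performance and capacity are adjusted in place.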
These Hyperdisk capabilities translate into the following benefits:
First, you can transform your stateful environment through flexibility, ease of use, scalability and manageability, with potential cost savings to boot. Imagine the benefits of a storage area network without the management overhead.
Second, you can build lower-cost infrastructure by rightsizing the machine types that back your GKE nodes, optimizing your GKE stack while integrating with GKE CI/CD, security and networking capabilities.
Finally, you get predictability: the consistency that comes with fine-grained tuning for each individual node and its required IOPS. You can also use this to fine-tune ML model building, training, and deployment, since Hyperdisk removes throughput and IOPS contention among persistent disks sharing the same node bandwidth, placing that performance on the provisioned Hyperdisk volume itself.
Compared with traditional Persistent Disk, Hyperdisk’s storage performance is decoupled from the node your application is running on. This gives you more flexibility with your IOPS and throughput, while reducing the possibility that overall storage performance will be impacted by a noisy neighbor.
On GKE, the following types of Hyperdisk volumes are available:
Hyperdisk Balanced – General-purpose volume type that is the best fit for most workloads, with up to 2.4 GBps of throughput and 160,000 IOPS. Ideal for line-of-business applications, web applications, databases, or boot disks.
Hyperdisk Throughput – Optimized for cost-efficient high-throughput workloads, with up to 3 GBps throughput (>=128 KB IO size). Hyperdisk Throughput is targeted at scale-out analytics (e.g., Kafka, Hadoop, Redpanda) and other throughput-oriented cost-sensitive workloads, and is supported on GKE Autopilot and GKE Standard clusters.
Hyperdisk Extreme – Specifically optimized for IOPS performance, such as large-scale databases that require high IOPS performance. Supported on Standard clusters only.
Getting started with Hyperdisk on GKE
To provision Hyperdisk, you first need to make sure your cluster has the necessary StorageClass loaded that references the disk type. You can add the desired IOPS and throughput to the StorageClass or go with the defaults, which are 3,600 IOPS and 140 MiBps (see the docs).
YAML to Apply to GKE cluster
```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: balanced-storage
provisioner: pd.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
  type: hyperdisk-balanced
  provisioned-throughput-on-create: "200Mi" # optional
  provisioned-iops-on-create: "5000" # optional
```
After you’ve configured the StorageClass, you can create a persistent volume claim that references it.
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: postgres
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: balanced-storage
  resources:
    requests:
      storage: 1000Gi
```
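With the StorageClass and PersistentVolumeClaim in place, a workload consumes the Hyperdisk volume like any other claim. A minimal sketch follows; the Pod name and container image are illustrative placeholders, not part of the original post.

```shell
# Mount the "postgres" claim defined above in a Pod.
# postgres-pod and the postgres:16 image are placeholder choices.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: postgres-pod
spec:
  containers:
  - name: postgres
    image: postgres:16
    volumeMounts:
    - name: data
      mountPath: /var/lib/postgresql/data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: postgres   # matches the PVC above
EOF
```

Because the StorageClass uses volumeBindingMode: WaitForFirstConsumer, the Hyperdisk Balanced volume is only provisioned once this Pod is scheduled onto a node.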
Take control of your GKE storage with Hyperdisk
To recap, Hyperdisk lets you take control of your cloud infrastructure storage, throughput, IOPS, and capacity, combining the speed of existing PD SSD volumes with the flexibility to fine-tune to your specific needs, by decoupling disks from the node that your workload is running on.
For more, check out the following sessions from Next ‘24:
Next generation storage: Designing storage for the future
A primer on data on Kubernetes
And be sure to check out these resources:
How to create a Hyperdisk on GKE