Redis has continued to grow in popularity, whether that’s to reduce load on a database to save money, improve the end user experience with lower latency, or simply because developers love it. However, in industries like gaming (leaderboards, session stores), finance (fraud detection), ads (ultra-low-latency serving), or retail (checkouts), workloads push the boundaries of a standalone Redis shard and require the performance and horizontal scale of Redis Cluster.
To serve these demanding Redis workloads, we announced the preview of Memorystore for Redis Cluster at Google Cloud Next. Compared to the existing Memorystore for Redis service, Memorystore for Redis Cluster provides 60 times more throughput with microsecond latencies and supports 10 times more data. This new fully managed service for Redis Cluster is fully open-source compatible, easy to set up, and provides zero-downtime scaling with ultra-low and predictable latency. With Memorystore for Redis Cluster, you get intelligent and automatic zonal distribution of nodes for high availability and resilience, automated replica management and promotion, and a 99.99% availability SLA upon our General Availability launch.
Ultra-low and predictable latency
Customers depend on Memorystore for its low latency and reliability. Now, with Memorystore for Redis Cluster, we’ve taken scale and performance to new heights. With a single mouse click or gcloud command, you can now easily scale your clusters with zero downtime to terabytes of keyspace, and up to 60 times the throughput of Memorystore for Redis. Best of all, we’ve made scaling improvements directly to the Redis engine to provide an improved and differentiated Redis scaling experience.
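As an illustration of that single-command scaling experience, the sketch below assumes a cluster named `my-cluster` in `us-central1`; the exact command group and flag names are assumptions based on the gcloud `redis clusters` surface, so verify them against the current Memorystore for Redis Cluster documentation:

```shell
# Sketch only: command and flag names are assumed, not authoritative.
# Resize an existing cluster to 10 shards with zero downtime.
gcloud redis clusters update my-cluster \
  --region=us-central1 \
  --shard-count=10
```

The same operation is available as a single click in the Google Cloud console.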
Verve Group is an advertising technology ecosystem that relies on ultra-low latency Redis Clusters to deliver real-time ads, and recently tested Memorystore for Redis Cluster. In addition to ease of management, Verve experienced first-hand the scaling improvements we made to the Redis engine to improve slot migrations, de-risk scaling, and directly address the data corruption risks present in open-source Redis Cluster scaling.
“Memorystore for Redis Cluster exceeded our performance expectations. We observed extremely high throughput with very low latency. We were also able to scale down to 10 nodes on the fly, illustrating the ease of zero-downtime scalability of the product.” – Ville Lamminen, DevOps manager, Verve Group
Easy to manage, easy to save money
Lowering Total Cost of Ownership (TCO) and saving you money while reducing operational overhead is fundamental to the value proposition of Memorystore for Redis Cluster. With this new service, you no longer have to self-manage tens or hundreds of Redis nodes on Compute Engine, worrying about high-toil, tedious tasks like zonal distribution for high availability, VM failures, Redis tuning, replica management, and complex scaling operations. Instead, with Memorystore for Redis Cluster’s 10x scale, you can consolidate smaller workloads into a highly performant and resilient cluster to drive cost savings and reduce operational overhead. And with its pay-as-you-go (PAYG) model, you can easily take advantage of zero-downtime scaling to meet the demands of your workloads, transforming events like Black Friday/Cyber Monday into a simple click of the mouse.
“With Memorystore for Redis Cluster, we can easily scale our clusters to meet the demands of our largest workloads. Best of all, we’re able to offload the complexities of cluster management to Google instead of self-managing on Compute Engine and requiring in-house Redis Cluster expertise. Our testing of Memorystore for Redis Cluster reveals this new product to be easy to use, highly performant, and easily scalable.” – Animesh Chaturvedi, distinguished engineer, Palo Alto Networks
High availability, simplified
While conceptually simple, the practice of self-managing node and replica placement across availability zones can create significant engineering toil and easily result in costly mistakes. With Memorystore for Redis Cluster, we automatically distribute Redis Cluster nodes across zones to achieve maximum availability and resiliency. When you add replica nodes, we automatically place them in a different zone than the primary, adding resiliency to the cluster and protecting the data from a zonal outage.
We also take responsibility for provisioning and configuring infrastructure for optimal performance and the lowest possible latency. Behind the scenes, we manage the software and security patches, detect failures, and trigger automatic failovers and VM replacements, freeing engineering organizations to focus on building their applications rather than managing open-source software and Compute Engine infrastructure.
Secure and private access
To simplify your connectivity experience and provide you with a first-class managed experience, we’re launching Memorystore for Redis Cluster with Private Service Connect (PSC). PSC offers a private and secure default connection option that ensures traffic never leaves Google’s network. With PSC, Memorystore for Redis Cluster provides a simple onboarding experience with a single-step cluster provisioning process, allowing you to configure private networking without needing to be a networking expert. PSC provides granular connectivity with minimal IP consumption (just two IP addresses per cluster, even for more than 100 shards). PSC also addresses Virtual Private Cloud (VPC) peering limitations, is accessible from any region, and provides advanced security controls.
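For illustration, once a cluster is provisioned, clients connect through its PSC-backed discovery endpoint using any cluster-aware Redis client. The address below is a placeholder, not a real endpoint:

```shell
# Placeholder host/port: substitute the discovery endpoint shown for your cluster.
# The -c flag enables cluster mode, so redis-cli follows MOVED/ASK redirects
# to the shard that owns each key's hash slot.
redis-cli -h 10.0.0.2 -p 6379 -c

# Inside the session, you can inspect cluster health and slot coverage:
#   CLUSTER INFO
#   CLUSTER SHARDS
```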
GET Started, SET up your cluster
Memorystore for Redis Cluster is available today in preview and we encourage you to experience how it can help you achieve superior scale and performance (link to Google Cloud console). Please visit our documentation to learn more about additional features, such as the integration with Identity and Access Management (IAM), Cloud Monitoring, Audit Logging, and how to enable in-transit encryption. Please don’t hesitate to reach out with questions or feedback to firstname.lastname@example.org.
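To sketch what getting started looks like, a minimal cluster can be created from the command line. The command group, flag names, and the `PROJECT_ID` network path below are assumptions for illustration; confirm them in the Memorystore for Redis Cluster documentation:

```shell
# Sketch only: flag names assumed from the gcloud redis surface.
# Creates a 3-shard cluster with one replica per shard, connected
# via Private Service Connect to the given VPC network.
gcloud redis clusters create my-cluster \
  --region=us-central1 \
  --shard-count=3 \
  --replica-count=1 \
  --network=projects/PROJECT_ID/global/networks/default
```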