Google Axion processors boost AlloyDB, Cloud SQL, major customers, and ISVs
Last year we announced Google Axion processors, our first custom Arm®-based CPUs. We built Axion to address our customers’ need for general-purpose processors that maximize performance, reduce infrastructure costs, and help them meet their sustainability goals.
Since then, Axion has shaken up the market for cloud compute. Customers love its price-performance — up to 65% better than current-generation x86 instances. It even outperforms leading Arm-based alternatives by up to 10%. Axion C4A instances were also the first virtual machines to feature new Google Titanium SSDs, with up to 6TB of high-performance local storage, up to 2.4M random read IOPS, up to 10.4 GiB/s of read throughput, and up to 35% lower access latency compared to previous-generation SSDs. In fact, in the months since launch, over 40% of Compute Engine’s top 100 customers are using Axion, thousands of Google’s internal applications are now running on Axion, and we continue to expand integration of C4A and Axion with our most popular Google Cloud products and partner solutions.
Today, we are excited to share that Cloud SQL and AlloyDB for PostgreSQL managed databases are available in preview on C4A virtual machines, providing significant price-performance advantages for database workloads. To take advantage of Axion processors today, you can choose to host your database on a C4A VM directly from the console.
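For teams that prefer the CLI to the console, the same setup can be sketched with `gcloud` — note that the instance name, region, and tier value below are illustrative assumptions; check the Cloud SQL documentation for the exact C4A-based tier names exposed in the preview:

```shell
# Create a Cloud SQL for PostgreSQL instance (Enterprise Plus edition).
# "my-pg-instance", the region, and the tier are placeholders — swap in
# the C4A-based tier name from the Cloud SQL machine-series docs.
gcloud sql instances create my-pg-instance \
    --database-version=POSTGRES_16 \
    --region=us-central1 \
    --edition=ENTERPRISE_PLUS \
    --tier=db-perf-optimized-N-8
```

The `--edition` and `--tier` flags together select the machine series backing the instance, which is where the C4A preview shapes surface.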
Supercharging database workloads
Organizations with business-critical database workloads need high-performance, cost-efficient, available, and scalable infrastructure. However, a surge in data growth, alongside higher and more complex processing requirements, is creating challenges.
C4A instances bring significant advantages to managed Google Cloud database workloads: improved price-performance compared to x86 instances, which translates into more cost-effective database operations. Designed to handle data-intensive workloads that require real-time processing, C4A is well-suited for high-performance databases and analytics engines.
When running on C4A instances, AlloyDB and Cloud SQL provide nearly 50% better price-performance than N series VMs for transactional workloads, and up to 2x better throughput than Amazon’s equivalent Graviton 4 offerings.
“At Mercari, we deploy thousands of Cloud SQL instances across engines and editions to meet our diverse database workload requirements. At our scale, it is critical to optimize our fleet to run more efficiently. We are excited to see the price-performance improvements on the new Axion-based C4A machine series for Cloud SQL. We look forward to adopting C4A instances in our MySQL and PostgreSQL fleet and taking advantage of C4A’s high performance while reducing our operational costs.” – Takashi Honda, Database Reliability Engineer, Mercari
We’ve also expanded regional availability for Axion and C4A. They are now broadly available across 10 Google Cloud Regions, expanding to 15 in the coming months. Cloud SQL and AlloyDB on Axion are now available in eight regions, with more to be added before the end of the year.
Google’s internal fleet and top customers choose Axion
Given its price-performance, it’s no surprise that in less than a year, Axion is a popular choice for Google internal applications and top Compute Engine customers, including Spotify:
“As the world’s most popular audio streaming subscription service, reaching over 675 million users, Spotify demands exceptional performance and efficiency. We are in the process of migrating our entire compute infrastructure to Axion and this is yielding remarkable results. We’re witnessing a staggering 250% performance increase, significantly enhancing the user experience, as much as 40% reduction in compute costs, and a drastic reduction in compute management toil, allowing us to reinvest in further innovation and growth.” – Dave Zolotusky, Principal Engineer, Spotify
And we’re only just getting started. We’re also making it easier for Google Cloud customers to benefit from Axion’s price-performance without having to refactor their applications. Google Cloud customers can already use C4A VMs in Compute Engine, Google Kubernetes Engine (GKE), Batch, Dataproc, Dataflow, and more services.
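For teams starting with raw VMs, spinning up a C4A instance in Compute Engine is a one-line `gcloud` call — a minimal sketch, assuming the `c4a-standard-8` shape; the instance name and zone are illustrative placeholders, so pick the machine type and zone that match your workload and regional availability:

```shell
# Launch an Axion-based C4A VM (8 vCPUs) in Compute Engine.
# "axion-demo-vm" and the zone below are illustrative placeholders.
gcloud compute instances create axion-demo-vm \
    --zone=us-central1-a \
    --machine-type=c4a-standard-8
```

Because C4A is an Arm-based (aarch64) machine series, make sure the boot image and any container images you deploy are built for Arm64.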
Expanding the Axion and Arm ISV ecosystem
Axion processors are delivering undeniable value for customers and ISVs looking for security, efficiency and competitive price-performance for data processing. We’re pleased to report that ClickHouse, Databricks, Elastic, MongoDB, Palo Alto Networks, Redis Labs, and Starburst have all chosen Axion to power their data processing products — with support from many more ISVs on the way. This commitment is notable as ISVs often choose Axion over alternative processors, including Arm-based processors from other cloud providers.
Enhancing diverse ML inference workloads
Machine learning (ML) inference workloads span traditional embedding models to modern generative AI applications, each with unique price-performance needs that defy a one-size-fits-all approach. The range of inference tasks, from low-latency real-time predictions to high-throughput batch processing, necessitates infrastructure designed for specific workload requirements.
Google’s Axion C4A VMs deliver exceptional performance for ML workloads through architectural strengths like the Arm Neoverse V2 compute cores, with high single-threaded performance and memory bandwidth per-core for predictable, high-throughput execution, and Google’s Titanium offload system technology for reduced overhead. With up to 72 vCPUs, 576 GB of DDR5 memory, and advanced SIMD processing capabilities, Axion C4A excels at matrix-heavy inference tasks. Combined with its broad availability, operational familiarity, and up to 60% better energy efficiency compared to x86 alternatives, Axion offers a compelling CPU-based ML inference platform, alongside GPUs and TPUs.
In particular, Axion is well-suited for real-time serving of recommendation systems, NLP, threat detection, and image recognition models. As large language models (LLMs) with lower parameter counts (3-8B) grow increasingly capable, Axion can also be a viable platform for serving these models efficiently.
Customers have recognized this Axion strength and are actively deploying ML inference workloads on C4A VMs to capitalize on its blend of performance, cost-effectiveness, and scalability, proving it a worthy complement to GPU-centric strategies. Palo Alto Networks uses C4A as part of their diversified ML infrastructure platform strategy and realized an 85% improvement in performance TCO efficiency by migrating their threat detection inference application from L4 GPUs to C4A.
“By migrating to Axion on Google Cloud, our testing shows that DNS Security will see an 85% improvement in price-performance, a 2X decrease in latency for DNS queries, and an 85% cost savings compared to instances with mid-range GPUs, enhancing our ML-powered DGA detections for customers.” – Fan Fei, Director of Engineering, Palo Alto Networks
Learn more about Axion-based C4A virtual machines
In their quest to balance performance, efficiency, and cost, more and more organizations are turning to the Arm architecture. Axion’s strong price-performance, combined with a growing ecosystem of support for mission-critical workloads, makes it a compelling choice. We’ve seen incredible excitement for Axion-based C4A virtual machines from our customers and partners, and we can’t wait to see what you can build with Axion, too. Try Cloud SQL and AlloyDB running on C4A virtual machines today.