Colab Enterprise is a collaborative, managed notebook environment with the security and compliance capabilities of Google Cloud. Powerful integrated AI, seamless collaboration tools, enterprise readiness, and zero-config flexible compute are some of the many features making Colab Enterprise a trusted tool for developers at companies of all sizes.
Today, we’re excited to announce new productivity-boosting capabilities in Colab Enterprise, including:
Code assistance powered by Gemini to improve code development
A Notebook gallery that helps you find sample notebooks to jumpstart your workflows
A UX redesign to improve the editor experience and asset organization
Gemini in Colab Enterprise
The latest version of Gemini, Google’s largest and most capable AI model, is now directly available in Colab Enterprise. With this integration, users can now use AI to assist with code completion and generation, increasing their productivity and decreasing time to value:
Code completion: With Code completion, customers can now start coding in their notebook and receive suggestions about potential ways to complete their code.
Code generation: With Code generation, customers can use Gemini to generate code for them based on a prompt.
Explain error: With Explain Error, customers can get an explanation of why errors occurred, giving information that’s helpful in debugging.
Fix error: With Fix Error, customers can ask for help fixing errors in their code so that they don’t have to consult external sources.
Sample Notebook Gallery
The Notebook gallery offers a one-stop shop to effortlessly discover, search, and build off of sample notebooks. These samples, code snippets, and getting started guides provide a practical, hands-on approach to learning new techniques, understanding best practices, and jumpstarting projects with ready-to-use templates and examples.
Notebook categories
Notebooks in the gallery are organized by categories including “Getting Started”, “Partner Models”, and “RAG”, making it easy to find relevant samples to build off of and accelerate your workflows. Use the dropdown arrows to explore notebooks within each category, and simply click to open.
Notebook tags and metadata
See detailed information about sample notebooks before opening them, including a short description of the notebook’s contents, what modalities the notebook covers (e.g. text, image, video), and which AI models are used.
Search
Use the gallery search bar to find sample notebooks using freeform text. Search on keywords such as the notebook’s name or any of the listed metadata, like model type and modality.
A UX refresh
We’ve redesigned Colab Enterprise to improve developer productivity. You can now access a new centralized dashboard to manage all your assets, an expanded editor for a more focused coding experience, a new dark mode, and integrations with other Vertex AI services such as Experiments, Model Evaluations, Tuning, Scheduler and Ray.
The new centralized dashboard includes:
Your private and shared notebooks
Runtimes, templates, executions, and schedules
A sample notebook gallery
The new dark mode in Colab Enterprise boosts developer productivity by creating a more comfortable coding environment that minimizes eye fatigue during extended work periods.
We’ve also enhanced the core editor experience, which now includes:
Expanded editor real estate, giving you more room to focus on what matters most: writing code.
An editor deeply integrated with MLOps tooling, so you can access your experiments, see model evaluation results, connect to Ray clusters, schedule a notebook run, and much more, all from a single MLOps panel.
An easily accessible File menu system to find all the quick actions related to your notebook file and the editor.
A stateful UI, so you can browse all your assets on the dashboard without losing your open notebooks.
Get started today
Check these features out in Vertex AI Colab Enterprise today (console, documentation).
The AI era has supercharged expectations: users now issue more complex queries and demand pinpoint results, and there’s an 82% chance of losing a customer if they can’t quickly find what they need. Similarly, AI agents require ultra-relevant context for reliable task execution. However, when traditional search methods deliver noise, with up to 70% of retrieved passages lacking a true answer, both agentic workflows and user experiences suffer from untrustworthy and unreliable results.
To help businesses meet these rising expectations, we’re launching our new state-of-the-art Vertex AI Ranking API. It makes it easy to boost the precision of information surfaced within search, agentic workflows, and retrieval-augmented generation (RAG) systems. This means you can elevate your legacy search system and AI application in minutes, not months.
Go beyond simple retrieval
This is where precise ranking becomes essential. Think of the Vertex AI Ranking API as the precision filter at the crucial final stage of your retrieval pipeline. It intelligently sifts through the initial candidate set, identifying and elevating only the most pertinent information. This refinement step is key to unlocking higher quality, more trustworthy, and more efficient AI applications.
Vertex AI Ranking API acts as this powerful, yet easy-to-integrate, refinement layer. It takes the candidate list from your existing search or retrieval system and re-orders it based on deep semantic understanding, ensuring the best results rise to the top. Here’s how it helps you uplevel your systems:
Upgrade legacy search systems: Easily add state-of-the-art relevance scoring to existing search outputs, improving user satisfaction and business outcomes on commercial searches without overhauling your current stack.
Strengthen RAG systems: Send fewer, more relevant documents to your generative models. This improves answer trustworthiness while reducing latency and operating costs by optimizing context window usage.
Support intelligent agents: Guide AI agents with highly relevant information, streamlining their context and traces, and significantly improving the success rate of task completion.
Figure 1: Ranking API usage in a typical search and retrieval flow
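In code, the retrieve-then-rerank flow described above looks roughly like the following sketch. Everything here is illustrative: `retrieve` stands in for your existing recall-oriented retriever (BM25, vector search, etc.), and `score_relevance` stands in for a call to a semantic reranker such as the Ranking API; only the overall shape (broad retrieval, then rerank, then truncate) reflects the flow in Figure 1.

```python
# Illustrative two-stage retrieval pipeline: broad recall, then precise rerank.
# `retrieve` and `score_relevance` are hypothetical stand-ins, not real APIs.

def retrieve(query, corpus, k=50):
    """First stage: cheap, recall-oriented retrieval over (doc_id, text) pairs.
    Here, a toy keyword-overlap score; a real system would use BM25 or ANN."""
    q = set(query.lower().split())
    scored = [(len(q & set(text.lower().split())), doc_id, text)
              for doc_id, text in corpus]
    scored.sort(reverse=True)
    return [(doc_id, text) for s, doc_id, text in scored[:k] if s > 0]

def score_relevance(query, candidates):
    """Second stage: precision-oriented reranking. A real system would call a
    semantic reranker here; this toy version rewards exact phrase matches."""
    return [(doc_id, text, 2.0 if query.lower() in text.lower() else 1.0)
            for doc_id, text in candidates]

def search(query, corpus, top_n=5):
    """Retrieve a broad candidate set, rerank it, and keep the best few."""
    candidates = retrieve(query, corpus)
    reranked = sorted(score_relevance(query, candidates),
                      key=lambda r: r[2], reverse=True)
    return [doc_id for doc_id, _, _ in reranked[:top_n]]
```

The key design point is that the expensive, high-precision scorer only ever sees the small candidate list, which is what keeps reranking fast enough for interactive search.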
What’s new in Ranking API
Today, we’re launching our new semantic reranker models:
semantic-ranker-default-004 – our most accurate model for any use case
semantic-ranker-fast-004 – our fastest model for latency-critical use cases
Our models establish a new benchmark for ranking performance:
State-of-the-art ranking: Based on evaluations using the industry-standard BEIR dataset, our model leads in accuracy among competitive standalone reranking API services. nDCG (normalized discounted cumulative gain) is a metric that evaluates ranking quality by how well the ranked items align with their true relevance, rewarding relevant results placed at the top. We’ve published our evaluation scripts to ensure reproducibility of results.
Figure 2: semantic-ranker-default-004 leads in nDCG@5 on BEIR datasets compared to other rankers.
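As a concrete illustration of the metric (this is a generic implementation of the standard formula, not Google’s published evaluation script), nDCG@k can be computed in a few lines:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain: each relevance is discounted by the
    log2 of its rank (rank 1 -> log2(2), rank 2 -> log2(3), ...)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(ranked_relevances, k=5):
    """nDCG@k: DCG of the produced ranking divided by the DCG of the ideal
    (relevance-sorted) ranking, so 1.0 means a perfect ordering."""
    ideal_dcg = dcg_at_k(sorted(ranked_relevances, reverse=True), k)
    return dcg_at_k(ranked_relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0
```

For example, a ranking that places a highly relevant document last scores well below 1.0, which is exactly the failure mode a reranker is meant to fix.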
Industry-leading low latency: Our default model (semantic-ranker-default-004) is at least 2x faster than competitive reranking API services at any scale. Our fast model (semantic-ranker-fast-004) is tuned for latency-critical applications and typically exhibits 3x lower latency than our default model.
We’re also launching long-context ranking with a limit of 200k total tokens per API request. Providing longer documents to the Ranking API allows it to better understand nuanced relationships between queries and information, such as customer reviews or product specifications in retail.
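When a document set exceeds a per-request ceiling like the one above, one common pattern is to group documents greedily so each request stays under the limit. The sketch below is illustrative, not an official client: `count_tokens` is a crude whitespace approximation, and a production system should use the service’s real tokenizer and client library.

```python
LIMIT = 200_000  # per-request total-token ceiling, as stated above

def count_tokens(text):
    """Crude whitespace approximation; use a real tokenizer in production."""
    return len(text.split())

def batch_documents(query, documents, limit=LIMIT):
    """Greedily group documents so each request (query + docs) stays under
    `limit` total tokens. A single oversized document gets its own batch."""
    query_tokens = count_tokens(query)
    batches, current, used = [], [], query_tokens
    for doc in documents:
        t = count_tokens(doc)
        if current and used + t > limit:
            batches.append(current)       # flush the full batch
            current, used = [], query_tokens
        current.append(doc)
        used += t
    if current:
        batches.append(current)
    return batches
```

Each resulting batch would then be sent as its own rank request, with the per-batch results merged afterwards.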
Real-world impact across domains
The benefits aren’t just theoretical. Benchmarks on industry-specific datasets demonstrate that integrating the Ranking API can significantly boost the quality of search results across diverse high-value domains such as retail, news, finance, and healthcare.
Figure 3: nDCG@5 performance improvement with semantic-ranker-default-004 in various high-value domains based on internal datasets. Lexical & Semantic search baseline uses the best result of Vertex AI text-embedding-004 and BM25 based retrieval.
Elevate your search results in minutes
We designed the Vertex AI Ranking API for seamless integration. Adding this powerful relevance layer is straightforward, with several options:
Try it live: Experience the difference on real-world data by enabling our Ranking API in the interactive Vertex Vector Search demo (link)
Build with Vertex AI: Integrate directly into any existing system for maximum flexibility (link)
Enable it in RAG Engine: Select Ranking API in your RAG Engine to get more robust and accurate answers from your generative AI applications (link)
Use it in AlloyDB: For a truly streamlined experience, leverage the built-in ai.rank() SQL function directly within AlloyDB – a novel integration simplifying search use cases with AlloyDB (link)
AI Frameworks: Use our native integrations with popular AI frameworks like GenKit and LangChain (link)
Use it in Elasticsearch: Quickly boost accuracy with our built-in Ranking API integration in Elasticsearch (link)
Want to turn your generative AI ideas into real web applications with one click?
Any developer knows building shareable, interactive applications is a complex process: you have to set up infrastructure, wire up APIs, and build a front end. What if you could skip the heavy lifting and turn your generative AI concept into a working web app with just a few clicks?
Today, we’re thrilled to introduce a streamlined workflow within Google Cloud’s Vertex AI: one-click deployment of your generative AI prompts directly to Cloud Run as interactive web applications. We’ll show you how it works, and how you can get started.
Bridging the gap from prompt to prototype
Vertex AI provides a helpful environment for experimenting with and refining generative AI prompts. You can test different models, tune parameters, and craft the perfect instructions. However, sharing that interactive experience beyond the console often means exporting code, setting up hosting, managing dependencies, and building a user interface.
Finding the right path to deployment isn’t always straightforward. The missing piece was a clear option to transform the prompt into a shareable prototype.
The solution: Simple, fast deployment with “Deploy as App”
To close this gap and make generative AI accessible, we’ve integrated a seamless deployment path:
Craft your prompt: Perfect your generative AI idea within the familiar Vertex AI Studio interface. Add system instructions, examples, and test until you’re happy.
Click “Deploy as App”: We’ve introduced a clear, primary “Deploy as App” button. No more ambiguity – this is your direct path to creating an application.
Configure as needed: Select your authentication preference (public or authenticated).
Vertex AI and Cloud Run do the heavy lifting: Click “Deploy application,” and Vertex AI works with Cloud Run behind the scenes. Vertex AI packages your prompt and builds the interactive UI (powered by Gradio), while Cloud Run handles building and hosting the app in its fully managed infrastructure. You get real-time status updates along the way.
Share your app: Once complete, you get a direct link to your live, functional web application powered by your prompt. You can easily share this URL with colleagues, stakeholders, or testers.
Seamlessly iterate: Easily return to Vertex AI Studio, refine your prompt, and redeploy the application with your changes.
Customize with Cloud Run
Your app is deployed to Cloud Run, Google Cloud’s fully managed application platform. This means you get automatic scaling (including to zero for cost savings) without managing infrastructure – perfect for quickly deploying and sharing your apps. To customize your app and take it to production, you can edit the application code directly in Cloud Run’s source editor to make it your own. You can also download the code and use your IDE of choice, pushing updates with Cloud Run’s git integration.
Why this matters:
Speed: Go from concept to a shareable proof-of-concept in minutes, not hours or days.
Simplicity: Focus on your AI prompt and idea, not on complex infrastructure setup.
Iteration: Easily return to Vertex AI Studio, refine your prompt, and redeploy the application with your changes.
Shareability: Instantly get a working web interface to demonstrate your generative AI’s capabilities.
The Cloud Run integration lowers the barrier to entry for creating and sharing generative AI applications. Whether you’re building a quick prototype, an internal tool, or a demo for stakeholders, Vertex AI Studio now provides an incredibly efficient path from prompt engineering to a live, interactive experience powered by Cloud Run.
Get started
Ready to bring your generative AI ideas to life? Head over to Vertex AI Studio in the Google Cloud console and look for the “Deploy as App” button. We can’t wait to see what you build!
Data management is changing. Enterprises need flexible, open, and interoperable architectures that allow multiple engines to operate on a single copy of data. Apache Iceberg has emerged as the leading open table format, but in real-world deployments, customers often face a dilemma: embrace the openness of Apache Iceberg but compromise on fully managed, enterprise-grade storage management, or choose managed storage but sacrifice the flexibility of open formats.
This week, we announced innovations in BigLake, a storage engine that provides a foundation for building open data lakehouses on Google Cloud that bring the best of Google’s infrastructure to Apache Iceberg, eliminating the trade-off between open-format flexibility and high-performance enterprise-grade managed storage. These innovations include:
Open interoperability across analytical and transactional systems: Formerly known as BigQuery metastore, the fully managed, serverless, scalable BigLake Metastore, now generally available (GA), simplifies runtime metadata management and works across BigQuery as well as other Iceberg compatible engines. Powered by Google’s planet-scale metadata management infrastructure, it removes the need to manage custom metastore deployments. We are also introducing support for the Iceberg REST Catalog API (Preview). The BigLake metastore provides the foundation for interoperability, allowing you to access all your Cloud Storage and BigQuery storage data across multiple runtimes including BigQuery, AlloyDB (preview), and open-source, Iceberg-compatible engines such as Spark and Flink.
New, high-performance Iceberg-native Cloud Storage: We are simplifying lakehouse management with automatic table maintenance (including compaction and garbage collection) and integration with Google Cloud Storage management tools, including auto-class tiering and encryption. Supercharge your lakehouse by combining open formats with BigQuery’s highly scalable, real-time metadata through the general availability (GA) of BigLake tables for Apache Iceberg in BigQuery, enabling high-throughput streaming, auto-reclustering, multi-table transactions (coming soon), and native integration with Vertex AI, so that you can harness the power of Google Cloud AI with your lakehouse.
AI-powered governance across Google Cloud: These BigLake updates are natively supported with Dataplex Universal Catalog, providing unified and fine-grained access controls across all supported engines and enabling end-to-end governance complete with comprehensive lineage, data quality, and discoverability capabilities.
With these changes, we’re evolving BigLake into a comprehensive storage engine designed to help you build open, high-performance, enterprise-grade lakehouses on Google Cloud using Google Cloud services, open-source engines, and third-party Iceberg-compatible engines. By eliminating the trade-off between open and managed solutions, BigLake accelerates your data and AI innovation.
“We wanted teams across the organization to access data in a consistent and secure way — no matter where it lived or what tools they were using. Google’s BigLake was a natural choice. It provides a unified layer to access data and fully managed experience with enterprise capabilities via BigQuery — whether it’s in open table formats like Apache Iceberg or traditional tables — all without the need to move or duplicate data. Metadata quality is essential as we continue to explore potential gen AI use cases. We are utilizing BigLake Metastore and Data Catalog to help maintain high quality metadata.” – Zenul Pomal, Executive Director, CME Group
Open and interoperable
The BigLake metastore is central to BigLake’s interoperability, providing two primary catalog interfaces to connect your data across Cloud Storage and BigQuery Storage:
The Iceberg REST Catalog (Preview) provides a standard REST interface for wider compatibility. This allows Spark users, for instance, to utilize the BigLake metastore as a serverless Iceberg catalog.
The Custom Iceberg Catalog (GA) enables Spark and other open-source engines to work with BigLake tables for Apache Iceberg and interoperate with BigQuery. Its implementation is directly integrated with public Iceberg libraries, removing the need for extra JAR files.
```python
# Spark session configured to use Iceberg REST Catalog (preview)
spark = (
    SparkSession.builder.appName("iceberg-rest-catalog")
    # ... other Spark configurations ...
    .config("spark.sql.catalog.iceberg.type", "rest")
    .config("spark.sql.catalog.iceberg.uri", "https://biglake.googleapis.com/iceberg/v1beta/restcatalog")
    # ... authentication and project configurations ...
    .getOrCreate()
)
spark.sql("CREATE NAMESPACE IF NOT EXISTS my_namespace")
spark.sql("CREATE TABLE IF NOT EXISTS my_namespace.my_table (id int, data string) USING iceberg")
spark.sql("INSERT INTO my_namespace.my_table VALUES (1, 'example')")
spark.sql("SELECT * FROM my_namespace.my_table").show()
```
BigLake tables for Apache Iceberg created within BigQuery can be queried by open-source and third-party engines using native Apache Iceberg libraries. To enable this, BigLake automatically generates an Apache Iceberg V2 specification-compliant metadata snapshot. This snapshot is registered in the BigLake metastore, allowing open-source engines to query the data through the custom Iceberg catalog integration. Importantly, these metadata snapshots are kept current by automatically refreshing after any table modification, for example, DML operations, data loads, streaming updates, or optimizations, helping to ensure that external engines work with the latest data.
A key aspect of this enhanced interoperability is bridging analytical and transactional workloads. This is particularly powerful for AlloyDB users. Now, you can seamlessly consume your analytical BigLake tables for Apache Iceberg directly within AlloyDB (Preview). This enables PostgreSQL users to combine this rich analytical data with up-to-the-second transactional data from AlloyDB, powering AI-driven applications and real-time operational use cases by leveraging advanced AlloyDB features like semantic search, natural language interfaces, and an integrated AI query engine. This unified approach across BigQuery, AlloyDB, and open-source engines unlocks the platform value of your Iceberg data.
BigLake metastore

| Supported tables | BigLake tables for Apache Iceberg | BigLake tables for Apache Iceberg in BigQuery | BigQuery tables |
| --- | --- | --- | --- |
| Storage | Cloud Storage | Cloud Storage | BigQuery |
| Management | Google-managed | Google-managed | Google-managed |
| Read / write capabilities (R/W) | OSS engines (R/W); BigQuery (R) | BigQuery (R/W); OSS engines (R/W) using BigQuery Storage API; OSS engines (R) using Iceberg libraries | BigQuery (R/W); OSS engines (R/W) using BigQuery Storage API |
| Use cases | Open lakehouse | Open lakehouse with enterprise-grade storage for advanced analytics, streaming and AI | Enterprise-grade storage for advanced analytics, streaming and AI |
New high-performance Iceberg-native storage
BigLake tables for Apache Iceberg deliver an Iceberg-native storage experience directly on Cloud Storage. Whether these tables are created using open-source engines like Spark or directly from BigQuery, they help to extend Cloud Storage management capabilities for your Iceberg data. This simplifies lakehouse management by enabling advanced Cloud Storage features such as auto-class tiering and Customer-Managed Encryption Keys (CMEK). To take full advantage of Cloud Storage management capabilities for your Iceberg data, refer to our best practices guide.
```sql
-- Use Spark to create a BigLake table for Apache Iceberg, registered in BigLake metastore
CREATE TABLE orders_spark (id BIGINT, item STRING, amount DECIMAL(10,2))
USING iceberg
LOCATION 'gs://my_lake_bucket/orders_spark_data';

INSERT INTO orders_spark VALUES (1, 'Laptop', 1200.00);
```

```bash
# Optimize GCS storage costs for your Iceberg data (CLI)
gsutil autoclass set on gs://my_lake_bucket
```
Beyond the foundational Cloud Storage integration, you can leverage BigLake tables for Apache Iceberg in BigQuery. These tables, now generally available, combine open formats with BigQuery’s highly scalable, real-time metadata. This powerful combination unlocks a suite of advanced capabilities, including:
High-throughput streaming ingestion from various sources (like Spark, Flink, Dataflow, Pub/Sub, and Kafka) via BigQuery’s Write API, scaling to tens of GiB/second with zero-latency reads
Native integration with Vertex AI
Automated table management features like compaction and garbage collection
Performance optimizations such as auto-reclustering
Fine-grained DML and multi-table transactions (coming soon in preview).
This enterprise-ready, fully managed table experience, familiar to BigQuery users, maintains the openness and interoperability of Apache Iceberg to deliver the best of both worlds.
```sql
-- Create BigLake table for Apache Iceberg in BigQuery, stored on GCS
CREATE OR REPLACE TABLE my_lake_ds.inventory_bq (item_id STRING, qty INT64)
WITH CONNECTION `us.my_bl_connection`
OPTIONS (
  storage_uri = 'gs://my_lake_bucket/inventory_bq_data',
  table_format = 'ICEBERG',
  file_format = 'PARQUET'
);

INSERT INTO my_lake_ds.inventory_bq VALUES ('Laptop', 50);
UPDATE my_lake_ds.inventory_bq SET qty = 49 WHERE item_id = 'Laptop';

-- Perform multi-table transactions
BEGIN TRANSACTION;
  -- Example: Record a new order
  INSERT INTO my_lake_ds.orders_bq (id, item, amount) VALUES (2, 'Mouse', 25.00);
  -- Example: Update inventory for the ordered item
  UPDATE my_lake_ds.inventory_bq SET qty = qty - 1 WHERE item_id = 'Mouse';
COMMIT TRANSACTION;
```
AI-powered governance across Google Cloud
BigLake integrates natively with Dataplex Universal Catalog, helping to ensure that governance policies defined centrally in Dataplex are consistently enforced across multiple engines. This integration supports table-level access control for direct Cloud Storage access. Fine-grained access control is automatically available for queries within BigQuery; for open-source engines, it can be achieved using Storage API connectors.
Beyond access management, BigLake’s Dataplex integration significantly enriches overall governance for BigQuery tables and BigLake tables for Apache Iceberg (created via the custom Iceberg catalog). Key capabilities include:
Comprehensive data understanding: Native support for search, discovery, profiling, data quality checks, and end-to-end data lineage within a multi-runtime architecture.
AI-powered exploration: Dataplex simplifies data exploration with AI-powered semantic search. Its knowledge graph also automatically suggests relevant questions using AI generated insights for your BigQuery and Iceberg data, helping to jumpstart analysis.
Crucially, Dataplex’s end-to-end governance benefits apply to your Iceberg data seamlessly through BigLake’s native integration, without requiring separate registration or enablement steps.
What’s next
At Google Cloud Next ‘25 we demonstrated how fine-grained DML, multi-statement transactions, and change data capture support let you simplify your Apache Iceberg lakehouse for advanced data-processing use cases. These features will be launching soon and support for remaining capabilities will continue to roll out in upcoming months. Or, explore BigLake capabilities and watch the latest demos on our webpage or get started with BigLake tables for Apache Iceberg and BigLake metastore using this guide.
Google Cloud is pleased to announce the general availability of committed use discounts (‘CUDs’) for Red Hat Enterprise Linux. If you run consistent and predictable workloads on Compute Engine, you can utilize CUDs to save on Red Hat Enterprise Linux subscription costs by as much as 20%¹ compared to on-demand (or “PAYG”) prices.
“Red Hat Enterprise Linux on Google Cloud provides a consistent foundation for hybrid cloud environments and a reliable, high-performance operating environment for applications and cloud infrastructure. The introduction of committed use discounts for Red Hat Enterprise Linux for Google Cloud makes it even easier for customers to deploy on the world’s leading enterprise Linux platform to unlock greater business value in the cloud.” – Gunnar Hellekson, Vice President and General Manager, Red Hat Enterprise Linux Business Unit, Red Hat
What are committed use discounts for RHEL?
Red Hat Enterprise Linux committed use discounts (collectively referred to as ‘Red Hat Enterprise Linux CUDs’ or ‘RHEL CUDs’) are resource-based commitments available for purchase in one year terms. When you purchase Red Hat Enterprise Linux CUDs, you are committing to paying the monthly Red Hat Enterprise Linux subscription fees for the duration you’ve selected for the number of subscriptions you specify. In exchange, you can save as much as 20% on Red Hat Enterprise Linux subscription costs compared to on-demand rates. CUDs are ideal for your predictable and steady-state usage, allowing you to maximize savings and simplify budget planning.
How do RHEL committed use discounts work?
RHEL CUDs are project- and region-specific. That means you will need to purchase them in the same region and project as the instances consuming these subscriptions. After you make a purchase, discounts automatically apply to any running virtual machine (VM) instances within a selected project in the specified region. If you have multiple projects under the same billing account, commitments can also be shared across projects by turning on billing account sharing.
When commitments expire, your running VMs continue to run at on-demand rates. It is important to note that after you purchase a commitment, you cannot edit or cancel it. You must pay the agreed-upon monthly amount for the duration of the commitment. Refer to Purchasing commitments for licenses for more information.
How much can I save with RHEL committed use discounts?
You can save as much as 20% on one-year commitments compared to the current on-demand prices. However, it is important to remember that you will be charged the monthly subscription fees, even if your actual RHEL usage is lower. Therefore, to maximize the discounts you can receive, we recommend purchasing CUDs for steady and predictable workloads. Here is a comparison of the maximum possible discounts versus current on-demand prices:
| License family | Machine size | On-demand price² | 1-year CUD price³ | Discount⁴ |
| --- | --- | --- | --- | --- |
| Red Hat Enterprise Linux | 1-8 vCPU | $0.0144 / vCPU-hour | $0.0115 / vCPU-hour ($100.92 / vCPU-year) | ~20% |
| Red Hat Enterprise Linux | 9-127 vCPU | $0.0108 / vCPU-hour | $0.0086 / vCPU-hour ($75.69 / vCPU-year) | ~20% |
| Red Hat Enterprise Linux | 128+ vCPU | $0.0096 / vCPU-hour | $0.0077 / vCPU-hour ($67.28 / vCPU-year) | ~20% |
Based on our research, CUDs are a good fit for many Red Hat Enterprise Linux VMs, the majority of which run 24/7 workloads. When evaluating whether they are right for you, consider the following: based on list prices for a one-year term, Red Hat Enterprise Linux CUDs can help you save on subscription costs if you utilize a Red Hat Enterprise Linux instance for roughly 80% or more of the one-year CUD term.
*Savings are estimates only. This analysis assumes only one Red Hat Enterprise Linux instance with 8 vCPU running under the CUD project and region.
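The ~80% break-even figure can be checked directly from the list prices above (1-8 vCPU tier): it is the utilization at which a year of on-demand usage costs the same as the committed yearly fee.

```python
# Break-even utilization for a 1-year RHEL CUD (1-8 vCPU tier, list prices).
on_demand_hourly = 0.0144   # $/vCPU-hour, on-demand (from the table above)
cud_yearly = 100.92         # $/vCPU-year, 1-year CUD (from the table above)
hours_per_year = 8760       # 730 hours/month * 12 months

# Hours of on-demand usage whose cost equals the committed yearly fee.
break_even_hours = cud_yearly / on_demand_hourly            # ~7,008 hours
break_even_utilization = break_even_hours / hours_per_year  # ~0.80

print(f"Break-even: {break_even_utilization:.0%} of the year")
```

Running an instance above that utilization makes the CUD cheaper than on-demand; below it, on-demand wins.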
How do I purchase RHEL committed use discounts?
The easiest way to purchase Red Hat Enterprise Linux CUDs is through the Google Cloud console.
1. Click Purchase commitment to purchase a new commitment.
2. Click New license committed use discount to purchase a new license commitment.
3. Name your commitment and choose the region where you want it to apply.
4. Choose a License family.
5. Choose the License type and quantity.
6. Choose the Number of licenses.
7. Click Purchase.
You can also purchase Red Hat Enterprise Linux commitments using the Google Cloud CLI or the Compute Engine API.
For more information, refer to Purchasing commitments for licenses. We hope this helps you find the most cost-optimal plan for your Red Hat Enterprise Linux deployment needs.
1. Refer to the table in the “How much can I save with RHEL committed use discounts?” section of this blog for details on how discounts are calculated.
2. Price as of this article’s publish date.
3. Hourly costs are approximate. Calculations are derived from the full CUD prices (as of this article’s publish date), assuming VMs running 730 hours per month, 12 months per year. Yearly costs have been rounded to the nearest whole cent.
4. Discounts compared to current on-demand pricing are rounded to the nearest whole number.
At Google I/O 2025, we unveiled a suite of groundbreaking AI advancements, signaling a new frontier in how technology will empower organizations everywhere. The new era of innovation is here. Over the two-day event, Google showcased everything from industry-leading reasoning models to AI assistants in your glasses. For public sector agencies, these innovations promise to fundamentally reshape operations, enhance service delivery, and advance critical missions.
There were several announcements made at Google I/O that public sector organizations can take advantage of in order to deliver on their efficiency and mission objectives.
Gemma 3n, our powerful generative AI model, is now optimized for use in everyday devices such as phones, laptops, and tablets. Gemma 3n is multimodal and can run with as little as 2GB of RAM. This will enhance on-the-go productivity across the public sector, and will be valuable for endpoint solutions and distributed offerings.
Gemini 2.5 Flash, our powerful and most efficient workhorse model, designed for speed and low cost, continues to position Google as the best value for AI intelligence on the market. This is especially important amid a focus on overall efficiency and cost-effectiveness across the public sector.
The MedGemma collection contains Google’s most capable open models for medical text and image comprehension, built on Gemma 3. Developers can use MedGemma to accelerate building healthcare-based AI applications. We believe this will be critical for agencies focused on supporting and delivering healthcare services.
Google Beam is our AI-first video communications platform for immersive 3D experiences. Bringing digital and physical experiences closer together is a significant differentiator that Google is uniquely positioned to deliver. We believe this will revolutionize secure remote collaboration including virtual training for teams across the public sector – no matter where they are.
Gemini Live and Agent Mode in the Gemini app introduce sophisticated AI assistants capable of brainstorming and even completing complex tasks across applications. By making Agent Mode in Gemini available at the click of a button, we are democratizing agentic workflows and making them more accessible to enterprises and the public sector. We believe these offerings provide unparalleled potential for automating administrative workflows, enhancing internal knowledge access across the agency, and vastly improving citizen self-service.
AI Mode in Search will deliver more intelligent and personalized information retrieval, providing researchers, analysts, and even policymakers faster access to critical data.
Gemini 2.5 Pro with Deep Think offers enhanced reasoning for complex data analysis and predictive analytics, which is crucial for a number of domains including public health, higher education, research, security, resource management and more.
Our latest generative media models, Veo 3 and Imagen 4, open doors for everything from creating realistic training simulations, to impactful public service announcements and information campaigns, as well as engaging educational content.
FireSat, a partnership led by Earth Fire Alliance, is using AI to create a breakthrough in wildfire detection. FireSat uses high-res multispectral satellite imagery and AI to provide near real-time insights on wildfires, enabling faster detection, improved situational awareness for first responders, and ultimately helping to reduce the devastating impacts of wildfires.
We believe all of these innovations will enable public sector agencies and drive greater efficiencies by automating manual and time-consuming tasks, providing new insights that empower decision makers, facilitating more seamless and secure communication, and ultimately delivering more impactful services to citizens. This is truly a new era of innovation, and we’re passionate about applying the latest Google technologies to support your mission and accelerate your impact.
Visit the Google booth #906 and attend our Innovation Talks at AI Expo from June 2-4 in Washington, D.C., to learn more about how we can help empower your agency and accelerate mission impact, and sign up for the Google Public Sector newsletter.
In the event of a cloud incident, everyone wants swift and clear communication from the cloud provider, and to be able to leverage that information effectively. Personalized Service Health in the Google Cloud console addresses this need with fast, transparent, relevant, and actionable communications about Google Cloud service disruptions, customized to your specific footprint. This helps you to quickly identify the source of the problem, helping you answer the question, “Is it Google or is it me?” You can then integrate this information into your incident response workflows to resolve the incident more efficiently.
We’re excited to announce that you can prompt Gemini Cloud Assist to pull real-time information about active incidents, powered by Personalized Service Health, providing you with streamlined incident management, including discovery, impact assessment, and recovery. By combining Gemini’s guidance with Personalized Service Health insights and up-to-the-minute information, you can assess the scope of impact and begin troubleshooting – all within a single, AI-driven Gemini Cloud Assist chat. Further, you can initiate this sort of incident discovery from anywhere within the console, offering immediate access to relevant incidents without interrupting your workflow. You can also check for active incidents impacting your projects, gathering details on their scope and the latest updates directly sourced from Personalized Service Health.
Using Gemini Cloud Assist with Personalized Service Health
We designed Gemini Cloud Assist with a user-friendly layout and a well-organized information structure. Crucial details, including dynamic timelines, latest updates, symptoms, and workarounds sourced directly from Personalized Service Health, are now presented in the console, enabling conversational follow-ups. Gemini Cloud Assist highlights critical insights from Personalized Service Health, helping you refine your investigations and understand the impact of incidents.
To illustrate the power of this integration, the following demo showcases a typical incident response workflow leveraging the combined capabilities of Gemini and Personalized Service Health.
Incident discovery and triage
In the crucial first moments of an incident, Gemini Cloud Assist helps you answer “Is it Google or is it me?” Gemini Cloud Assist accesses data directly from Personalized Service Health and reports which of your projects are affected by a Google Cloud incident, and in which locations, speeding up the triage process.
To illustrate how you can start this process, try asking Gemini Cloud Assist questions like:
Is my project impacted by a Google Cloud incident?
Are there any incidents impacting Google Cloud at the moment?
Investigating and evaluating impact
Once you’ve identified a relevant Google Cloud incident, you can use Gemini Cloud Assist to delve deeper into the specifics and evaluate its impact on your environment. By asking follow-up questions, you can have Gemini Cloud Assist retrieve updates from Personalized Service Health as the incident evolves. You can then investigate further by asking Gemini to pinpoint exactly which of your apps or projects, and in which locations, might be affected by the reported incident.
Here are examples of prompts you might pose to Gemini Cloud Assist:
Tell me more about the ongoing Incident ID [X] (Replace [X] with the Incident ID)
Is [X] impacted? (Replace [X] with your specific location or Google Cloud product)
What is the latest update on Incident ID [X]?
Show me the details of Incident ID [X].
Can you guide me through some troubleshooting steps for [impacted Google Cloud product]?
Mitigation and recovery
Finally, Gemini Cloud Assist can act as an intelligent assistant during the recovery phase, providing you with actionable guidance. You can gain access to relevant logs and monitoring data for more efficient resolution. Additionally, Gemini Cloud Assist can help surface potential workarounds from Personalized Service Health and direct you to the tools and information you need to restore your projects or applications. Here are some sample prompts:
What are the workarounds for the incident ID [X]? (Replace [X] with the Incident ID)
Can you suggest a temporary solution to keep my application running?
How can I find logs for this impacted project?
From these prompts, Gemini retrieves relevant information from Personalized Service Health to provide you with personalized insights into your Google Cloud environment’s health — both for ongoing events and incidents from up to one year in the past. This helps when investigating an incident to narrow down its impact, as well as assisting in recovery.
Next steps
Looking ahead, we are excited to provide even deeper insights and more comprehensive incident management with Gemini Cloud Assist and Personalized Service Health, extending these AI-driven capabilities beyond a single project view. Ready to get started?
Get started with Gemini Cloud Assist. Refine your prompts to ask about your specific regions or Google Cloud products, and experiment to discover how it can help you proactively manage incidents.
The Google Data Cloud is a uniquely integrated platform built on Google’s planet-scale infrastructure, infused with AI, and featuring an open lakehouse architecture for multimodal data. Already, organizations like Snap Inc. credit Google’s Data Cloud and open lakehouse architecture with empowering their data engineers and data scientists to do more with their data assets.
“Partnering with Google Cloud has been instrumental in our journey to build Snap’s next-generation, open lakehouse and democratize Spark and Iceberg in our developer community!” – Zhengyi Liu, Senior Manager – Software Engineering, Snap Inc.
Today, we’re excited to announce a series of innovations to our AI-powered lakehouse that sets a new standard for openness, intelligence, and performance. These innovations include:
BigLake Iceberg native storage: leverages Google’s Cloud Storage (GCS) to provide an enterprise-grade experience for managing and interoperating with Iceberg data. This includes BigLake tables for Apache Iceberg (GA) and BigLake metastore with a new REST Catalog API (Preview).
Unified operational and analytical engines: building on the BigLake foundation, customers can seamlessly interoperate on the same Iceberg open data foundation using BigQuery for analytical workloads (GA) and AlloyDB for PostgreSQL (Preview) to target operational needs.
Performance acceleration for BigQuery SQL: delivering a suite of automated SQL engine enhancements for significantly faster and more agile data processing, featuring the BigQuery advanced runtime, a low-latency query API, column metadata indexing, and an order of magnitude speedup for fine-grained updates/deletes.
High-performance Lightning Engine for Apache Spark: our new Lightning Engine (Preview) is designed to supercharge Apache Spark, leveraging optimized data connectors, efficient columnar shuffle operations, in-built caching, and vectorized execution.
Dataplex Universal Catalog: extends AI-powered intelligence and unified governance across the Google Cloud data estate by automatically discovering and organizing metadata from data to AI (including BigLake Iceberg, BigQuery, Spanner, Vertex AI models), enabling central policy enforcement via BigLake, and supporting AI-driven curation, data insights and semantic search.
AI-native notebooks and tooling: developer experiences are improved with Gemini-powered notebooks, PySpark code generation, and code extensions for JupyterLab and Visual Studio Code. Additionally, third-party notebook interfaces now offer enhanced and integrated experiences.
Let’s explore these new innovations.
Expanded BigLake services: Open, unified, and interoperable
We are actively reimagining BigLake into a comprehensive storage runtime for Google Data Cloud using Google’s Cloud Storage. This approach lets you build open, managed and high-performance lakehouses that span Google native storage and data stored in open formats. As part of BigLake, we are announcing our new Iceberg native storage, which provides enterprise-grade support for Iceberg on Google’s Cloud Storage through BigLake tables for Apache Iceberg (GA). BigLake natively supports Google’s Cloud Storage management capabilities and extends these to Iceberg data, enabling you to use storage Autoclass for efficient data tiering to colder storage classes and apply customer-managed encryption keys (CMEK) to your storage buckets. BigLake is also natively supported in our Dataplex Universal Catalog, helping to ensure that centralized governance is consistently enforced across your entire data estate.
Underlying BigLake, the new BigLake metastore (GA) with an Apache Iceberg REST Catalog API (Preview) allows you to achieve true openness and interoperability across your data ecosystem while simplifying management and governance. BigLake metastore is built on Google’s planet-scale infrastructure as a unified, managed, serverless, and scalable service, bringing together enterprise metadata that spans BigQuery, Iceberg native storage, and self-managed open formats to support analytics, operational querying, streaming, and AI. The BigLake solution enables universal engine interoperability, supporting a range of query engines — including first-party Google Cloud services such as BigQuery, AlloyDB, and Google Cloud Serverless for Apache Spark, as well as third-party and open-source engines — to consistently operate on Iceberg data managed by BigLake.
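To make that interoperability concrete, here is a minimal sketch of how an external Spark engine could be pointed at an Iceberg REST catalog. The properties are standard Apache Iceberg Spark catalog settings; the catalog alias, endpoint URI, and warehouse path are assumptions for illustration, not documented BigLake metastore values:

```python
# Standard Apache Iceberg Spark catalog properties for a REST catalog.
# The catalog alias, endpoint URI, and warehouse path below are
# illustrative placeholders, not documented BigLake metastore values.
CATALOG = "biglake"  # arbitrary alias used in SQL, e.g. biglake.db.events

spark_conf = {
    f"spark.sql.catalog.{CATALOG}": "org.apache.iceberg.spark.SparkCatalog",
    f"spark.sql.catalog.{CATALOG}.type": "rest",  # Iceberg REST catalog mode
    f"spark.sql.catalog.{CATALOG}.uri": "https://example.googleapis.com/iceberg/restcatalog",
    f"spark.sql.catalog.{CATALOG}.warehouse": "gs://my-bucket/warehouse",
}

# Each entry would be passed to SparkSession.builder.config(key, value),
# after which `spark.sql("SELECT * FROM biglake.db.events")` resolves
# table metadata through the REST catalog rather than a Hive metastore.
for key, value in spark_conf.items():
    print(f"{key}={value}")
```

Because the REST Catalog API is an open Iceberg interface, the same four properties (with the real endpoint) would work for any engine that ships the Iceberg runtime, which is what allows BigQuery, AlloyDB, Spark, and third-party engines to share one metadata layer.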
In addition, it is now easier than ever to bring data into the Iceberg native storage through our enhanced Migration Services that feature automated Iceberg table and metadata migration from Hadoop/Cloudera (Preview) and a push-button Delta to Iceberg service (Preview).
Analytical and operational engines unite on open data
When you need to perform deep analytics, BigQuery can now read and write Iceberg data using BigLake tables for Apache Iceberg. BigQuery further enhances Iceberg tables with features traditionally associated with proprietary data warehouses, offering high-throughput streaming for zero-latency queries, enhanced table management with automatic data reclustering, and the ability to build advanced ETL use cases with support for multi-table transactions (Preview). In addition, you can leverage BigQuery’s built-in AI capabilities (BQML, AI Query Engine, multimodal analysis) directly on your open datasets. Through this integration, you benefit from the openness and data ownership associated with native Iceberg storage, while simultaneously gaining access to BigQuery’s expansive capabilities. In fact, customer adoption of BigLake Iceberg usage with BigQuery has grown nearly 3x in 18 months, now managing hundreds of petabytes.
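As an illustration of the read/write path, creating such a table from code might look like the sketch below. The OPTIONS clause follows the documented shape of BigLake tables for Apache Iceberg in BigQuery, but every name (dataset, table, connection, bucket) is a placeholder:

```python
# Hypothetical names throughout (dataset, table, connection, bucket).
# The OPTIONS follow the documented shape of BigLake tables for Apache
# Iceberg in BigQuery: Parquet data files, Iceberg table format, and a
# Cloud Storage URI where the table's data and metadata live.
def iceberg_table_ddl(dataset: str, table: str, connection: str, bucket: str) -> str:
    return (
        f"CREATE TABLE `{dataset}.{table}` (id INT64, event_ts TIMESTAMP)\n"
        f"WITH CONNECTION `{connection}`\n"
        f"OPTIONS (\n"
        f"  file_format = 'PARQUET',\n"
        f"  table_format = 'ICEBERG',\n"
        f"  storage_uri = 'gs://{bucket}/{table}'\n"
        f")"
    )

ddl = iceberg_table_ddl("analytics", "events", "my-project.us.lake-conn", "my-bucket")
# The statement would then be submitted with a BigQuery client, e.g.
# google.cloud.bigquery.Client().query(ddl).result()
print(ddl)
```

Once created this way, the table is ordinary Iceberg data in your own bucket: BigQuery streams into it and manages it, while Spark or any other Iceberg-aware engine can read the same files.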
Unified data management extends beyond analytics into the operational heart of your business, with AlloyDB for PostgreSQL, our high-performance operational database, which can now natively query the same BigLake-managed Iceberg data. Now, your operational applications can tap into the richness of BigLake without complex ETL, and you can apply AlloyDB AI capabilities such as semantic search and natural language querying to your Iceberg data.
Customers like Bayer modernized their data cloud to store and analyze vast amounts of observational data using a combination of AlloyDB and BigQuery. They use BigQuery to produce real-time analytics and insights which are operationalized by AlloyDB, delivering 50% better response rates and 5x more throughput than their previous solution.
Unleashing high-performance BigQuery SQL and serverless Spark on open data
We’re also excited to deliver new high-performance data processing, so that all data can be activated quickly and intelligently. We continue to innovate on BigQuery’s SQL engine with a suite of unique, automated performance enhancements. The BigQuery advanced runtime (Preview) can automatically accelerate analytical workloads, using enhanced vectorization and a short-query-optimized mode, without requiring any user action or code changes. This is complemented by the BigQuery API optional job creation mode (GA), which optimizes query paths for short-duration, interactive queries, reducing latency. Further query efficiency is unlocked by the BigQuery column metadata index (CMETA) (GA), which helps process queries on large tables through more efficient, system-managed data pruning. Other architectural improvements mean that BigQuery fine-grained updates/deletes (Preview) now operate an order of magnitude faster, increasing agility for large-scale data operations, including on open formats.
Simultaneously, we’re launching an accelerated Apache Spark experience with our new Lightning Engine (Preview) for Apache Spark. The Lightning Engine accelerates Apache Spark performance through highly optimized data connectors for Cloud Storage and BigQuery storage, efficient columnar shuffle operations, and intelligent built-in caching mechanisms. Furthermore, the Lightning Engine leverages vectorized execution built with native C++ libraries (Velox and Gluten), optimized for Apache Spark. This powerful combination delivers 3.6x faster Spark performance on TPC-H-like benchmarks. In addition, our Spark offering is AI/ML-ready, providing pre-packaged AI libraries, updated ML runtimes, and easy GPU support, establishing Apache Spark (available via our Google Cloud Serverless for Apache Spark offering or via Dataproc cluster deployments) as a first-class, high-performance citizen in a Google Data Cloud lakehouse environment.
Dataplex Universal Catalog: AI-powered intelligence across Google Cloud
An effective AI-driven data strategy hinges on having an intelligent and active universal catalog that can operate at any scale. This is what Dataplex Universal Catalog now provides for the Google Data Cloud, transforming your entire distributed data estate into trusted, discoverable, and actionable resources.
Dataplex Universal Catalog automatically discovers, understands, and organizes metadata across your whole analytical and operational landscape. This comprehensive view now includes BigLake-native Iceberg storage, other open formats like Delta and Hudi on Cloud Storage, analytical data in BigQuery, transactional data from databases like Spanner, and metadata from machine learning models in Vertex AI—showcasing pervasive governance across Google’s Data Cloud.
Governance is also integral to the lakehouse: users can define governance policies centrally and enforce them consistently across multiple data engines through BigLake. This integration supports fine-grained access controls and strengthens governance across all engines of choice in Google’s Data Cloud. The BigLake solution supports credential vending, which allows users to securely extend centrally defined policies all the way to data in Cloud Storage.
Dataplex Universal Catalog is powered by AI, with a Gemini-enhanced knowledge graph, transforming metadata into dynamic, actionable intelligence. Here, AI automates metadata curation, infers hidden relationships between data elements, proactively recommends insights from data backed by complex queries, and enables semantic search with natural language. It also fuels new AI-powered experiences and autonomous agents. For instance, Gemini-powered assistance using Dataplex Universal Catalog shows 50% greater precision in identifying datasets, significantly accelerating insights. Dataplex Universal Catalog is also the foundation of an open ecosystem with seamless metadata federation to platforms like Collibra, and ensures broad connectivity through Dataplex Universal Catalog APIs.
Empowering practitioners with AI-native notebooks and tooling
At Google Cloud, our goal is to revolutionize the data practitioner’s experience by embedding sophisticated AI and lakehouse integrations directly into their preferred tools and workflows. This commitment to an open, flexible, and intelligent environment lets data scientists, engineers, and analysts unlock new levels of productivity and innovation.
Making this possible are our next-gen, AI-native BigQuery Notebooks, which offer a unified and interoperable development experience across SQL, Python, and Apache Spark. This experience is enhanced by deeply embedded Gemini assistive capabilities. Gemini acts as an intelligent collaborator, offering advanced PySpark code generation, insightful explanations of complex code, and direct integration with Cloud Assist Investigations for serverless Spark troubleshooting (Preview), dramatically reducing development friction and accelerating the path from data to insight.
Furthermore, new JupyterLab and Visual Studio Code extensions for BigQuery, Dataproc and Google Cloud Serverless for Apache Spark (Preview) allow developers to connect to Google Cloud’s open lakehouse capabilities directly from their preferred IDEs with minimal setup. Users can start developing within minutes with access to all their lakehouse datasets and files in their preferred tool, supporting their end-to-end journey from development to deployment. The consumption of notebooks using serverless Spark more than quadrupled from Q1 2024 to Q1 2025.
Together, these integrated advancements help deliver an adaptable, intelligent, high-performance Data Cloud anchored on the lakehouse architecture, equipping organizations to connect all of their data to Google’s AI, unlock its full potential, and define innovation in the AI era. Click here to learn more and sign up for early access to these new capabilities. We’re excited to see the solutions you’ll build.
Google Threat Intelligence Group’s (GTIG) mission is to protect Google’s billions of users and Google’s multitude of products and services. In late October 2024, GTIG discovered an exploited government website hosting malware being used to target multiple other government entities. The exploited site delivered a malware payload, which we have dubbed “TOUGHPROGRESS”, that took advantage of Google Calendar for command and control (C2). Misuse of cloud services for C2 is a technique that many threat actors leverage in order to blend in with legitimate activity.
We assess with high confidence that this malware is being used by the PRC-based actor APT41 (also tracked as HOODOO). APT41’s targets span the globe, including governments and organizations within the global shipping and logistics, media and entertainment, technology, and automotive sectors.
Overview
In this blog post, we analyze the malware delivery methods and the technical details of the malware attack chain, discuss other recent APT41 activity, and share indicators of compromise (IOCs) to help security practitioners defend against similar attacks. We also detail how GTIG disrupted this campaign by deploying custom detection signatures, shutting down attacker-controlled infrastructure, and adding protections to Safe Browsing.
Figure 1: TOUGHPROGRESS campaign overview
Delivery
APT41 sent spear-phishing emails containing a link to a ZIP archive hosted on the exploited government website. The archive contains an LNK file masquerading as a PDF, and a directory. Within this directory are what look like seven JPG images of arthropods. When the payload is executed via the LNK, the LNK is deleted and replaced with a decoy PDF file, displayed to the user, indicating that these species need to be declared for export.
The files “6.jpg” and “7.jpg” are fake images. The first file is actually an encrypted payload and is decrypted by the second file, which is a DLL file launched when the target clicks the LNK.
Malware Infection Chain
This malware has three distinct modules, deployed in series, each with a distinct function. Each module also implements stealth and evasion techniques, including memory-only payloads, encryption, compression, process hollowing, control flow obfuscation, and leveraging Google Calendar for C2.
PLUSDROP – DLL to decrypt and execute the next stage in memory.
PLUSINJECT – Launches and performs process hollowing on a legitimate “svchost.exe” process, injecting the final payload.
TOUGHPROGRESS – Executes actions on the compromised Windows host. Uses Google Calendar for C2.
TOUGHPROGRESS Analysis
TOUGHPROGRESS begins by using a hardcoded 16-byte XOR key to decrypt embedded shellcode stored in the sample’s “.pdata” region. The shellcode then decompresses a DLL in memory using COMPRESSION_FORMAT_LZNT1. This DLL layers multiple obfuscation techniques to obscure the control flow.
Register-based Indirect Calls
Dynamic Address Arithmetic
64-bit register overflow
Function Dispatch Table
The register-based indirect call is used after dynamically calculating the address to store in the register. This calculation involves two or more hardcoded values that intentionally overflow the 64-bit register. Here is an example calling CreateThread.
Figure 2: Register-based indirect call with dynamic address arithmetic and 64-bit overflow
We can reproduce how this works using Python “ctypes” to simulate 64-bit register arithmetic. Adding the two values together overflows the 64-bit address space and the result is the address of the function to be called.
Figure 3: Demonstration of 64-bit address overflow
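Concretely, the wrap-around can be simulated as follows. The constants below are hypothetical, chosen only to illustrate the technique, and are not values taken from the sample:

```python
import ctypes

def resolve_call_target(part_a: int, part_b: int) -> int:
    """Simulate the malware's address computation: the sum of two
    hardcoded constants overflows a 64-bit register, and only the low
    64 bits (the real function address) survive the wrap-around.
    ctypes.c_uint64 masks the value exactly like the hardware register."""
    return ctypes.c_uint64(part_a + part_b).value

# Hypothetical constants for illustration (not values from the sample):
# their sum is 0x1_00007FFB_12340000, which exceeds 2**64 and wraps to
# a plausible user-mode DLL address, 0x00007FFB12340000.
addr = resolve_call_target(0xFFFF800000000000, 0x0000FFFB12340000)
print(hex(addr))  # 0x7ffb12340000
```

Because neither constant resembles a valid address on its own, static scanners looking for hardcoded API pointers see nothing useful; the target only materializes at runtime in the register.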
Figure 4: CreateThread in Dispatch Table
Together, these techniques amount to control flow obfuscation: because of the indirect calls and arithmetic operations, a disassembler cannot accurately reconstruct the control flow graph.
Calendar C2
TOUGHPROGRESS has the capability to read and write events with an attacker-controlled Google Calendar. Once executed, TOUGHPROGRESS creates a zero-minute Calendar event at a hardcoded date, 2023-05-30, with data collected from the compromised host being encrypted and written in the Calendar event description.
The operator places encrypted commands in Calendar events on 2023-07-30 and 2023-07-31, which are predetermined dates also hardcoded into the malware. TOUGHPROGRESS then begins polling Calendar for these events. When an event is retrieved, the event description is decrypted and the command it contains is executed on the compromised host. Results from the command execution are encrypted and written back to another Calendar event.
In collaboration with the Mandiant FLARE team, GTIG reverse engineered the C2 encryption protocol leveraged by TOUGHPROGRESS. The malware uses a hardcoded 10-byte XOR key and generates a per-message 4-byte XOR key.
Append the 4-byte key at the end of a message header (10 bytes total)
Encrypt the header with the 10-byte XOR key
Prepend the encrypted header to the front of the message
The combined encrypted header and message is the Calendar event description
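The steps above can be sketched in Python. This is an illustrative reconstruction, not the sample’s actual code: the static key value, the content of the header’s first six bytes, and the assumption that the message body is XORed with the per-message key are all placeholders or assumptions:

```python
import os
from itertools import cycle

STATIC_KEY = bytes(range(10))  # placeholder for the sample's hardcoded 10-byte key

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # Repeating-key XOR, the primitive used for both header and body.
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

def encrypt_description(message: bytes, per_msg_key: bytes = b"") -> bytes:
    per_msg_key = per_msg_key or os.urandom(4)   # generated per-message 4-byte XOR key
    header = b"\x00" * 6 + per_msg_key           # 4-byte key appended -> 10-byte header
    body = xor_bytes(message, per_msg_key)       # assumed: body XORed with per-message key
    return xor_bytes(header, STATIC_KEY) + body  # encrypted header prepended to message

def decrypt_description(blob: bytes) -> bytes:
    header = xor_bytes(blob[:10], STATIC_KEY)    # undo the static-key XOR on the header
    return xor_bytes(blob[10:], header[6:])      # recover per-message key, decrypt body
```

A round trip (`decrypt_description(encrypt_description(b"whoami"))`) returns the original bytes, which is the property a defender’s decoder for captured Calendar event descriptions would rely on.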
Figure 5: TOUGHPROGRESS encryption routine for Calendar Event Descriptions
Figure 6: Example of a Calendar event created by TOUGHPROGRESS
Disrupting Attackers to Protect Google, Our Users, and Our Customers
GTIG’s goal is not just to monitor threats, but to counter and disrupt them. At Google, we aim to protect our users and customers at scale by proactively blocking malware campaigns across our products.
To disrupt APT41 and TOUGHPROGRESS malware, we have developed custom fingerprints to identify and take down attacker-controlled Calendars. We have also terminated attacker-controlled Workspace projects, effectively dismantling the infrastructure that APT41 relied on for this campaign. Additionally, we updated file detections and added malicious domains and URLs to the Google Safe Browsing blocklist.
In partnership with Mandiant Consulting, GTIG notified the compromised organizations. We provided the notified organizations with a sample of TOUGHPROGRESS network traffic logs, and information about the threat actor, to aid with detection and incident response.
Protecting Against Ongoing Activity
GTIG has been actively monitoring and protecting against APT41’s attacks using Workspace apps for several years. This threat group is known for their creative malware campaigns, sometimes leveraging Workspace apps.
Google Cloud’s Office of the CISO published the April 2023 Threat Horizons Report detailing HOODOO’s use of Google Sheets and Google Drive for malware C2.
In October 2024, Proofpoint published a report attributing the VOLDEMORT malware family to APT41.
In each case, GTIG identified and terminated the attacker-controlled Workspace projects and infrastructure APT41 relied on for these campaigns.
Free Web Hosting Infrastructure
Since at least August 2024, we have observed APT41 using free web hosting tools for distributing their malware. This includes VOLDEMORT, DUSTTRAP, TOUGHPROGRESS and likely other payloads as well. Links to these free hosting sites have been sent to hundreds of targets in a variety of geographic locations and industries.
APT41 has used Cloudflare Worker subdomains the most frequently. However, we have also observed use of InfinityFree and TryCloudflare. The specific subdomains and URLs here have been observed in previous campaigns, but may no longer be in use by APT41.
APT41 has also been observed using URL shorteners in their phishing messages. The shortened URL redirects to their malware hosted on free hosting app subdomains.
https[:]//lihi[.]cc/6dekU
https[:]//tinyurl[.]com/hycev3y7
https[:]//my5353[.]com/nWyTf
https[:]//reurl[.]cc/WNr2Xy
All domains and URLs in this blog post have been added to the Safe Browsing blocklist. This enables a warning on site access and prevents users from downloading the malware.
Indicators of Compromise
The IOCs in this blog post are also available as a collection in Google Threat Intelligence.
Today, we’re thrilled to announce another significant milestone for our Google Public Sector business: Google Distributed Cloud (GDC) and the GDC air-gapped appliance have achieved Department of Defense (DoD) Impact Level 6 (IL6) authorization. Google Public Sector can now provide DoD customers with a secure, compliant, and cutting-edge cloud environment at IL6, enabling them to leverage the full power of GDC for their most sensitive Secret classified data and applications. This accreditation builds on our existing IL5 and Top Secret accreditations, and solidifies Google Cloud’s ability to deliver secure solutions for digital sovereignty, critical national security, and defense missions for the U.S. government.
Secure, distributed cloud for critical missions
This authorization comes at a crucial time, as the digital landscape is becoming increasingly complex, and the need for robust security measures is growing more urgent. Google’s collaboration with the U.S. Navy under the JWCC contract exemplifies its commitment to providing advanced infrastructure and cloud services for a resilient hybrid-cloud environment. Google Distributed Cloud provides a fully-managed solution designed specifically to uphold stringent security requirements, allowing U.S. intelligence and DoD agencies to host, control, and manage their infrastructure and services.
GDC can operate within Google’s trusted, secure, and managed data centers, or in forward deployed locations to provide the DoD and Intelligence Community with a comprehensive suite of secure cloud solutions. This platform unlocks the power of advanced cloud capabilities like data analytics, machine learning (ML), and artificial intelligence (AI). The isolated platform, physically located and managed by Google, ensures customers can trust the foundation of their sensitive workloads.
Google has dramatically accelerated AI services to support the DoD. Vertex AI and Google’s state-of-the-art Gemini models are available now at IL6 and Top Secret, supporting missions at the highest classification levels.
Next-gen cloud and AI capabilities at the tactical edge
In harsh, disconnected, or mobile environments, organizations face significant challenges in providing computing capabilities. The Google Distributed Cloud air-gapped appliance brings Google Cloud and AI capabilities to tactical edge environments. These capabilities unlock real-time local data processing for use cases such as cyber analysis, predictive maintenance, tactical communications kits, sensor kits, or field translation. The appliance includes Vertex AI and Pre-Trained Model APIs (Speech to Text, Translate, and OCR).
The appliance can be conveniently transported in a rugged case or mounted in a rack within customer-specific local operating environments and remain disconnected indefinitely based on mission need.
Enabling efficiency through digital transformation
Customers throughout the federal government today are using Google Cloud to help achieve their missions. For example, the Defense Innovation Unit (DIU) is using Google Cloud technology to develop AI models to assist augmented reality microscope (ARM) detection of certain types of cancer; the U.S. Air Force is using Vertex AI to overhaul their manual processes; and the U.S. Air Force Rapid Sustainment Office (RSO) is using Google Cloud technology for aircraft maintenance.
Learn more about how Google Cloud solutions can empower your agency and accelerate mission impact, and stay up to date with our latest innovations by signing up for the Google Public Sector newsletter.
Everyone’s talking about AI agents, but the real magic happens when they collaborate to tackle complex tasks. Think: complex processes, data analysis, content creation, and customer support. In this hackathon, you’ll build autonomous multi-agent AI systems using Google Cloud and the open source Agent Development Kit (ADK).
This is your chance to dive deep into cutting-edge AI, showcase your skills, and contribute to the future of agent development.
Hands-on learning with the ADK: This is your chance to try out and contribute to Agent Development Kit (ADK). We’ll provide you with the resources, support, and expert guidance you need to build sophisticated multi-agent systems.
Real-world impact: Tackle real-world problems that directly impact how work gets done, from automating complex processes and deriving data insights to transforming customer service and content creation.
A showcase for your talent: Present your project to a panel of judges and demonstrate your expertise to a wide audience. Build working agents that can help your workflows and be the foundation for a future product.
And the rewards? Exciting prizes await!
We’re offering a range of exciting prizes:
Overall grand prize: $15,000 in USD, $3,000 in Google Cloud Credits for use with a Cloud Billing Account, 1 year of Google Developer Program Premium subscription at no-cost, virtual coffee with a Google team member, and social promo
Regional winners: $8,000 in USD, $1,000 in Google Cloud Credits for use with a Cloud Billing Account, virtual coffee with a Google team member, and social promo
Honorable mentions: $1,000 in USD and $500 in Google Cloud Credits for use with a Cloud Billing Account
Unleash the power of the Agent Development Kit (ADK):
ADK is a flexible and modular framework designed for developing and deploying AI agents. It’s an open-source framework that offers tight integration with the Google ecosystem and Gemini models. ADK makes it easy to get started with simple agents powered by Gemini models and Google AI tools, while also providing the control needed for more complex agent architectures and orchestration.
What to build
Your project should demonstrate how to design and orchestrate interactions between multiple autonomous agents using ADK. Build in one of these categories:
Automation of complex processes: Design multi-agent workflows to automate complex, multi-step business processes, software development lifecycle, or manage intricate tasks.
Data analysis and insights: Create multi-agent systems that autonomously analyze data from various sources, derive meaningful insights using tools like BigQuery, and collaboratively present findings.
Customer service and engagement: Develop intelligent virtual assistants or support agents built with ADK as multi-agent systems to handle complex customer inquiries, provide personalized support, and proactively engage with customers.
Content creation and generation: Build multi-agent systems that can autonomously generate different forms of content, such as marketing materials, reports, or code, by orchestrating agents with specialized content generation capabilities.
Crucial note: Your project must be built using the Agent Development Kit (ADK), focusing on the design and interactions between multiple agents. Think ADK first, but feel free to supercharge your solution by integrating with other awesome Google Cloud technologies!
Ready to start building?
Head over to our hackathon website and watch our webinar to learn more, review the rules, and register.
Google Cloud’s Vertex AI platform makes it easy to experiment with and customize over 200 advanced foundation models – like the latest Google Gemini models, and third-party partner models such as Meta’s Llama and Anthropic’s Claude. And now, thanks to a major refresh focused on developer feedback, it’s even more efficient and intuitive.
The redesigned, developer-first experience will be your source for generative AI media models across all modalities. You’ll have access to Google’s powerful generative AI media models, such as Veo, Imagen, Chirp, and Lyria, in the Vertex AI Media Studio. These aren’t just cosmetic changes; they translate directly into five workflow benefits, from accelerated prototyping to efficient experimentation:
Stay cutting-edge: Get hands-on experience with Google’s latest AI models and features as soon as they’re available.
Easier to start with AI in the cloud: The new design makes it easier for developers of all experience levels to start building with generative AI.
Accelerated prototyping: Quickly test ideas, iterate on prompts, and prototype applications faster than before.
Integrated end-to-end workflow: Move easily from ideation and prompting to grounding, tuning, code generation, and even test deployment, all within a single, cohesive environment and in just a couple of clicks. Less tool-switching, more building!
Efficient experimentation: Vertex AI Studio provides a place to explore different models, parameters, and prompting techniques.
Dive in to see the key improvements.
What’s new and how it works for you
We heard you wanted features to explore, iterate and boost your productivity. That’s why we’re making things easier and more powerful in three ways: faster prompting, easier ways to build, and a fresh interface.
Enhanced prompting capabilities:
Faster prompting: Our revamped overview gets you prompting faster, with quick access to samples and tools, complemented by a unified UI that combines Chat and Freeform prompting for a smoother workflow.
Prompt management & enhancement: Simplify your prompt engineering by easily managing the lifecycle (create, refine, compare, save, track history) while simultaneously improving prompt quality and capabilities through techniques like variables, function calling, and adding examples.
Integrated prompt engineering: Access tuning, evaluation, and batch prediction, all designed to optimize model performance.
Prompt with gen AI models in Vertex AI Studio
Better ways to build
Build with Gemini: Access and experiment with the latest Gemini models, such as Gemini 2.5, directly within the Studio to test:
Text generation
Image creation
Audio generation
Multimodal capabilities
Live API
Build trust with grounded AI: Easily connect models to real-world, up-to-date information or your specific private data. Grounding with Google Search or Google Maps is simpler than ever. Need custom knowledge? Integrate effortlessly with your data via Vertex AI RAG Engine or Vertex AI Search. This dramatically improves the reliability and factual accuracy of model outputs, letting you build applications your users can trust.
Code generation & app deployment: Get sample code (Python, Android, Swift, Web, Flutter, cURL), including direct integration to open Python in Colab Enterprise. You can also deploy the prompt as a test web application for quick proof-of-concept validation.
Fresher interface
Dark mode is here: Recognizing that many developers prefer darker interfaces for extended sessions, you can now experience dark mode across the entire Vertex AI platform for improved visual comfort and focus. Activate it easily in your Cloud profile user preferences.
Get started with Vertex AI today
We’re committed to continually refining Vertex AI Studio based on your feedback, which you can share right in the console, ensuring you have the tools you need for building the next generation of AI applications.
Since November 2024, Mandiant Threat Defense has been investigating a UNC6032 campaign that weaponizes the interest around AI tools, in particular those tools which can be used to generate videos based on user prompts. UNC6032 utilizes fake “AI video generator” websites to distribute malware leading to the deployment of payloads such as Python-based infostealers and several backdoors. Victims are typically directed to these fake websites via malicious social media ads that masquerade as legitimate AI video generator tools like Luma AI, Canva Dream Lab, and Kling AI, among others. Mandiant Threat Defense has identified thousands of UNC6032-linked ads that have collectively reached millions of users across various social media platforms like Facebook and LinkedIn. We suspect similar campaigns are active on other platforms as well, as cybercriminals consistently evolve tactics to evade detection and target multiple platforms to increase their chances of success.
Mandiant Threat Defense has observed UNC6032 compromises culminating in the exfiltration of login credentials, cookies, credit card data, and Facebook information through the Telegram API. This campaign has been active since at least mid-2024 and has impacted victims across different geographies and industries. Google Threat Intelligence Group (GTIG) assesses UNC6032 to have a Vietnam nexus.
Mandiant Threat Defense acknowledges Meta’s collaborative and proactive threat hunting efforts in removing the identified malicious ads, domains, and accounts. Notably, a significant portion of Meta’s detection and removal began in 2024, prior to Mandiant alerting them of additional malicious activity we identified.
Threat actors haven’t wasted a moment capitalizing on the global fascination with Artificial Intelligence. As AI’s popularity surged over the past couple of years, cybercriminals quickly moved to exploit the widespread excitement. Their actions have fueled a massive and rapidly expanding campaign centered on fraudulent websites masquerading as cutting-edge AI tools. These websites have been promoted by a large network of misleading social media ads, similar to the ones shown in Figure 1 and Figure 2.
Figure 1: Malicious Facebook ads
Figure 2: Malicious LinkedIn ads
As part of Meta’s implementation of the Digital Services Act, the Ad Library displays additional information (ad campaign dates, targeting parameters and ad reach) on all ads that target people from the European Union. LinkedIn has also implemented a similar transparency tool.
Our research through both Ad Library tools identified over 30 different websites, mentioned across thousands of ads active since mid-2024, all displaying similar ad content. Most of the ads we found ran on Facebook, with only a handful also advertised on LinkedIn. The ads were published using both attacker-created Facebook pages and compromised Facebook accounts. Mandiant Threat Defense performed further analysis of a sample of over 120 malicious ads and, from the EU transparency section of the ads, found their total reach for EU countries was over 2.3 million users. Table 1 displays the top five Facebook ads by reach. Note that reach does not equate to the number of victims; according to Meta, the reach of an ad is an estimate of how many Account Center accounts saw the ad at least once.
| Ad Library ID | Ad Start Date | Ad End Date | EU Reach |
| --- | --- | --- | --- |
| 1589369811674269 | 14.12.2024 | 18.12.2024 | 300,943 |
| 559230916910380 | 04.12.2024 | 09.12.2024 | 298,323 |
| 926639029419602 | 07.12.2024 | 09.12.2024 | 270,669 |
| 1097376935221216 | 11.12.2024 | 12.12.2024 | 124,103 |
| 578238414853201 | 07.12.2024 | 10.12.2024 | 111,416 |
Table 1: Top 5 Facebook ads by reach
The threat actor constantly rotates the domains mentioned in the Facebook ads, likely to avoid detection and account bans. Once a domain is registered, it is referenced in ads within a few days, if not the same day. Moreover, most of the ads are short-lived, with new ones created daily.
On LinkedIn, we identified roughly 10 malicious ads, each directing users to hxxps://klingxai[.]com. This domain was registered on September 19, 2024, and the first ad appeared just a day later. These ads have a total impression estimate of 50k-250k. For each ad, the United States was the region with the highest percentage of impressions, although the targeting included other regions such as Europe and Australia.
| Ad Library ID | Ad Start Date | Ad End Date | Total Impressions | % Impressions in the US |
| --- | --- | --- | --- | --- |
| 490401954 | 20.09.2024 | 20.09.2024 | <1k | 22 |
| 508076723 | 27.09.2024 | 28.09.2024 | 10k-50k | 68 |
| 511603353 | 30.09.2024 | 01.10.2024 | 10k-50k | 61 |
| 511613043 | 30.09.2024 | 01.10.2024 | 10k-50k | 40 |
| 511613633 | 30.09.2024 | 01.10.2024 | 10k-50k | 54 |
| 511622353 | 30.09.2024 | 01.10.2024 | 10k-50k | 36 |
Table 2: LinkedIn ads
The websites Mandiant Threat Defense investigated share similar interfaces and offer purported functionality such as text-to-video or image-to-video generation. Once the user submits a prompt to generate a video, regardless of the input, the website serves one of the static payloads hosted on the same (or related) infrastructure.
The downloaded payload is the STARKVEIL malware. It drops three different modular malware families, primarily designed for information theft and capable of downloading plugins to extend their functionality. The presence of multiple, similar payloads suggests a fail-safe mechanism, allowing the attack to persist even if some payloads are detected or blocked by security defenses.
In the next section, we will delve deeper into one particular compromise Mandiant Threat Defense responded to.
Luma AI Investigation
Infection Chain
Figure 3: Infection chain lifecycle
This blog post provides a detailed analysis of our findings on the key components of this campaign:
Lure: The threat actors leverage social networks to push AI-themed ads that direct users to fake AI websites, resulting in malware downloads.
Malware: It contains several malware components, including the STARKVEIL dropper, which deploys the XWORM and FROSTRIFT backdoors and the GRIMPULL downloader.
Execution: The malware makes extensive use of DLL side-loading, in-memory droppers, and process injection to execute its payloads.
Persistence: The malware uses an AutoRun registry key for its two backdoors (XWORM and FROSTRIFT).
Anti-VM and Anti-analysis: GRIMPULL checks for commonly used artifacts from known sandbox and analysis tools.
Reconnaissance
Host reconnaissance: XWORM and FROSTRIFT survey the host by collecting information, including OS, username, role, hardware identifiers, and installed AV.
Software reconnaissance: FROSTRIFT checks the existence of certain messaging applications and browsers.
Command-and-control (C2)
Tor: GRIMPULL utilizes a Tor Tunnel to fetch additional .NET payloads.
Telegram: XWORM sends a victim notification via Telegram, including information gathered during host reconnaissance.
TCP: The malware connects to its C2 servers using ports 7789, 25699, and 56001.
Information stealer
Keylogger: XWORM logs keystrokes from the host.
Browser extensions: FROSTRIFT scans for 48 browser extensions related to Password managers, Authenticators, and Digital wallets, potentially for data theft.
Backdoor Commands: XWORM supports multiple commands for further compromise.
The Lure
This particular case began with a Facebook ad for “Luma Dream AI Machine”, masquerading as a well-known text-to-video AI tool, Luma AI. The ad, as seen in Figure 4, redirected the user to an attacker-created website hosted at hxxps://lumalabsai[.]in/.
Figure 4: The ad the victim clicked on
Once on the fake Luma AI website, the user can click the “Start Free Now” button and choose from various video generation functionalities. Regardless of the selected option, the same prompt is displayed, as shown in the GIF in Figure 5.
This multi-step process, made to resemble any other legitimate text-to-video or image-to-video generation website, creates a sense of familiarity for the user and gives no immediate indication of malicious intent. Once the user hits the generate button, a loading bar appears, mimicking an AI model hard at work. After a few seconds, when the new video is supposedly ready, a Download button is displayed. Clicking it downloads a ZIP archive file to the victim host.
Figure 5: Fake AI video generation website
Unsurprisingly, the ready-to-download archive is one of many payloads already hosted on the same server, with no connection to the user input. In this case, several archives were hosted at the path hxxps://lumalabsai[.]in/complete/. Mandiant determined that the website will serve the archive file with the most recent “Last Modified” value, indicating continuous updates by the threat actor. Mandiant compared some of these payloads and found them to be functionally similar, with different obfuscation techniques applied, thus resulting in different sizes.
Figure 6: Payloads hosted at hxxps://lumalabsai[.]in/complete
Execution
The previously downloaded ZIP archive contains an executable with a double extension (.mp4 and .exe) in its name, separated by thirteen Braille Pattern Blank (Unicode: U+2800, UTF-8: E2 A0 80) characters. This is a special whitespace character from the Braille Patterns block in Unicode.
Figure 7: Braille Pattern Blank characters in the file name
The resulting file name, Lumalabs_1926326251082123689-626.mp4⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀.exe, aims to make the binary less suspicious by pushing the .exe extension out of the user’s view. The number of Braille Pattern Blank characters used varies across different samples served, ranging from 13 to more than 30. To further hide the true purpose of this binary, the malicious file uses the default .mp4 Windows icon.
Figure 8 shows how the file looks on Windows 11, compared to a legitimate .mp4 file.
Figure 8: Malicious binary vs legitimate .mp4 file
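A filename abusing this padding trick can be flagged with a few lines of Python. The sketch below is an illustrative heuristic of our own, not Mandiant tooling; the extension list is an assumption:

```python
# Heuristic sketch: flag file names that hide a real executable extension
# behind a run of Braille Pattern Blank characters (U+2800), as described
# above. The extension list here is an illustrative assumption.
BRAILLE_BLANK = "\u2800"
EXECUTABLE_EXTS = {".exe", ".scr", ".com", ".bat"}

def is_suspicious_name(filename: str) -> bool:
    """True if the name pairs padding blanks with an executable extension,
    e.g. 'video.mp4⠀⠀⠀….exe'."""
    if BRAILLE_BLANK not in filename:
        return False
    stem, _, real_ext = filename.rpartition(".")
    return ("." + real_ext.lower()) in EXECUTABLE_EXTS and BRAILLE_BLANK in stem

name = "Lumalabs_1926326251082123689-626.mp4" + "\u2800" * 13 + ".exe"
print(is_suspicious_name(name))         # True
print(is_suspicious_name("movie.mp4"))  # False
```

A real detection would also consider other invisible Unicode whitespace and right-to-left override characters, which are abused the same way.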
STARKVEIL
The binary Lumalabs_1926326251082123689-626.mp4⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀.exe, tracked by Mandiant as STARKVEIL, is a dropper written in Rust. Once executed, it extracts an embedded archive containing benign executables and its malware components. These are later utilized to inject malicious code into several legitimate processes.
Executing the malware displays an error window, as seen in Figure 9, to make the user believe the file is corrupted and trick them into executing it again.
Figure 9: Error window displayed when executing STARKVEIL
For a successful compromise, the executable needs to run twice; the initial execution extracts all the embedded files under the C:\winsystem directory.
Figure 10: Files in the winsystem directory
During the second execution, the main executable spawns the Python Launcher, py.exe, with an obfuscated Python command as an argument. The Python command decodes embedded Python code, which Mandiant tracks as the COILHATCH dropper. COILHATCH performs the following actions (note that the script has been deobfuscated and renamed for improved readability):
The command takes a Base85-encoded string, decodes it, decompresses the result using zlib, deserializes the resulting data using the marshal module, and then executes the final deserialized data as Python code.
Figure 11: Python command
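The decode-and-execute chain described above uses only the Python standard library, so it can be reproduced end to end with a benign stand-in payload (the payload_src string below is ours, not the malware’s):

```python
import base64, marshal, zlib

# Build a benign stand-in payload wrapped the same way the first stage is:
# marshal-serialize compiled code, zlib-compress, then Base85-encode.
payload_src = "result = 6 * 7"
wrapped = base64.b85encode(
    zlib.compress(marshal.dumps(compile(payload_src, "<stage2>", "exec")))
)

# The unwrap mirrors the described one-liner: b85decode -> zlib.decompress
# -> marshal.loads -> exec of the recovered code object.
code_obj = marshal.loads(zlib.decompress(base64.b85decode(wrapped)))
ns = {}
exec(code_obj, ns)
print(ns["result"])  # 42
```

Note that marshal output is tied to the interpreter version, which is why droppers like this ship their own obfuscated loader command rather than raw bytecode alone.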
The decompiled first-stage Python code combines RSA, AES, RC4, and XOR techniques to decrypt the second stage Python bytecode.
Figure 12: First-stage Python
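Of those four layers, RC4 and XOR can be sketched with the standard library alone (RSA and AES would need a crypto library such as pycryptodome). The keys and layer order below are illustrative assumptions, not values recovered from the sample:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Plain RC4 (KSA + PRGA); encryption and decryption are the same operation."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = bytearray(), 0, 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

def xor_layer(key: bytes, data: bytes) -> bytes:
    """Repeating-key XOR layer."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Round-trip: applying the same layers in reverse order recovers the plaintext.
plaintext = b"second-stage bytecode"
ciphertext = xor_layer(b"k2", rc4(b"k1", plaintext))
print(rc4(b"k1", xor_layer(b"k2", ciphertext)) == plaintext)  # True
```

Layering several weak or symmetric primitives like this adds little cryptographic strength; its purpose is to frustrate static extraction of the second stage.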
The decrypted second-stage Python script executes C:\winsystem\heif\heif.exe, a legitimate, digitally signed executable used to side-load a malicious DLL. This serves as the launcher to execute the other malware components.
As mentioned, the STARKVEIL malware drops its components during its first execution and executes a launcher on its second execution. The complete analysis of all the malware components and their roles is provided in the next sections.
Each of these DLLs operates as an in-memory dropper and spawns a new victim process to perform code injection through process replacement.
Launcher
The execution of C:\winsystem\heif\heif.exe results in the side-loading of the malicious heif.dll, located in the same directory. This DLL is an in-memory dropper that spawns a legitimate Windows process (which may vary) and performs code injection through process replacement.
The injected code is a .NET executable that acts as a launcher and performs the following:
Moves multiple folders from C:\winsystem to %APPDATA%. The destination folders are:
%APPDATA%\python
%APPDATA%\pythonw
%APPDATA%\ffplay
%APPDATA%\Launcher
Launches three legitimate processes to side-load associated malicious DLLs. The malicious DLLs for each process are:
python.exe: %APPDATA%\python\avcodec-61.dll
pythonw.exe: %APPDATA%\pythonw\heif.dll
ffplay.exe: %APPDATA%\ffplay\libde265.dll
Establishes persistence via an AutoRun registry key.
value: Dropbox
key: SOFTWARE\Microsoft\Windows\CurrentVersion\Run
root: HKCU
value data: "cmd.exe /c "cd /d "<exePath>" && "Launcher.exe""
Figure 14: Main function of launcher
The AutoRun key executes %APPDATA%\Launcher\Launcher.exe, which side-loads the DLL file libde265.dll. This DLL spawns AddInProcess32.exe and injects its payload into it via PE hollowing. The injected code’s main purpose is to execute the legitimate binaries C:\winsystem\heif2rgb\heif2rgb.exe and C:\winsystem\heif-info\heif-info.exe, which, in turn, side-load the backdoors XWORM and FROSTRIFT, respectively.
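Persistence of this shape is easy to hunt for. The heuristic below is our own illustration, not a Mandiant detection rule: it flags Run-key value data that launches its payload through a cmd.exe /c indirection like the one above:

```python
import re

# Illustrative heuristic: Run-key value data that wraps the real payload in a
# 'cmd.exe /c "cd /d ... && ..."' chain is a common persistence obfuscation.
RUNKEY_CMD_CHAIN = re.compile(r'cmd(\.exe)?\s+/c\s+.*&&', re.IGNORECASE)

def is_suspicious_run_value(value_data: str) -> bool:
    """Flag command chains behind cmd.exe /c in AutoRun value data."""
    return bool(RUNKEY_CMD_CHAIN.search(value_data))

observed = 'cmd.exe /c "cd /d "C:\\Users\\x\\AppData\\Roaming\\Launcher" && "Launcher.exe""'
print(is_suspicious_run_value(observed))                         # True
print(is_suspicious_run_value(r"C:\Program Files\App\app.exe"))  # False
```

On a live Windows host, the value data would be enumerated from HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run (for example via the winreg module) before applying a check like this.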
GRIMPULL
Of the three executables, the launcher first executes %APPDATA%\python\python.exe, which side-loads the DLL avcodec-61.dll and injects the malware GRIMPULL into a legitimate Windows process.
GRIMPULL is a .NET-based downloader that incorporates anti-VM capabilities and utilizes Tor for C2 server connections.
Anti-VM and Anti-Analysis
GRIMPULL begins by checking for the presence of the mutex value aff391c406ebc4c3, and terminates itself if it is found. Otherwise, the malware performs further anti-VM checks, exiting if any of them succeeds.
| Check | Details |
| --- | --- |
| Module detection | Checks for sandbox/analysis tool DLLs: SbieDll.dll (Sandboxie), cuckoomon.dll (Cuckoo Sandbox) |
| BIOS information | Queries Win32_BIOS via WMI and checks version and serial number for: VMware, VIRTUAL, A M I (AMI BIOS), Xen |
| Parent process | Checks if the parent process is cmd (command line) |
| VM file detection | Checks for the existence of vmGuestLib.dll in the System folder |
| System manufacturer | Queries Win32_ComputerSystem via WMI and checks manufacturer and model for: Microsoft (Hyper-V), VMWare, Virtual |
| Display and system configuration | Checks for screen resolutions 1440×900, 1024×768, or 1280×1024; checks if the OS is 32-bit |
| Username | Checks for common analysis environment usernames: john, anna, or any username containing xxxxxxxx |
Table 4: Anti-VM and Anti-analysis checks
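Several of the checks in Table 4 reduce to simple membership tests over already-collected host facts; a minimal sketch in Python (the function shape and inputs are illustrative assumptions, not the sample’s .NET code):

```python
# Minimal re-implementation of a few of the environment checks listed in
# Table 4, written as pure functions over already-collected host facts.
SANDBOX_MODULES = {"sbiedll.dll", "cuckoomon.dll"}
VM_BIOS_MARKERS = ("VMware", "VIRTUAL", "A M I", "Xen")
ANALYST_NAMES = {"john", "anna"}
VM_RESOLUTIONS = {(1440, 900), (1024, 768), (1280, 1024)}

def looks_like_analysis_env(loaded_modules, bios_version, username, resolution):
    """True if any of the sketched module/BIOS/username/resolution checks fire."""
    if any(m.lower() in SANDBOX_MODULES for m in loaded_modules):
        return True
    if any(marker.lower() in bios_version.lower() for marker in VM_BIOS_MARKERS):
        return True
    if username.lower() in ANALYST_NAMES or "xxxxxxxx" in username.lower():
        return True
    return tuple(resolution) in VM_RESOLUTIONS

print(looks_like_analysis_env(["SbieDll.dll"], "1.0", "alice", (1920, 1080)))      # True
print(looks_like_analysis_env([], "Dell Inc. 1.7.0", "alice", (2560, 1440)))       # False
```

Defenders can invert the same logic: making a sandbox fail these checks (realistic usernames, resolutions, and BIOS strings) keeps samples like GRIMPULL running long enough to observe.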
Download Function
GRIMPULL verifies the presence of a Tor process. If a Tor process is not detected, it proceeds to download, decompress, and execute Tor from the following URL:
GRIMPULL then attempts to connect to the following C2 server via the Tor tunnel over TCP.
strokes[.]zapto[.]org:7789
The malware maintains this connection and periodically checks for .NET payloads. Fetched payloads are decrypted using TripleDES in ECB mode with the MD5 hash of the campaign ID aff391c406ebc4c3 as the decryption key, decompressed with GZip (using a 4-byte length prefix), reversed, and then loaded into memory as .NET assemblies.
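The key derivation and unwrap steps above can be sketched with the standard library; the TripleDES decryption itself is omitted (it needs a third-party crypto library such as pycryptodome), and the endianness of the 4-byte length prefix is an assumption for illustration:

```python
import gzip, hashlib, struct

# Key derivation described above: MD5 of the campaign ID yields the 16-byte
# TripleDES key (the 3DES step itself would need a crypto library and is
# omitted here).
key = hashlib.md5(b"aff391c406ebc4c3").digest()
print(len(key))  # 16

def unwrap(decrypted: bytes) -> bytes:
    """Post-decryption unwrap: skip the 4-byte length prefix, gunzip, then
    reverse the bytes. A little-endian prefix is an assumption."""
    (length,) = struct.unpack("<I", decrypted[:4])
    return gzip.decompress(decrypted[4 : 4 + length])[::-1]

# Synthetic round-trip using the same conventions in the wrap direction.
assembly = b"fake .NET assembly bytes"
blob = gzip.compress(assembly[::-1])
wrapped = struct.pack("<I", len(blob)) + blob
print(unwrap(wrapped) == assembly)  # True
```

Byte reversal, like the layered ciphers elsewhere in this chain, is not meant to be strong; it defeats naive carving of GZIP magic bytes from network captures.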
Malware Configuration
The configuration elements are encoded as base64 strings, as shown in Figure 16.
Figure 16: Encoded malware configuration
Table 5 shows the extracted malware configuration.
| Field | Value |
| --- | --- |
| C2 domain/server | strokes[.]zapto[.]org |
| Port number | 7789 |
| Unique identifier/campaign ID | aff391c406ebc4c3 |
| Configuration profile name | Default |
Table 5: GRIMPULL configuration
XWORM
Next, the launcher executes the file %APPDATA%\pythonw\pythonw.exe, which side-loads the DLL heif.dll and injects XWORM into a legitimate Windows process.
XWORM is a .NET-based backdoor that communicates using a custom binary protocol over TCP. Its core functionality involves expanding its capabilities through a plugin management system. Downloaded plugins are written to disk and executed. Supported capabilities include keylogging, command execution, screen capture, and spreading to USB drives.
XWORM Configuration
The malware begins by decoding its configuration using the AES algorithm.
Figure 17: Decryption of configuration
Table 6 shows the extracted malware configuration.
| Field | Value |
| --- | --- |
| Host | artisanaqua[.]ddnsking[.]com |
| Port number | 25699 |
| KEY | <123456789> |
| SPL | <Xwormmm> |
| Version | XWorm V5.2 |
| USBNM | USB.exe |
| Telegram Token | 8060948661:AAFwePyBCBu9X-gOemLYLlv1owtgo24fcO0 |
| Telegram ChatID | -1002475751919 |
| Mutex | ZMChdfiKw2dqF51X |
Table 6: XWORM configuration
Host Reconnaissance
The malware then performs a system survey to gather the following information:
Bot ID
Username
OS Name
If it’s running on USB
CPU Name
GPU Name
Ram Capacity
AV Products list
Sample of collected information:
☠ [KW-2201]
New Clinet : <client_id_from_machine_info_hash>
UserName : <victim_username>
OSFullName : <victim_OS_name>
USB : <is_sample_name_USB.exe>
CPU : <cpu_description>
GPU : <gpu_description>
RAM : <ram_size_in_GBs>
Groub : <installed_av_solutions>
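A check-in like the one above maps onto the Telegram Bot API’s sendMessage endpoint. The sketch below only builds the request URL, nothing is sent; the credentials are placeholders and the helper itself is our illustration:

```python
from urllib.parse import urlencode

def build_telegram_checkin(token: str, chat_id: str, info: dict) -> str:
    """Format a host-survey message and the Bot API sendMessage URL the way
    the observed check-in does (URL construction only; nothing is sent)."""
    text = "\n".join(f"{k} : {v}" for k, v in info.items())
    query = urlencode({"chat_id": chat_id, "text": text})
    return f"https://api.telegram.org/bot{token}/sendMessage?{query}"

url = build_telegram_checkin(
    "<bot_token>", "<chat_id>",
    {"UserName": "victim", "OSFullName": "Windows 11", "RAM": "16"},
)
print(url.startswith("https://api.telegram.org/bot<bot_token>/sendMessage?"))  # True
```

Because the bot token and chat ID are embedded in the sample’s configuration (Table 6), defenders can alert on outbound requests to api.telegram.org from hosts that have no legitimate Telegram usage.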
Then the sample waits for any of the following supported commands:
| Command | Description |
| --- | --- |
| pong | Echo back to server |
| StartDDos | Spam HTTP requests over TCP to target |
| rec | Restart bot |
| StopDDos | Kill DDoS threads |
| CLOSE | Shutdown bot |
| StartReport | List running processes continuously |
| uninstall | Self delete |
| StopReport | Kill process monitoring threads |
| update | Uninstall and execute received new version |
| Xchat | Send C2 message |
| DW | Execute file on disk via PowerShell |
| Hosts | Get hosts file contents |
| FM | Execute .NET file in memory |
| Shosts | Write to file, likely to overwrite hosts file contents |
| LN | Download file from supplied URL and execute on disk |
| DDos | Unimplemented |
| Urlopen | Perform network request via browser |
| ngrok | Unimplemented |
| Urlhide | Perform network request in process |
| plugin | Load a bot plugin |
| PCShutdown | Shut down PC now |
| savePlugin | Save plugin to registry and load it: HKCU\Software\<victim_id>\<plugin_name> = <plugin_bytes> |
| PCRestart | Restart PC now |
| RemovePlugins | Delete all plugins in registry |
| PCLogoff | Log off |
| OfflineGet | Read keylog |
| RunShell | Execute CMD on shell |
| $Cap | Get screen capture |
Table 7: Supported commands
FROSTRIFT
Lastly, the launcher executes the file %APPDATA%\ffplay\ffplay.exe to side-load the DLL %APPDATA%\ffplay\libde265.dll and inject FROSTRIFT into a legitimate Windows process.
FROSTRIFT is a .NET backdoor that collects system information, installed applications, and crypto wallets. Instead of receiving C2 commands, it receives .NET modules that are stored in the registry to be loaded in-memory. It communicates with the C2 server using GZIP-compressed protobuf messages over TCP/SSL.
Malware Configuration
The malware starts by decoding its configuration, which is a Base64-encoded and GZIP-compressed protobuf message embedded within the strings table.
Figure 18: FROSTRIFT configuration
Table 8 shows the extracted malware configuration.
| Field | Value |
| --- | --- |
| Protobuf tag | 38 |
| C2 domain | strokes.zapto[.]org |
| C2 port | 56001 |
| SSL certificate | <Base64-encoded SSL certificate> |
| Unknown | Default |
| Installation folder | APPDATA |
| Mutex | 7d9196467986 |
Table 8: FROSTRIFT configuration
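Decoding a configuration of this shape needs no protobuf dependency: Base64 and GZIP are standard-library, and the protobuf wire format can be walked with a small varint reader. The message below is synthetic, mirroring the fields in Table 8 for illustration only:

```python
import base64, gzip

def read_varint(buf: bytes, pos: int):
    """Decode a protobuf base-128 varint starting at pos."""
    value = shift = 0
    while True:
        byte = buf[pos]
        value |= (byte & 0x7F) << shift
        pos += 1
        if not byte & 0x80:
            return value, pos
        shift += 7

def parse_fields(buf: bytes):
    """Walk top-level protobuf fields (varint and length-delimited only)."""
    fields, pos = {}, 0
    while pos < len(buf):
        key, pos = read_varint(buf, pos)
        field_no, wire_type = key >> 3, key & 7
        if wire_type == 0:                      # varint
            fields[field_no], pos = read_varint(buf, pos)
        elif wire_type == 2:                    # length-delimited (bytes/string)
            length, pos = read_varint(buf, pos)
            fields[field_no] = buf[pos : pos + length]
            pos += length
        else:
            break                               # other wire types not needed here
    return fields

# Synthetic config mirroring Table 8: field 1 = C2 domain, field 2 = port.
raw = b"\x0a\x11strokes.zapto.org" + b"\x10" + bytes([0xC1, 0xB5, 0x03])
blob = base64.b64encode(gzip.compress(raw))
fields = parse_fields(gzip.decompress(base64.b64decode(blob)))
print(fields[1].decode(), fields[2])  # strokes.zapto.org 56001
```

The same walker works on configs extracted from memory dumps, since field numbers are stable even when the .proto schema is unknown.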
Persistence
FROSTRIFT can achieve persistence by running the command:
The sample copies itself to %APPDATA% and adds a new registry value under HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run with the new file path as data to ensure persistence at each system startup.
Host Reconnaissance
The following information is initially collected and submitted by the malware to the C2:
| Collected Information | Details |
| --- | --- |
| Host information | Installed anti-virus; web camera; hostname; username and role; OS name; local time |
| Victim ID | Hex digest of the MD5 hash of the following, combined: sample process ID, disk drive serial number, physical memory serial number, victim username |
| Malware version | 4.1.8 |
| Software applications | com.liberty.jaxx, Foxmail, Telegram, browsers (see Table 10) |
| Standalone crypto wallets | Atomic, Bitcoin-Qt, Dash-Qt, Electrum, Ethereum, Exodus, Litecoin-Qt, Zcash, Ledger Live |
| Browser extensions | Password managers, Authenticators, and Digital wallets (see Table 11) |
| Others | 5th entry from the config (“Default” in this sample); malware full file path |
Table 9: Collected information
FROSTRIFT checks for the existence of the following browsers:
FROSTRIFT also checks for the existence of 48 browser extensions related to Password managers, Authenticators, and Digital wallets. The full list is provided in Table 11.
| Extension ID | Extension |
| --- | --- |
| ibnejdfjmmkpcnlpebklmnkoeoihofec | TronLink |
| nkbihfbeogaeaoehlefnkodbefgpgknn | MetaMask |
| fhbohimaelbohpjbbldcngcnapndodjp | Binance Chain Wallet |
| ffnbelfdoeiohenkjibnmadjiehjhajb | Yoroi |
| cjelfplplebdjjenllpjcblmjkfcffne | Jaxx Liberty |
| fihkakfobkmkjojpchpfgcmhfjnmnfpi | BitApp Wallet |
| kncchdigobghenbbaddojjnnaogfppfj | iWallet |
| aiifbnbfobpmeekipheeijimdpnlpgpp | Terra Station |
| ijmpgkjfkbfhoebgogflfebnmejmfbml | BitClip |
| blnieiiffboillknjnepogjhkgnoapac | EQUAL Wallet |
| amkmjjmmflddogmhpjloimipbofnfjih | Wombat |
| jbdaocneiiinmjbjlgalhcelgbejmnid | Nifty Wallet |
| afbcbjpbpfadlkmhmclhkeeodmamcflc | Math Wallet |
| hpglfhgfnhbgpjdenjgmdgoeiappafln | Guarda |
| aeachknmefphepccionboohckonoeemg | Coin98 Wallet |
| imloifkgjagghnncjkhggdhalmcnfklk | Trezor Password Manager |
| oeljdldpnmdbchonielidgobddffflal | EOS Authenticator |
| gaedmjdfmmahhbjefcbgaolhhanlaolb | Authy |
| ilgcnhelpchnceeipipijaljkblbcobl | GAuth Authenticator |
| bhghoamapcdpbohphigoooaddinpkbai | Authenticator |
| mnfifefkajgofkcjkemidiaecocnkjeh | TezBox |
| dkdedlpgdmmkkfjabffeganieamfklkm | Cyano Wallet |
| aholpfdialjgjfhomihkjbmgjidlcdno | Exodus Web3 |
| jiidiaalihmmhddjgbnbgdfflelocpak | BitKeep |
| hnfanknocfeofbddgcijnmhnfnkdnaad | Coinbase Wallet |
| egjidjbpglichdcondbcbdnbeeppgdph | Trust Wallet |
| hmeobnfnfcmdkdcmlblgagmfpfboieaf | XDEFI Wallet |
| bfnaelmomeimhlpmgjnjophhpkkoljpa | Phantom |
| fcckkdbjnoikooededlapcalpionmalo | MOBOX WALLET |
| bocpokimicclpaiekenaeelehdjllofo | XDCPay |
| flpiciilemghbmfalicajoolhkkenfel | ICONex |
| hfljlochmlccoobkbcgpmkpjagogcgpk | Solana Wallet |
| cmndjbecilbocjfkibfbifhngkdmjgog | Swash |
| cjmkndjhnagcfbpiemnkdpomccnjblmj | Finnie |
| knogkgcdfhhbddcghachkejeap | Keplr |
| kpfopkelmapcoipemfendmdcghnegimn | Liquality Wallet |
| hgmoaheomcjnaheggkfafnjilfcefbmo | Rabet |
| fnjhmkhhmkbjkkabndcnnogagogbneec | Ronin Wallet |
| klnaejjgbibmhlephnhpmaofohgkpgkd | ZilPay |
| ejbalbakoplchlghecdalmeeeajnimhm | MetaMask |
| ghocjofkdpicneaokfekohclmkfmepbp | Exodus Web3 |
| heaomjafhiehddpnmncmhhpjaloainkn | Trust Wallet |
| hkkpjehhcnhgefhbdcgfkeegglpjchdc | Braavos Smart Wallet |
| akoiaibnepcedcplijmiamnaigbepmcb | Yoroi |
| djclckkglechooblngghdinmeemkbgci | MetaMask |
| acdamagkdfmpkclpoglgnbddngblgibo | Guarda Wallet |
| okejhknhopdbemmfefjglkdfdhpfmflg | BitKeep |
| mijjdbgpgbflkaooedaemnlciddmamai | Waves Keeper |
Table 11: List of browser extensions
C2 Communication
The malware expects the C2 to respond by sending GZIP-compressed Protobuf messages with the following fields:
registry_val: A registry value under HKCU\Software\<victim_id> to store the loader_bytes.
loader_bytes: Assembly module that loads the loaded_bytes (stored in the registry in reverse order).
loaded_bytes: GZIP-compressed assembly module to be loaded in memory.
The sample receives loader_bytes only in the first message, as it stores it under the registry value HKCU\Software\<victim_id>\registry_val. For subsequent messages, it receives only registry_val, which it uses to fetch loader_bytes from the registry.
The sample sends empty GZIP-compressed Protobuf messages as a keep-alive mechanism until the C2 sends another assembly module to be loaded.
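The persistence scheme described above can be sketched in Python. This is an illustrative reconstruction only: registry access is mimicked with plain byte strings, and the function names are ours, not the malware's; only the reversed-storage and GZIP behaviors come from the analysis above.

```python
import gzip

def store_loader(loader_bytes: bytes) -> bytes:
    """Mimic writing loader_bytes to the registry value in reverse order."""
    return loader_bytes[::-1]

def fetch_loader(registry_blob: bytes) -> bytes:
    """Mimic reading the registry value back and restoring the original byte order."""
    return registry_blob[::-1]

def inflate_module(loaded_bytes: bytes) -> bytes:
    """Mimic decompressing a GZIP-compressed assembly module for in-memory loading."""
    return gzip.decompress(loaded_bytes)

# Round trip: what was stored reversed comes back intact.
loader = b"\x4d\x5a...loader stub..."
assert fetch_loader(store_loader(loader)) == loader
```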
The malware has the ability to download and execute extra payloads from the following hardcoded URLs (this feature is not enabled in this sample):
The files are WebDrivers for browsers that can be used for testing, automation, and interacting with the browser. They can also be used by attackers for malicious purposes, such as deploying additional payloads.
Conclusion
As AI has gained tremendous momentum recently, our research highlights some of the ways in which threat actors have taken advantage of it. Although our investigation was limited in scope, we discovered that well-crafted fake “AI websites” pose a significant threat to both organizations and individual users. These AI tools no longer target just graphic designers; anyone can be lured in by a seemingly harmless ad. The temptation to try the latest AI tool can lead to anyone becoming a victim. We advise users to exercise caution when engaging with AI tools and to verify the legitimacy of the website’s domain.
Acknowledgements
Special thanks to Stephen Eckels, Muhammad Umair, and Mustafa Nasser for their assistance in analyzing the malware samples; to Richmond Liclican for his inputs and attribution; and to Ervin Ocampo, Swapnil Patil, Muhammad Umer Khan, and Muhammad Hasib Latif for providing the detection opportunities.
Detection Opportunities
The following indicators of compromise (IOCs) and YARA rules are also available as a collection and rule pack in Google Threat Intelligence (GTI).
rule G_Backdoor_FROSTRIFT_1 {
    meta:
        author = "Mandiant"
    strings:
        $guid = "$23e83ead-ecb2-418f-9450-813fb7da66b8"
        $r1 = "IdentifiableDecryptor.DecryptorStack"
        $r2 = "$ProtoBuf.Explorers.ExplorerDecryptor"
        $s1 = "\\User Data\\" wide
        $s2 = "SELECT * FROM AntiVirusProduct" wide
        $s3 = "Telegram.exe" wide
        $s4 = "SELECT * FROM Win32_PnPEntity WHERE (PNPClass = 'Image' OR PNPClass = 'Camera')" wide
        $s5 = "Litecoin-Qt" wide
        $s6 = "Bitcoin-Qt" wide
    condition:
        uint16(0) == 0x5a4d and (all of ($s*) or $guid or all of ($r*))
}
YARA-L Rules
Mandiant has made the relevant rules available in the Google SecOps Mandiant Intel Emerging Threats curated detections rule set. The activity discussed in the blog post is detected under the rule names:
At Google Cloud, we’re committed to providing the most open and flexible AI ecosystem for you to build solutions best suited to your needs. Today, we’re excited to announce our expanded AI offerings with Mistral AI on Google Cloud:
Le Chat Enterprise on Google Cloud Marketplace: An AI assistant that offers enterprise search, agent builders, custom data and tool connectors, custom models, document libraries, and more in a unified platform.
Available today on Google Cloud Marketplace, Mistral AI’s Le Chat Enterprise is a generative AI work assistant designed to connect tools and data in a unified platform for enhanced productivity.
Use cases include:
Building agents: With Le Chat Enterprise, you can customize and deploy a variety of agents that understand and synchronize with your unique context, including no-code agents.
Accelerating research and analysis: With Le Chat Enterprise, you can quickly summarize lengthy reports, extract key data from documents, and perform rapid web searches to gather information efficiently.
Generating actionable insights: With Le Chat Enterprise, industries — like finance — can convert complex data into actionable insights, generate text-to-SQL queries for financial analysis, and automate financial report generation.
Accelerating software development: With Le Chat Enterprise, you can debug and optimize existing code, generate and review code, or create technical documentation.
Enhancing content creation: With Le Chat Enterprise, you can help marketers generate and refine marketing copy across channels, analyze campaign performance data, and collaborate on visual content creation through Canvas.
By deploying Le Chat Enterprise through Google Cloud Marketplace, organizations can leverage the scalability and security of Google Cloud’s infrastructure, while also benefiting from a simplified procurement process and integrations with existing Google Cloud services such as BigQuery and Cloud SQL.
Mistral OCR 25.05 excels in document understanding and can comprehend elements of content-rich papers—like media, text, charts, tables, graphs, and equations—with powerful accuracy and cognition. More example use cases include:
Digitizing scientific research: Research institutions can use Mistral OCR 25.05 to accelerate scientific workflows by converting scientific papers and journals into AI-ready formats, making them accessible to downstream intelligence engines.
Preserving historical and cultural heritage: Digitizing historical documents and artifacts to assist with preservation and making them more accessible to a broader audience.
Streamlining customer service: Customer service departments can reduce response times and improve customer satisfaction by using Mistral OCR 25.05 to transform documentation and manuals into indexed knowledge.
Making literature across design, education, legal, etc. AI ready: Mistral OCR 25.05 can discover insights and accelerate productivity across a large volume of documents by helping companies convert technical literature, engineering drawings, lecture notes, presentations, regulatory filings and more into indexed, answer-ready formats.
When building with Mistral OCR 25.05 as a Model-as-a-Service (MaaS) on Vertex AI, you get a comprehensive AI platform to scale with fully managed infrastructure and build confidently with enterprise-grade security and compliance. Mistral OCR 25.05 joins a curated selection of over 200 foundation models in Vertex AI Model Garden, empowering you to choose the ideal solution for your specific needs.
To start building with Mistral OCR 25.05 on Vertex AI, visit the Mistral OCR 25.05 model card in Vertex AI Model Garden, select “Enable”, and follow the subsequent instructions.
Today, we’re expanding the choice of third-party models available in Vertex AI Model Garden with the addition of Anthropic’s newest generation of the Claude model family: Claude Opus 4 and Claude Sonnet 4. Both Claude Opus 4 and Claude Sonnet 4 are hybrid reasoning models, meaning they offer modes for near-instant responses and extended thinking for deeper reasoning.
Claude Opus 4 is Anthropic’s most powerful model to date. Claude Opus 4 excels at coding, with sustained performance on complex, long-running tasks and agent workflows. Use cases include advanced coding work, autonomous AI agents, agentic search and research, tasks that require complex problem solving, and long-running tasks that require precise content management.
Claude Sonnet 4 is Anthropic’s mid-size model that balances performance with cost. It surpasses its predecessor, Claude Sonnet 3.7, across coding and reasoning while responding more precisely to steering. Use cases include coding tasks such as code reviews and bug fixes, AI assistants, efficient research, and large-scale content generation and analysis.
Claude Opus 4 and Claude Sonnet 4 are generally available as a Model-as-a-Service (MaaS) offering on Vertex AI. For more information on the newest Claude models, visit Anthropic’s blog.
Build advanced agents on Vertex AI
Vertex AI is Google Cloud’s comprehensive platform for orchestrating your production AI workflows across three pillars: data, models, and agents—a combination that would otherwise require multiple fragmented solutions. A key component of the model pillar is Vertex AI Model Garden, which offers a curated selection of over 200 foundation models, including Google’s models, third-party models, and open models—empowering you to choose the ideal solution for your specific needs.
You can leverage Vertex AI’s Model-as-a-Service (MaaS) to rapidly deploy and scale Claude-powered intelligent agents and applications, benefiting from integrated agentic tooling, fully managed infrastructure, and enterprise-grade security.
By building on Vertex AI, you can:
Orchestrate sophisticated multi-agent systems: Build agents with an open approach using Google’s Agent Development Kit (ADK) or your preferred framework. Deploy your agents to production with enterprise-grade controls directly in Agent Engine.
Harness the power of Google Cloud integrations: You can connect Claude directly within BigQuery ML to facilitate functions like text generation, summarization, translation, and more.
Optimize performance with provisioned throughput: Reserve dedicated capacity and prioritized processing for critical production workloads with Claude models at a fixed fee. To get started with provisioned throughput, contact your Google Cloud sales representative.
Maximize Claude model utilization: Reduce latency and costs while increasing throughput by employing Vertex AI’s advanced features for Claude models, such as batch predictions, prompt caching, token counting, and citations. For detailed information, refer to our documentation.
Scale with fully managed infrastructure: Vertex AI’s fully managed and AI-optimized infrastructure simplifies how you deploy your AI workloads in production. Additionally, Vertex AI’s new global endpoints for Claude (public preview) enhance availability by dynamically serving traffic from the nearest available region.
Build confidently with enterprise-grade security and compliance: Benefit from Vertex AI’s built-in security and compliance measures that satisfy stringent enterprise requirements.
Customers achieving real impact with Claude on Vertex AI
To date, more than 4,000 customers have started using Anthropic’s Claude models on Vertex AI. Here’s a look at how top organizations are driving impactful results with this powerful integration:
Augment Code is running its AI coding assistant, which specializes in helping developers navigate and contribute to production-grade codebases, with Anthropic’s Claude models on Vertex AI.
“What we’re able to get out of Anthropic is truly extraordinary, but all of the work we’ve done to deliver knowledge of customer code, used in conjunction with Anthropic and the other models we host on Google Cloud, is what makes our product so powerful.” – Scott Dietzen, CEO, Augment Code
Palo Alto Networks is accelerating software development and security by deploying Claude on Vertex AI.
“With Claude running on Vertex AI, we saw a 20% to 30% increase in code development velocity. Running Claude on Google Cloud’s Vertex AI not only accelerates development projects, it enables us to hardwire security into code before it ships.” – Gunjan Patel, Director of Engineering, Office of the CPO, Palo Alto Networks
Replit leverages Claude on Vertex AI to power Replit Agent, which empowers people across the world to use natural language prompts to turn their ideas into applications, regardless of coding experience.
“Our AI agent is made more powerful through Anthropic’s Claude models running on Vertex AI. This integration allows us to easily connect with other Google Cloud services, like Cloud Run, to work together behind the scenes to help customers turn their ideas into apps.” – Amjad Masad, Founder and CEO, Replit
Get started
To get started with the new Claude models on Vertex AI, navigate to the Claude Opus 4 or the Claude Sonnet 4 model card in Vertex AI Model Garden, select “Enable”, and follow the subsequent instructions.
In today’s data-driven world, understanding large datasets often requires numerous, complex non-additive[1] aggregation operations. But as the size of the data becomes massive[2], these types of operations become computationally expensive and time-consuming using traditional methods. That’s where Apache DataSketches come in. We’re excited to announce the availability of Apache DataSketches functions within BigQuery, providing powerful tools for approximate analytics at scale.
Apache DataSketches is an open-source library of sketches, specialized streaming algorithms that efficiently summarize large datasets. Sketches are small probabilistic data structures that enable accurate estimates of distinct counts, quantiles, histograms, and other statistical measures – all with minimal memory, minimal computational overhead, and a single pass through the data. All but a few of these sketches provide mathematically proven error bounds, i.e., the maximum possible difference between a true value and its estimated value. These error bounds can be adjusted by the user as a trade-off between the size of the sketch and the size of the error bounds: the larger the configured sketch, the smaller the error bounds.
With sketches, you can quickly gain insights from massive datasets, especially when exact computations are impractical or impossible. The sketches themselves can be merged, making them additive and highly parallelizable, so you can combine sketches from multiple datasets for further analysis. This combination of small size and mergeability can translate into orders-of-magnitude improvement in speed of computational workload compared to traditional methods.
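To see why mergeability matters, here is a toy K-Minimum-Values (KMV) distinct-count sketch in Python. This is not the Apache DataSketches implementation, just a minimal illustration of how two small summaries built independently over different partitions can be merged into one and still yield an accurate estimate of the combined distinct count:

```python
import hashlib

class KMVSketch:
    """Toy K-Minimum-Values distinct-count sketch (illustrative only)."""

    def __init__(self, k=256):
        self.k = k
        self.mins = []  # the k smallest normalized hash values seen so far

    def _hash(self, item):
        # Map each item to a pseudo-uniform value in [0, 1).
        h = int.from_bytes(hashlib.sha256(str(item).encode()).digest()[:8], "big")
        return h / 2**64

    def update(self, item):
        v = self._hash(item)
        if v not in self.mins:
            self.mins.append(v)
            self.mins.sort()
            del self.mins[self.k:]  # keep only the k smallest

    def merge(self, other):
        # Merging is just a union of the retained minima, re-trimmed to k.
        merged = KMVSketch(self.k)
        merged.mins = sorted(set(self.mins) | set(other.mins))[: self.k]
        return merged

    def estimate(self):
        if len(self.mins) < self.k:
            return float(len(self.mins))  # exact while under capacity
        # The k-th smallest of n uniform values is about k/n, so invert it.
        return (self.k - 1) / self.mins[-1]
```

Each sketch here holds at most 256 values regardless of input size, and merging two of them costs almost nothing, which is the property that makes sketches additive and highly parallelizable.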
Why DataSketches in BigQuery?
BigQuery is known for its ability to process petabytes of data, and DataSketches are a natural fit for this environment. With DataSketches functions, BigQuery lets you:
Perform rapid approximate queries: Get near-instantaneous results for distinct counts, quantile analysis, adaptive histograms and other non-additive aggregate calculations on massive datasets.
Save on resources: Reduce query costs and storage requirements by working with compact sketches instead of raw data.
Move between systems: DataSketches have well-defined stored binary representations that let sketches be transported between systems and interpreted by three major languages: Java, C++, and Python, all without losing any accuracy.
Apache DataSketches comes to BigQuery through custom C++ implementations: the Apache DataSketches C++ core library is compiled to WebAssembly (WASM) libraries, which are then loaded within BigQuery JavaScript user-defined aggregate functions (JS UDAFs).
How BigQuery customers use Apache DataSketches
Yahoo started the Apache DataSketches project in 2011, open-sourced it in 2015, and still uses the Apache DataSketches library. They use approximate results in various analytic query operations such as count distinct, quantiles, and most frequent items (a.k.a. Heavy Hitters). More recently, Yahoo adapted the DataSketches library to leverage the large scale of BigQuery, using the Google-defined JavaScript User Defined Aggregate Functions (UDAF) interface to the Google Cloud and BigQuery platform.
“Yahoo has successfully used the Apache DataSketches library to analyze massive data in our internal production processing systems for more than 10 years. Data sketching has allowed us to respond to a wide range of queries summarizing data in seconds, at a fraction of the time and cost of brute-force computation. As an early innovator in developing this powerful technology, we are excited about this fast, accurate, large-scale, open-source technology becoming available to those already working in a Google Cloud BigQuery environment.” – Matthew Sajban, Director of Software Development Engineering, Yahoo
Featured sketches
So, what can you do with Apache DataSketches? Let’s take a look at the sketches integrated with BigQuery.
Cardinality sketches
HyperLogLog Sketch (HLL): The DataSketches library implements this historically famous sketch algorithm with considerable versatility. It is best suited for straightforward distinct-count (cardinality) estimation. It can be adapted to a range of sizes, from roughly 50 bytes to about 2MB, depending on the accuracy requirements. It also comes in three flavors (HLL_4, HLL_6, HLL_8) that enable additional tuning of speed and size.
Theta Sketch: This sketch specializes in set expressions and allows not only normal additive unions but also full set expressions between sketches with set-intersection and set-difference. Because of its algebraic capability, this sketch is one of the most popular sketches. It has a range of sizes from a few hundred bytes to many megabytes, depending on the accuracy requirements.
CPC Sketch: This cardinality sketch takes advantage of recent algorithmic research and enables smaller stored size, for the same accuracy, than the classic HLL sketch. It is targeted for situations where accuracy per stored size is the most critical metric.
Tuple Sketch: This extends Theta Sketch to enable the association of other values with each unique item retained by the sketch. This allows the computation of summaries of attributes like impressions or clicks as well as more complex analysis of customer engagement, etc.
Quantile sketches
KLL Sketch: This sketch is designed for quantile estimation (e.g., median, percentiles) and is ideal for understanding distributions, creating density and histogram plots, and partitioning large datasets. The KLL algorithm used in this sketch has been proven to have statistically optimal quantile approximation accuracy for a given size. The KLL Sketch can be used with any kind of data that is comparable, i.e., has a defined sorting order between items. The accuracy of KLL is insensitive to the input data distribution.
REQ Sketch: This quantile sketch is designed for situations where accuracy at the ends of the rank domain is more important than at the median. In other words, if you’re most interested in accuracy at the 99.99th percentile and not so interested in the accuracy at the 50th percentile, this is the sketch to choose. Like the KLL Sketch, this sketch has mathematically proven error bounds. The REQ sketch can be used with any kind of data that is comparable, i.e., has a defined sorting order between items. By design, the accuracy of REQ is sensitive to how close an item is to the ends of the normalized rank domain (i.e., close to rank 0.0 or rank 1.0), otherwise it is insensitive to the input distribution.
T-Digest Sketch: This is also a quantile sketch, but it’s based on a heuristic algorithm and doesn’t have mathematically proven error properties. It is also limited to strictly numeric data. The accuracy of the T-Digest Sketch can be sensitive to the input data distribution. However, it’s a very good heuristic sketch, fast, has a small footprint, and can provide excellent results in most situations.
Frequency sketches
Frequent Items Sketch: This sketch is also known as a Heavy-Hitter sketch. Given a stream of items, this sketch identifies, in a single pass, the items that occur more frequently than a noise threshold, which is user-configured by the size of the sketch. This is especially useful in real-time situations. For example, what are the most popular items from a web site that are being actively queried, over the past hour, day, or minute? Its output is effectively an ordered list of the most frequently visited items. This list changes dynamically, which means you can query the sketch, say, every hour to help you understand the query dynamics over the course of a day. In static situations, for example, it can be used to discover the largest files in your database in a single pass and with only a modest amount of memory.
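The idea behind a heavy-hitters summary can be illustrated with the classic Misra-Gries algorithm. This is a simplified stand-in for intuition, not the actual DataSketches Frequent Items implementation:

```python
def misra_gries(stream, k):
    """Toy Misra-Gries heavy-hitters summary (illustrative only).

    Keeps at most k-1 counters in a single pass. Any item occurring more
    than n/k times in a stream of length n is guaranteed to survive, and
    each retained count underestimates the true count by at most n/k.
    """
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # No room: decrement every counter, dropping those that hit zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters
```

Running this over a stream dominated by a few popular items leaves only those items (plus a small amount of noise) in the summary, using memory proportional to k rather than to the number of distinct items.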
How to get started
To leverage the power of DataSketches in BigQuery, you can find the new functions within the bqutil.datasketches dataset (for US multi-region location) or bqutil.datasketches_<bq_region> dataset (for any other regions and locations). For detailed information on available functions and their usage, refer to the DataSketches README. You can also find demo notebooks in our GitHub repo for the KLL Sketch, Theta Sketch, and FI Sketch.
Example: Obtaining estimates of Min, Max, Median, 75th, 95th percentiles and total count using the KLL Quantile Sketch
Suppose you have 1 million comparable[3] records in 100 different partitions or groups. You would like to understand how the records are distributed by their percentile or rank, without having to bring them all together in memory or even sort them.
SQL:
## Creating sample data with 1 million records split into 100 groups of nearly equal size

CREATE TEMP TABLE sample_data AS
SELECT
  CONCAT("group_key_", CAST(RAND() * 100 AS INT64)) AS group_key,
  RAND() AS x
FROM
  UNNEST(GENERATE_ARRAY(1, 1000000));

## Creating KLL merge sketches for a group key

CREATE TEMP TABLE agg_sample_data AS
SELECT
  group_key,
  count(*) AS total_count,
  bqutil.datasketches.kll_sketch_float_build_k(x, 250) AS kll_sketch
FROM sample_data
GROUP BY group_key;

## Merge group based sketches into a single sketch and then get approx quantiles

WITH agg_data AS (
  SELECT
    bqutil.datasketches.kll_sketch_float_merge_k(kll_sketch, 250) AS merged_kll_sketch,
    SUM(total_count) AS total_count
  FROM agg_sample_data
)
SELECT
  bqutil.datasketches.kll_sketch_float_get_quantile(merged_kll_sketch, 0.0, true) AS minimum,
  bqutil.datasketches.kll_sketch_float_get_quantile(merged_kll_sketch, 0.5, true) AS p50,
  bqutil.datasketches.kll_sketch_float_get_quantile(merged_kll_sketch, 0.75, true) AS p75,
  bqutil.datasketches.kll_sketch_float_get_quantile(merged_kll_sketch, 0.95, true) AS p95,
  bqutil.datasketches.kll_sketch_float_get_quantile(merged_kll_sketch, 1.0, true) AS maximum,
  total_count
FROM agg_data;
The DataSketches Tuple Sketch is a powerful tool to analyze properties that have a natural association with unique identifiers.
For example, imagine you have a large-scale web application that records user identifiers and their clicks on various elements. You would like to analyze this massive dataset efficiently to obtain approximate metrics for clicks per unique user. The Tuple Sketch computes the number of unique users and allows you to track additional properties that are naturally associated with the unique identifiers as well.
SQL:
## Creating sample data with 100M records (1 through 100M) split into 10 nearly equal sized groups of 10M values each

CREATE TEMP TABLE sample_data_100M AS
SELECT
  CONCAT("group_key_", CAST(RAND() * 10 AS INT64)) AS group_key,
  1000000 * x2 + x1 AS user_id,
  x2 AS clicks
FROM UNNEST(GENERATE_ARRAY(1, 1000000)) AS x1,
  UNNEST(GENERATE_ARRAY(0, 99)) AS x2;

## Creating Tuple sketches for a group key (the group key can be any dimension, for example date, product, location, etc.)

CREATE TEMP TABLE agg_sample_data_100M AS
SELECT
  group_key,
  count(distinct user_id) AS exact_uniq_users_ct,
  sum(clicks) AS exact_clicks_ct,
  bqutil.datasketches.tuple_sketch_int64_agg_int64(user_id, clicks) AS tuple_sketch
FROM sample_data_100M
GROUP BY group_key;

## Merge group based sketches into a single sketch, then extract metrics such as the distinct count estimate and the estimate of the sum of clicks with its upper and lower bounds

WITH agg_data AS (
  SELECT
    bqutil.datasketches.tuple_sketch_int64_agg_union(tuple_sketch) AS merged_tuple_sketch,
    SUM(exact_uniq_users_ct) AS total_uniq_users_ct
  FROM agg_sample_data_100M
)
SELECT
  total_uniq_users_ct,
  bqutil.datasketches.tuple_sketch_int64_get_estimate(merged_tuple_sketch) AS distinct_count_estimate,
  bqutil.datasketches.tuple_sketch_int64_get_sum_estimate_and_bounds(merged_tuple_sketch, 2) AS sum_estimate_and_bounds
FROM agg_data;

## The average clicks per unique user can be obtained by simple division
## Note: the number of digits of precision in the estimates above is due to the returned values being floating point.
In short, DataSketches in BigQuery unlocks a new dimension of approximate analytics, helping you gain valuable insights from massive datasets quickly and efficiently. Whether you’re tracking website traffic, analyzing user behavior, or performing any other large-scale data analysis, DataSketches are your go-to tools for fast, accurate estimations.
To start using DataSketches in BigQuery, refer to the DataSketches-BigQuery repository README for building, installing and testing the DataSketches-BigQuery library in your own environment. In each sketch folder there is a README that details the specific function specifications available for that sketch.
If you are working in a BigQuery environment, the DataSketches-BigQuery library is already available for you to use in all regional public BigQuery datasets.
1. Examples include distinct counting, quantiles, topN, K-means, density estimation, graph analysis, etc. The results from one parallel partition cannot be simply “added” to the results of another partition – thus the term non-additive (a.k.a. non-linear operations).
2. Massive ~ typically, much larger than what can be conveniently kept in random-access memory.
3. Any two items can be compared to establish their order, i.e., if A < B, then A precedes B.
Want to save some money on large AI training? For a typical PyTorch LLM training workload that spans thousands of accelerators for several weeks, a 1% improvement in ML Goodput can translate to more than a million dollars in cost savings[1]. Therefore, improving ML Goodput is an important goal for model training — both from an efficiency perspective, as well as for model iteration velocity.
However, there are several challenges to improving ML Goodput today: frequent interruptions that necessitate restarts from the latest checkpoint, slow inline checkpointing that interrupts training, and limited observability that makes it difficult to detect failures. These issues contribute to a significant increase in the time-to-market (TTM) and cost-to-train. Several industry publications have articulated these issues, e.g., this arXiv paper.
Improving ML Goodput
In order to improve ML Goodput, you need to minimize the impact of disruptive events on the progress of the training workload. To resume a job quickly, you can automatically scale down the job, or swap failed resources from spare capacity. At Google Cloud, we call this elastic training. Further, you can reduce workload interruptions during checkpointing and speed up checkpoint loads on failures from the nearest available storage location. We call these capabilities asynchronous checkpointing and multi-tier checkpointing.
The following picture illustrates how these techniques provide an end-to-end remediation workflow to improve ML Goodput for training. An example workload of nine nodes is depicted with three-way data parallelism (DP) and three-way pipeline parallelism (PP), with various remediation actions shown based on the failures and spare capacity.
You can customize the remediation policy for your specific workload. For example, you can choose between a hotswap and a scaling-down remediation strategy, or to configure checkpointing frequency, etc. A supervisor process receives failure, degradation, and straggler signals from a diagnostic service. The supervisor uses the policy to manage these events. In case of correctable errors, the supervisor might request an in-job restart, potentially restoring from a local checkpoint. For uncorrectable hardware failures, a hot swap can replace the faulty node, potentially restoring from a peer checkpoint. If no spare resources are available, the system can scale down. These mechanisms ensure training is more resilient and adaptable to resource changes. When a replacement node is available, training scales up automatically to maximize GPU utilization. During scale down and scale up, user-defined callbacks help adjust hyperparameters such as learning rate and batch size. You can set remediation policies using a Python script.
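The supervisor logic above can be sketched as a small policy object and decision function. This is a hypothetical illustration of the flow, not the actual recipe's configuration API; all names here are ours:

```python
from dataclasses import dataclass

@dataclass
class RemediationPolicy:
    """Hypothetical remediation policy for illustration."""
    strategy: str = "hotswap"        # or "scale_down"
    checkpoint_interval_s: int = 900 # how often to snapshot training state
    max_in_job_restarts: int = 3     # correctable-error restarts before escalating

def choose_action(policy: RemediationPolicy, failure: str, spare_nodes_available: bool) -> str:
    """Pick a remediation action from the supervisor's failure signal."""
    if failure == "correctable":
        return "in_job_restart"      # restore from a local checkpoint
    if spare_nodes_available and policy.strategy == "hotswap":
        return "hot_swap"            # replace the faulty node, restore from a peer checkpoint
    return "scale_down"              # continue on fewer data-parallel replicas
```

A real policy would also cover the scale-up path when replacement nodes arrive and the hyperparameter-adjustment callbacks, but the shape of the decision is the same.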
Let’s take a deeper look at the key techniques you can use when optimizing ML Goodput.
Elastic training
Elastic training enhances the resiliency of LLM training by enabling failure sensing and mitigation capabilities for workloads. This allows jobs to automatically continue with remediation strategies including GPU reset, node hot swap, and scaling down the data-parallel dimension of a workload to avoid using faulty nodes, thereby reducing job interruption time and improving ML Goodput. Furthermore, elastic training enables automatic scaling up of data-parallel replicas when replacement nodes become available, maximizing training throughput.
Watch this short video to see elastic training techniques in action:
Sub-optimal checkpointing can lead to unnecessary overhead during training and significant loss of training productivity when interruptions occur and previous checkpoints are restored. You can substantially reduce these impacts by defining a dedicated asynchronous checkpointing process and optimizing it to quickly offload the training state from GPU high-bandwidth memory to host memory. Tuning the checkpoint frequency — based on factors such as the job interruption rate and the asynchronous overhead — is vital, as the best interval may range from several hours to mere minutes, depending on the workload and cluster size. An optimal checkpoint frequency minimizes both checkpoint overhead during training operation and computational loss during unexpected interruptions.
A robust way to meet the demands of frequent checkpointing is to leverage three levels of storage: local node storage, e.g., local SSD; peer node storage in the same cluster; and Google Cloud Storage. This multi-tiered checkpointing approach automatically replicates data across these storage tiers during save and restore operations via the host network interface or NCCL (the NVIDIA Collective Communications Library), allowing the system to use the fastest accessible storage option. By combining asynchronous checkpointing with a multi-tier storage strategy, you can achieve quicker recovery times and more resilient training workflows while maintaining high productivity and minimizing the loss of computational progress.
Watch this short video to see optimized checkpointing techniques in action:
These ML Goodput improvement techniques leverage NVIDIA Resiliency Extension, which provides failure signaling and in-job restart capabilities, as well as recent improvements to PyTorch’s distributed checkpointing, which support several of the previously mentioned checkpoint-related optimizations. Further, these capabilities are integrated with Google Kubernetes Engine (GKE) and the NVIDIA NeMo training framework, pre-packaged into a container image and available with an ML Goodput optimization recipe for easy deployment.
Elastic training in action
In a recent internal case study with 1,024 A3 Mega GPU-accelerated instances (built on NVIDIA Hopper), workload ML Goodput improved from 80%+ to 90%+ using a combination of these techniques. While every workload may not benefit in the same way, this table shows the specific metric improvements and ML Goodput contribution of each of the techniques.
Example: Case study experiment used an A3 Mega cluster with 1024 GPUs running ~40hr jobs with ~5 simulated interruptions per day
Conclusion
In summary, elastic training and optimized checkpointing, along with easy deployment options, are key strategies to maximize ML Goodput for large PyTorch training workloads. As seen in the case study above, they can contribute to meaningful ML Goodput improvements and provide significant efficiency savings. These capabilities are customizable and composable through a Python script. If you’re running PyTorch GPU training workloads on Google Cloud today, we encourage you to try out our ML Goodput optimization recipe, which provides a starting point with recommended configurations for elastic training and checkpointing. We hope you have fun building and share your feedback!
Various teams and individuals within Google Cloud contributed to this effort. Special thanks to Jingxin Ye, Nicolas Grande, Gerson Kroiz, and Slava Kovalevskyi, as well as our collaborative partners Jarek Kazmierczak, David Soto, Dmitry Kakurin, Matthew Cary, Nilay Goyal, and Parmita Mehta for their immense contributions to developing all of the components that made this project a success.
1. Assuming A3 Ultra pricing for 20,000 GPUs with jobs spanning 8 weeks or longer
Confidential Computing has redefined how organizations can securely process their sensitive workloads in the cloud. The growth in our hardware ecosystem is fueling a new wave of adoption, enabling customers to use Confidential Computing to support cutting-edge uses such as building privacy-preserving AI and securing multi-party data analytics.
We are thrilled to share our latest Confidential Computing innovations, highlighting the creative ways our customers are using Confidential Computing to protect their most sensitive workloads, including AI workloads.
Building on our foundational work last year, we’ve seen remarkable progress through our deep collaborations with industry leaders including Intel, AMD, and NVIDIA. Together, we’ve significantly expanded the reach of Confidential Computing, embedding critical security features across the latest generations of CPUs, and also extending them to high-performance GPUs.
Confidential VMs and GKE Nodes with NVIDIA H100 GPUs for AI workloads, in preview
An ongoing top goal for Confidential Computing is expanding our capabilities for secure computation.
Last year, we unveiled Confidential Virtual Machines on the accelerator-optimized A3 machine series with NVIDIA H100 GPUs, which extend hardware-based data protection from the CPU to GPUs. Confidential VMs can help ensure the confidentiality and integrity of artificial intelligence, machine learning, and scientific simulation workloads that use protected GPUs while the data is in use.
“AI and Agentic workflows are accelerating and transforming every aspect of business. As these technologies are integrated into the fabric of everyday operations — data security and protection of intellectual property are key considerations for businesses, researchers and governments,” said Daniel Rohrer, vice president, software product security, NVIDIA. “Putting data and model owners in direct control of their data’s journey — NVIDIA’s Confidential Computing brings advanced hardware-backed security for accelerated computing providing more confidence when creating and adopting innovative AI solutions and services.”
aside_block
<ListValue: [StructValue([(‘title’, ‘$300 in free credit to try Google Cloud security products’), (‘body’, <wagtail.rich_text.RichText object at 0x3e612fe7d370>), (‘btn_text’, ‘Start building for free’), (‘href’, ‘http://console.cloud.google.com/freetrial?redirectPath=/welcome’), (‘image’, None)])]>
Confidential Vertex AI Workbench, in preview
We are expanding Confidential Computing support on Vertex AI. Vertex AI Workbench customers can now use Confidential Computing, now in preview, to help meet their data privacy needs. This integration offers greater privacy and confidentiality with just a few clicks.
How to enable Confidential VMs in Vertex AI Workbench instances.
Confidential Space with Intel TDX (generally available) and NVIDIA H100 GPUs, in preview
We are excited to announce that Confidential Space is now generally available on the general-purpose C3 machine series with Intel® Trust Domain Extensions (Intel® TDX) technology, and coming soon in preview on the accelerator-optimized A3 machine series with NVIDIA H100 GPUs.
Built on our Confidential Computing portfolio, Confidential Space provides a secure enclave, also known as a Trusted Execution Environment (TEE), that Google Cloud customers can use for privacy-focused applications such as joint data analysis, joint machine learning (ML) model training or secure sharing of proprietary ML models.
Importantly, Confidential Space is designed to protect data from all parties involved — including removing the operator of the environment from the trust boundary along with hardened protection against cloud service provider access. These properties can help organizations harden their products from insider threats, and ultimately provide stronger data privacy guarantees to their own customers.
Confidential Space enables secure collaboration.
Confidential GKE Nodes on C3 machines with Intel TDX and built-in acceleration, generally available
Confidential GKE Nodes are now generally available with Intel TDX. These nodes are powered by the general purpose C3 machine series, which run on the 4th generation Intel Xeon Scalable processors (code-named Sapphire Rapids) and have the Intel Advanced Matrix Extensions (Intel AMX) built in and on by default.
Confidential GKE Nodes with Intel TDX offer an additional layer of isolation from the host and hypervisor, protecting nodes against a broad range of software and hardware attacks.
“Intel Xeon processors deliver outstanding performance and value for many machine learning and AI inference workloads, especially with Intel AMX acceleration,” said Anand Pashupathy, vice president and general manager, Security Software and Services, Intel. “Google Cloud’s C3 machine series will not only impress with their performance on AI and other workloads, but also protect the confidentiality of the user’s data.”
How to enable Confidential GKE Nodes with Intel TDX.
Confidential GKE Nodes on N2D machines with AMD SEV-SNP, generally available
Confidential GKE Nodes are also now generally available with AMD Secure Encrypted Virtualization-Secure Nested Paging (AMD SEV-SNP) technology. These nodes use the general-purpose N2D machine series and run on 3rd generation AMD EPYC™ (code-named Milan) processors. Confidential GKE Nodes with AMD SEV-SNP provide security for cloud workloads through assurance that workloads are running encrypted on secured hardware.
Confidential VMs on C4D machines with AMD SEV, in preview
The C4D machine series are powered by the 5th generation AMD EPYC™ (code-named Turin) processors and designed to deliver optimal, reliable, and consistent performance with Google’s Titanium hardware.
Today, we offer global availability of Confidential Computing on AMD machine families such as N2D, C2D, and C3D. We’re happy to share that Confidential VMs on the general-purpose C4D machine series with AMD Secure Encrypted Virtualization (AMD SEV) technology are in preview today, and will be generally available soon.
Unlocking new use cases with Confidential Computing
We’re seeing impact across all major verticals where organizations are using Confidential Computing to unlock business innovations.
AiGenomix
AiGenomix is leveraging Google Cloud Confidential Computing to deliver highly differentiated infectious disease surveillance, early detection of cancer, and therapeutics intelligence with a global ecosystem of collaborators in the public and private sector.
“Our customers are dealing with extremely sensitive data about pathogens. Adding relevant data sets like patient information and personalized therapeutics further adds to the complexity of compliance. Preserving privacy and security of pathogens, patients’ genomic and related health data assets is a requirement for our customers and partners,” said Dr. Jonathan Monk, head of bioinformatics, AiGenomix.
“Our Trusted AI for Healthcare solutions leveraging Google Cloud Confidential Computing overcome the barriers to accelerated global adoption by making sure that our assets and processes are secure and compliant. With this, we are able to contribute towards the mitigation of the ever-growing risk emerging from infectious diseases and drug resistance resulting in loss of lives and livelihood,” said Dr. Harsh Sharma, chief AI strategist, AiGenomix.
Google Ads
Google Ads has introduced confidential matching to securely connect customers’ first-party data for their marketing. This marks the first use of Confidential Computing in Google Ads products, and there are plans to bring this privacy-enhancing technology to more products over time.
“Confidential matching is now the default for any data connections made for Customer Match including Google Ads Data Manager — with no action required from you. For advertisers with very strict data policies, it also means the ability to encrypt the data yourself before it ever leaves your servers,” said Kamal Janardhan, senior director, Product Management, Measurement, Google Ads.
Google Ads plans to further integrate Confidential Computing across more services, such as the new Google tag gateway for advertisers. This update will give marketers conversion tag data encrypted in the browser, by default, and at no extra cost. The Google tag gateway for advertisers can help drive performance improvements and strengthen the resilience of advertisers’ measurement signals, while also boosting security and increasing transparency on how data is collected and processed.
Swift
Swift is using Confidential Computing to ensure that sensitive data from some of the largest banks remains completely private while powering a money laundering detection model.
“We are exploring how to leverage the latest technologies to build a global anomaly detection model that is trained on the historic fraud data of an entire community of institutions in a secure and scalable way. With a community of banks we are exploring an architecture which leverages Google Cloud Confidential Computing and verifiable attestation, so participants can ensure that their data is secure even during computation as they locally train the global model and rely on verifiable attestation to ensure the security posture of every environment in the architecture,” said Rachel Levi, head of artificial intelligence, Swift.
Expedite your Confidential Compute journey with Gemini Cloud Assist, in preview
To make it easier for you to use Confidential Computing, we’re providing AI-powered assistance directly in existing configuration workflows by integrating Gemini Cloud Assist across Confidential Computing, now in preview.
Through natural language chat, Google Cloud administrators can get tailored explanations, recommendations, and step-by-step guidance for many security and compliance tasks. One such example is Confidential Space, where Gemini Cloud Assist can guide you through the journey of setting up the environment as a Workload Author, Workloads Operator, or a Data Collaborator. This significantly reduces the complexity and the time to set up such an environment for organizations.
Gemini Cloud Assist for Confidential Space
Next steps
By continuously innovating and collaborating, we’re committed to making Confidential Computing the cornerstone of a secure and thriving cloud ecosystem.
Our latest video covers several creative ways organizations are using Confidential Computing to move their AI journeys forward. You can watch it here.
Welcome to the first Cloud CISO Perspectives for May 2025. Today, Iain Mulholland, senior director, Security Engineering, pulls back the curtain on how Google Cloud approaches security engineering and how we take secure by design from mindset to production.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
aside_block
<ListValue: [StructValue([(‘title’, ‘Get vital board insights with Google Cloud’), (‘body’, <wagtail.rich_text.RichText object at 0x3e7f61c98580>), (‘btn_text’, ‘Visit the hub’), (‘href’, ‘https://cloud.google.com/solutions/security/board-of-directors?utm_source=cloud_sfdc&utm_medium=email&utm_campaign=FY24-Q2-global-PROD941-physicalevent-er-CEG_Boardroom_Summit&utm_content=-&utm_term=-‘), (‘image’, <GAEImage: GCAT-replacement-logo-A>)])]>
How Google Cloud’s security team helps engineers build securely
By Iain Mulholland, senior director, Security Engineering
Velocity is a chief concern in every executive office, but it falls to CISOs to balance the tension between keeping the business secure and ensuring the business keeps up. At Google, we’re constantly thinking about how to enable both resilience and innovation.
For decades, we’ve been taking a holistic approach to how security decision-making can work better. We believe that the success we’ve seen with our security teams is achievable at many organizations, and can help lead to better security and business outcomes.
My team is responsible for ensuring Google Cloud is the most secure cloud, and we approach security as an engineering function. It’s a different lens from the traditional IT or compliance views, two parts of the business where security priorities are often set, and it results in improved decision-making and security outcomes.
We’re still seeing too many organizations rely on defenses that were designed for the desktop era — despite successful efforts to convince business leaders to invest in more modern security tools, as Phil Venables and Andy Wen noted last year.
“To be truly resilient in today’s security landscape, organizations must consider an IT overhaul and rethink their strategy toward solutions with modern, secure-by-design architectures that nullify classes of vulnerabilities and attack vectors,” they said.
To turn this core security philosophy into reality, we’ve used it to guide how we build our teams. Cloud security engineers are embedded with product teams to help the entire organization “shift left” and take an engineering-centered approach to security. Our Office of the CISO security engineering team partners with product team software engineers at all stages of the software development lifecycle (SDLC) to find paths to ship secure software — all while maintaining product-release velocity and adhering to secure-by-design principles.
You can see this in action with our threat modelling practice. Security engineers and software development teams work closely to analyze potential threats to the product and to identify actions and product capabilities that can mitigate risks. Because this happens in the design phase, the team can eliminate these threats early in the SDLC, ensuring our products are secure by design.
With engineering as our security foundation, we can build capabilities at breadth, at depth, and in clear relationship to each other, so that the total exceeds the sum of its parts.
Protecting against threats is a great example of the impact of this approach. We characterize the vast cloud threat landscape in three specific areas: outbound network attacks (such as DDoS, outbound intrusion attempts, and vulnerability scans); resource misuse (such as cryptocurrency mining, illegal video streaming, and bots); and content-based threats (such as phishing and malware).
Across that landscape, threat actors often use similar techniques and exploit similar vulnerabilities. To combat these tactics, the team generates intelligence to prevent, detect, and mitigate risk in Google Cloud offerings before they become problems to our customers.
We “shift left” on threats, too: Identifying this systemic risk feeds into the lifecycle of software and product development. Once we identify a threat vector, we work closely with our security and product engineers to harden product defenses to help eliminate threats before they can take root.
We use AI, advanced data science, and analytics solutions to protect Google Cloud and our customers from future threats by focusing on three key capabilities: predicting future user behavior, proactively identifying risky security patterns, and improving the efficiency and measurability of threats and security operations.
It’s vital to our mission that we find attack paths before attackers do, reducing unknown security risks by finding vulnerabilities in our products and services before they are made available to customers. In addition to simulating risk, we push our researchers to consider the whole cloud as an attack surface. They chain vulnerabilities in novel ways to improve our overall security architecture.
Responding to threats is a critical third element of our engineering environment’s interlocking capabilities. Our security response operations assess and implement remediation strategies that come from external parties, and we frequently participate in comprehensive, industry-wide responses. Regular collaboration with Google Cloud’s Vulnerability Rewards Program has been a major driver of our success in this area.
Across all of these areas, there is incredible complexity, but the philosophy that guides the work is simple: By baking security into engineering processes, you can secure systems better and earlier than bolting security on at the end. Investing in a deep engineering bench coupled with embedding security personnel, processes, and procedures as early as possible in the development lifecycle can strengthen decision-making confidence and business resilience across the organization.
You can learn more about how you can incorporate security best practices into your organization’s engineering environment from our Office of the CISO.
aside_block
<ListValue: [StructValue([(‘title’, ‘Join the Google Cloud CISO Community’), (‘body’, <wagtail.rich_text.RichText object at 0x3e7f61c98940>), (‘btn_text’, ‘Learn more’), (‘href’, ‘https://rsvp.withgoogle.com/events/ciso-community-interest?utm_source=cgc-blog&utm_medium=blog&utm_campaign=2024-cloud-ciso-newsletter-events-ref&utm_content=-&utm_term=-‘), (‘image’, <GAEImage: GCAT-replacement-logo-A>)])]>
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
How boards can boost resiliency with the updated U.K. cyber code: Here’s how Google Cloud can help your organization and board of directors adapt to the newly updated U.K. cyber code. Read more.
What’s new in IAM, Access Risk, and Cloud Governance: A core part of our mission is to help you meet your policy, compliance, and business objectives. Here’s what’s new for IAM, Access Risk, and Cloud Governance. Read more.
3 new ways to use AI as your security sidekick: Generative AI is already providing clear and impactful security results. Here are three decisive examples that organizations can adopt right now. Read more.
Expanding our Risk Protection Program with new insurance partners and AI coverage: We unveiled at Next ‘25 major updates to our Risk Protection Program, an industry-first collaboration between Google and cyber insurers. Here’s what’s new. Read more.
From insight to action: M-Trends, agentic AI, and how we’re boosting defenders at RSAC 2025: From the latest M-Trends report to updates across Google Unified Security, our product portfolio, and our AI capabilities, here’s what’s new from us at RSAC. Read more.
The dawn of agentic AI in security operations: Agentic AI promises a fundamental, tectonic shift for security teams, where intelligent agents work alongside human analysts. Here’s our vision for the agentic future. Read more.
What’s new in Android security and privacy in 2025: We’re announcing new features and enhancements that build on our industry-leading protections to help keep you safe from scams, fraud, and theft on Android. Read more.
Please visit the Google Cloud blog for more security stories published this month.
COLDRIVER using new malware to steal data from Western targets and NGOs: Google Threat Intelligence Group (GTIG) has attributed new malware to the Russian government-backed threat group COLDRIVER (also known as UNC4057, Star Blizzard, and Callisto) that has been used to steal data from western governments and militaries, as well as journalists, think tanks, and NGOs. Read more.
Cybercrime hardening guidance from the frontlines: The U.S. retail sector is currently being targeted in ransomware operations that GTIG suspects are linked to UNC3944, also known as Scattered Spider. UNC3944 is a financially motivated threat actor characterized by its persistent use of social engineering and brazen communications with victims. Here are our latest proactive hardening recommendations to combat their threat activities. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Podcasts from Google Cloud
How cyber-savvy is your board: We’ve long extolled the importance of bringing boards of directors up to speed on cybersecurity challenges both foundational and cutting-edge, which is why we’ve launched “Cyber Savvy Boardroom,” a new monthly podcast from our Office of the CISO’s David Homovich, Alicja Cade, and Nick Godfrey. Our first three episodes feature security and business leaders known for their intuition, expertise, and guidance, including Karenann Terrell, Christian Karam, and Don Callahan. Listen here.
From AI agents to provenance in MLSecOps: What is MLSecOps, and what should CISOs know about it? Diana Kelley, CSO, Protect AI, goes deep on machine-learning model security with hosts Anton Chuvakin and Tim Peacock. Listen here.
What we learned at RSAC 2025: Anton and Tim discuss their RSA Conference experiences this year. How did the show floor hold up to the complicated reality of today’s information security landscape? Listen here.
Deconstructing this year’s M-Trends: Kirstie Failey, GTIG, and Scott Runnels, Mandiant Incident Response, chat with Anton and Tim about the challenges of turning standard incident reports into the bigger-picture review found in this year’s M-Trends. Listen here.
Defender’s Advantage: How UNC5221 targeted Ivanti Connect Secure VPNs: Mandiant’s Matt Lin and Ivanti’s Daniel Spicer join host Luke McNamara as they dive into the research and response of UNC5221’s campaigns against Ivanti. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.
The telecommunications industry is undergoing a profound transformation, with AI and generative AI emerging as key catalysts. Communication service providers (CSPs) are increasingly recognizing that these technologies are not merely incremental improvements but fundamental drivers for achieving strategic business and operational objectives. This includes enabling digital transformation, fostering service innovation, optimizing monetization strategies, and enhancing customer retention.
To provide a comprehensive and data-driven analysis of this evolving landscape, Google Cloud partnered with Analysys Mason to conduct an in-depth study, “Gen AI in the network: CSP progress in adopting gen AI for network operations.” This research examines CSPs’ progress, priorities, challenges, and best practices in leveraging gen AI to reshape their networks, offering quantifiable insights into this critical transformation.
aside_block
<ListValue: [StructValue([(‘title’, ‘Try Google Cloud for free’), (‘body’, <wagtail.rich_text.RichText object at 0x3e7f6108bc40>), (‘btn_text’, ‘Get started for free’), (‘href’, ‘https://console.cloud.google.com/freetrial?redirectPath=/welcome’), (‘image’, None)])]>
Key findings: A data-driven roadmap
The Analysys Mason study offers valuable insights into the current state of gen AI adoption in telecom, providing a data-driven roadmap for CSPs seeking to navigate this transformative journey:
1. Widespread gen AI adoption and future intentions
Demonstrating the strong momentum behind gen AI, 82% of CSPs surveyed are currently trialing or using it in at least one network operations area, and this adoption is set to expand further, with an additional 9% planning to implement it within the next 2 years.
2. Strategic importance of gen AI
Gen AI empowers CSPs to achieve strategic goals within the network: 57% of those surveyed see it as a key enabler of autonomous, cloud-based network transformation initiatives, and 52% see it as key to the transition to new business models (such as NetCo/ServCo structures and more digitally driven organizations), all with the aim of enhancing customer experience and driving broader transformation.
3. Key drivers for gen AI investment
CSPs are strategically prioritizing gen AI investments to achieve a range of network objectives, including optimizing network performance and reliability, enhancing application quality of experience (QoE), and improving network resource utilization. They recognize gen AI’s potential to move beyond a productivity tool and become a cornerstone of future network operations and automation.
4. Challenges in achieving model accuracy
While gen AI offers significant potential, the study found that 80% of CSPs face challenges in achieving the expected accuracy from gen AI models, a hurdle that impacts use case scaling and ROI. These accuracy issues are linked to data-related problems, which many CSPs across different maturity levels are still working to resolve, and the complexity of customizing models for specific network operations.
5. Addressing the skills gap
Employee skillsets represent a major challenge, with over 50% of CSPs citing them as a key concern. This highlights the urgent imperative for CSPs to invest in upskilling and reskilling initiatives to cultivate in-house expertise in AI, gen AI, and related data science fields.
6. Gen AI implementation strategies
Many CSPs begin their gen AI implementation by using vendor-provided applications with embedded gen AI capabilities, the most common approach. The study emphasizes, however, that to fully address their diverse network needs, CSPs also seek to customize models using techniques such as fine-tuning and prompt engineering. This customization is heavily reliant on a strong data strategy to overcome challenges such as data silos and data quality issues, which significantly impact the accuracy and effectiveness of the resulting gen AI solutions.
7. Deployment preferences
While 51% of CSPs indicated hybrid cloud environments as the predominant deployment choice for gen AI platforms in network operations, reflecting the need for flexibility and control, a significant 39% of CSPs show a strong preference for private cloud-only deployments specifically for their data platforms, driven by the critical importance of data security and control. Public cloud deployments are preferred for AI model deployments.
Recommendations for CSPs
In summary, to secure a competitive edge, CSPs will need to: prioritize gen AI use cases with clear ROI by adopting early-win use cases while developing a long-term strategy; transform their organizational structure and invest in upskilling initiatives; develop and implement a robust data strategy to support all AI initiatives; and cultivate strong partnerships with expert vendors to accelerate their gen AI journey.
Google Cloud: Your partner for network transformation
Google Cloud empowers CSPs’ data-driven transformation by providing expertise in operating planetary-scale networks, a unified data platform, AI model optimization, professional services for gen AI, hybrid cloud solutions, and a rich partner ecosystem. This is further strengthened by Google Cloud’s proven success in driving network transformation for major telcos, leveraging infrastructure, platforms, and tools that deliver the required near real-time processing and scale.