At Google Cloud, we believe that being at the forefront of driving secure innovation and meeting the evolving needs of customers includes working with partners. The reality is that the security landscape should be interoperable, and your security tools should be able to integrate with each other.
Google Unified Security, our AI-powered, converged security solution, has been designed to support greater customer choice. To further this vision, today we’re announcing Google Unified Security Recommended, a new program that expands strategic partnerships with market-leading security solutions trusted by our customers.
We welcome CrowdStrike, Fortinet, and Wiz as inaugural Google Unified Security Recommended partners. These integrations are designed to meet our customers where they are today and ensure their end-to-end deployments are built to scale with Google in the future.
Google Unified Security and our Recommended program partner solutions.
Building confidence through validated integrations
As part of the Google Unified Security Recommended program, partners commit to comprehensive technical integration across Google’s security product portfolio; a collaborative, customer-first support model that reflects our shared intent to protect our customers; and joint investment in AI innovation. This program offers our customers:
Enhanced confidence: Select partner products that have undergone evaluation and validation to ensure optimal integration with Google Unified Security.
Accelerated discovery: Streamline your evaluation process with a carefully curated selection of market-leading solutions addressing specific enterprise challenges.
Prioritized outcomes: Minimize integration overhead, allowing your team to allocate resources toward building security solutions that deliver business outcomes.
We’re working to ensure that customers can use solutions that are powerful today — and designed for future advancements. Learn more about the product-level requirements that define the Google Unified Security Recommended designation here.
Our inaugural partners: Unifying your defenses
Our collaborations with CrowdStrike, Fortinet and Wiz exemplify our “better together” philosophy by addressing tangible security challenges.
CrowdStrike Falcon (endpoint protection): Integrations between the AI-native CrowdStrike Falcon® platform, Google Security Operations, Google Threat Intelligence, and Mandiant Threat Defense can enable customers to detect, investigate, and respond to threats faster across hybrid and multicloud environments.
Customers can use Falcon Endpoint risk signals to define Context-Aware access policies enforced by Google Chrome Enterprise. The collaboration also supports integrations that secure the AI lifecycle — and extends through the model context protocol (MCP) to advance AI for security operations. Together, CrowdStrike and Google Cloud deliver unified protection across endpoint, identity, cloud, and data.
“CrowdStrike and Google Cloud share a vision for an open, AI-powered future of security. Together, we’re uniting our leading AI-native platforms – Google Security Operations and the CrowdStrike Falcon® platform – to help customers harness the power of generative AI and stay ahead of modern threats,” said Daniel Bernard, chief business officer, CrowdStrike.
Fortinet cloud-delivered SASE and Next-Generation Firewall (network protection): Integrating Fortinet’s Security Fabric with Google Security Operations combines AI-driven FortiGuard Threat Intelligence with rich network and web telemetry to deliver unified visibility and control across users, applications, and network edges.
Customers can integrate FortiSASE and FortiGate solutions into Google Security Operations to correlate activity across their environments, apply advanced detections, and automate coordinated response actions that contain threats in near real-time. This collaboration can help reduce complexity, streamline operations, and strengthen protection across hybrid infrastructures.
“Customers are demanding simplified security architectures that reduce complexity and strengthen protection,” said Nirav Shah, senior vice president, Product and Solutions, Fortinet. “As an inaugural partner in the Google Cloud Unified Security Recommended program, we are combining the power of FortiSASE and the Fortinet Security Fabric with Google Cloud’s security capabilities to converge networking and security across environments. This approach gives SecOps and NetOps shared visibility and coordinated controls, helping teams eliminate tool sprawl, streamline operations, and accelerate secure digital transformation.”
Wiz (multicloud CNAPP): Customers can integrate Wiz’s cloud security findings with Google Security Operations to help teams identify, prioritize, and address their most critical cloud risks in a unified platform.
In addition, Wiz and Security Command Center integrate to provide complete visibility and security for Google Cloud environments, including threat detection, AI security, and in-console security for application owners. Wiz is actively developing a new Google Threat Intelligence (GTI) integration that allows existing GTI customers to access threat intelligence seamlessly in the Wiz console, enabling threat intelligence-driven detection and response processes.
“Achieving secure innovation in the cloud requires unified visibility and radical risk prioritization. Our inclusion in the Google Unified Security Recommended program recognizes the power of Wiz to deliver code-to-cloud security for Google Cloud customers. By integrating our platform with Google Security Operations and Security Command Center, we enable customers to see their multicloud attack surface, prioritize the most critical risks, and automatically accelerate remediation. Together, we are simplifying the most complex cloud security challenges and making it easier for you to innovate securely,” said Anthony Belfiore, chief strategy officer, Wiz.
Powering the agentic SOC with MCP
A critical aspect of Google Unified Security Recommended is our shared dedication to strategic AI initiatives, including MCP support. Because it enables AI models to interact with and use security tools, MCP can enhance security workflows by ensuring Gemini models possess contextual awareness across multiple downstream services.
MCP can help facilitate an enhanced, cross-platform agentic experience. With MCP, our new AI agents — such as the alert triage agent in Google Security Operations that autonomously investigates alerts — can query partner tools for telemetry, enrich investigations with third-party data, and orchestrate response actions across your entire security stack.
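To make this concrete, here is a minimal, hypothetical sketch of how a security tool could be exposed to an agent over MCP using the open-source MCP Python SDK; the server name, tool name, and returned data are illustrative assumptions, not any partner’s actual integration.

# Minimal, hypothetical MCP server exposing one security "tool" an agent could call.
# Assumes the open-source MCP Python SDK (pip install mcp); names and data are illustrative.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("example-endpoint-telemetry")

@mcp.tool()
def get_host_alerts(hostname: str) -> list[dict]:
    """Return recent alerts for a host (stubbed data for illustration only)."""
    return [{"host": hostname, "severity": "high", "rule": "suspicious_process"}]

if __name__ == "__main__":
    # An MCP-capable agent (for example, one triaging alerts) can discover and
    # call get_host_alerts over the protocol.
    mcp.run()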
We are proud to confirm that all of our inaugural launch partners support MCP and have developed recommended approaches for how to activate MCP-supported agentic workflows across our products, a crucial step towards realizing our vision of an agentic SOC where AI functions as a virtual security assistant, proactively identifying threats and guiding you to faster, more effective responses.
Our open future on Google Cloud Marketplace
The introduction of the Google Unified Security Recommended program is only the beginning. We are dedicated to expanding this program to include a wider array of trusted partner solutions that make substantial investments across the Google Unified Security product suite, helping our customers build a more scalable, effective, and interoperable security architecture.
For simplified procurement and deployment, all qualified Google Unified Security Recommended solutions are available in the Google Cloud Marketplace. We offer Google Unified Security and Google Cloud customers streamlined purchasing of third-party offerings, all consolidated into one Google Cloud bill.
To learn more about the program and explore Google-validated solutions from our partners, visit the Google Unified Security Recommended page. Tech partners interested in program consideration are encouraged to reach out for guidance.
AI agents are transforming the nature of work by automating complex workflows with speed, scale, and accuracy. At the same time, startups are constantly moving, growing, and evolving – which means they need clear ways to implement agentic workflows, not piles of documentation that send precious resources into a tailspin.
Today, we’ll share a simple four-step framework to help startups build multi-agent systems. Multi-agentic workflows can be complicated, but there are easy ways to get started and see real gains without spending weeks in production.
In this post, we’ll show you a systematic, operations-driven roadmap for navigating this new landscape, using one of our projects to provide concrete examples for the concepts laid out in the official startups technical guide: AI agents.
Step #1: Build your foundation
The startups technical guide outlines three primary paths for leveraging agents:
Pre-built Google agents
Partner agents
Custom-built agents (agents you build on your own).
To build our Sales Intelligence Agent, we needed to automate a highly specific, multi-step workflow that involved our own proprietary logic and would eventually connect to our own data sources. This required comprehensive orchestration control and tool definition that only a “code-first” approach could provide.
That’s why we chose Google’s Agent Development Kit (ADK) as our framework. It offered the balance of power and flexibility necessary to build a truly custom, defensible system, combined with high-level abstractions for agent composition and orchestration that accelerated our development.
Step #2: Build out the engine
We took a hybrid approach when building our agent architecture, which is managed by a top-level root_agent in orchestrator.py. Its primary role is to act as an intelligent controller, using an LLM Agent for flexible user interaction while delegating the core processing loop to more deterministic ADK components like LoopAgent and custom BaseAgent classes.
Conversational onboarding: The LLM Agent starts by acting as a conversational “front-door,” interacting with the user to collect their name and email.
Workflow delegation: Once it has the user’s information, it delegates the main workflow to a powerful LoopAgent defined in its sub_agents list.
Data loading: The first step inside the LoopAgent is a custom agent called the CompanyLoopController. On the very first iteration of the loop, its job is to call our crm_tool to fetch the list of companies from the Google Sheet and load them into the session state.
Tool-based execution in a loop: The loop processes each company by calling two key tools: a research_pipeline tool that encapsulates our complex company_researcher_agent, and a sales_briefing_agent tool that encapsulates the briefing sub-agent. This “Agent-as-a-Tool” pattern is crucial for state isolation (more in Step 3).
This hybrid pattern gives us the best of both worlds: the flexibility of an LLM for user interaction and the structured, reliable control of a workflow agent with isolated, tool-based execution.
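For reference, here is a heavily simplified, hypothetical sketch of this hybrid layout using ADK’s Python API; the agent names, instructions, model choice, and stub sub-agents are placeholders rather than the project’s actual orchestrator.py.

# Simplified sketch of the hybrid orchestrator pattern described above (illustrative only).
# Assumes the google-adk package; agent names, model, and instructions are placeholders.
from google.adk.agents import LlmAgent, LoopAgent
from google.adk.tools.agent_tool import AgentTool

# Stub sub-agents standing in for the real research and briefing pipelines.
company_researcher_agent = LlmAgent(
    name="company_researcher_agent",
    model="gemini-2.0-flash",
    instruction="Research the current company and compile a short report.",
)
sales_briefing_agent = LlmAgent(
    name="sales_briefing_agent",
    model="gemini-2.0-flash",
    instruction="Draft a sales briefing from the compiled research report.",
)

# "Agent-as-a-Tool": each call runs the wrapped agent in an isolated, temporary context.
research_pipeline_tool = AgentTool(agent=company_researcher_agent)
sales_briefing_tool = AgentTool(agent=sales_briefing_agent)

# Deterministic processing loop that invokes the two tools for each company.
company_worker = LlmAgent(
    name="company_worker",
    model="gemini-2.0-flash",
    instruction="For the next company in state, call the research tool, then the briefing tool.",
    tools=[research_pipeline_tool, sales_briefing_tool],
)
company_loop = LoopAgent(name="company_loop", sub_agents=[company_worker], max_iterations=50)

# Conversational front door that collects user details, then delegates to the loop.
root_agent = LlmAgent(
    name="root_agent",
    model="gemini-2.0-flash",
    instruction="Collect the user's name and email, then hand off to company_loop.",
    sub_agents=[company_loop],
)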
Step #3: Tools, state, and reliability
An agent is only as powerful as the tools it can wield. To be truly useful, our system needed to connect to live data, not just a static local file. To achieve this, we built a custom tool, crm_tool.py, to allow our agent to read its list of target companies directly from a Google Sheet.
To build our read_companies_from_sheet function, we focused on two key areas:
Secure authentication: We used a Google Cloud Service Account for authentication, a best practice for production systems. Our code includes a helper function, get_sheets_service(), that centralizes all the logic for securely loading the service account credentials and initializing the API client.
Configuration management: All configuration, including the SPREADSHEET_ID, is managed via our .env file. This decouples the tool’s logic from its configuration, making it portable and secure.
This approach transformed our agent from one that could only work with local data to one that could securely interact with a live, cloud-based source of truth.
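As an illustration, a read_companies_from_sheet helper along these lines could be built with the Google Sheets API client; the environment variable names, credential path, and sheet range are assumptions, not the project’s exact crm_tool.py.

# Hypothetical sketch of a Sheets-backed CRM tool (not the project's exact crm_tool.py).
# Assumes google-api-python-client, google-auth, and python-dotenv are installed.
import os
from dotenv import load_dotenv
from google.oauth2 import service_account
from googleapiclient.discovery import build

load_dotenv()  # configuration (IDs, credential path) lives in .env, not in code
SPREADSHEET_ID = os.environ["SPREADSHEET_ID"]
CREDENTIALS_FILE = os.environ["GOOGLE_SERVICE_ACCOUNT_FILE"]  # assumed variable name
SCOPES = ["https://www.googleapis.com/auth/spreadsheets.readonly"]

def get_sheets_service():
    """Centralize credential loading and Sheets API client construction."""
    creds = service_account.Credentials.from_service_account_file(
        CREDENTIALS_FILE, scopes=SCOPES
    )
    return build("sheets", "v4", credentials=creds)

def read_companies_from_sheet(cell_range: str = "Sheet1!A2:A") -> list[str]:
    """Return the list of target company names from the configured sheet."""
    service = get_sheets_service()
    result = (
        service.spreadsheets()
        .values()
        .get(spreadsheetId=SPREADSHEET_ID, range=cell_range)
        .execute()
    )
    return [row[0] for row in result.get("values", []) if row]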
Managing state in loops: The “Agent-as-a-Tool” pattern
A critical challenge in looping workflows is ensuring state isolation between iterations. ADK’s session.state persists, which can cause ‘context rot’ if not managed. Our solution was the “Agent-as-a-Tool” pattern. Instead of running the complex company_researcher_agent directly in the loop, we encapsulated its entire SequentialAgent pipeline into a single, isolated AgentTool (company_researcher_agent_tool).
Every time the loop calls this tool, the ADK provides a clean, temporary context for its execution. All internal steps (planning, QA loop, compiling) happen within this isolated context. When the tool returns the final compiled_report, the temporary context is discarded, guaranteeing a fresh start for the next company. This pattern provides perfect state isolation by design, making the loop robust without manual cleanup logic.
Step #4: Go from localhost to a scalable deployed product
Here is our recommended three-step blueprint for moving from a local prototype to a production-ready agent on Google Cloud.
1. Adopt a production-grade project template
Our most critical lesson was that a simple, local-first project structure is not built for the rigors of the cloud. The turning point for our team was adopting Google’s official Agent Starter Pack. This professional template is not just a suggestion; for any serious project, we now consider it a requirement. It provides three non-negotiable foundations for success out of the box:
Robust dependency management: It replaces the simplicity of local tools like Poetry with the production-grade power of PDM and uv, ensuring that every dependency is locked and every deployment is built from a fast, deterministic, and repeatable environment.
A pre-configured CI/CD pipeline: It comes with a ready-to-use continuous integration and deployment pipeline for Google Cloud Build, which automates the entire process of testing, building, and deploying your agent.
Multi-environment support: The template is pre-configured for separate staging and production environments, a best practice that allows you to safely test changes in an isolated staging environment before promoting them to your live users.
The process begins by using the official command-line tool to generate your project’s local file structure. This prompts you to choose a base template; we used the “ADK Base Template” and then moved our agent logic into the newly created source code files in the app/ directory.
# Ensure pipx is installed
pip install --user pipx

# Run the project generator to create the local file structure
pipx run agent-starter-pack create your-new-agent-project
The final professional project structure:
final-agent-project/
├── .github/                 # Contains the automated CI/CD workflow configuration
│   └── workflows/
├── app/                     # Core application source code for the agent
│   ├── __init__.py
│   ├── agent_engine_app.py
│   ├── orchestrator.py      # The main agent that directs the workflow
│   ├── company_researcher/  # Sub-agent for performing research
│   ├── briefing_agent/      # Sub-agent for drafting emails
│   └── tools/               # Custom tools the agents can use
├── tests/                   # Automated tests for your agent
├── .env                     # Local environment variables (excluded from git)
├── pyproject.toml           # Project definition and dependencies
└── uv.lock                  # Locked dependency versions for speed and consistency
With the local files created, the next step is to provision the cloud infrastructure. From inside the new project directory, you run the setup-cicd command. This interactive wizard connects to your Google Cloud and GitHub accounts, then uses Terraform under the hood to automatically build your entire cloud environment, including the CI/CD pipeline.
# Navigate into your new project directory
cd your-new-agent-project

# Run the interactive CI/CD setup wizard
pipx run agent-starter-pack setup-cicd
2. Cloud Build
Once the setup is complete with the starter pack, your development workflow becomes incredibly simple. Every time a developer pushes a new commit to the main branch of your GitHub repository:
Google Cloud Build fetches your latest code.
It builds your agent into a secure, portable container image. This process includes installing all the dependencies from your uv.lock file, guaranteeing a perfect, repeatable build every single time.
It deploys this new version to your staging environment. Within minutes, your latest code is live and ready for testing in a real cloud environment.
It waits for your approval. The pipeline is configured to require a manual “Approve” click in the Cloud Build console before it will deploy that exact same, tested version to your production environment. This gives you the perfect balance of automation and control.
3. Deploy on Agent Engine and Cloud Run
The final piece of the puzzle is where the agent actually runs. Cloud Build deploys your agent to Vertex AI Agent Engine, which provides the secure, public endpoint and management layer for your agent.
Crucially, Agent Engine is built on top of Google Cloud Run, a powerful serverless platform. This means you don’t have to manage any servers yourself. Your agent automatically scales up to handle thousands of users, and scales down to zero when not in use, meaning you only pay for the compute you actually consume.
Get started
Ready to build your own?
Explore the code for our Sales Intelligence Agent on GitHub.
The technical journey and insights detailed in this blog post were the result of a true team effort. I want to extend my sincere appreciation to the core collaborators whose work provided the foundation for this article: Luis Sala, Isaac Attuah, Ishana Shinde, Andrew Thankson, and Kristin Kim. Their hands-on contributions to architecting and building the agent were essential to the lessons shared here.
AWS Transform for VMware now allows customers to automatically generate network configurations that can be directly imported into the Landing Zone Accelerator on AWS solution (LZA). Building on AWS Transform’s existing support for infrastructure-as-code generation in AWS CloudFormation, AWS CDK, and Terraform formats, this new capability enables automatic transformation of VMware network environments into LZA-compatible network configuration YAML files. The YAML files can be deployed through LZA’s deployment pipeline, streamlining the process of setting up cloud infrastructure.
AWS Transform for VMware is an agentic AI service that automates the discovery, planning, and migration of VMware workloads, accelerating infrastructure modernization with increased speed and confidence. Landing Zone Accelerator on AWS solution (LZA) automates the setup of a secure, multi-account AWS environment using AWS best practices. Migrating workloads to AWS traditionally requires you to manually recreate network configurations while maintaining operational and compliance consistency. The service now automates the generation of LZA network configurations, reducing manual effort and deployment time to better manage and govern your multi-account environment.
Customers using Amazon EventBridge can now set up rules for AWS Health events with multi-region redundancy, or choose a simplified path by creating a single rule to capture all Health events. With this enhancement, Health sends all events simultaneously to US West (Oregon) as well as to the individual region of impact. For more information, customers can go to Creating EventBridge rules for AWS Region coverage.
Sending Health events to two regions gives customers the option to increase the resilience of their integration by creating a backup rule. US West (Oregon) is the backup for all regions in the commercial partition, while US East (N. Virginia) is the backup for US West (Oregon). This change also enables a simplified integration path: customers can now set up a single rule in US West (Oregon) to capture all Health events from across the commercial partition, as opposed to needing to configure rules in individual regions. Customers now have greater flexibility in their integration approach for receiving Health events.
This update is available in all AWS regions. In China, all Health events get delivered simultaneously to both China (Beijing) and China (Ningxia). In AWS GovCloud (US), all Health events get delivered to AWS GovCloud (US-West) and AWS GovCloud (US-East).
AWS IoT Core Device Location announces location resolution capabilities for Internet of Things (IoT) devices connected to the Amazon Sidewalk network, enabling developers to build asset tracking and geo-fencing applications more efficiently by eliminating the need for GPS hardware in low-power devices. Amazon Sidewalk provides a secure community network through Amazon Sidewalk Gateways (compatible Amazon Echo and Ring devices) to deliver cloud connectivity for IoT devices. AWS IoT Core for Amazon Sidewalk facilitates connectivity and message transmission between Amazon Sidewalk-connected IoT devices and AWS cloud services. The integration of Amazon Sidewalk with AWS IoT Core enables you to easily provision, onboard, and monitor your Amazon Sidewalk devices in the AWS cloud.
With the new enhancement, you can now use AWS IoT Core’s Device Location feature to resolve the approximate location of your Amazon Sidewalk-enabled devices using input payloads like WiFi access point data, Global Navigation Satellite System (GNSS) data, or Bluetooth Low Energy data. AWS IoT Core Device Location uses these inputs to resolve the device’s geo-coordinates and delivers them to your desired AWS IoT rules or MQTT topics for integration with backend applications. To get started, install Sidewalk SDK v1.19 (or a later version) on your Sidewalk-enabled devices, provision the devices in AWS IoT Core for Amazon Sidewalk, and enable location during provisioning.
This new feature is available in the AWS US East (N. Virginia) Region, where AWS IoT Core for Amazon Sidewalk is available. Please note that the Amazon Sidewalk network is available only in the United States of America. For more information, refer to the AWS developer guide, Amazon Sidewalk developer guide, and Amazon Sidewalk network coverage.
Amazon Relational Database Service (RDS) for PostgreSQL now supports the latest minor versions 17.7, 16.11, 15.15, 14.20, and 13.23. We recommend that you upgrade to the latest minor versions to fix known security vulnerabilities in prior versions of PostgreSQL, and to benefit from the bug fixes added by the PostgreSQL community.
This release includes the new pgcollection extension for RDS PostgreSQL versions 15.15 and above (16.11 and 17.7). This extension enhances database performance by providing an efficient way to store and manage key-value pairs within PostgreSQL functions. Collections maintain the order of entries and can store various types of PostgreSQL data, making them useful for applications that need fast, in-memory data processing. The release also includes updates to extensions, with pg_tle upgraded to version 1.5.2 and H3_PG upgraded to version 4.2.3.
You can use automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also use Amazon RDS Blue/Green deployments, which use physical replication, for your RDS for PostgreSQL minor version upgrades. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green deployments, in the Amazon RDS User Guide.
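As a rough, hypothetical illustration with boto3 (the instance identifier is a placeholder), opting an existing instance into automatic minor version upgrades might look like this:

# Hypothetical sketch: opt an existing RDS for PostgreSQL instance into
# automatic minor version upgrades during its maintenance window.
import boto3

rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="my-postgres-db",  # assumed instance name
    AutoMinorVersionUpgrade=True,
    ApplyImmediately=False,                 # change takes effect in the maintenance window
)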
Amazon RDS for PostgreSQL makes it simple to set up, operate, and scale PostgreSQL deployments in the cloud. See Amazon RDS for PostgreSQL Pricing for pricing details and regional availability. Create or update a fully managed Amazon RDS database in the Amazon RDS Management Console.
Amazon Connect now provides metrics that measure completion of agent performance evaluations, improving manager productivity and evaluation consistency. Businesses can monitor whether the required number of evaluations for their agents has been completed, ensuring compliance with internal policies (e.g., complete 5 evaluations per agent per month), regulatory requirements, and labor union agreements. Additionally, businesses can analyze evaluation scoring patterns across different managers to identify opportunities to improve evaluation consistency and accuracy. These insights are available in real time through analytics dashboards in the Connect UI and through APIs.
This feature is available in all regions where Amazon Connect is offered. To learn more, please visit our documentation and our webpage.
For those building with AI, most are in it to change the world — not twiddle their thumbs. So when inspiration strikes, the last thing anyone wants is to spend hours waiting for the latest AI models to download to their development environment.
That’s why today we’re announcing a deeper partnership between Hugging Face and Google Cloud that:
reduces Hugging Face model download times through Vertex AI and Google Kubernetes Engine
offers native support for TPUs on all open models sourced through Hugging Face
provides a safer experience through Google Cloud’s built-in security capabilities.
We’ll enable faster download times through a new gateway for Hugging Face repositories that will cache Hugging Face models and datasets directly on Google Cloud. Moving forward, developers working with Hugging Face’s open models on Google Cloud should expect download times to take minutes, not hours.
We’re also working with Hugging Face to add native support for TPUs for all open models on the Hugging Face platform. This means that whether developers choose to deploy training and inference workloads on NVIDIA GPUs or on TPUs, they’ll experience the same ease of deployment and support.
Open models are gaining traction with enterprise developers, who typically work with specific security requirements. To support enterprise developers, we’re working with Hugging Face to bring Google Cloud’s extensive security protocols to all Hugging Face models deployed through Vertex AI. This means that any Hugging Face model on Vertex AI Model Garden will now be scanned and validated with Google Cloud’s leading cybersecurity capabilities powered by our Threat Intelligence platform and Mandiant.
This expanded partnership with Hugging Face furthers that commitment and will ensure that developers have an optimal experience when serving AI models on Google Cloud, whether they choose a model from Google, from our many partners, or one of the thousands of open models available on Hugging Face.
The prevalence of obfuscation and multi-stage layering in today’s malware often forces analysts into tedious and manual debugging sessions. For instance, the primary challenge of analyzing pervasive commodity stealers like AgentTesla isn’t identifying the malware, but quickly cutting through the obfuscated delivery chain to get to the final payload.
Unlike traditional live debugging, Time Travel Debugging (TTD) captures a deterministic, shareable record of a program’s execution. Leveraging TTD’s powerful data model and time travel capabilities allows us to efficiently pivot to the key execution events that lead to the final payload.
This post introduces all of the basics of WinDbg and TTD necessary to start incorporating TTD into your analysis. We demonstrate why it deserves to be a part of your toolkit by walking through an obfuscated multi-stage .NET dropper that performs process hollowing.
What is Time Travel Debugging?
Time Travel Debugging (TTD), a technology offered by Microsoft as part of WinDbg, records a process’s execution into a trace file that can be replayed forwards and backwards. The ability to quickly rewind and replay execution reduces analysis time by eliminating the need to constantly restart debugging sessions or restore virtual machine snapshots. TTD also enables users to query the recorded execution data and filter it with Language Integrated Query (LINQ) to find specific events of interest like module loads or calls to APIs that implement malware functionalities like shellcode execution or process injection.
During recording, TTD acts as a transparent layer that allows full interaction with the operating system. A trace file preserves a complete execution record that can be shared with colleagues to facilitate collaboration, circumventing environmental differences that can affect the results of live debugging.
While TTD offers significant advantages, users should be aware of certain limitations. Currently, TTD is restricted to user-mode processes and cannot be used for kernel-mode debugging. The trace files generated by TTD have a proprietary format, meaning their analysis is largely tied to WinDbg. Finally, TTD does not offer “true” time travel in the sense of altering the program’s past execution flow; if you wish to change a condition or variable and see a different outcome, you must capture an entirely new trace as the existing trace is a fixed recording of what occurred.
A Multi-Stage .NET Dropper with Signs of Process Hollowing
The Microsoft .NET framework has long been popular among threat actors for developing highly obfuscated malware. These programs often use code flattening, encryption, and multi-stage assemblies to complicate the analysis process. This complexity is amplified by Platform Invoke (P/Invoke), which gives managed .NET code direct access to the unmanaged Windows API, allowing authors to port tried-and-true evasion techniques like process hollowing into their code.
Process hollowing is a pervasive and effective form of code injection where malicious code runs under the guise of another process. It is common at the end of downloader chains because the technique allows injected code to assume the legitimacy of a benign process, making it difficult to spot the malware with basic monitoring tools.
In this case study, we’ll use TTD to analyze a .NET dropper that executes its final stage via process hollowing. The case study demonstrates how TTD facilitates highly efficient analysis by quickly surfacing the relevant Windows API functions, enabling us to bypass the numerous layers of .NET obfuscation and pinpoint the payload.
Basic analysis is a vital first step that can often identify potential process hollowing activity. For instance, using a sandbox may reveal suspicious process launches. Malware authors frequently target legitimate .NET binaries for hollowing as these blend seamlessly with normal system operations. In this case, reviewing process activity on VirusTotal shows that the sample launches InstallUtil.exe (found in %windir%\Microsoft.NET\Framework\<version>). While InstallUtil.exe is a legitimate utility, its execution as a child process of a suspected malicious sample is an indicator that helps focus our initial investigation on potential process injection.
Figure 1: Process activity recorded in the VirusTotal sandbox
Despite the emergence of newer, stealthier techniques such as Process Doppelgänging, attackers who employ process injection still frequently rely on classic process hollowing because of its reliability, relative simplicity, and continued effectiveness at evading less sophisticated security solutions. The classic process hollowing steps are as follows:
CreateProcess (with the CREATE_SUSPENDED flag): Launches the victim process (InstallUtil.exe) but suspends its primary thread before execution.
ZwUnmapViewOfSection or NtUnmapViewOfSection: “Hollows out” the process by removing the original, legitimate code from memory.
VirtualAllocEx and WriteProcessMemory: Allocates new memory in the remote process and injects the malicious payload.
GetThreadContext: Retrieves the context (the state and register values) of the suspended primary thread.
SetThreadContext: Redirects the execution flow by modifying the entry point register within the retrieved context to point to the address of the newly injected malicious code.
ResumeThread: Resumes the thread, causing the malicious code to execute as if it were the legitimate process.
To confirm this activity in our sample using TTD, we focus our search on the process creation and the subsequent writes to the child process’s address space. The approach demonstrated in this search can be adapted to triage other techniques by adjusting the TTD queries to search for the APIs relevant to that technique.
Recording a Time Travel Trace of the Malware
To begin using TTD, you must first record a trace of a program’s execution. There are two primary ways to record a trace: using the WinDbg UI or the command-line utilities provided by Microsoft. The command-line utilities offer the quickest and most customizable way to record a trace, and that is what we’ll explore in this post.
Warning: Take all usual precautions for performing dynamic analysis of malware when recording a TTD trace of malware executables. TTD recording is not a sandbox technology and allows the malware to interface with the host and the environment without obstruction.
TTD.exe is the preferred command-line tool for recording traces. While Windows includes a built-in utility (tttracer.exe), that version has reduced features and is primarily intended for system diagnostics, not general use or automation. Not all WinDbg installations provide the TTD.exe utility or add it to the system path. The quickest way to get TTD.exe is to use the stand-alone installer provided by Microsoft. This installer automatically adds TTD.exe to the system’s PATH environment variable, ensuring it’s available from a command prompt. To see its usage information, run TTD.exe -help.
The quickest way to record a trace is to simply provide the command line invoking the target executable with the appropriate arguments. We use the following command to record a trace of our sample:
C:\Users\FLARE\Desktop> ttd.exe 0b631f91f02ca9cffd66e7c64ee11a4b.bin
Microsoft (R) TTD 1.01.11 x64
Release: 1.11.532.0
Copyright (C) Microsoft Corporation. All rights reserved.
Launching '0b631f91f02ca9cffd66e7c64ee11a4b.bin'
Initializing the recording of process (PID:2448) on trace file: C:\Users\FLARE\Desktop\0b631f91f02ca9cffd66e7c64ee11a4b02.run
Recording has started of process (PID:2448) on trace file: C:\Users\FLARE\Desktop\0b631f91f02ca9cffd66e7c64ee11a4b02.run
Once TTD begins recording, the trace concludes in one of two ways. First, the tracing automatically stops upon the malware’s termination (e.g., process exit, unhandled exception, etc.). Second, the user can manually intervene. While recording, TTD.exe displays a small dialog (shown in figure 2) with two control options:
Tracing Off: Stops the trace and detaches from the process, allowing the program to continue execution.
Exit App: Stops the trace and also terminates the process.
Figure 2: TTD trace execution control dialog
Recording a TTD trace produces the following files:
<trace>.run: The trace file is a proprietary format that contains compressed execution data. The size of a trace file is influenced by the size of the program, the length of execution, and other external factors such as the number of additional resources that are loaded.
<trace>.idx: The index file allows the debugger to quickly locate specific points in time during the trace, bypassing sequential scans of the entire trace. The index file is created automatically the first time a trace file is opened in WinDbg. In general, Microsoft suggests that index files are typically twice the size of the trace file.
<trace>.out: The trace log file containing logs produced during trace recording.
Once a trace is complete, the .run file can be opened with WinDbg.
Triaging the TTD Trace: Shifting Focus to Data
The fundamental advantage of TTD is the ability to shift focus from manual code stepping to execution data analysis. Performing rapid, effective triage with this data-driven approach requires proficiency in both basic TTD navigation and querying the Debugger Data Model. Let’s begin by exploring the basics of navigation and the Debugger Data Model.
Navigating a Trace
Basic navigation commands are available under the Home tab in the WinDbg UI.
Figure 3: Basic WinDbg TTD Navigation Commands
The standard WinDbg commands and shortcuts for controlling execution are:
g: Go – Execute the trace forwards
gu: Step Out – Execute the trace forwards up to the return from the current function
t: Step Into – Single step into
p: Step Over – Single step over
Replaying a TTD trace enables the reverse flow control commands that complement the regular flow control commands. Each reverse flow control complement is formed by appending a dash (-) to the regular flow control command:
g-: Go Back – Execute the trace backwards
g-u: Step Out Back – Execute the trace backwards up to the last call instruction
t-: Step Into Back – Single step into backwards
p-: Step Over Back – Single step over backwards
Time Travel (!tt) Command
While basic navigation commands let you move step-by-step through a trace, the time travel command (!tt) enables precise navigation to a specific trace position. These positions are often provided in the output of various TTD commands. A position in a TTD trace is represented by two hexadecimal numbers in the format #:# (e.g., E:7D5) where:
The first part is a sequencing number typically corresponding to a major execution event, such as a module load or an exception.
The second part is a step count, indicating the number of events or instructions executed since that major execution event.
We’ll use the time travel command later in this post to jump directly to the critical events in our process hollowing example, bypassing manual instruction tracing entirely.
The TTD Debugger Data Model
The WinDbg debugger data model is an extensible object model that exposes debugger information as a navigable tree of objects. The debugger data model brings a fundamental shift in how users access debugger information in WinDbg, from wrangling raw text-based output to interacting with structured object information. The data model supports LINQ for querying and filtering, allowing users to efficiently sort through large volumes of execution information. The debugger data model also simplifies automation through JavaScript, with APIs that mirror how you access the debugger data model through commands.
The Display Debugger Object Model Expression (dx) command is the primary way to interact with the debugger data model from the command window in WinDbg. The model lends itself to discoverability – you can begin traversing through it by starting at the root Debugger object:
0:000> dx Debugger
Debugger
Sessions
Settings
State
Utility
LastEvent
The command output lists the five objects that are properties of the Debugger object. Note that the names in the output, which look like links, are marked up using the Debugger Markup Language (DML). DML enriches the output with links that execute related commands. Clicking on the Sessions object in the output executes the following dx command to expand on that object:
0:000> dx -r1 Debugger.Sessions
Debugger.Sessions
[0x0] : Time Travel Debugging: 0b631f91f02ca9cffd66e7c64ee11a4b.run
The -r# argument specifies recursion up to # levels, with a default depth of one if not specified. For example, increasing the recursion to two levels in the previous command produces the following output:
0:000> dx -r2 Debugger.Sessions
Debugger.Sessions
[0x0] : Time Travel Debugging: 0b631f91f02ca9cffd66e7c64ee11a4b.run
Processes
Id : 0
Diagnostics
TTD
OS
Devices
Attributes
The -g argument displays any iterable object into a data grid in which each element is a grid row and the child properties of each element are grid columns.
0:000> dx -g Debugger.Sessions
Figure 4: Grid view of Sessions, with truncated columns
Debugger and User Variables
WinDbg provides some predefined debugger variables for convenience which can be listed through the DebuggerVariables property.
@$cursession: The current debugger session. Equivalent to Debugger.Sessions[<session>]. Commonly used items include:
@$cursession.Processes: List of processes in the session.
@$cursession.TTD.Calls: Method to query calls that occurred during the trace.
@$cursession.TTD.Memory: Method to query memory operations that occurred during the trace.
@$curprocess: The current process. Equivalent to @$cursession.Processes[<pid>]. Frequently used items include:
@$curprocess.Modules: List of currently loaded modules.
@$curprocess.TTD.Events: List of events that occurred during the trace.
Investigating the Debugger Data Model to Identify Process Hollowing
With a basic understanding of TTD concepts and a trace ready for investigation, we can now look for evidence of process hollowing. To begin, the Calls method can be used to search for specific Windows API calls. This search is effective even with a .NET sample because the managed code must interface with the unmanaged Windows API through P/Invoke to perform a technique like process hollowing.
Process hollowing begins with the creation of a process in a suspended state via a call to CreateProcess with a creation flag value of 0x4. The following query uses the Calls method to return a table of each call to the kernel32 module’s CreateProcess* in the trace; the wildcard (*) ensures the query matches calls to either CreateProcessA or CreateProcessW.
0:000> dx -g @$cursession.TTD.Calls("kernel32!CreateProcess*")
This query returns a number of fields, not all of which are helpful for our investigation. To address this, we can apply the Select LINQ query to the original query, which allows us to specify which columns to display and rename them.
0:000> dx -g @$cursession.TTD.Calls("kernel32!CreateProcess*").Select(c => new { TimeStart = c.TimeStart, Function = c.Function, Parameters = c.Parameters, ReturnAddress = c.ReturnAddress})
The result shows one call to CreateProcessA starting at position 58243:104D. Note the return address: since this is a .NET binary, the native code executed by the Just-In-Time (JIT) compiler won’t be located in the application’s main image address space (as it would be in a non-.NET image). Normally, an effective triage step is to filter results with a Where LINQ query, limiting the return address to the primary module to filter out API calls that do not originate from the malware. This Where filter, however, is less reliable when analyzing JIT-compiled code due to the dynamic nature of its execution space.
The next point of interest is the Parameters field. Clicking on the DML link on the collapsed value {..} displays Parameters via a corresponding dx command.
Function arguments are available under a specific Calls object as an array of values. However, before we investigate the parameters, there are some assumptions made by TTD that are worth exploring. Overall, these assumptions are affected by whether the process is 32-bit or 64-bit. An easy way to check the bitness of the process is by inspecting the DebuggerInformation object.
0:000> dx Debugger.State.DebuggerInformation
Debugger.State.DebuggerInformation
ProcessorTarget : X86 <--- Process Bitness
Bitness : 32
EngineFilePath : C:\Program Files\WindowsApps\<SNIPPED>\x86\dbgeng.dll
EngineVersion : 10.0.27871.1001
The key identifier in the output is ProcessorTarget: this value indicates the architecture of the guest process that was traced, regardless of whether the host operating system running the debugger is 64-bit.
TTD uses symbol information provided in a program database (PDB) file to determine the number of parameters, their types and the return type of a function. However, this information is only available if the PDB file contains private symbols. While Microsoft provides PDB files for many of its libraries, these are often public symbols and therefore lack the necessary function information to interpret the parameters correctly. This is where TTD makes another assumption that can lead to incorrect results. Primarily, it assumes a maximum of four QWORD parameters and that the return value is also a QWORD. This assumption creates a mismatch in a 32-bit process (x86), where arguments are typically 32-bit (4-byte) values passed on the stack. Although TTD correctly finds the arguments on the stack, it misinterprets two adjacent 32-bit arguments as a single, 64-bit value.
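As a quick illustration of that mismatch (using an entirely made-up value), a QWORD reported for an x86 trace can be split back into the two adjacent 32-bit stack arguments it actually spans:

# Illustrative only: split a QWORD parameter value reported by TTD for an x86 (32-bit)
# trace into the two adjacent 32-bit stack arguments it covers (little-endian stack layout).
reported_qword = 0x1122334455667788                  # hypothetical value read from the stack
arg_at_lower_address = reported_qword & 0xFFFFFFFF   # first argument  -> 0x55667788
arg_at_higher_address = reported_qword >> 32         # second argument -> 0x11223344
print(hex(arg_at_lower_address), hex(arg_at_higher_address))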
One way to resolve this is to manually investigate the arguments on the stack. First we use the !tt command to navigate to the beginning of the relevant call to CreateProcessA.
0:000> !tt 58243:104D
(b48.12a4): Break instruction exception - code 80000003 (first/second chance not available)
Time Travel Position: 58243:104D
eax=00bed5c0 ebx=039599a8 ecx=00000000 edx=75d25160 esi=00000000 edi=03331228
eip=75d25160 esp=0055de14 ebp=0055df30 iopl=0 nv up ei pl zr na pe nc
cs=0023 ss=002b ds=002b es=002b fs=0053 gs=002b efl=00000246
KERNEL32!CreateProcessA:
75d25160 8bff mov edi,edi
The return address is at the top of the stack at the start of a function call, so the following dd command skips over this value by adding an offset of 4 to the ESP register to properly align the function arguments.
The value of 0x4 (CREATE_SUSPENDED) set in the bitmask for the dwCreationFlags argument (6th argument) indicates that the process will be created in a suspended state.
The following command dereferences esp+4 via the poi operator to retrieve the application name string pointer then uses the da command to display the ASCII string.
0:000> da poi(esp+4)
0055de74  "C:\Windows\Microsoft.NET\Framewo"
0055de94  "rk\v4.0.30319\InstallUtil.exe"
The command reveals that the target application is InstallUtil.exe, which aligns with the findings from basic analysis.
It is also useful to retrieve the handle to the newly created process in order to identify subsequent operations performed on it. The handle value is returned through a pointer (0x55e068 in the earlier referenced output) to a PROCESS_INFORMATION structure passed as the last argument. This structure has the following definition:
typedef struct _PROCESS_INFORMATION {
  HANDLE hProcess;
  HANDLE hThread;
  DWORD  dwProcessId;
  DWORD  dwThreadId;
} PROCESS_INFORMATION, *PPROCESS_INFORMATION, *LPPROCESS_INFORMATION;
After the call to CreateProcessA, the first member of this structure should be populated with the handle to the process. Step out of the call using the gu (Go Up) command to examine the populated structure.
0:000> gu
Time Travel Position: 58296:60D
0:000> dd /c 1 0x55e068 L4
0055e068 00000104 <-- handle to process
0055e06c 00000970
0055e070 00000d2c
0055e074 00001c30
In this trace, CreateProcess returned 0x104 as the handle for the suspended process.
The most interesting operation in process hollowing for the purpose of triage is the allocation of memory and subsequent writes to that memory, commonly performed via calls to WriteProcessMemory. The previous Calls query can be updated to identify calls to WriteProcessMemory:
0:000> dx -g @$cursession.TTD.Calls("kernel32!WriteProcessMemory*").Select(c => new { TimeStart = c.TimeStart, Function = c.Function, Parameters = c.Parameters, ReturnAddress = c.ReturnAddress})
Investigating these calls to WriteProcessMemory shows that the target process handle is 0x104, which represents the suspended process. The second argument defines the address in the target process. The arguments to these calls reveal a pattern common to PE loading: the malware writes the PE header followed by the relevant sections at their virtual offsets.
It is worth noting that the memory of the target process cannot be analyzed from this trace. To record the execution of a child process, pass the -children flag to the TTD.exe utility. This will generate a trace file for each process, including all child processes, spawned during execution.
The first memory write to what is likely the target process’s base address (0x400000) is 0x200 bytes. This size is consistent with a PE header, and examining the source buffer (0x9810af0) confirms its contents.
The !dh extension can be used to parse this header information.
0:000> !dh 0x9810af0
File Type: EXECUTABLE IMAGE
FILE HEADER VALUES
14C machine (i386)
3 number of sections
66220A8D time date stamp Fri Apr 19 06:09:17 2024
----- SNIPPED -----
OPTIONAL HEADER VALUES
10B magic #
11.00 linker version
----- SNIPPED -----
0 [ 0] address [size] of Export Directory
3D3D4 [ 57] address [size] of Import Directory
----- SNIPPED -----
0 [ 0] address [size] of Delay Import Directory
2008 [ 48] address [size] of COR20 Header Directory
SECTION HEADER #1
.text name
3B434 virtual size
2000 virtual address
3B600 size of raw data
200 file pointer to raw data
----- SNIPPED -----
SECTION HEADER #2
.rsrc name
546 virtual size
3E000 virtual address
600 size of raw data
3B800 file pointer to raw data
----- SNIPPED -----
SECTION HEADER #3
.reloc name
C virtual size
40000 virtual address
200 size of raw data
3BE00 file pointer to raw data
----- SNIPPED -----
The presence of a COR20 header directory (a pointer to the .NET header) indicates that this is a .NET executable. The relative virtual addresses of the .text (0x2000), .rsrc (0x3E000), and .reloc (0x40000) sections also align with the target addresses of the WriteProcessMemory calls.
The newly discovered PE file can now be extracted from memory using the .writemem command.
Using a hex editor, the file can be reconstructed by placing each section at its raw offset. A quick analysis of the resulting .NET executable (SHA256: 4dfe67a8f1751ce0c29f7f44295e6028ad83bb8b3a7e85f84d6e251a0d7e3076) in dnSpy reveals its configuration data.
This case study demonstrates the benefit of treating TTD execution traces as a searchable database. By capturing the payload delivery and directly querying the Debugger Data Model for specific API calls, we quickly bypassed the multi-layered obfuscation of the .NET dropper. The combination of targeted data model queries and LINQ filters (for CreateProcess* and WriteProcessMemory*) and low-level commands (!dh, .writemem) allowed us to isolate and extract the hidden AgentTesla payload, yielding critical configuration details in a matter of minutes.
The tools and environment used in this analysis—including the latest version of WinDbg and TTD—are readily available via the FLARE-VM installation script. We encourage you to streamline your analysis workflow with this pre-configured environment.
Amazon EventBridge now supports Amazon SQS fair queues as targets, enabling you to build more responsive event-driven applications. You can now leverage SQS’s improved message distribution across consumer groups and mitigate the noisy-neighbor impact in multi-tenant messaging systems. This enhancement allows EventBridge to send events directly to SQS fair queues. With fair queues, multiple consumers can process messages from the same tenant at the same time, while keeping message processing times consistent across all tenants.
The Amazon EventBridge event bus is a serverless event broker that enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. SQS fair queues automatically distribute messages fairly across consumer groups, preventing any single group from monopolizing queue resources. When combined with EventBridge’s event routing capabilities, this creates powerful patterns for building scalable, multi-tenant applications where different teams or services need equitable access to event streams.
To route events to an SQS fair queue, you can select the fair queue as a target when creating or updating EventBridge rules through the AWS Management Console, AWS CLI, or AWS SDKs. Be sure to include a MessageGroupID parameter, which can be specified with either a static value or a JSON path expression, as in the sketch below.
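For illustration, a rule target along these lines could be added with boto3; the rule name, queue ARN, and message group ID path are hypothetical placeholders.

# Hypothetical sketch: add an SQS fair queue as an EventBridge rule target,
# supplying the required MessageGroupId (here derived from the event's tenant field).
import boto3

events = boto3.client("events")
events.put_targets(
    Rule="orders-events-rule",  # assumed existing rule
    Targets=[
        {
            "Id": "fair-queue-target",
            "Arn": "arn:aws:sqs:us-east-1:123456789012:orders-fair-queue",  # placeholder ARN
            "SqsParameters": {"MessageGroupId": "$.detail.tenantId"},  # JSON path per tenant
        }
    ],
)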
Support for Fair Queue and FIFO targets is available in all AWS commercial and AWS GovCloud (US) Regions. For more information about EventBridge target support, see our documentation. For more information about SQS Fair Queues, see the SQS documentation.
Starting today, Amazon EC2 High Memory U7i instances with 12TB of memory (u7i-12tb.224xlarge) are available in the Europe (Stockholm) region. U7i-12tb instances are part of the 7th generation of Amazon EC2 instances and are powered by custom fourth-generation Intel Xeon Scalable processors (Sapphire Rapids). U7i-12tb instances offer 12TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.
U7i-12tb instances offer 896 vCPUs, support up to 100Gbps Elastic Block Storage (EBS) for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.
Amazon ECS Service Connect now supports seamless communication between services residing in different AWS accounts through integration with AWS Resource Access Manager (AWS RAM). This enhancement simplifies resource sharing, reduces duplication, and promotes consistent service-to-service communication across environments for organizations with multi-account architectures.
Amazon ECS Service Connect leverages AWS Cloud Map namespaces for storing information about ECS services and tasks. To enable seamless cross-account communication between Amazon ECS Service Connect services, you can now share the underlying AWS Cloud Map namespaces using AWS RAM with individual AWS accounts, specific Organizational Units (OUs), or your entire AWS Organization. To get started, create a resource share in AWS RAM, add the namespaces you want to share, and specify the principals (accounts, OUs, or the organization) that should have access. This enables platform engineers to use the same namespace to register Amazon ECS Service Connect services residing in multiple AWS accounts, simplifying service discovery and connectivity. Application developers can then build services that rely on a consistent, shared registry without worrying about availability or synchronization across accounts. Cross-account connectivity support improves operational efficiency and makes it easier to scale Amazon ECS workloads as your organization grows by reducing duplication and streamlining access to common services.
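As a rough, hypothetical sketch with boto3 (the namespace ARN and account ID are placeholders), sharing a Cloud Map namespace might look like this:

# Hypothetical sketch: share an AWS Cloud Map namespace with another account via AWS RAM
# so ECS Service Connect services in that account can register into the same namespace.
import boto3

ram = boto3.client("ram")
ram.create_resource_share(
    name="service-connect-namespace-share",
    resourceArns=[
        "arn:aws:servicediscovery:us-east-1:111111111111:namespace/ns-examplenamespace"  # placeholder
    ],
    principals=["222222222222"],    # account ID, OU ARN, or organization ARN
    allowExternalPrincipals=False,  # keep sharing within the organization
)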
This feature is available with both Fargate and EC2 launch modes in AWS GovCloud (US-West) and AWS GovCloud (US-East) regions via the AWS Management Console, API, SDK, CLI, and CloudFormation. To learn more, please refer to the Amazon ECS Service Connect documentation.
Starting today, the Amazon Elastic Compute Cloud (Amazon EC2) G6f instances powered by NVIDIA L4 GPUs are now available in Europe (Spain) and Asia Pacific (Seoul) regions. G6f instances can be used for a wide range of graphics workloads. G6f instances offer GPU partitions as small as one-eighth of a GPU with 3 GB of GPU memory giving customers the flexibility to right size their instances and drive significant cost savings compared to EC2 G6 instances with a single GPU.
Customers can use G6f instances to provision remote workstations for Media & Entertainment, Computer-Aided Engineering, ML research, and spatial visualization. G6f instances are available in 5 instance sizes with half, quarter, and one-eighth of a GPU per instance size, paired with third-generation AMD EPYC processors, offering up to 12 GB of GPU memory and 16 vCPUs.
Amazon EC2 G6f instances are available today in the AWS US East (N. Virginia and Ohio), US West (Oregon), Europe (Stockholm, Frankfurt, London and Spain), Asia Pacific (Mumbai, Tokyo, Seoul and Sydney), Canada (Central), and South America (Sao Paulo) regions. Customers can purchase G6f instances as On-Demand Instances, Spot Instances, or as a part of Savings Plans.
Amazon Web Services (AWS) announces JWT Verification for Application Load Balancer (ALB), enabling secure machine-to-machine (M2M) and service-to-service (S2S) communications. This feature allows ALB to verify JSON Web Tokens (JWTs) included in request headers, validating token signatures, expiration times, and claims without requiring modifications to application code.
By offloading OAuth 2.0 token validation to ALB, customers can significantly reduce architectural complexity and streamline their security implementation. This capability is particularly valuable for microservices architectures, API security, and enterprise service integration scenarios where secure service-to-service communication is critical. The feature supports tokens issued through various OAuth 2.0 flows, including Client Credentials Flow, enabling centralized token validation with minimal operational overhead.
The JWT Verification feature is now available in all AWS Regions where Application Load Balancer is supported.
Amazon ElastiCache now supports Graviton3-based M7g and R7g node families in the AWS GovCloud (US) Regions (US-East, US-West). ElastiCache Graviton3 nodes deliver improved price-performance compared to Graviton2. As an example, when running ElastiCache for Redis OSS on an R7g.4xlarge node, you can achieve up to 28% increased throughput (read and write operations per second) and up to 21% improved P99 latency, compared to running on R6g.4xlarge. In addition, these nodes deliver up to 25% higher networking bandwidth.
For complete information on pricing and regional availability, please refer to the Amazon ElastiCache pricing page. To get started, create a new cluster or upgrade to Graviton3 using the AWS Management Console. For more information on supported node types, please refer to the documentation.
AWS Fault Injection Service (FIS) now offers two new scenarios that help you proactively test how your applications handle partial disruptions within and across Availability Zones (AZs). These disruptions, often called gray failures, are more common than complete outages and can be particularly challenging to detect and mitigate.
The FIS scenario library provides AWS-created, pre-defined experiment templates that minimize the heavy lifting of designing tests. The new scenarios expand the testing capabilities for partial disruptions. “AZ: Application Slowdown” lets you test for increased latency and degraded performance for resources, dependencies, and connections within a single AZ. This helps validate observability setups, tune alarm thresholds, and practice critical operational decisions like AZ evacuation. The scenario works with both single and multi-AZ applications. “Cross-AZ: Traffic Slowdown” enables testing of how multi-AZ applications handle traffic disruptions between AZs.
With both scenarios, you can target specific portions of your application traffic for more realistic testing of partial disruptions. These scenarios are particularly valuable for testing application sensitivity to these more subtle disruptions that often manifest as traffic and application slowdowns. For instance, you can test how your application responds to degraded network paths causing packet loss for some traffic flows, or misconfigured connection pools that slow down specific requests.
To get started, access these new scenarios through the FIS scenario library in the AWS Management Console. These new scenarios are available in all AWS Regions where AWS FIS is available, including AWS GovCloud (US) Regions. To learn more, visit the FIS scenario library user guide. For pricing information, visit the FIS pricing page.
While 90% of IT leaders indicate that the future of their end user computing (EUC) strategy is web-based, those same leaders admit that 50% of the applications their organizations rely on today are still legacy client-based apps.1 Similarly, IT leaders note that enabling end users to take advantage of AI on the endpoint is their top priority in the next 12 months. Clearly, something needs to bridge the gap between today’s reality and tomorrow’s strategy.
Announcing Cameyo by Google: Virtual app delivery for the modern tech stack
To provide today’s organizations with a more modern approach to virtualization, we are thrilled to launch Cameyo by Google, bringing a best-in-class Virtual App Delivery (VAD) solution into the Google enterprise family of products.
Cameyo is not VDI. It is a modern alternative designed specifically to solve the legacy app gap without the overhead of traditional virtual desktops. Instead of streaming a full, resource-heavy desktop, Cameyo’s Virtual App Delivery (VAD) technology delivers only the applications users need, securely to any device.
With Cameyo, those legacy Windows or Linux apps can either be streamed in the browser or delivered as Progressive Web Apps (PWAs) to give users the feel of a native app in its own window. This allows users to run critical legacy applications, from specialized ERP clients and Windows-based design programs like AutoCAD to the desktop version of Excel, and access them alongside their other modern web apps in the browser, or side-by-side with their other apps in the system tray as PWAs. For the user, the experience is seamless and free from the context-switching of managing a separate virtual desktop environment. For IT, the complexity is eliminated.
“The beauty of Cameyo is its simplicity. It lets users access applications on any device with security built in, allowing us to reach any end user, on any device, without it ever touching our corporate systems or the complexity or overhead — no VPNs or firewall configurations needed,” said Phil Paterson, Head of Cloud & Infrastructure, PTSG. He added, “VPNs were taking up to 15 minutes to log in, but with Cameyo access is instant, saving users upwards of 30 minutes every day.”
Completing the Google Enterprise stack
Today’s enterprises have been increasingly turning to Google for a modern, flexible, and secure enterprise tech stack that was built for the web-based future of work, not modified for it. And Cameyo by Google is a critical link that bridges the gap between those organizations’ legacy investments and this modern stack.
Google’s enterprise tech stack provides organizations with a flexible, modular path to modernization. Unlike all-or-nothing enterprise ecosystems, Google’s enterprise stack doesn’t force you to abandon existing investments for the sake of modernization. Instead, it gives you the freedom to modernize individual layers of your stack at your own pace, as it makes sense for your business — all while maintaining access to your existing technology investments. And Google’s flexible enterprise stack is built for interoperability with a broad ecosystem of modern technologies built for the web, giving you freedom along your modernization journey.
A secure browsing first: Cameyo + Chrome Enterprise
Speaking of enabling organizations to modernize at their own rate, we’ve seen a distinct pattern popping up throughout our conversations with enterprises today. And that pattern is the interest in migrating to Secure Enterprise Browsers (SEBs) to provide a more secure, manageable place for people to do their best work.
And while the market for SEBs is growing rapidly, most enterprise browser solutions share a fundamental blind spot: they are only built to secure web-based SaaS applications. They have no direct answer for the 50% of client-based applications that run entirely outside the browser.1
This is where the combination of Cameyo by Google and Chrome Enterprise Premium provides a unique solution. This combination is the only solution on the market that delivers and secures both modern web apps and legacy client-based apps within a single, unified browser experience.
Here’s how it works:
Chrome Enterprise Premium serves as the secure entry point, providing advanced threat protection, URL filtering, and granular Data Loss Prevention (DLP) controls – like preventing copy/paste or printing – for all sensitive data and web activity.
Cameyo takes your legacy client apps (like your ERP, an internal accounting program, or SAP client) and publishes them within that managed Chrome Enterprise browser.
This unifies the digital workspace. Those legacy applications, which previously lived on a desktop, now run under the single security context of the secure browser. This allows Chrome Enterprise Premium’s advanced security and DLP controls to govern applications they previously couldn’t see, providing a comprehensive security posture across all of your organization’s apps, not just the web-based apps.
Bringing AI to legacy apps. The combination of Cameyo and Chrome Enterprise not only brings all your apps into a secure enterprise browser, but thanks to Gemini in Chrome, all of your legacy apps now have the power of AI layered on top.
Unlocking adoption of a more secure, web-based OS and more collaborative, web-first productivity
Moving all of your apps to the web with Cameyo doesn’t just provide a more unified user experience. It can also provide a significantly better, more flexible, and more secure experience for IT. Compared to traditional virtualization technologies that take weeks or months to deploy, IT can publish their first apps to users within hours, and be fully deployed in days. All while taking advantage of Cameyo’s embedded Zero Trust security model for ultra-secure app delivery.
And that added simplicity, flexibility, and security opens up other opportunities for IT, too.
For organizations that have been looking for a more secure alternative to Windows in the wake of years of security incidents, outages, and forced upgrades to the next Windows version, Cameyo now makes it possible for IT to migrate to ChromeOS — including the use of ChromeOS Flex to convert existing PCs to ChromeOS — while maintaining access to all of their Windows apps.
For years, the primary blocker for deeper enterprise adoption of ChromeOS has always been the “app gap” — the persistent need to access a few remaining Windows applications within an organization. Cameyo eliminates this blocker entirely, enabling organizations to confidently migrate their entire fleet to ChromeOS, the only operating system with zero reported ransomware attacks, ever.
Similarly, Cameyo allows organizations to fully embrace Google Workspace while retaining access to essential client apps that previously kept them tethered to Microsoft™, such as legacy Excel versions with complex macros or specific ERP clients. Now, teams can move to a more modern, collaborative productivity suite that was built for the web, and they can still access any specialized Windows apps that their workflows still depend on.
Your flexible path to modernization starts now
For too long, legacy applications have hindered organizations’ modernization efforts. But the age of tolerating complex, costly virtualization solutions just to keep legacy apps alive is coming to an end.
Cameyo by Google, like the rest of the Google enterprise stack, was built in the cloud specifically to enable the web-based future of work. And like the rest of Google’s enterprise offerings, Cameyo gives you a flexible path forward that enables you to build a modern, secure, and productive enterprise computing stack at the pace that works for you.
Identifying patterns and sequences within your data is crucial for gaining deeper insights. Whether you’re tracking user behavior, analyzing financial transactions, or monitoring sensor data, the ability to recognize specific sequences of events can unlock a wealth of information and actionable insights.
Imagine you’re a marketer at an e-commerce company trying to identify your most valuable customers by their purchasing trajectory. You know that customers who start with small orders and progress to mid-range purchases usually end up becoming high-value purchasers and your most loyal segment. But figuring out the complex SQL to aggregate and join this data can be a challenging task.
That’s why we’re excited to introduce MATCH_RECOGNIZE, a new feature in BigQuery that allows you to perform complex pattern matching on your data directly within your SQL queries!
What is MATCH_RECOGNIZE?
At its core, MATCH_RECOGNIZE is a tool built directly into GoogleSQL for identifying sequences of rows that match a specified pattern. It’s similar to using regular expressions, but instead of matching patterns in a string of text, you’re matching patterns in a sequence of rows within your tables. This capability is especially powerful for analyzing time-series data or any dataset where the order of rows is important.
With MATCH_RECOGNIZE, you can express complex patterns and define custom logic to analyze them, all within a single SQL clause. This reduces the need for cumbersome self-joins or complex procedural logic. It also lessens your reliance on Python to process data, and will look familiar to users who have experience with Teradata’s nPath or MATCH_RECOGNIZE in other systems (such as Snowflake, Azure, or Flink).
How it works
The MATCH_RECOGNIZE clause is highly structured and consists of several key components that work together to define your pattern-matching logic:
PARTITION BY: This clause divides your data into independent partitions, allowing you to perform pattern matching within each partition separately.
ORDER BY: Within each partition, ORDER BY sorts the rows to establish the sequence in which the pattern will be evaluated.
MEASURES: Here, you can define the columns that will be included in the output, often using aggregate functions to summarize the matched data.
PATTERN: This is the heart of the MATCH_RECOGNIZE clause, where you define the sequence of symbols that constitutes a match. You can use quantifiers like *, +, ?, and more to specify the number of occurrences for each symbol.
DEFINE: In this clause, you define the conditions that a row must meet to be classified as a particular symbol in your pattern.
Let’s look at a simple example. From our fictional scenario above, imagine you have a table of sales data, and as a marketing analyst, you want to identify customer purchase patterns where their spending starts low, increases to a mid-range, and then reaches a high level. With MATCH_RECOGNIZE, you could write a query like this:
SELECT *
FROM Example_Project.Example_Dataset.Sales
MATCH_RECOGNIZE (
  PARTITION BY customer
  ORDER BY sale_date
  MEASURES
    MATCH_NUMBER() AS match_number,
    ARRAY_AGG(STRUCT(MATCH_ROW_NUMBER() AS row, CLASSIFIER() AS symbol,
                     product_category)) AS sales
  PATTERN (low+ mid+ high+)
  DEFINE
    low AS amount < 50,
    mid AS amount BETWEEN 50 AND 100,
    high AS amount > 100
);
In this example, we’re partitioning the data by customer and ordering it by sale_date. The PATTERN clause specifies that we’re looking for one or more “low” sales events, followed by one or more “mid” sales events, followed by one or more “high” sales events. The DEFINE clause then specifies the conditions for a sale to be considered “low”, “mid”, or “high”. The MEASURES clause determines how each match is summarized: match_number indexes each match starting from 1, and the sales array records every matched row in order.
Below are example matched customers:
customer | match_number | sales.row | sales.symbol | sales.product_category
---------|--------------|-----------|--------------|------------------------
Cust1    | 1            | 1         | low          | Books
         |              | 2         | low          | Clothing
         |              | 3         | mid          | Clothing
         |              | 4         | high         | Electronics
         |              | 5         | high         | Electronics
Cust2    | 2            | 1         | low          | Software
         |              | 2         | mid          | Books
         |              | 3         | high         | Clothing
This data highlights some sales trends and could offer insights for a market analyst to strategize conversion of lower-spending customers to higher-value sales based on these trends.
Use cases for MATCH_RECOGNIZE
The possibilities with MATCH_RECOGNIZE are vast. Here are just a few examples of how you can use this powerful feature:
Funnel analysis: Track user journeys on your website or app to identify common paths and drop-off points. For example, you could define a pattern for a successful conversion funnel (e.g., view_product -> add_to_cart -> purchase) and analyze how many users complete it (a minimal sketch follows this list).
Fraud detection: Identify suspicious patterns of transactions that might indicate fraudulent activity. For example, you could look for a pattern of multiple small transactions followed by a large one from a new account.
Financial analysis: Analyze stock market data to identify trends and patterns, such as a “W” or “V” shaped recovery.
Log analysis: Sift through application logs to find specific sequences of events that might indicate an error or a security threat.
Churn analysis: Identify patterns in your data that lead to customer churn and find actionable insights to reduce churn and improve customer sentiment.
Network monitoring: Identify a series of failed login attempts to track issues or potential threats.
Supply chain monitoring: Flag delays in a sequence of shipment events.
Sports analytics: Identify streaks or changes in output for different players / teams over games, such as winning or losing streaks, changes in starting lineups, etc.
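To make the funnel analysis use case concrete, here is a minimal sketch. It assumes a hypothetical events table, my_project.analytics.web_events, with user_id, event_time, and event_name columns; adapt the table and column names to your own schema.

SELECT *
FROM my_project.analytics.web_events  -- hypothetical table and columns
MATCH_RECOGNIZE (
  PARTITION BY user_id
  ORDER BY event_time
  MEASURES
    MATCH_NUMBER() AS match_number,
    ARRAY_AGG(STRUCT(MATCH_ROW_NUMBER() AS row, CLASSIFIER() AS step)) AS funnel_steps
  -- Matches the three steps on consecutive rows; add quantified filler symbols
  -- if you want to allow unrelated events between funnel steps.
  PATTERN (view_product add_to_cart purchase)
  DEFINE
    view_product AS event_name = 'view_product',
    add_to_cart AS event_name = 'add_to_cart',
    purchase AS event_name = 'purchase'
);

Each match represents one user completing the full view-to-purchase funnel, and the funnel_steps array records the order in which the steps occurred.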
Get started today
Ready to start using MATCH_RECOGNIZE in your own queries? The feature is now available to all BigQuery users! To learn more and dive deeper into the syntax and advanced capabilities, check out the official documentation and tutorial available on Colab, BigQuery, and GitHub.
MATCH_RECOGNIZE opens up a whole new world of possibilities for sequential analysis in BigQuery, and we can’t wait to see how you’ll use it to unlock deeper insights from your data.
For decades, SQL has been the universal language for data analysis, offering access to analytics on structured data. Large Language Models (LLMs) like Gemini now provide a path to nuanced insights from unstructured data such as text, images, and video. However, integrating LLMs into a standard SQL workflow requires data movement and at least some prompt and parameter tuning to optimize result quality. This is expensive to do at scale, which keeps these capabilities out of reach for many data practitioners.
Today, we are excited to announce the public preview of BigQuery-managed AI functions, a new set of capabilities that reimagine SQL for the AI era. These functions — AI.IF, AI.CLASSIFY, and AI.SCORE — allow you to use generative AI for common analytical tasks directly within your SQL queries, with no complex prompt tuning or new tools required. These functions have been optimized for their target use cases and do not require you to choose models or tune their parameters. Further, through intelligent optimizations of your prompts and query plans, we keep costs minimal.
With these new functions, you can perform sophisticated AI-driven analysis using familiar SQL operators:
Filter and join data based on semantic meaning using AI.IF in a WHERE or ON clause.
Categorize unstructured text or images using AI.CLASSIFY in a GROUP BY clause.
Rank rows based on natural language criteria using AI.SCORE in an ORDER BY clause.
Together, these functions allow you to answer new kinds of questions previously out of reach for SQL analytics, such as matching companies to news articles that mention them even when an old or unofficial name is used.
Let’s dive deeper into how each of these functions works.
Function deep dive
AI.IF: Semantic filtering and joining
With AI.IF, you can filter or join data using conditions written in natural language. This is useful for tasks like identifying negative customer reviews, filtering images that have specific attributes, or finding relevant information in documents. BigQuery optimizes the query plan to reduce the number of calls to the LLM by evaluating non-AI filters first. For example, the following query finds BBC tech news articles that are related to Google.
SELECT title, body
FROM `bigquery-public-data.bbc_news.fulltext`
WHERE AI.IF(("The news is related to Google, news: ", body),
            connection_id => "us.test_connection")
  AND category = "tech"  -- Non-AI filter evaluated first
You can also use AI.IF() for powerful semantic joins, such as performing entity resolution between two different product catalogs. The following query finds products that are semantically identical, even if their names are not an exact match.
WITH product_catalog_A AS (SELECT "Veridia AquaSource Hydrating Shampoo" AS product
                           UNION ALL SELECT "Veridia Full-Lift Volumizing Shampoo"),
     product_catalog_B AS (SELECT "Veridia Shampoo, AquaSource Hydration" AS product)
SELECT *
FROM product_catalog_A a JOIN product_catalog_B b
ON AI.IF((a.product, " is the same product as ", b.product),
         connection_id => "us.test_connection")
AI.CLASSIFY: Data classification
The AI.CLASSIFY function lets you categorize text or images based on labels you provide. You can use it to route support tickets by topic or classify images based on their style. For instance, you can classify news articles by topic and then count the number of articles in each category with a single query.
SELECT
  AI.CLASSIFY(
    body,
    categories => ['tech', 'sport', 'business', 'politics', 'entertainment'],
    connection_id => 'us.test_connection') AS category,
  COUNT(*) AS num_articles
FROM `bigquery-public-data.bbc_news.fulltext`
GROUP BY category;
AI.SCORE: Semantic ranking
You can use AI.SCORE to rank rows based on natural language criteria. This is powerful for ranking items based on a rubric. To give you consistent and high-quality results, BigQuery automatically refines your prompt into a structured scoring rubric. This example finds the top 10 most positive reviews for a movie of your choosing.
SELECT
  review,
  AI.SCORE(("From 1 to 10, rate how much the reviewer likes the movie: ", review),
           connection_id => 'us.test_connection') AS ai_rating,
  reviewer_rating AS human_rating
FROM `bigquery-public-data.imdb.reviews`
WHERE title = 'Movie'
ORDER BY ai_rating DESC
LIMIT 10;
Built-in optimizations
These functions allow you to easily mix AI processing with common SQL operators like WHERE, JOIN, ORDER BY, and GROUP BY (a combined example follows the list below). BigQuery handles prompt optimization, model selection, and model parameter tuning for you.
Prompt optimization: LLMs are sensitive to the wording of a prompt; the same question can be expressed in different ways, which affects quality and consistency. BigQuery optimizes your prompts into a structured format specifically for Gemini, helping to ensure higher-quality results and an improved cache hit rate.
Query plan optimization: Running generative AI models over millions of rows can be slow and expensive. The BigQuery query planner reorders AI functions in your filters and pulls AI functions out of joins to reduce the number of calls to the model, which saves costs and improves performance.
Model endpoint and parameter tuning: BigQuery tunes the model endpoint and model parameters to improve both result quality and consistency across query runs.
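As a sketch of how these pieces compose, the query below combines AI.IF in a WHERE clause with AI.CLASSIFY in a GROUP BY, reusing the BBC public dataset and connection from the examples above. The filter prompt is illustrative, so adjust it to your use case.

SELECT
  AI.CLASSIFY(
    body,
    categories => ['tech', 'sport', 'business', 'politics', 'entertainment'],
    connection_id => 'us.test_connection') AS category,
  COUNT(*) AS num_articles
FROM `bigquery-public-data.bbc_news.fulltext`
-- Semantic filter applied before grouping; illustrative prompt
WHERE AI.IF(("This article discusses artificial intelligence: ", body),
            connection_id => 'us.test_connection')
GROUP BY category
ORDER BY num_articles DESC;

Because the query planner evaluates non-AI filters first, adding ordinary SQL predicates alongside the AI functions helps keep model calls, and therefore costs, down.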
Get started
The new managed AI functions — AI.IF(), AI.SCORE() and AI.CLASSIFY() — complement the existing general-purpose Gemini inference functions such as AI.GENERATE in BigQuery. In addition to the optimizations discussed above, you can expect further optimizations and mixed query processing between BigQuery and Gemini for even better price-performance. You can indicate your interest in early access here.
What to use and when: When your use case fits them, start with the managed AI functions, as they are optimized for cost and quality. Use the AI.GENERATE family of functions when you need control over your prompt and input parameters, and want to choose from a wide range of supported models for LLM inference.
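For contrast, here is a minimal, hedged sketch of the general-purpose path with AI.GENERATE, reusing the BBC public dataset and connection from the examples above. The endpoint name and the result output field are assumptions, and the exact signature may differ, so check the AI.GENERATE documentation before running it.

-- Sketch only: the endpoint name and the .result output field are assumptions.
SELECT
  title,
  AI.GENERATE(
    ("Summarize this article in one sentence: ", body),
    connection_id => "us.test_connection",
    endpoint => "gemini-2.5-flash").result AS summary
FROM `bigquery-public-data.bbc_news.fulltext`
WHERE category = "tech"
LIMIT 10;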
To learn more, refer to our documentation. The new managed AI functions are also available in BigQuery DataFrames. See this notebook and the documentation for Python examples.
When a major vulnerability makes headlines, CISOs want to know fast if their organization is impacted and prepared. Getting the correct answer is often a time-consuming and human-intensive process that can take days or weeks, leaving open a dangerous window of unknown exposure.
To help close that gap, today we’re introducing the Emerging Threats Center in Google Security Operations. Available today for licensed customers in Google Security Operations, this new capability can help solve the core, practical problem of scaling detection engineering, and help transform how teams operationalize threat intelligence.
Enabled by Gemini, our detection-engineering agent responds to new threat campaigns detected by Google Threat Intelligence, which includes frontline insights from Mandiant, VirusTotal, and teams across Google. It generates representative events, assesses coverage, and closes detection gaps.
The Emerging Threats Center can help you understand if you are impacted by critical threat campaigns, and provides detection coverage to help ensure you are protected going forward.
Introducing campaign-based prioritization with emerging threats
Protecting against new threats has long been a manual, reactive cycle. It begins with threat analysts poring over reports to identify new campaign activity, which they then translate into indicators of compromise (IOCs) for detection engineers. Next, the engineering team manually authors, tests, and deploys the new detections.
Too often, we hear from customers and security operations teams that this labor-intensive process leaves organizations swimming upstream. It was “hard to derive clear action from threat intelligence data,” according to 59% of IT and cybersecurity leaders surveyed in this year’s Threat Intelligence Benchmark, a commissioned study conducted by Forrester Consulting on behalf of Google Cloud.
By sifting through volumes of threat intelligence data, the Emerging Threats Center can help security teams surface the threat campaigns most relevant to their organization — and take proactive action against them.
Instead of starting in a traditional alert queue, analysts now have a single view of threats that pose the greatest risks to their specific environment. This view includes details on the presence of IOCs in event data and detection rules.
For example, when a new zero-day vulnerability emerges, analysts don’t have to manually cross-reference blog posts with their alert queue. They can immediately see the campaign, the IOCs already contextualized against their own environment, and the specific detection rules to apply. This holistic approach can help them proactively hunt for the most time-sensitive threats before a major breach occurs.
Making all this possible is Gemini in Security Operations, transforming how we engineer detections. By ingesting a continuous stream of frontline threat intelligence, it can automatically test our detection corpus against new threats. When a gap is found, Gemini generates a new, fully-vetted detection rule for an analyst to approve. This systematic, automated workflow can help ensure you are protected from the latest threats.
Our campaign-based approach can provide definitive answers to the two most critical questions a security team faces during a major threat event: How are we affected, and how well are we prepared?
How are we affected?
The first priority is to understand your exposure. The Emerging Threats Center can help you find active and past threats in your environment by correlating campaign intelligence against your data in two ways:
IOC matches: It automatically searches for and prioritizes campaign-related IOCs across the previous 12 months of your security telemetry.
Detection matches: It instantly surfaces hits from curated detection rules that have been mapped directly to the specific threat campaign.
Both matches provide a definitive starting point for your investigative workflow.
Emerging Threats Center Feed View
How are we prepared?
The Emerging Threats Center can also help prove that you are protected moving forward. This capability can provide immediate assurance of your defensive posture by helping you confirm two key facts:
That you have no current or past IOC or detection hits related to the campaign.
That you have the relevant, campaign-specific detections active and ready to stop malicious activity if it appears.
Emerging Threats Center Campaign Detail View
Under the hood: The detection engineering engine
The Emerging Threats Center is built on a resilient, automated system that uses Gemini models and AI agents to drastically shorten the detection engineering lifecycle.
Agentic Detection Engineering Workflow
Here’s how it works.
First, it ingests intelligence. The system automatically ingests detection opportunities from Google Threat Intelligence campaigns, which are sourced from Mandiant’s frontline incident response engagements, our Managed Defense customers, and Google’s unique global visibility. From thousands of raw sample events of adversary activity, Gemini extracts a distinct set of detection opportunities associated with the campaign.
Next, it generates synthetic events. An automated pipeline generates high-fidelity, anonymized synthetic event data that accurately mimics the adversary tactics, techniques, and procedures (TTPs) described in the intelligence, providing a robust dataset for testing.
Then, it tests coverage. The system uses the synthetic data to test our existing detection rule set, providing a rapid, empirical answer to how well we are covered for a new threat.
After that, it accelerates rule creation. When coverage gaps are found, the process uses Gemini to automatically generate and evaluate new rules. Gemini drafts a new detection rule and provides a summary of its logic and expected performance, reducing the time to create a production-ready rule from days to hours.
Finally, it requires human review. The new rule is submitted to a human-in-the-loop security analyst, who vets and verifies it before deployment. AI has helped us transform a best-effort, manual process into a systematic, automated workflow. By tying new detections directly to the intelligence campaigns they cover, we can help you be prepared for the latest threats.
“The real strategic shift is moving past those single indicators to systematically detecting underlying adversary behaviors — that’s how we get ahead and stay ahead. Out-of-box behavioral rules, based on Google’s deep intel visibility, help us get there,” said Ron Smalley, senior vice-president and head of Cybersecurity Operations, Fiserv.