The first models in the new Llama 4 herd of models—Llama 4 Scout 17B and Llama 4 Maverick 17B—are now available fully managed in Amazon Bedrock. You can power your applications with Llama 4 through Amazon Bedrock’s fully managed service via a single API. These advanced multimodal models empower you to build more tailored applications that respond to multiple types of media. Llama 4 offers improved performance at lower cost compared to Llama 3, with expanded language support for global applications. Featuring mixture-of-experts (MoE) architecture, these models deliver efficient multimodal processing for text and image inputs, improved compute efficiency, and enhanced AI safety measures.
According to Meta, the smaller Llama 4 Scout 17B model is the best multimodal model in the world in its class and more powerful than Meta’s Llama 3 models. Scout is a general-purpose model with 17 billion active parameters, 16 experts, and 109 billion total parameters that delivers state-of-the-art performance for its class. Scout dramatically increases the context length from 128K tokens in Llama 3 to an industry-leading 10 million tokens. This enables many practical applications, including multi-document summarization, parsing extensive user activity for personalized tasks, and reasoning over vast code bases. Llama 4 Maverick 17B is a general-purpose model that features 128 experts, 400 billion total parameters, and a 1-million-token context length. It excels at image and text understanding across 12 languages, making it suitable for versatile assistant and chat applications.
Meta’s Llama 4 models are available in Amazon Bedrock in the US East (N. Virginia) and US West (Oregon) AWS Regions. You can also access Llama 4 in US East (Ohio) via cross-region inference. To learn more, read the launch blog, product page, Amazon Bedrock pricing, and documentation. To get started with Llama 4 in Amazon Bedrock, visit the Amazon Bedrock console.
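For a concrete sense of what the single-API access looks like, the sketch below builds a request for Amazon Bedrock’s Converse API targeting Llama 4 Scout. The model ID shown is an assumption for illustration; check the model catalog in your Region for the exact identifier. The actual call requires boto3 and AWS credentials, so it is shown only in comments.

```python
# Sketch: build a Converse API request for Llama 4 Scout on Amazon Bedrock.
# The model ID below is an assumption -- verify it in the Bedrock model catalog.
LLAMA4_SCOUT_MODEL_ID = "us.meta.llama4-scout-17b-instruct-v1:0"

def build_converse_request(prompt: str, model_id: str = LLAMA4_SCOUT_MODEL_ID) -> dict:
    """Return keyword arguments for bedrock_runtime.converse()."""
    return {
        "modelId": model_id,
        "messages": [
            {"role": "user", "content": [{"text": prompt}]},
        ],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.5},
    }

request = build_converse_request("Summarize this quarter's support tickets.")
# With credentials configured, the invocation would look like:
#   import boto3
#   client = boto3.client("bedrock-runtime", region_name="us-east-1")
#   response = client.converse(**request)
#   print(response["output"]["message"]["content"][0]["text"])
```

Because the Converse API is uniform across models, swapping Scout for Maverick would only change the `modelId` string.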
At Google Cloud Next 25, we expanded the availability of Gemini in Looker, including Conversational Analytics, to all Looker platform users, redefining how line-of-business employees can rapidly gain access to trusted data-driven insights through natural language. Due to the complexity inherent in traditional business intelligence products, which require steep learning curves or advanced SQL knowledge, many potential users who could benefit from BI tools simply don’t. But with the convergence of AI and BI, the opportunity to ask questions and chat with your data using natural language breaks down the barriers that have long stood in the way.
Conversational Analytics from Looker is designed to make BI simpler and more approachable, democratizing data access by enabling users to ask data-related questions in plain, everyday language and to go beyond static dashboards that often don’t answer every potential question. In response, users receive accurate, relevant answers derived from Looker Explores or BigQuery tables, without needing to know SQL or specific data tools.
For data analysts, this means fewer support tickets and interruptions, so they can focus on higher-priority work. Business users can now run their own data queries and get answers, enabling trusted self-service and putting the controls in the hands of the users who need the answers most. Now, instead of struggling with field names and date formats, users can simply ask questions like “What were our top-performing products last quarter?” or “Show me the trend of website traffic over the past six months.” Additionally, when using Conversational Analytics with Looker Explores, users can be sure tables are consistently joined and metrics are calculated the same way every time.
With Conversational Analytics, ask questions of your data and get AI-driven insights.
Conversational Analytics in Looker is designed to be simple, helpful, and easy to use, offering:
Trusted, consistent results: Conversational Analytics only uses fields defined by your data experts in LookML. Once the fields are selected, they are deterministically translated to SQL by Looker, the same way every time.
Transparency with “How was this calculated?”: This feature provides a clear, natural language explanation of the underlying query that generated the results, presented in easy-to-understand bullet points.
A deeper dive with follow-up questions: Just like a natural conversation, users can ask follow-up questions to explore the data further. For example, users can ask to filter a result to a specific region, to change the timeframe of the date filter, or to switch from bar graph to an area chart. Conversational Analytics allows for seamless iteration and deeper exploration of the data.
Hidden insights with Gemini: Once the initial query results are displayed, users can click the “Insights” button to ask Gemini to analyze the data results and generate additional insights about patterns and trends they might have otherwise missed.
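The “trusted, consistent results” property above amounts to a deterministic mapping from analyst-curated fields to SQL. The sketch below is a minimal model of that idea, not Looker’s actual compiler; the field names and SQL expressions are hypothetical, and the point is simply that the same field selection always yields the same SQL.

```python
# Minimal model of deterministic field-to-SQL translation (illustrative only;
# not Looker's implementation, and the field definitions are hypothetical).
FIELDS = {
    "orders.total_revenue": "SUM(orders.amount)",
    "orders.created_quarter": "DATE_TRUNC('quarter', orders.created_at)",
}

def to_sql(selected_fields: list, table: str = "orders") -> str:
    """Translate curated fields to SQL -- the same way, every time."""
    exprs = [f"{FIELDS[f]} AS {f.split('.')[1]}" for f in selected_fields]
    return f"SELECT {', '.join(exprs)} FROM {table}"

query = to_sql(["orders.created_quarter", "orders.total_revenue"])
# Identical selections always produce byte-identical SQL:
assert query == to_sql(["orders.created_quarter", "orders.total_revenue"])
```

In this model, the language model’s job ends at choosing which curated fields answer the question; the translation step itself has no randomness to it.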
Empowering data analysts and developers
With the release of Conversational Analytics, our goal is for it to benefit data analysts and developers in addition to line-of-business teams. The Conversational Analytics agent lets data analysts provide crucial context and instructions to Gemini, enhancing its ability to answer business users’ questions effectively and empowering analysts to map business jargon to specific fields, specify the best fields for filtering, and define custom calculations.
Analysts can further curate the experience by creating agents for specific use cases. When business users select an agent, they can feel confident that they are interacting with the right data source.
As announced at Next 25, the Conversational Analytics API will power Conversational Analytics across multiple first-party Google Cloud experiences and third-party products, including customer applications, chat apps, Agentspace, and BigQuery, bringing the benefits of natural language queries to the applications where you work every day. Later this year we’ll also bring Conversational Analytics into Looker Dashboards, allowing users to chat with their data in that familiar interface, whether inside Looker or embedded in other applications. Also, if you’re interested in solving even more complex problems while chatting with your data, you can try our new Code Interpreter (available in preview), which uses Python rather than SQL to perform advanced analysis like cohort analysis and forecasting. With the Conversational Analytics Code Interpreter, you can tackle data science tasks without learning advanced coding or statistical methods. Sign up for access here.
Expanding the reach of AI for BI
Looker Conversational Analytics is a step forward in making BI accessible to a wider audience. By removing the technical barriers and providing an intuitive, conversational interface, Looker is empowering more business users to leverage data in their daily routines. With Conversational Analytics available directly in Looker, organizations can now make data-driven insights a reality for everyone. Start using Conversational Analytics today in your Looker instance.
Written by: Casey Charrier, James Sadowski, Clement Lecigne, Vlad Stolyarov
Executive Summary
Google Threat Intelligence Group (GTIG) tracked 75 zero-day vulnerabilities exploited in the wild in 2024, a decrease from the number we identified in 2023 (98 vulnerabilities), but still an increase from 2022 (63 vulnerabilities). We divided the reviewed vulnerabilities into two main categories: end-user platforms and products (e.g., mobile devices, operating systems, and browsers) and enterprise-focused technologies, such as security software and appliances.
Vendors continue to drive improvements that make some zero-day exploitation harder, demonstrated by both dwindling numbers across multiple categories and reduced observed attacks against previously popular targets. At the same time, commercial surveillance vendors (CSVs) appear to be increasing their operational security practices, potentially leading to decreased attribution and detection.
We see zero-day exploitation targeting a greater number and wider variety of enterprise-specific technologies, although these technologies still remain a smaller proportion of overall exploitation when compared to end-user technologies. While the historic focus on the exploitation of popular end-user technologies and their users continues, the shift toward increased targeting of enterprise-focused products will require a wider and more diverse set of vendors to increase proactive security measures in order to reduce future zero-day exploitation attempts.
Scope
This report describes what Google Threat Intelligence Group (GTIG) knows about zero-day exploitation in 2024. We discuss how targeted vendors and exploited products drive trends that reflect threat actor goals and shifting exploitation approaches, and then closely examine several examples of zero-day exploitation from 2024 that demonstrate how actors use both historic and novel techniques to exploit vulnerabilities in targeted products. The following content leverages original research conducted by GTIG, combined with breach investigation findings and reporting from reliable open sources, though we cannot independently confirm the reports of every source. Research in this space is dynamic and the numbers may adjust due to the ongoing discovery of past incidents through digital forensic investigations. The numbers presented here reflect our best understanding of current data.
GTIG defines a zero-day as a vulnerability that was maliciously exploited in the wild before a patch was made publicly available. GTIG acknowledges that the trends observed and discussed in this report are based on detected and disclosed zero-days. Our analysis represents exploitation tracked by GTIG but may not reflect all zero-day exploitation.
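GTIG’s definition can be stated mechanically: an exploited vulnerability counts as a zero-day if malicious exploitation began before a patch was publicly available, and as an n-day otherwise. A trivial sketch of that rule:

```python
from datetime import date

def classify(first_exploited: date, patch_released) -> str:
    """Zero-day if maliciously exploited before a public patch existed."""
    if patch_released is None or first_exploited < patch_released:
        return "zero-day"
    return "n-day"

# Exploited two weeks before the patch shipped -> zero-day.
print(classify(date(2024, 3, 1), date(2024, 3, 15)))   # zero-day
# Exploited after the patch was available -> n-day.
print(classify(date(2024, 4, 1), date(2024, 3, 15)))   # n-day
```

In practice the hard part is establishing `first_exploited` at all, which is why the counts in this report shift as forensic investigations surface older incidents.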
Key Takeaways
Zero-day exploitation continues to grow gradually. The 75 zero-day vulnerabilities exploited in 2024 follow a pattern that has emerged over the past four years. While individual year counts have fluctuated, the average trendline indicates that the rate of zero-day exploitation continues to grow at a slow but steady pace.
Enterprise-focused technology targeting continues to expand. GTIG continued to observe an increase in adversary exploitation of enterprise-specific technologies throughout 2024. In 2023, 37% of zero-day vulnerabilities targeted enterprise products. This jumped to 44% in 2024, primarily fueled by the increased exploitation of security and networking software and appliances.
Attackers are increasing their focus on security and networking products. Zero-day vulnerabilities in security software and appliances were a high-value target in 2024. We identified 20 security and networking vulnerabilities, which was over 60% of all zero-day exploitation of enterprise technologies. Exploitation of these products, compared to end-user technologies, can more effectively and efficiently lead to extensive system and network compromises, and we anticipate adversaries will continue to increase their focus on these technologies.
Vendors are changing the game. Vendor investments in exploit mitigations are having a clear impact on where threat actors are able to find success. We are seeing notable decreases in zero-day exploitation of some historically popular targets such as browsers and mobile operating systems.
Actors conducting cyber espionage still lead attributed zero-day exploitation. Between government-backed groups and customers of commercial surveillance vendors (CSVs), actors conducting cyber espionage operations accounted for over 50% of the vulnerabilities we could attribute in 2024. People’s Republic of China (PRC)-backed groups exploited five zero-days, and customers of CSVs exploited eight, continuing their collective leading role in zero-day exploitation. For the first time, we attributed as many 2024 zero-days (five) to North Korean actors mixing espionage and financially motivated operations as we did to PRC-backed groups.
Looking at the Numbers
GTIG tracked 75 exploited-in-the-wild zero-day vulnerabilities that were disclosed in 2024. This number appears consistent with a consolidating upward trend that we have observed over the last four years: after an initial spike in 2021, yearly counts have fluctuated but have not returned to the lower numbers seen prior to 2021.
While there are multiple factors involved in discovery of zero-day exploitation, we note that continued improvement and ubiquity of detection capabilities along with more frequent public disclosures have both resulted in larger numbers of detected zero-day exploitation compared to what was observed prior to 2021.
Figure 1: Zero-days by year
In 2024, 44% (33 vulnerabilities) of tracked zero-days affected enterprise technologies, a higher share than in any previous year and a continuation of the growth we observed last year. The remaining 42 zero-day vulnerabilities targeted end-user technologies.
Enterprise Exploitation Expands in 2024 as Browser and Mobile Exploitation Drops
End-User Platforms and Products
In 2024, 56% (42) of the tracked zero-days targeted end-user platforms and products, which we define as devices and software that individuals use in their day-to-day life, although we acknowledge that enterprises also often use these. All of the vulnerabilities in this category were used to exploit browsers, mobile devices, and desktop operating systems.
Zero-day exploitation of browsers and mobile devices fell drastically, decreasing by about a third for browsers and by about half for mobile devices compared to what we observed last year (17 to 11 for browsers, and 17 to 9 for mobile).
Chrome was the primary focus of browser zero-day exploitation in 2024, likely reflecting the browser’s popularity among billions of users.
Exploit chains made up of multiple zero-day vulnerabilities continue to be almost exclusively (~90%) used to target mobile devices.
Third-party components continue to be exploited in Android devices, a trend we discussed in last year’s analysis. In 2023, five of the seven zero-days exploited in Android devices were flaws in third-party components. In 2024, three of the seven zero-days exploited in Android were found in third-party components. Third-party components are likely perceived as lucrative targets for exploit development since they can enable attackers to compromise many different makes and models of devices across the Android ecosystem.
2024 saw an increase in the total number of zero-day vulnerabilities affecting desktop operating systems (OSs) (22 in 2024 vs. 17 in 2023), indicating that OSs continue to be a strikingly large target. The proportional increase was even greater, with OS vulnerabilities making up just 17% of total zero-day exploitation in 2023, compared to nearly 30% in 2024.
Microsoft Windows exploitation continued to increase, climbing from 13 zero-days in 2022, to 16 in 2023, to 22 in 2024. As long as Windows remains a popular choice both in homes and professional settings, we expect that it will remain a popular target for both zero-day and n-day (i.e. a vulnerability exploited after its patch has been released) exploitation by threat actors.
Figure 2: Zero-days in end-user products in 2023 and 2024
Enterprise Technologies
In 2024, GTIG identified the exploitation of 33 zero-days in enterprise software and appliances. We consider enterprise products to include those mainly utilized by businesses or in a business environment. While the absolute number is slightly lower than what we saw in 2023 (36 vulnerabilities), the proportion of enterprise-focused vulnerabilities has risen from 37% in 2023 to 44% in 2024. Twenty of the 33 enterprise-focused zero-days targeted security and network products, a slight increase from the 18 observed in this category for 2023 and a roughly nine-percentage-point increase as a proportion of the year’s total zero-days.
The variety of targeted enterprise products continues to expand across security and networking products, with notable targets in 2024 including Ivanti Cloud Services Appliance, Palo Alto Networks PAN-OS, Cisco Adaptive Security Appliance, and Ivanti Connect Secure VPN. Security and network tools and devices are designed to connect widespread systems and devices with high permissions required to manage the products and their services, making them highly valuable targets for threat actors seeking efficient access into enterprise networks. Endpoint detection and response (EDR) tools are not usually equipped to work on these products, limiting available capabilities to monitor them. Additionally, exploit chains are not generally required to exploit these systems, giving extensive power to individual vulnerabilities that can single-handedly achieve remote code execution or privilege escalation.
Over the last several years, we have also tracked a general increase in the number of enterprise vendors targeted. In 2024, we identified 18 unique enterprise vendors targeted by zero-days. While this is slightly fewer than the 22 observed in 2023, it remains higher than every prior year’s count. The proportion of enterprise vendors also remained strikingly high: the 18 unique enterprise vendors were among only 20 total vendors targeted in 2024, comparable to the 22 enterprise vendors out of 23 total in 2023.
Figure 3: Number of unique enterprise vendors targeted
The proportion of zero-days exploited in enterprise devices in 2024 reinforces a trend that suggests that attackers are intentionally targeting products that can provide expansive access and fewer opportunities for detection.
Exploitation by Vendor
The vendors affected by multiple 2024 zero-day vulnerabilities generally fell into two categories: big tech (Microsoft, Google, and Apple) and vendors who supply security and network-focused products. As expected, big tech took the top two spots, with Microsoft at 26 and Google at 11. Apple slid to the fourth most frequently exploited vendor this year, with detected exploitation of only five zero-days. Ivanti was third most frequently targeted with seven zero-days, reflecting increased threat actor focus on networking and security products. Ivanti’s placement in the top three reflects a new and crucial change, where a security vendor was targeted more frequently than a popular end-user technology-focused vendor. We discuss in a following section how PRC-backed exploitation has focused heavily on security and network technologies, one of the contributing factors to the rise in Ivanti targeting.
We note that exploitation is not necessarily reflective of a vendor’s security posture or software development processes, as targeted vendors and products depend on threat actor objectives and capabilities.
Types of Exploited Vulnerabilities
Threat actors continued to utilize zero-day vulnerabilities primarily for the purposes of gaining remote code execution and elevating privileges. In 2024, these consequences accounted for over half (42) of total tracked zero-day exploitation.
Three vulnerability types were most frequently exploited. Use-after-free vulnerabilities have maintained their prevalence over many years, with eight in 2024, and are found in a variety of targets including hardware, low-level software, operating systems, and browsers. Command injection (also at eight, including OS command injection) and cross-site scripting (XSS) (six) vulnerabilities were also frequently exploited in 2024. Both code injection and command injection vulnerabilities were observed almost entirely targeting networking and security software and appliances, displaying the intent to use these vulnerabilities in order to gain control over larger systems and networks. The XSS vulnerabilities were used to target a variety of products, including mail servers, enterprise software, browsers, and an OS.
All three of these vulnerability types stem from software development errors, and preventing them requires meeting higher programming standards. Safe, preventative coding practices, including but not limited to code reviews, updating legacy codebases, and using up-to-date libraries, can appear to slow production timelines. However, the patches that follow exploitation demonstrate that these exposures could have been prevented in the first place with proper intention and effort, ultimately reducing the overall effort needed to maintain a product or codebase.
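As an illustration of the kind of development error behind the command injection category above, consider building a command as a shell-parsed string versus passing an argument vector; the safer form removes the injection point entirely. This is a generic Python sketch, not code from any of the affected products.

```python
# Generic illustration of a command injection pattern and its fix
# (not taken from any product discussed in this report).

def build_unsafe(host: str) -> str:
    # Vulnerable pattern: this string would be handed to a shell
    # (e.g., subprocess.run(..., shell=True)), so an input like
    # "8.8.8.8; rm -rf /" smuggles in a second command.
    return f"ping -c 1 {host}"

def build_safe(host: str) -> list:
    # Safe pattern: an argv list bypasses shell parsing entirely;
    # the attacker-controlled value stays one literal argument.
    return ["ping", "-c", "1", host]

malicious = "8.8.8.8; rm -rf /"
print(build_unsafe(malicious))  # the shell would see two commands
print(build_safe(malicious))    # one argv entry, no injection point
# A real invocation would be: subprocess.run(build_safe("8.8.8.8"), check=True)
```

The same structural idea, keeping untrusted input as data rather than as part of an interpreted string, is what parameterized queries do for SQL injection and output encoding does for XSS.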
Who Is Driving Exploitation
Figure 4: 2024 attributed zero-day exploitation
Due to the stealthy access zero-day vulnerabilities can provide into victim systems and networks, they remain a highly sought-after capability for threat actors. GTIG tracked a variety of threat actors exploiting zero-days in a variety of products in 2024, consistent with our previous observation that zero-day exploitation has diversified in both the platforms targeted and the actors exploiting them. We attributed the exploitation of 34 zero-day vulnerabilities in 2024, just under half of the total 75 we identified. While the proportion of exploitation that we could attribute to a threat actor dipped slightly from our analysis of 2023 zero-days, it is still significantly higher than the ~30% we attributed in 2022. This reinforces our previous observation that platforms’ investments in exploit mitigations are making zero-days harder to exploit; at the same time, the security community is slowly improving its ability to identify that activity and attribute it to threat actors.
Consistent with trends observed in previous years, we attributed the highest volume of zero-day exploitation to traditional espionage actors, nearly 53% (18 vulnerabilities) of total attributed exploitation. Of these 18, we attributed the exploitation of 10 zero-days to likely nation-state-sponsored threat groups and eight to CSVs.
CSVs Continue to Increase Access to Zero-Day Exploitation
While we still expect government-backed actors to continue their historic role as major players in zero-day exploitation, CSVs now contribute a significant volume of zero-day exploitation. Although the total count and proportion of zero-days attributed to CSVs declined from 2023 to 2024, likely in part due to their increased emphasis on operational security practices, the 2024 count is still substantially higher than the count from 2022 and years prior. Their role further demonstrates the expansion of the landscape and the increased access to zero-day exploitation that these vendors now provide other actors.
In 2024, we observed multiple exploitation chains using zero-days developed by forensic vendors that required physical access to a device (CVE-2024-53104, CVE-2024-32896, CVE-2024-29745, CVE-2024-29748). These bugs allow attackers to unlock the targeted mobile device with custom malicious USB devices. For instance, GTIG and Amnesty International’s Security Lab discovered and reported on CVE-2024-53104 in exploit chains developed by forensic company Cellebrite and used against the Android phone of a Serbian student and activist by Serbian security services. GTIG worked with Android to patch these vulnerabilities in the February 2025 Android security bulletin.
PRC-Backed Exploitation Remains Persistent
PRC threat groups remained the most consistent government-backed espionage developer and user of zero-days in 2024. We attributed nearly 30% (five vulnerabilities) of traditional espionage zero-day exploitation to PRC groups, including the exploitation of zero-day vulnerabilities in Ivanti appliances by UNC5221 (CVE-2023-46805 and CVE-2024-21887), which GTIG reported on extensively. During this campaign, UNC5221 chained multiple zero-day vulnerabilities together, highlighting these actors’ willingness to expend resources to achieve their apparent objectives. The exploitation of five vulnerabilities that we attributed to PRC groups exclusively focused on security and networking technologies. This continues a trend that we have observed from PRC groups for several years across all their operations, not just in zero-day exploitation.
North Korean Actors Mix Financially Motivated and Espionage Zero-Day Exploitation
For the first time since we began tracking zero-day exploitation in 2012, in 2024, North Korean state actors tied for the highest total number of attributed zero-days exploited (five vulnerabilities) with PRC-backed groups. North Korean groups are notorious for their overlaps in targeting scope; tactics, techniques, and procedures (TTPs); and tooling that demonstrate how various intrusion sets support the operations of other activity clusters and mix traditional espionage operations with attempts to fund the regime. This focus on zero-day exploitation in 2024 marks a significant increase in these actors’ focus on this capability. North Korean threat actors exploited two zero-day vulnerabilities in Chrome as well as three vulnerabilities in Windows products.
In October 2024, it was publicly reported that APT37 exploited a zero-day vulnerability in Microsoft products. The threat actors reportedly compromised an advertiser to serve malicious advertisements to South Korean users that would trigger zero-click execution of CVE-2024-38178 to deliver malware. Although we have not yet corroborated the group’s exploitation of CVE-2024-38178 as reported, we have observed APT37 previously exploit Internet Explorer zero-days to enable malware distribution.
North Korean threat actors also reportedly exploited a zero-day vulnerability in the Windows AppLocker driver (CVE-2024-21338) to gain kernel-level access and turn off security tools. This technique abuses legitimate, trusted, but vulnerable drivers that are already installed in order to bypass kernel-level protections, providing threat actors an effective means of evading or disabling EDR systems.
Non-State Exploitation
In 2024, we linked almost 15% (five vulnerabilities) of attributed zero-days to non-state financially motivated groups, including a suspected FIN11 cluster’s exploitation of a zero-day vulnerability in multiple Cleo managed file transfer products (CVE-2024-55956) to conduct data theft extortion. This marks the third year of the last four (2021, 2023, and 2024) in which FIN11 or an associated cluster has exploited a zero-day vulnerability in its operations, almost exclusively in file transfer products. Despite the otherwise varied cast of financially motivated threat actors exploiting zero-days, FIN11 has consistently dedicated the resources and demonstrated the expertise to identify or acquire, and then exploit, these vulnerabilities from multiple different vendors.
We attributed an additional two zero-days in 2024 to non-state groups with mixed motivations, conducting financially motivated activity in some operations but espionage in others. Two vulnerabilities (CVE-2024-9680 and CVE-2024-49039, detailed in the next section) were exploited as zero-days by CIGAR (also tracked as UNC4895 or publicly reported as RomCom), a group that has conducted financially motivated operations alongside espionage likely on behalf of the Russian government, based partly on observed highly specific targeting focused on Ukrainian and European government and defense organizations.
A Zero-Day Spotlight on CVE-2024-44308, CVE-2024-44309, and CVE-2024-49039: A look into zero-days discovered by GTIG researchers
Spotlight #1: Stealing Cookies with Webkit
On Nov. 12, 2024, GTIG detected a potentially malicious piece of JavaScript code injected on https://online.da.mfa.gov[.]ua/wp-content/plugins/contact-form-7/includes/js/index.js?ver=5.4. The JavaScript was loaded directly from the main page of the website of the Diplomatic Academy of Ukraine, online.da.mfa.gov.ua. Upon further analysis, we discovered that the JavaScript code was a WebKit exploit chain specifically targeting macOS users running on Intel hardware.
The exploit consisted of a WebKit remote code execution (RCE) vulnerability (CVE-2024-44308), leveraging a logical Just-In-Time (JIT) error, succeeded by a data isolation bypass (CVE-2024-44309). The RCE vulnerability employed simple and old JavaScriptCore exploitation techniques that are publicly documented, namely:
Setting up addrof/fakeobj primitives using the vulnerability
Leaking StructureID
Building a fake TypedArray to gain arbitrary read/write
JIT compiling a function to get a RWX memory mapping where a shellcode can be written and executed
The shellcode traversed a set of pointers and vtables to find and call WebCookieJar::cookieRequestHeaderFieldValue with an empty firstPartyForCookies parameter, allowing the threat actor to access cookies of any arbitrary website passed as the third parameter to cookieRequestHeaderFieldValue.
The end goal of the exploit is to collect users’ cookies in order to access login.microsoftonline.com. The cookie values were directly appended in a GET request sent to https://online.da.mfa.gov[.]ua/gotcookie?.
This is not the first time we have seen threat actors stay within the browser to collect users’ credentials. In March 2021, a targeted campaign used a zero-day against WebKit on iOS to turn off Same-Origin-Policy protections in order to collect authentication cookies from several popular websites. In August 2024, a watering hole on various Mongolian websites used Chrome and Safari n-day exploits to exfiltrate users’ credentials.
While it is unclear why this abbreviated approach was taken as opposed to deploying full-chain exploits, we identified several possibilities, including:
The threat actor was not able to get all the pieces to have a full chain exploit. In this case, the exploit likely targeted only the MacIntel platform because they did not have a Pointer Authentication Code (PAC) bypass to target users using Apple Silicon devices. A PAC bypass is required to make arbitrary calls for their data isolation bypass.
A full-chain exploit was too expensive, especially for a chain meant to be used at relatively large scale. This is particularly true for watering hole attacks, where the chances of detection are high and the zero-day vulnerability and exploit might quickly be burned.
Stealing credentials is sufficient for their operations and the information they want to collect.
This trend is also observed beyond the browser environment, where third-party mobile applications (e.g., messaging applications) are targeted and threat actors steal information accessible only within the targeted application.
Spotlight #2: CIGAR Local Privilege Escalations
CIGAR’s Browser Exploit Chain
In early October 2024, GTIG independently discovered a fully weaponized exploit chain for Firefox and Tor browsers employed by CIGAR. CIGAR is a dual financial- and espionage-motivated threat group assessed to be running both types of campaigns in parallel, often simultaneously. In 2023, we observed CIGAR utilizing an exploit chain in Microsoft Office (CVE-2023-36884) as part of an espionage campaign targeting attendees of the Ukrainian World Congress and NATO Summit; however, in an October 2024 campaign, the usage of the Firefox exploit appears to be more in line with the group’s financial motives.
Our analysis, which broadly matched ESET’s findings, indicated that the browser RCE used is a use-after-free vulnerability in the Animation timeline. The vulnerability, known as CVE-2024-9680, was an n-day at the time of discovery by GTIG.
Upon further analysis, we identified that the embedded sandbox escape, which was also used as a local privilege escalation to NT/SYSTEM, was exploiting a newfound vulnerability. We reported this vulnerability to Mozilla and Microsoft, and it was later assigned CVE-2024-49039.
Double-Down on Privilege Escalation: from Low Integrity to SYSTEM
Firefox uses security sandboxing to introduce an additional security boundary and mitigate the effects of malicious code achieving code execution in content processes. Therefore, to achieve code execution on the host, an additional sandbox escape is required.
The in-the-wild CVE-2024-49039 exploit, which contained the PDB string C:\etalon\PocLowIL\@Output\PocLowIL.pdb, could achieve both a sandbox escape and privilege escalation. The exploit abused two distinct issues to escalate privileges from Low Integrity Level (IL) to SYSTEM. The first allowed it to access the WPTaskScheduler RPC interface (UUID: {33d84484-3626-47ee-8c6f-e7e98b113be1}), normally not reachable from a sandboxed Firefox content process, via the less-secure endpoint ubpmtaskhostchannel created in ubpm.dll. The second stems from insufficient Access Control List (ACL) checks in the WPTaskScheduler.dll RPC server, which allowed an unprivileged user to create and execute scheduled tasks as SYSTEM.
1. Securing the endpoint: In WPTaskScheduler::TsiRegisterRPCInterface, the third argument to RpcServerUseProtseq is a non-NULL security descriptor (SD).
This SD should prevent the Firefox “Content” process from accessing the WPTaskScheduler RPC endpoint. However, a lesser-known “feature” of RPC is that RPC endpoints are multiplexed, meaning that if there is a less secure endpoint in the same process, it is possible to access an interface indirectly from another endpoint (with a more permissive ACL). This is what the exploit does: instead of accessing RPC using the ALPC port that WPTaskScheduler.dll sets up, it resolves the interface indirectly via ubpmtaskhostchannel. ubpm.dll uses a NULL security descriptor when initializing the interface, instead relying on the UbpmpTaskHostChannelInterfaceSecurityCb callback for ACL checks:
Figure 5: NULL security descriptor used when creating “ubpmtaskhostchannel” RPC endpoint in ubpm.dll::UbpmEnableTaskHostChannelRpcInterface, exposing a less secure endpoint for WPTaskScheduler interface
2. Securing the interface: In the same WPTaskScheduler::TsiRegisterRPCInterface function, an overly permissive security descriptor was used as an argument to RpcServerRegisterIf3. As we can see on the listing below, the CVE-2024-49039 patch addressed this by introducing a more locked-down SD.
Figure 6: Patched WPTaskScheduler.dll introduces a more restrictive security descriptor when registering an RPC interface
3. Ad-hoc Security: Implemented in WPTaskScheduler.dll::CallerHasAccess and called prior to enabling or executing any scheduled task. The function performs checks on whether the calling user is attempting to execute a task created by them or one they should be able to access but does not perform any additional checks to prevent calls originating from an unprivileged user.
CVE-2024-49039 addresses the issue by applying a more restrictive ACL to the interface; however, the issue with the less secure endpoint described in “1. Securing the endpoint” remains, and a restricted token process is still able to access the endpoint.
Unidentified Actor Using the Same Exploits
In addition to CIGAR, we discovered another, likely financially motivated, group using the exact same exploits (albeit with a different payload) while CVE-2024-49039 was still a zero-day. This actor utilized a watering hole on a legitimate, compromised cryptocurrency news website that redirected to an attacker-controlled domain hosting the same CVE-2024-9680 and CVE-2024-49039 exploits.
Outlook and Implications
Defending against zero-day exploitation continues to be a race of strategy and prioritization. Not only are zero-day vulnerabilities becoming easier to procure, but attackers finding use in new types of technology may strain less experienced vendors. While organizations have historically been left to prioritize patching processes based on personal or organizational threats and attack surfaces, broader trends can inform a more specific approach alongside lessons learned from major vendors’ mitigation efforts.
We expect zero-day vulnerabilities to maintain their allure to threat actors as opportunities for stealth, persistence, and detection evasion. While we observed trends regarding improved vendor security posture and decreasing numbers around certain historically popular products—particularly mobile and browsers—we anticipate that zero-day exploitation will continue to rise steadily. Given the ubiquity of operating systems and browsers in daily use, big tech vendors are consistently high-interest targets, and we expect this to continue. Phones and browsers will almost certainly remain popular targets, although enterprise software and appliances will likely see a continued rise in zero-day exploitation. Big tech companies have been victims of zero-day exploitation before and will continue to be targeted. This experience, in addition to the resources required to build more secure products and detect vulnerabilities in responsible manners, permits larger companies to approach zero-days as a more manageable problem.
For newly targeted vendors and those whose products fall within the growing set of targeted enterprise technologies, security practices and procedures should evolve to consider how successful exploitation of these products could bypass typical protection mechanisms. Preventing successful exploitation will rely heavily on these vendors’ abilities to enforce proper and safe coding practices. We continue to see the same types of vulnerabilities exploited over time, indicating patterns in what weaknesses attackers seek out and find most beneficial to exploit. Continued existence and exploitation of similar issues makes zero-day exploitation easier; threat actors know what to look for and where exploitable weaknesses are most pervasive.
Vendors should account for this shift in threat activity and address gaps in configurations and architectural decisions that could permit exploitation of a single product to cause irreparable damage. This is especially true for highly valuable tools with administrator access and/or widespread reach across systems and networks. Best practices continue to represent a minimum threshold of what security standards an architecture should demonstrate, including zero-trust fundamentals such as least-privilege access and network segmentation. Continuous monitoring should occur where possible in order to restrict and end unauthorized access as swiftly as possible, and vendors will need to account for EDR capabilities for technologies that currently lack them (e.g., many security and networking products). GTIG recommends acute threat surface awareness and respective due diligence in order to defend against today’s zero-day threat landscape. Zero-day exploitation will ultimately be dictated by vendors’ decisions and ability to counter threat actors’ objectives and pursuits.
AWS Amplify introduces two key improvements to its backend tooling: streamlined deployment output using the AWS CDK Toolkit and a new notice system for local development. These updates optimize how developers receive deployment status information and important messages directly in their terminal when executing Amplify commands.
These enhancements improve the development experience by focusing on essential information while proactively surfacing critical notices. Frontend developers can now focus on relevant deployment information without the distraction of underlying infrastructure details. The notice system, similar to CDK’s approach, delivers important messages about potential issues, compatibility concerns, and other noteworthy items related to Amplify backends, enabling developers to address problems early in the development process.
These features are available in all AWS Regions where AWS Amplify is supported.
AWS Certificate Manager (ACM) announces automated public TLS certificates for Amazon CloudFront. CloudFront customers can now simply check a box to receive required public certificates to enable TLS when creating new CloudFront content delivery applications. ACM and CloudFront work together to automatically request, issue and associate the required public certificates with CloudFront. ACM will also automatically renew these certificates as long as the certificate is in use and traffic for the certificate domain is routed to CloudFront. Previously, to set up a similar secure CloudFront distribution, customers had to request a public certificate through ACM, validate the domain, and then associate the issued certificate with the CloudFront distribution. This option remains available to customers.
ACM uses a domain validation method commonly referred to as HTTP, or file-based validation, to both issue and renew these certificates. Domain validation ensures that ACM issues the certificates only to domain users who are authorized to acquire a certificate for the domain. Network and certificate administrators can still use ACM to view and monitor these certificates. While ACM automatically manages the certificate lifecycle, administrators can use ACM’s Certificate lifecycle CloudWatch events to monitor certificate updates and publish the information to a centralized security information and event management (SIEM) and/or enterprise resource planning (ERP) solution.
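As a sketch of how such monitoring might be wired up, the following builds an EventBridge event pattern for ACM certificate lifecycle events; the detail-type string follows ACM's documented event types, but verify it against your account before relying on it:

```python
import json

# EventBridge pattern matching ACM's certificate-expiration events so they
# can be routed to a SIEM. The detail-type value is taken from ACM's
# documented event types; confirm it in your Region before use.
expiration_pattern = {
    "source": ["aws.acm"],
    "detail-type": ["ACM Certificate Approaching Expiration"],
}

# With boto3, this pattern would back an EventBridge rule, e.g.:
#   events = boto3.client("events")
#   events.put_rule(Name="acm-cert-expiry",
#                   EventPattern=json.dumps(expiration_pattern))
print(json.dumps(expiration_pattern))
```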
To learn more about this feature, please refer to our documentation. You can learn more about ACM here and CloudFront here.
Today, AWS announced support for VPC endpoints in Amazon Route 53 Profiles, allowing you to create, manage, and share private hosted zones (PHZs) for interface VPC endpoints across multiple VPCs and AWS accounts within your organization. With this enhancement, Amazon Route 53 Profiles simplifies the management of VPC endpoints by streamlining the process of creating and associating interface VPC endpoint managed PHZs with VPCs and AWS accounts, and without requiring you to manually associate them.
Route 53 Profiles makes it easy for you to create one or more configurations for VPC-related DNS settings, such as private hosted zones and Route 53 Resolver rules, and share them across VPCs and AWS accounts. The new capability helps you centralize the management of PHZs associated with interface VPC endpoints, reducing administrative overhead and minimizing the risk of configuration errors. This feature eliminates the need for creation and manual association of PHZs for VPC endpoints with individual VPCs and accounts, saving time and effort for network administrators. Additionally, it improves security and consistency by providing a centralized approach to managing DNS resolution for VPC endpoints across an organization’s AWS infrastructure.
Route 53 Profiles support for VPC endpoints is now available in the AWS Regions mentioned here. To learn more about the capability and how it can benefit your organization, visit the Amazon Route 53 documentation. You can get started by accessing the Amazon Route 53 console in your AWS Management Console or through AWS CLI. To learn more about pricing of Route 53 Profiles, see here.
Writer’s enterprise-grade foundation models, Palmyra X5 and X4, are now available as fully managed, serverless models in Amazon Bedrock. AWS is the first cloud provider to offer fully managed models from Writer, allowing organizations to leverage enterprise AI capabilities with serverless scalability and cost optimization.
Palmyra X5 has a one million token context window, while Palmyra X4 features a 128,000 token context window, both designed for sophisticated business applications. These models, top-ranked on Stanford’s HELM benchmark, excel at complex tasks including advanced reasoning, multi-step tool-calling, and built-in RAG (Retrieval-Augmented Generation). Both models support multiple languages including English, Spanish, French, German, and Chinese, making them ideal for global enterprise deployments. Organizations can use Palmyra models to automate sophisticated workflows across various industries—financial services teams can analyze extensive market research and regulatory documents, healthcare providers can process medical documentation and analyze research papers, and technology companies can generate and validate code at scale. With Amazon Bedrock, these capabilities are available with automatic resource scaling and on-demand pricing.
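As a hedged sketch, a request to Palmyra X5 through the Bedrock Converse API might be shaped as follows; the model ID below is an assumption for illustration, so look up the exact identifier in the Bedrock console for your Region:

```python
# Illustrative Converse API request body for a Writer Palmyra model.
# The modelId is a placeholder, not a confirmed identifier.
request = {
    "modelId": "writer.palmyra-x5-v1:0",  # assumed model ID
    "messages": [
        {"role": "user",
         "content": [{"text": "Summarize the attached market research."}]}
    ],
    "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
}

# With boto3 this would be sent as:
#   client = boto3.client("bedrock-runtime")
#   response = client.converse(**request)
```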
Today, AWS announced a new AWS Client VPN feature that monitors device networking routes, prevents VPN traffic leaks, and strengthens remote access security. The feature continuously tracks your users’ device routing tables to ensure outbound traffic flows through the VPN tunnel according to your configured settings. If the feature detects any modified networking route settings, it automatically restores the routes to your original configuration.
AWS Client VPN allows admins to configure routes on users’ devices to route traffic through the VPN. For example, an admin might configure end users’ devices to connect to the 10.0.0.0/24 network using VPN connectivity while the rest of the traffic breaks out locally from the device. However, connected devices could deviate from the organization’s configurations, causing VPN leaks. For example, even if you configure traffic to the 10.0.0.0/24 network to go via VPN, users or other clients running on the device can modify settings and bypass VPN for this traffic. With this feature enabled, our VPN client will continuously monitor routes and automatically correct deviations by repairing routes back to the original configuration. This feature ensures admin’s configuration is consistently applied to end users, maintaining the connection integrity of your organization.
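The monitor-and-repair idea can be illustrated with a toy sketch (this is not AWS's actual client implementation): compare a device's live routes against the admin-configured set and restore any route that drifted.

```python
# Toy illustration of route monitoring and repair: any admin-configured
# route that a user or local client has modified is restored.
def repair_routes(configured: dict, live: dict) -> dict:
    """Return the corrected routing table, restoring configured routes."""
    repaired = dict(live)
    for cidr, gateway in configured.items():
        if repaired.get(cidr) != gateway:
            repaired[cidr] = gateway  # put the route back through the VPN
    return repaired

configured = {"10.0.0.0/24": "vpn-tunnel"}
live = {"10.0.0.0/24": "local-gateway", "0.0.0.0/0": "local-gateway"}
print(repair_routes(configured, live))
# {'10.0.0.0/24': 'vpn-tunnel', '0.0.0.0/0': 'local-gateway'}
```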
This feature is available in all regions where AWS Client VPN is generally available with no additional cost.
Amazon Web Services is announcing the general availability of next generation high performance Storage Optimized Amazon EC2 I7i instances. Powered by 3rd generation AWS Nitro SSDs, I7i instances offer up to 45TB of NVMe storage with up to 50% better real-time storage performance, up to 50% lower storage I/O latency, and up to 60% lower storage I/O latency variability compared to I4i instances. Powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, I7i instances offer the best compute and storage performance for x86-based storage optimized instances in Amazon EC2, delivering up to 23% better compute performance and more than 10% better price performance over previous generation I4i instances.
I7i instances are ideal for I/O-intensive and latency-sensitive workloads such as transactional databases, real-time and NoSQL databases, real-time analytics, AI/ML pre-processing for training, and indexing and search engines that require high random IOPS performance with real-time latency to access small to medium size (multi-TB) datasets. Additionally, the torn write prevention feature enables customers to eliminate database performance bottlenecks.
I7i instances are available in eleven sizes – nine virtual sizes up to 48xlarge and two bare metal options – delivering up to 100Gbps of network bandwidth and 60Gbps of Amazon Elastic Block Store (EBS) bandwidth. These instances are available today in US East (N. Virginia, Ohio) and US West (Oregon) AWS regions, with flexible purchase options including On Demand and Savings Plans. To learn more, visit the I7i instances page.
Starting today, Amazon EC2 High Memory instances with 24TB of memory (u-24tb1.112xlarge) are available in the US East (Ohio) region. Customers can start using these new High Memory instances with On Demand and Savings Plan purchase options.
Amazon EC2 High Memory instances are certified by SAP for running Business Suite on HANA, SAP S/4HANA, Data Mart Solutions on HANA, Business Warehouse on HANA, and SAP BW/4HANA in production environments. For details, see the Certified and Supported SAP HANA Hardware Directory.
For information on how to get started with your SAP HANA migration to EC2 High Memory instances, view the Migrating SAP HANA on AWS to an EC2 High Memory Instance documentation. To hear from Steven Jones, GM for SAP on AWS on what this launch means for our SAP customers, you can read his launch blog.
Today, AWS announces CloudFront SaaS Manager, a new Amazon CloudFront feature designed to efficiently manage content delivery across multiple websites for Software-as-a-Service (SaaS) providers, web development platforms, and companies with multiple brands/websites. CloudFront SaaS Manager provides a unified experience, alleviating the operational burden of managing multiple websites at scale, including TLS certificate management, DDoS protection, and observability.
CloudFront SaaS Manager introduces reusable configuration settings, eliminating redundant configurations and allowing customers to maintain consistent settings across their websites. This not only saves time but also reduces the potential for errors in configuration. With CloudFront SaaS Manager, customers benefit from optimal CDN and security defaults, ensuring high performance and secure protections following AWS best practices. Additionally, CloudFront SaaS Manager can automate requesting, issuing, and associating TLS certificates with CloudFront through a simplified AWS Certificate Manager (ACM) integration. This addresses the growing complexity in certificate management, security policy enforcement, and cross-account synchronization that companies face as their customer base expands.
At Google, we believe in empowering people and founders to use AI to tackle humanity’s biggest challenges. That’s why we’re supporting the next generation of AI leaders through our Google for Startups Accelerator: AI First programs. We announced the program in January, and today we’re proud to welcome into our accelerator community 16 UK-based startups that are using AI to drive real-world impact.
Out of hundreds of applicants, we’ve carefully selected these 16 high-potential startups to receive 1:1 guidance and support from Google, each demonstrating a unique vision for leveraging AI to address critical challenges and opportunities. This diverse cohort showcases how AI is being applied across sectors — from early cancer detection and climate resilience, to smarter supply chains and creative content generation. By joining the Google for Startups Accelerator: AI First UK program, these startups gain access to technical expertise, mentorship, and a global network to help them scale responsibly and sustainably.
“Google for Startups Accelerator: AI First provides an exceptional opportunity for us to enhance our AI expertise, accelerate the development of our data-driven products, and engage meaningfully with potential investors.” – Denise Williams, Managing Director, Dysplasia Diagnostics.
Read more about the selected startups and the founders shaping the future of AI:
Bindbridge (London) is a generative AI platform that discovers and designs molecular glues for targeted protein degradation in plants.
Building Atlas (Edinburgh) uses data and AI to support the decarbonisation of non-domestic buildings by modelling the best retrofit plans for any portfolio size.
Comply Stream (London) helps to streamline financial crime compliance operations for businesses and consumers.
Datawhisper (London) provides safe and compliant AI Agentic solutions tailored for the fintech and payments industry.
Deducta (London) is a data intelligence platform that supports global procurement teams with supply chain insights and efficiencies.
Dysplasia Diagnostics (London) develops AI-based, non-invasive, and affordable solutions for early cancer detection and treatment monitoring.
Flow.bio (London) is an end-to-end cloud platform for running large sequencing pipelines and auto-structuring bio-data for machine learning workflows.
Humble (London) enables non-technical users to build and share AI-powered apps and workflows, allowing them to automate without writing code.
Immersive Fox (London) is an AI studio for creating presenter-led marketing and communication videos directly from text.
Kestrix (London) uses thermal drones and advanced software to map and quantify heat loss from buildings and generate retrofit plans.
Measmerize (Birmingham) provides sizing advice for fashion e-commerce retailers, enabling brands to increase sales and decrease return rates.
PSi (London) uses AI to host large-scale online deliberations, enabling local governments to harness collective intelligence for effective policymaking.
Shareback (London) is an AI platform that allows employees to securely interact with GPT-based assistants trained on company, department, or project-specific data.
Sikoia (London) streamlines customer verification for financial services by consolidating data, automating tasks, and delivering actionable insights.
SmallSpark (Cardiff) enables low power AI at the edge, simplifying the deployment, management, and optimization of ML models on embedded devices.
Source.dev (London) simplifies the software development lifecycle for smart devices, to help accelerate innovation and streamline software updates.
“Through the program, we aim to leverage Google’s expertise and cutting-edge AI infrastructure to supercharge our growth on all fronts.” – Lauren Ladd, Founder, Shareback
These 16 startups reflect the diversity and depth of AI innovation happening across the UK. Each company will receive technical mentorship, strategic guidance, and access to strategic connections from Google, and will continue to receive hands-on support via our alumni network after the program wraps in July.
Congratulations to this latest cohort! To learn more about applying for an upcoming Google for Startups program, visit the program page here.
In 2023, the Waze platform engineering team transitioned to Infrastructure as Code (IaC) using Google Cloud’s Config Connector (KCC) — and we haven’t looked back since. We embraced Config Connector, an open-source Kubernetes add-on, to manage Google Cloud resources through Kubernetes. To streamline management, we also leverage Config Controller, a hosted version of Config Connector on Google Kubernetes Engine (GKE), incorporating Policy Controller and Config Sync. This shift has significantly improved our infrastructure management and is shaping our future infrastructure.
The shift to Config Connector
Previously, Waze relied on Terraform to manage resources, particularly during our dual-cloud, VM-based phase. However, maintaining state and ensuring reconciliation proved challenging, leading to inconsistent configurations and increased management overhead.
In 2023, we adopted Config Connector, transforming our Google Cloud infrastructure into Kubernetes Resource Modules (KRMs) within a GKE cluster. This approach addresses the reconciliation issues encountered with Terraform. Config Sync, paired with Config Connector, automates KRM synchronization from source repositories to our live GKE cluster. This managed solution eliminates the need for us to build and maintain custom reconciliation systems.
The shift helped us meet the needs of three key roles within Waze’s infrastructure team:
Infrastructure consumers: Application developers who want to easily deploy infrastructure without worrying about the maintenance and complexity of underlying resources.
Infrastructure owners: Experts in specific resource types (e.g., Spanner, Google Cloud Storage, Load Balancers, etc.), who want to define and standardize best practices in how resources are created across Waze on Google Cloud.
Platform engineers: Engineers who build the system that enables infrastructure owners to codify and define best practices, while also providing a seamless API for infrastructure consumers.
First stop: Config Connector
It may seem circular to define all of our Google Cloud infrastructure as KRMs within a Google Cloud service; however, KRM is actually a great representation for our infrastructure compared to existing IaC tooling.
Terraform’s reconciliation issues – state drift, version management, out of band changes – are a significant pain. Config Connector, through Config Sync, offers out-of-the-box reconciliation, a managed solution we prefer. Both KRM and Terraform offer templating, but KCC’s managed nature aligns with our shift to Google Cloud-native solutions and reduces our maintenance burden.
Infrastructure complexity requires generalization regardless of the tool. We can see this when we look at the Spanner requirements at Waze:
Consistent backups for all Spanner databases
Each Spanner database utilizes a dedicated Cloud Storage bucket and Service Account to automate the execution of DDL jobs.
All IAM policies for Spanner instances, databases, and Cloud Storage buckets are defined in code to ensure consistent and auditable access control.
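For illustration, a standardized Spanner setup like the one above might be expressed as Config Connector manifests along these lines; the resource names and project are placeholders, with field schemas following the Config Connector resource reference:

```yaml
apiVersion: spanner.cnrm.cloud.google.com/v1beta1
kind: SpannerInstance
metadata:
  name: example-instance        # placeholder name
spec:
  config: regional-us-central1
  displayName: Example Instance
  numNodes: 1
---
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: example-instance-ddl-jobs   # dedicated bucket for DDL jobs
---
apiVersion: iam.cnrm.cloud.google.com/v1beta1
kind: IAMPolicyMember
metadata:
  name: example-ddl-runner
spec:
  member: serviceAccount:ddl-runner@example-project.iam.gserviceaccount.com
  role: roles/spanner.databaseAdmin
  resourceRef:
    apiVersion: spanner.cnrm.cloud.google.com/v1beta1
    kind: SpannerInstance
    name: example-instance
```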
To define these resources, we evaluated various templating and rendering tools and selected Helm, a robust CNCF package manager for Kubernetes. Its strong open-source community, rich templating capabilities, and native rendering features made it a natural fit. We can now refer to our bundled infrastructure configurations as “Charts.” While KRO, which serves a similar purpose, has since emerged, our selection process predated its availability.
Under the hood
Let’s open the hood and dive into how the system works and is driving value for Waze.
Waze infrastructure owners generically define Waze-flavored infrastructure in Helm Charts.
Infrastructure consumers use these Charts with simplified inputs to generate infrastructure (demo).
Infrastructure code is stored in repositories, enabling validation and presubmit checks.
Code is uploaded to an Artifact Registry, where Config Sync and Config Connector align Google Cloud infrastructure with the code definitions.
This diagram represents a single “data domain,” a collection of bounded services, databases, networks, and data. Many tech orgs today maintain several such domains, such as Prod, QA, Staging, and Development.
Approaching our destination
So why does all of this matter? Adopting this approach allowed us to move from Infrastructure as Code to Infrastructure as Software. By treating each Chart as a software component, our infrastructure management goes beyond simple code declaration. Now, versioned Charts and configurations enable us to leverage a rich ecosystem of software practices, including sophisticated release management, automated rollbacks, and granular change tracking.
Here’s where we apply this in practice: our configuration inheritance model minimizes redundancy. Resource Charts inherit settings from Projects, which inherit from Bootstraps. All three are defined as Charts. Consequently, Bootstrap configurations apply to all Projects, and Project configurations apply to all Resources.
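A simplified sketch of that inheritance model: later layers override earlier ones, mirroring Bootstrap, then Project, then Resource (the keys and values below are illustrative, not Waze's actual configuration):

```python
# Later layers win, mirroring Bootstrap -> Project -> Resource inheritance.
def merge(*layers: dict) -> dict:
    """Flatten configuration layers, with later layers taking precedence."""
    result: dict = {}
    for layer in layers:
        result.update(layer)
    return result

bootstrap = {"region": "us-central1", "labels": {"org": "waze"}}
project = {"backup_schedule": "daily"}
resource = {"num_nodes": 3, "backup_schedule": "hourly"}

# The resource-level backup_schedule overrides the project-level default.
print(merge(bootstrap, project, resource))
```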
Every change to our infrastructure – from changes on existing infrastructure to rolling out new resource types – can be treated like a software rollout.
Now that all of our infrastructure is treated like software, we can see what this does for us system-wide.
Reaching our destination
In summary, Config Connector and Config Controller have enabled Waze to achieve true Infrastructure as Software, providing a robust and scalable platform for our infrastructure needs, along with many other benefits including:
Infrastructure consumers receive the latest best practices through versioned updates.
Infrastructure owners can iterate and improve infrastructure safely.
Platform engineers and security teams are confident that our resources are auditable and compliant.
For data scientists and ML engineers, building analysis and models in Python is almost second nature, and Python’s popularity in the data science community has only skyrocketed with the recent generative AI boom. We believe that the future of data science is no longer just about neatly organized rows and columns. For decades, many valuable insights have been locked in images, audio, text, and other unstructured formats. And now, with the advances in gen AI, data science workloads must evolve to handle multi-modality and use new gen AI and agentic techniques.
To prepare you for the data science of tomorrow, we announced BigQuery DataFrames 2.0 last week at Google Cloud Next 25, bringing multimodal data processing and AI directly into your BigQuery Python workflows.
Extending Pandas DataFrames for BigQuery Multimodal Data
In BigQuery, data scientists frequently look to use Python to process large data sets for analysis and machine learning. However, this almost always involves learning a different Python framework and rewriting the code that worked on smaller data sets. You can hardly take Pandas code that worked on 10 GB of data and get it working for a terabyte of data without expending significant time and effort.
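To make the contrast concrete, here is a pandas-style aggregation on a small in-memory sample; the trailing comment sketches how the same code could be pointed at BigQuery-scale data by swapping the import (the table name shown is illustrative):

```python
import pandas as pd

# Pandas-style analysis on a small, in-memory sample.
df = pd.DataFrame({"name": ["ana", "bo", "ana"], "number": [1, 2, 3]})
top = df.groupby("name")["number"].sum().sort_values(ascending=False)
print(top.head())

# With BigQuery DataFrames, essentially the same dataframe code can run at
# BigQuery scale by swapping the import and reading from a table:
#   import bigframes.pandas as bpd
#   df = bpd.read_gbq("project.dataset.table")
```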
Version 2.0 strengthens the core foundation for larger-scale Python data science, and builds on it with groundbreaking new capabilities that unlock the full potential of your data, both structured and unstructured.
BigQuery DataFrames adoption
We launched BigQuery DataFrames last year as an open-source Python library that scales Python data processing without having to add any new infrastructure or APIs, transpiling common Python data science APIs from Pandas and scikit-learn to various BigQuery SQL operators. Since its launch, there’s been over 30X growth in how much data it processes and, today, thousands of customers use it to process more than 100 PB every month.
During the last year we evolved our library significantly across 50+ releases and worked closely with thousands of users. Here’s how a couple of early BigQuery DataFrames customers use this library in production.
Deutsche Telekom has standardized on BigQuery DataFrames for its ML platform.
“With BigQuery DataFrames, we can offer a scalable and managed ML platform to our data scientists with minimal upskilling.” – Ashutosh Mishra, Vice President – Data Architecture & Governance, Deutsche Telekom
Trivago, meanwhile, migrated its PySpark transformations to BigQuery DataFrames.
“With BigQuery DataFrames, data science teams focus on business logic and not on tuning infrastructure.” – Andrés Sopeña Pérez, Head of Data Infrastructure, Trivago
What’s new in BigQuery DataFrames 2.0?
This release is packed with features designed to streamline your AI and machine learning pipelines:
Working with multimodal data and generative AI techniques
Multimodal DataFrames (Preview): BigQuery DataFrames 2.0 introduces a unified dataframe that can handle text, images, audio, and more, alongside traditional structured data, breaking down the barriers between structured and unstructured data. This is powered by BigQuery’s multimodal capabilities enabled by ObjectRef, helping to ensure scalability and governance for even the largest datasets.
When working with multimodal data, BigQuery DataFrames also abstracts many of the details of working with multimodal tables and processing multimodal data, leveraging BigQuery features behind the scenes such as embedding generation, vector search, Python UDFs, and others.
Pythonic operators for BigQuery AI Query Engine (experimental): BigQuery AI Query Engine makes it trivial to generate insights from multimodal data: Now, you can analyze unstructured data simply by including natural language instructions in your SQL queries. Imagine writing SQL queries where you can rank call transcripts in a table by ‘quality of support’ or generate a list of products with ‘high satisfaction’ based on reviews in a column. BigQuery AI Query Engine makes that possible with simple, stackable SQL.
BigQuery DataFrames offers a DataFrame interface to work with AI Query Engine. Here’s a sample:
```python
import bigframes.pandas as bpd
from bigframes.ml import llm

gemini_model = llm.GeminiTextGenerator(model_name="gemini-1.5-flash-002")

# Get top K products with higher satisfaction
df = bpd.read_gbq("project.dataset.transcripts_table")
result = df.ai.top_k("The reviews in {review_transcription_col} indicate higher satisfaction", model=gemini_model)

# Works with multimodal data as well
df = bpd.from_glob_path("gs://bucket/images/*", name="image_col")
result = df.ai.filter("The main object in the {image_col} can be seen in city streets", model=gemini_model)
```
Gemini Code Assist for DataFrames (Preview): To keep up with the evolving user expectations around code generation, we’re also making it easier to develop BigQuery DataFrames code, using natural language prompts directly within BigQuery Studio. Together, Gemini’s contextual understanding and DataFrames-specific training help ensure smart, efficient code generation. This feature is released as part of Gemini in BigQuery.
Strengthening the core
To make the core Python data science workflow richer and faster to use, we added the following features.
Partial ordering (GA): By default, BigQuery DataFrames maintains strict ordering (as does Pandas). With 2.0, we’re introducing a relaxed ordering mode that significantly improves performance, especially for large-scale feature engineering. This “spin” on traditional Pandas ordering is tailored for the massive datasets common in BigQuery. Read more about partial ordering here.
Here’s some example code that uses partial ordering:
```python
import bigframes.pandas as bpd
import datetime

# Enable the partial ordering mode
bpd.options.bigquery.ordering_mode = "partial"

pypi = bpd.read_gbq("bigquery-public-data.pypi.file_downloads")

# Show a preview of the previous day's downloads.
# The partial ordering mode is 4,000,000+ times more efficient in terms of billed bytes.
last_1_days = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=1)
bigframes_downloads = pypi[(pypi["timestamp"] > last_1_days) & (pypi["project"] == "bigframes")]
bigframes_downloads[["timestamp", "project", "file"]].peek()
```
Work with Python UDFs (Preview): BigQuery Python user-defined functions are now available in preview (see the documentation).
Within BigQuery DataFrames, you can now auto-scale Python function execution to millions of rows with serverless, scale-out execution. All you need to do is put a @udf decorator on top of a function that needs to be pushed to the server side.
Here is example code that tokenizes comments from Stack Overflow data stored in a BigQuery public table with ~90 million rows using a Python UDF:
```python
import bigframes.pandas as bpd

# Auto-create the server-side Python UDF
@bpd.udf(packages=["tokenizer"])
def get_sentences(text: str) -> list[str]:
    from tokenizer import split_into_sentences
    return list(split_into_sentences(text))

df = bpd.read_gbq("bigquery-public-data.stackoverflow.comments")

# Invoke the Python UDF
result = df["text"].apply(get_sentences)
result.peek()
```
dbt Integration (Preview): For all the dbt users out there, you can now integrate BigQuery DataFrames Python into your existing dbt workflows. The new dbt Python model allows you to run BigQuery DataFrames code alongside your BigQuery SQL, unifying billing and simplifying infrastructure management. No new APIs or infrastructure to learn — just the power of Python and BigQuery DataFrames within your familiar dbt environment.
For years, unstructured data has largely resided in silos, separate from the structured data in data warehouses. This separation restricted the ability to perform comprehensive analysis and build truly powerful AI models. BigQuery’s multimodal capabilities and BigQuery DataFrames 2.0 eliminate this divide, bringing the capabilities traditionally associated with data lakes directly into the data warehouse, enabling:
Unified data analysis: Analyze all your data – structured and unstructured – in one place, using a single, consistent Pandas-like API.
LLM-powered insights: Unlock deeper insights by combining the power of LLMs with the rich context of your structured data.
Simplified workflows: Streamline your data pipelines and reduce the need for complex data movement and transformation.
Scalability and governance: Leverage BigQuery’s serverless architecture and robust governance features for all your data, regardless of format.
See BigQuery DataFrames 2.0 in action
You can see all of these features in action in this video from Google Cloud Next ’25.
Get started today!
BigQuery DataFrames 2.0 is a game-changer for anyone working with data and AI. It’s time to unlock the full potential of your data, regardless of its structure. Start experimenting with the new features today!
The daily grind of sifting through endless alerts and repetitive tasks is burdening security teams. Too often, defenders struggle to keep up with evolving threats, but the rapid pace of AI advancement means it doesn’t have to be that way.
Agentic AI promises a fundamental, tectonic shift for security teams: intelligent agents work alongside human analysts, autonomously taking on routine tasks, augmenting human decision-making, and automating workflows, empowering analysts to focus on what matters most — the complex investigations and strategic challenges that truly demand human expertise.
The agentic AI future
While assistive AI primarily aids human analyst actions, agentic AI goes further and can independently identify, reason through, and dynamically execute tasks to accomplish goals — all while keeping human analysts in the loop.
Our vision for this agentic future for security builds on the tangible benefits our customers experience today with Gemini in Security Operations:
“No longer do we have our analysts having to write regular expressions that could take anywhere from 30 minutes to an hour. Gemini can do it within a matter of seconds,” said Hector Peña, senior information security director, Apex Fintech Solutions.
We believe that agentic AI will transform security operations. The agentic security operations center (SOC), powered by multiple connected and use-case driven agents, can execute semi-autonomous and autonomous security operations workflows on behalf of defenders.
The agentic SOC
We are rapidly building the tools for the agentic SOC with Gemini in Security. Earlier this month at Google Cloud Next, we introduced two new Gemini in Security agents:
The alert triage agent in Google Security Operations autonomously performs dynamic investigations and provides a verdict.
In Google Security Operations, an alert triage agent performs dynamic investigations on behalf of users. Expected to preview for select customers in Q2 2025, this agent analyzes the context of each alert, gathers relevant information, and renders a verdict on the alert.
It also provides a fully transparent audit log of the agent’s evidence, reasoning and decision making. This always-on investigation agent will vastly reduce the manual workload of Tier 1 and Tier 2 analysts who otherwise are triaging and investigating hundreds of alerts per day.
The malware analysis agent in Google Threat Intelligence performs reverse engineering.
In Google Threat Intelligence, a malware analysis agent performs reverse engineering tasks to determine if a file is malicious. Expected to preview for select customers in Q2 2025, this agent analyzes potentially malicious code, including the ability to create and execute scripts for deobfuscation. The agent will summarize its work, and provide a final verdict.
Building on these investments, the agentic SOC is a connected, multi-agent system that works collaboratively with the human analyst to achieve exponential gains in efficiency. These intelligent agents are designed to fundamentally change security and threat management, working alongside analysts to automate common tasks and workflows, improve decision-making, and ultimately enable a greater focus on complex threats.
The agentic SOC will be a connected, multi-agent system that works collaboratively with human analysts.
To illustrate this vision in action, consider the following examples of how agentic collaboration could transform everyday security tasks with agents. At Google Cloud, we believe many critical SOC functions can be automated and orchestrated:
Data management: Ensures data quality and optimizes data pipelines.
Alert triage: Prioritizes and escalates alerts.
Investigation: Gathers evidence and provides verdicts on alerts, documents each analysis step, and determines the response mechanism.
Response: Remediates issues using hundreds of integrations, such as endpoint isolation.
Threat research: Bridges silos by analyzing and disseminating intelligence to other agents, such as the threat hunt agent.
Threat hunt: Proactively hunts for unknown threats in your environment with data from Google Threat Intelligence.
Malware analyst: Analyzes files at scale for potentially malicious attributes.
Exposure management: Proactively monitors internal and external sources for credential leaks, initial access brokers, and exploited vulnerabilities.
Detection engineering: Continuously analyzes threat profiles and can create, test, and fine-tune detection rules.
How the Google advantage helps agentic AI
Developing dependable and impactful agents for real-world security applications requires three key ingredients, all of which Google excels in:
We harness our deep reservoir of security data and expertise to provide guiding principles for the agents.
We integrate our cutting-edge AI research, and use mature agent development tools and frameworks to enable the creation of a reusable and scalable agentic system architecture.
Our ownership of the complete AI technology stack, from highly scalable and secure infrastructure to state-of-the-art models, provides a robust foundation for agentic AI development.
These advantages allow us to establish a well-defined framework for security agents, empowering AI to emulate human-level planning and reasoning, leading to superior performance in security tasks compared to general-purpose large language models.
This approach ensures high-quality and consistent results across security tasks and also facilitates the development of new agents through the modular composition of existing security capabilities – building a diverse garden of reusable, task-focused security agents.
Furthermore, agent interoperability, regardless of developer, boosts autonomy, productivity, and reduces long-term costs. Our open Agent2Agent (A2A) protocol, announced at Google Cloud Next, facilitates this, complementing the model context protocol (MCP) for standardized AI interaction with security applications and platforms.
To further advance interoperability, we are pleased to announce the open-sourcing of MCP servers for Google Unified Security, allowing users to build custom security workflows that use both Google Cloud and ecosystem tools. We are committed to an open ecosystem, envisioning a future where agents can collaborate dynamically across different products and vendors.
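As a rough sketch of what this interoperability looks like on the wire: MCP messages follow JSON-RPC 2.0, so a client invoking a server-side tool sends a `tools/call` request. The tool name and arguments below are hypothetical for illustration, not taken from the Google Unified Security MCP servers.

```python
import json

def build_tool_call(request_id: int, tool_name: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 'tools/call' request for an MCP server."""
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        # The tool name and argument keys are assumptions, not a real server's schema.
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)

# Example: ask a hypothetical SecOps tool to enrich an indicator.
message = build_tool_call(1, "lookup_ioc", {"indicator": "198.51.100.7"})
print(message)
```

Because the envelope is standardized, any MCP-capable client can discover and invoke such tools without bespoke integration code.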
“We see an immediate opportunity to use MCP with Gemini to connect with our array of custom and commercial tools. It can help us make ad-hoc execution of data gathering, data enrichment, and communication easier for our analysts as they use the Google Security Operations platform,” said Grant Steiner, principal cyber-intelligence analyst, Enablement Operations, Emerson.
Introducing SecOps Labs for AI
To help defenders as our AI work rapidly advances, and to give the community an opportunity to offer direct feedback, we’re excited to introduce SecOps Labs. This initiative offers customers early access to cutting-edge AI pilots in Google Security Operations, and is designed to foster collaboration with defenders through firsthand experience, valuable feedback, and direct influence on future Google Security Operations technologies.
Initial pilots showcase AI’s potential to address key security challenges, such as:
Detection engineering: This pilot autonomously converts threat reports into detection rules and generates synthetic data for testing their effectiveness.
Response playbooks: This pilot recommends and generates automation playbooks for new alerts based on analysis of past incidents.
Data parsing: This pilot is a first step toward AI-generated parsers, starting by allowing users to update their parsers using natural language.
SecOps Labs is a collaborative space to refine AI capabilities, to ensure they address real-world security challenges and deliver tangible value, while enabling teams to experiment with the latest pre-production capabilities. Stay tuned for more in Q2 2025 to participate in shaping the future of agentic security operations with Google Cloud Security.
Meet us at RSAC to learn more
Excited about agentic AI and the impact it will have on security? Connect with our experts and see Google Cloud Security tech in action. Find us on the show floor at booth #N-6062 Moscone Center, North Hall, or at the Marriott Marquis to meet with our security experts and learn how you can make Google part of your security team.
Not able to join us in person? Stream RSA Conference or catch up on-demand here, and connect with Google Cloud Security experts and fellow professionals in the Google Cloud Security Community to share knowledge, access resources, discover local events and elevate your security experience.
Cybersecurity is facing a unique moment, where AI-enhanced threat intelligence, products, and services are poised to give defenders an advantage over the threats they face that’s proven elusive — until now.
To empower security teams and business leaders in the AI era, and to help organizations proactively combat evolving threats, today at RSA Conference we’re sharing Mandiant’s latest M-Trends report findings, and announcing enhancements across Google Unified Security, our product portfolio, and our AI capabilities.
M-Trends 2025
The 16th edition of M-Trends is now available. The report provides data, analysis, and learnings drawn from Mandiant’s threat intelligence findings and over 450,000 hours of incident investigations conducted in 2024. Providing actionable insights into current cyber threats and attacker tactics, this year’s report continues our efforts to help organizations understand the evolving threat landscape and improve their defenses based on real-world data.
We see that attackers are relentlessly seizing opportunities to further their objectives, from using infostealer malware, to targeting unsecured data repositories, to exploiting cloud migration risks. While exploits are still the most common way that attackers are breaching organizations, they’re using stolen credentials more than ever before. The financial sector remains the top target for threat actors.
From M-Trends 2025: the most common initial infection vector was exploits (33%), followed by stolen credentials (16%) and email phishing (14%).
M-Trends 2025 dives deep into adversarial activity, loaded with highly relevant threat data analysis, including insider risks from North Korean IT workers, blockchain-fueled cryptocurrency threats, and looming Iranian threat actor activity. Our unique frontline insight helps us illustrate how threat actors are conducting their operations, how they are achieving their goals, and what organizations need to be doing to prevent, detect, and respond to these threats.
Google Unified Security
Throughout 2024, Google Cloud Security customers directly benefited from the threat intelligence and insights now publicly released in the M-Trends 2025 report. The proactive application of our ongoing findings included expert-crafted threat intelligence, enhanced detections in our security operations and cloud security solutions, and Mandiant security assessments, ensuring customers quickly received the latest insights and detections as threats were uncovered on the frontlines.
Now, with the launch of Google Unified Security, customers benefit from even greater visibility into threats and their environment’s attack surface, while Mandiant frontline intelligence is actioned directly through curated detections and playbooks in the converged solution.
By integrating Google’s leading threat intelligence, security operations, cloud security, secure enterprise browsing, and Mandiant expertise, Google Unified Security creates a single, scalable security data fabric across the entire attack surface. Gemini AI enhances threat detection with real-time insights; streamlines security operations; and fuels our new malware analysis and triage AI agents, empowering organizations to shift from reactive to preemptive security.
In today’s threat landscape, one of the most critical choices you need to make is who will be your strategic security partner, and Google Unified Security is the best, easiest, and fastest way to make Google part of your security team. Today, we’re excited to share several enhancements across the product portfolio.
Google Unified Security is powered by Mandiant frontline intelligence gathered from global incident response engagements.
What’s new in Google Security Operations
Google Security Operations customers now benefit from Curated Detections and Applied Threat Intelligence Rule Packs released for specific M-Trends 2025 observations, which can help detect malicious activity, including infostealer malware, cloud compromise, and data theft.
For example, the indicators of compromise (IOCs) and tactics, techniques, and procedures (TTPs) from cloud compromise observations have been added to the Cloud Threats curated detections rule pack.
We’re also excited to announce several AI and product updates designed to simplify workflows, dramatically reduce toil, and empower analysts.
We’ve already seen the transformative power of AI in security operations through the tangible benefits our customers experience today with Gemini in Google Security Operations. Our vision for the future is even more ambitious: an agentic security operations center (SOC), where security operations are fundamentally enhanced by a collaborative multi-agent system.
As we bring this vision to life, we’re developing intelligent, use-case driven agents that are designed to work in concert with human analysts as they automate routine tasks and improve decision-making. Ultimately, the agentic SOC will enable a greater focus on complex threats, helping to deliver autonomous security operations workflows and exponential gains in efficiency.
To further accelerate the adoption and refinement of AI-powered security capabilities, we are launching SecOps Labs, a new space for customers to get early access to our latest AI pilots and provide feedback. Initial features include a Natural Language Parser Extension, a Detection Engineering Agent for automated rule creation and testing, and a Response Agent for generating automation playbooks. SecOps Labs will foster collaboration in shaping the future of AI-powered security operations.
Composite Detections, in preview, can connect the dots between seemingly isolated events to help defenders uncover a more complete attack story. Your SOC can use it to create sophisticated multi-stage detections and attacker activity correlation, simplify detection engineering, and minimize false positives and false negatives.
Composite Detections can help teams build reusable detection logic to reveal hidden connections, stop advanced attackers that evade simple detection, and overcome the assumed precision and recall tradeoff inherent to most detection engineering.
Connect detections, catch more threats.
The Content Hub, in preview, is your go-to for the resources you need to streamline security operations and maximize the platform’s potential. Security operations teams can access content packs for top product integrations and use cases, making data ingestion configuration and data onboarding more efficient.
There’s also a library of certified integrations, pre-built dashboards, and ready-to-install search queries. Plus, you can gain deeper insights into your security posture with access to curated detections and insights into their underlying logic. Now you can discover, onboard, and manage all your security operations content in one place.
Activate your platform with ready-to-use content packs.
With Gemini in Google Security Operations, we’re also introducing a new way to get your product questions answered instantly, accessible from anywhere in the platform (in preview). You can now search documentation with Gemini, which will provide fast and high-quality answers for your security operations related questions, complete with reference links.
Get instant answers to your Google Security Operations product questions.
What’s new in Security Command Center
Rapidly building on AI Protection, which was announced in March, we are adding new multi-modal capabilities for detecting sensitive data in images used for training and inference.
To help security teams gain more visibility into AI environments, discover a wider range of sensitive data, and configure image-redaction rules if needed, AI Protection will be able to conduct object-based detection (such as barcodes), available in June.
Multi-modal detection: Sensitive data redacted from scanned loan application.
In addition to detecting sensitive data in images, we’ve added new AI threat detectors to AI Protection to identify specific cloud-based threats against your AI workloads. Aligned with MITRE ATLAS tactics, AI Protection detects threats like Suspicious/Initial Access, Persistence, and Access Modifications for your Vertex workloads and associated resources, empowering your organization with the visibility and context needed to rapidly investigate and respond to threats against your AI environment.
AI Protection is currently in preview (sign up here), and provides full AI lifecycle security that discovers AI assets and prioritizes top risks, secures AI with guardrails and safety controls, and helps detect, investigate, and respond to AI threats.
We’re also excited to share our latest research on the intersection of security and AI, Secure AI Framework (SAIF) in the Real World. We provide key considerations for applying SAIF principles across the data, infrastructure, application, and model dimensions of your AI projects.
What’s new in Mandiant Cybersecurity Consulting
Google Unified Security integrates Mandiant’s expertise through the Mandiant Retainer, which offers on-demand access to experts, rapid incident response, and flexible pre-paid funds for consulting services, and through Mandiant Threat Defense, which provides AI-assisted threat detection, hunting, and response, extending customer security teams through expert collaboration and SOAR playbooks.
Mandiant’s new Essential Intelligence Access (EIA) subscription, available now, offers organizations direct and flexible access to our world-class threat intelligence experts. These experts serve as an extension of your security team, providing personalized research and analysis, delivering tailored insights to inform critical decisions, focus defenses, and strengthen cybersecurity strategies.
EIA also helps customers maximize the value and efficiency of their Cyber Threat Intelligence (CTI) investments. Going beyond raw threat feeds, EIA analyzes data in the context of your specific environment to illuminate unique threats. Crucially, this includes personalized guidance from human experts deeply experienced in operationalizing threat intelligence, upskilling teams, prioritizing threats, and delivering continuous support to improve security posture and reduce organizational risk.
Evolve your security strategy with Google Cloud
The M-Trends 2025 report is a call to action. It highlights the urgency of adapting your defenses to meet increasingly sophisticated attacks.
At RSA Conference, we’ll be sharing how these latest Google Cloud Security advancements and more can transform threat intelligence into proactive, AI-powered security. You can find us at booth #N-6062 Moscone Center, North Hall, and connect with security experts at our Customer Lounge in the Marriott Marquis.
You can also stream the conference or catch up on-demand here, and join the Google Cloud Security Community to share knowledge, access resources, discover local events, and elevate your security experience.
Feel more secure about your security by making Google part of your security team today.
Amazon Bedrock Data Automation (BDA) now supports modality enablement, modality routing by file type, extraction of embedded hyperlinks when processing documents in Standard Output, and an increased overall document page limit of 3,000 pages. These new features give you more control over how your multimodal content is processed and improve BDA’s overall document extraction capabilities.
With Modality Enablement and Routing, you can configure which modalities (Document, Image, Audio, Video) should be enabled for a given project and manually specify the modality routing for specific file types. JPEG/JPG and PNG files can be processed as either Images or Documents based on your specific use case requirements. Similarly, MP4/M4V and MOV files can be processed as either video files or audio files, allowing you to choose the optimal processing path for your content.
Embedded Hyperlink Support enables BDA to detect and return embedded hyperlinks found in PDFs as part of the BDA standard output. This feature enhances the information extraction capabilities from documents, preserving valuable link references for applications such as knowledge bases, research tools, and content indexing systems.
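A downstream consumer, such as a knowledge-base indexer, might flatten those links out of the standard output like this. The JSON field names here are assumptions for illustration, not the documented BDA schema; only the traversal pattern matters.

```python
def collect_links(standard_output: dict) -> list[str]:
    """Gather unique hyperlink URLs from per-page elements, preserving order."""
    seen, links = set(), []
    # "pages", "elements", and "hyperlinks" are hypothetical field names.
    for page in standard_output.get("pages", []):
        for element in page.get("elements", []):
            for link in element.get("hyperlinks", []):
                if link not in seen:
                    seen.add(link)
                    links.append(link)
    return links

# A hypothetical two-page extraction result.
sample = {
    "pages": [
        {"elements": [{"text": "See our docs", "hyperlinks": ["https://example.com/docs"]}]},
        {"elements": [{"text": "Docs again", "hyperlinks": ["https://example.com/docs"]},
                      {"text": "Pricing", "hyperlinks": ["https://example.com/pricing"]}]},
    ]
}
print(collect_links(sample))  # ['https://example.com/docs', 'https://example.com/pricing']
```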
Lastly, BDA now supports processing documents up to 3,000 pages per document, doubling the previous limit of 1,500 pages. This increased limit allows you to process larger documents without splitting them, simplifying workflows for enterprises dealing with long documents or document packets.
Amazon Bedrock Data Automation is generally available in the US West (Oregon) and US East (N. Virginia) AWS Regions.
Starting today, you can deliver events from an Amazon EventBridge Event Bus directly to AWS services in another account in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. Using multiple accounts can improve security and streamline business processes while reducing the overall cost and complexity of your architecture.
Amazon EventBridge Event Bus is a serverless event broker that enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. This launch allows you to directly target services in another account, without the need for additional infrastructure such as an intermediary EventBridge Event Bus or Lambda function, simplifying your architecture and reducing cost. For example, you can now route events from your EventBridge Event Bus directly to a different team’s SQS queue in a different account. The team receiving events does not need to learn about or maintain EventBridge resources and simply needs to grant IAM permissions to provide access to the queue. Events can be delivered cross-account to EventBridge targets that support resource-based IAM policies such as Amazon SQS, AWS Lambda, Amazon Kinesis Data Streams, Amazon SNS, and Amazon API Gateway.
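Concretely, the receiving team only needs a resource-based policy on its queue that trusts the EventBridge service principal, scoped to the sending rule. A minimal sketch follows; the ARNs, account IDs, and names are placeholders.

```python
import json

# Placeholder ARNs: a rule in the sending account (111111111111) and a queue
# in the receiving account (222222222222).
RULE_ARN = "arn:aws-us-gov:events:us-gov-east-1:111111111111:rule/bus-name/order-events"
QUEUE_ARN = "arn:aws-us-gov:sqs:us-gov-east-1:222222222222:team-queue"

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCrossAccountEventBridgeDelivery",
        "Effect": "Allow",
        # EventBridge delivers on behalf of the rule, so the service principal is trusted.
        "Principal": {"Service": "events.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": QUEUE_ARN,
        # Scope delivery to the specific rule in the sending account.
        "Condition": {"ArnEquals": {"aws:SourceArn": RULE_ARN}},
    }],
}
print(json.dumps(policy, indent=2))
```

This is the whole integration on the receiving side: no bus, no Lambda, just an IAM grant on the queue.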
In addition to the AWS GovCloud (US) Regions, direct delivery to cross-account targets is available in all commercial AWS Regions. To learn more, please read our blog post or visit our documentation. Pricing information is available on the EventBridge pricing page.
Today, AWS Resource Groups is adding support for an additional 160 resource types for tag-based Resource Groups. Customers can now use Resource Groups to group and manage resources from services such as Amazon CodeCatalyst and AWS Chatbot.
AWS Resource Groups enables you to model, manage and automate tasks on large numbers of AWS resources by using tags to logically group your resources. You can create logical collections of resources such as applications, projects, and cost centers, and manage them on dimensions such as cost, performance, and compliance in AWS services such as myApplications, AWS Systems Manager and Amazon CloudWatch.
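For illustration, a tag-based group is defined by a TAG_FILTERS_1_0 resource query; note that the inner Query field is itself a JSON-encoded string. The tag key and value below are placeholders.

```python
import json

# Match all supported resource types carrying the tag CostCenter=1234
# (placeholder tag key/value).
inner_query = {
    "ResourceTypeFilters": ["AWS::AllSupported"],
    "TagFilters": [{"Key": "CostCenter", "Values": ["1234"]}],
}
resource_query = {
    "Type": "TAG_FILTERS_1_0",
    "Query": json.dumps(inner_query),  # the Query field is a JSON-encoded string
}
print(json.dumps(resource_query))
```

A query like this can be passed as the resource query when creating a group via the console, SDK, or `aws resource-groups create-group`.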
Resource Groups expanded resource type coverage is available in all AWS Regions, including the AWS GovCloud (US) Regions. You can access AWS Resource Groups through the AWS Management Console, the AWS SDK APIs, and the AWS CLI.
Starting today, Amazon Q Developer operational investigations is available in preview in 11 additional regions. With this launch, Amazon Q Developer operational investigations is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Europe (Spain), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Mumbai).
Amazon Q Developer helps you accelerate operational investigations across your AWS environment in just a fraction of the time. With a deep understanding of your AWS cloud environment and resources, Amazon Q Developer looks for anomalies in your environment, surfaces related signals for you to explore, identifies potential root-cause hypotheses, and suggests next steps to help you remediate issues faster.
The new operational investigation capability within Amazon Q Developer is available at no additional cost during preview. To learn more, see getting started and best practices documentation.