Everyone’s talking about AI agents, but the real magic happens when they collaborate to tackle complex tasks. Think: complex processes, data analysis, content creation, and customer support. In this hackathon, you’ll build autonomous multi-agent AI systems using Google Cloud and the open source Agent Development Kit (ADK).
This is your chance to dive deep into cutting-edge AI, showcase your skills, and contribute to the future of agent development.
Hands-on learning with the ADK: This is your chance to try out and contribute to Agent Development Kit (ADK). We’ll provide you with the resources, support, and expert guidance you need to build sophisticated multi-agent systems.
Real-world impact: Tackle real-world problems that directly impact how work gets done, from automating complex processes and deriving data insights to transforming customer service and content creation.
A showcase for your talent: Present your project to a panel of judges and demonstrate your expertise to a wide audience. Build working agents that can help your workflows and be the foundation for a future product.
And the rewards? We’re offering a range of exciting prizes:
Overall grand prize: $15,000 USD, $3,000 in Google Cloud Credits for use with a Cloud Billing Account, one year of the Google Developer Program Premium subscription at no cost, a virtual coffee with a Google team member, and social promo
Regional winners: $8,000 USD, $1,000 in Google Cloud Credits for use with a Cloud Billing Account, a virtual coffee with a Google team member, and social promo
Honorable mentions: $1,000 USD and $500 in Google Cloud Credits for use with a Cloud Billing Account
Unleash the power of the Agent Development Kit (ADK):
ADK is a flexible and modular framework designed for developing and deploying AI agents. It’s an open-source framework that offers tight integration with the Google ecosystem and Gemini models. ADK makes it easy to get started with simple agents powered by Gemini models and Google AI tools, while also providing the control needed for more complex agent architectures and orchestration.
What to build
Your project should demonstrate how to design and orchestrate interactions between multiple autonomous agents using ADK. Build in one of these categories:
Automation of complex processes: Design multi-agent workflows that automate complex, multi-step business processes, streamline the software development lifecycle, or manage intricate tasks.
Data analysis and insights: Create multi-agent systems that autonomously analyze data from various sources, derive meaningful insights using tools like BigQuery, and collaboratively present findings.
Customer service and engagement: Develop intelligent virtual assistants or support agents built with ADK as multi-agent systems to handle complex customer inquiries, provide personalized support, and proactively engage with customers.
Content creation and generation: Build multi-agent systems that can autonomously generate different forms of content, such as marketing materials, reports, or code, by orchestrating agents with specialized content generation capabilities.
Crucial note: Your project must be built using the Agent Development Kit (ADK), focusing on the design and interactions between multiple agents. Think ADK first, but feel free to supercharge your solution by integrating with other awesome Google Cloud technologies!
Ready to start building?
Head over to our hackathon website and watch our webinar to learn more, review the rules, and register.
Today, AWS Backup announces support for the creation of backup indexes in backup policies, allowing you to automatically create backup indexes of your Amazon S3 backups or Amazon EBS snapshots at the AWS Organization level. The creation of a backup index is the prerequisite for searching your backups. Once the backup index is created, you can perform a search and item level recovery of your S3 backups or EBS snapshots. You can now use your Organization management account to set a backup indexing policy across your AWS accounts.
To get started, create a new or edit an existing AWS Backup policy from your AWS Organization management account. You can designate your backup policies to automatically create a backup index of your S3 backups and/or EBS Snapshots. Once your backup is indexed, you can search across multiple backups to locate specific files or objects. You can specify your search criteria based on one or more filters such as file name, creation time, and size. Once you identify the specific files or objects you are looking for, you can choose to restore just these items to an Amazon S3 bucket rather than restoring the full backup, allowing for quicker recovery times.
AWS Backup support for backup indexes in backup policies is available in all AWS Commercial and AWS GovCloud (US) Regions where AWS Backup, backup policies, and backup indexes are available. You can get started by using the AWS Organizations API or CLI. For more information, visit our documentation and blog post.
Google Cloud’s Vertex AI platform makes it easy to experiment with and customize over 200 advanced foundation models – like the latest Google Gemini models, and third-party partner models such as Meta’s Llama and Anthropic’s Claude. And now, thanks to a major refresh focused on developer feedback, it’s even more efficient and intuitive.
The redesigned, developer-first experience will be your source for generative AI media models across all modalities. You’ll have access to Google’s powerful generative AI media models such as Veo, Imagen, Chirp, and Lyria in the Vertex AI Media Studio. These aren’t just cosmetic changes; they translate directly into five workflow benefits, from accelerated prototyping to experimentation:
Stay cutting-edge: Get hands-on experience with Google’s latest AI models and features as soon as they’re available.
Easier to start with AI in Cloud: The new design makes it easier for developers of all experience levels to start building with generative AI.
Accelerated prototyping: Quickly test ideas, iterate on prompts, and prototype applications faster than before.
Integrated end-to-end workflow: Move easily from ideation and prompting to grounding, tuning, code generation, and even test deployment – all within a single, cohesive environment, with just a couple of clicks! Less tool-switching, more building!
Efficient experimentation: Vertex AI Studio provides a place to explore different models, parameters, and prompting techniques.
Dive in to see the key improvements.
What’s new and how it works for you
We heard you wanted features to explore, iterate and boost your productivity. That’s why we’re making things easier and more powerful in three ways: faster prompting, easier ways to build, and a fresh interface.
Enhanced prompting capabilities:
Faster prompting: Get prompting faster. Our revamped overview provides quick access to samples and tools, complemented by a unified UI combining Chat and Freeform prompting for a smoother workflow.
Prompt management & enhancement: Simplify your prompt engineering by easily managing the lifecycle (create, refine, compare, save, track history) while simultaneously improving prompt quality and capabilities through techniques like variables, function calling, and adding examples.
Integrated prompt engineering: Access to tuning, evaluation, and batch prediction, all designed to optimize model performance.
Prompt with gen AI models in Vertex AI Studio
Better ways to build
Build with Gemini: Access and experiment with the latest Gemini models such as Gemini 2.5 to test:
Text generation
Image creation
Audio generation
Multimodal capabilities
Live API, all directly within the Studio.
Build trust with grounded AI: Easily connect models to real-world, up-to-date information or your specific private data. Grounding with Google Search or Google Maps is simpler than ever. Need custom knowledge? Integrate effortlessly with your data via Vertex AI RAG Engine or Vertex AI Search. This dramatically improves the reliability and factual accuracy of model outputs, letting you build applications your users can trust.
Code generation & app deployment: Get sample code (Python, Android, Swift, Web, Flutter, cURL), including direct integration to open Python in Colab Enterprise. You can also deploy the prompt as a test web application for quick proof-of-concept validation.
Fresher interface
Dark mode is here: Recognizing that many developers prefer darker interfaces for extended sessions, you can now experience dark mode across the entire Vertex AI platform for improved visual comfort and focus. Activate it easily in your Cloud profile user preferences.
Get started with Vertex AI today
We’re committed to continually refining Vertex AI Studio based on your feedback, which you can share right in the console, ensuring you have the tools you need for building the next generation of AI applications.
The cyber defense and threat landscape demands continuous adaptation, as threat actors continue to refine their tactics to breach defenses. While some adversaries are using increasingly sophisticated approaches with custom malware, zero-day exploits, and advanced evasion techniques, it’s crucial to remember that not all successful attacks are complex or sophisticated. Many successful attacks exploit basic vulnerabilities, like stolen credentials via infostealers – now the second-highest initial infection vector – or unprotected data repositories.
In order to arm government agencies with the insights needed to combat this multifaceted threat landscape, we’ve just released the 16th edition of our annual report Mandiant M-Trends 2025. By digging deeper into the key trends, data, insights and analysis from the frontlines of our incident response engagements, we aim to help public sector organizations stay ahead of all types of attacks and arm them with critical insights around the latest cyber threats.
Here are three top findings from our annual M-Trends 2025 report and what they mean for public sector agencies.
Malicious exploits top the list
For the fifth consecutive year, exploits – malicious code targeting specific known vulnerabilities in software and networks – continue to be the most frequent source of attacks, or initial infection vector, accounting for one-third of security intrusions. Among Mandiant incident response investigations, the report details the year’s four most targeted vulnerabilities, affecting vendors like Palo Alto Networks, Ivanti, and Fortinet.
Given public sector agencies handle vast amounts of sensitive citizen data and critical infrastructure, this underscores the necessity for stringent cybersecurity hygiene, rapid patching protocols, and continuous threat intelligence to prevent severe operational disruptions and maintain public trust.
Increasing malware families and threat groups
According to the report, in 2024 Mandiant began tracking 632 net new malware families, bringing the total number of tracked malware families to over 5,500 unique families. Also tabulated were 737 newly tracked “threat groups” – clusters of consistent attacks, adding to a total of over 4,500 currently tracked groups which may indicate organized cybercrime campaigns – including financial theft and state-sponsored espionage – targeting both the public and private sectors.
For public sector agencies, this proliferation of new malware families demands enhanced vigilance, adaptive defense strategies, and intelligence-driven cybersecurity investments to safeguard critical government operations and sensitive citizen data from sophisticated attacks.
New York City Cyber Command, a centralized organization charged with protecting the city systems that deliver critical services New Yorkers rely on, leverages a highly secure, resilient, and scalable cloud infrastructure powered by Google Cloud that helps its cybersecurity experts detect and mitigate an estimated 90 billion cyberthreats every week. By applying Google’s Zero Trust framework to secure the smartphones and other devices used by police officers, and by leveraging Google Threat Intelligence, its teams get the right information to the right people at the right time to detect and respond to threats faster.
Ransomware on the rise
This year’s M-Trends 2025 report dives deeper into the global scope and consequences of ransomware – with ransomware-related events accounting for over one-fifth (21%) of all Mandiant incident response investigations in 2024. The most commonly observed initial infection vector for ransomware-related intrusions, when the vector could be identified, was brute-force attacks, followed by stolen credentials and exploits. This increasing risk facing organizations of all kinds – including public sector agencies – necessitates the investment in resilient cybersecurity infrastructure, comprehensive employee training, and the adoption of defense tools.
Covered California leverages Assured Workloads and Google Security Operations (SecOps) to proactively scan all log information, signatures and threats in the landscape to eliminate security blind spots and proactively safeguard against attacks. In this framework, all solution network traffic is private and encrypted at all times. Together, these solutions help Covered California achieve its goals to reduce costs for residents and increase the number of Californians with access to healthcare, while also improving the employee and customer journey.
Arming public sector agencies in readiness and response
With this latest M-trends 2025 report, we aim to equip security professionals across public sector agencies with frontline insights into the latest evolving cyberattacks as well as practical and actionable learnings for better organizational security. Read the full M-Trends 2025 report here, and subscribe to our Google Public Sector Newsletter to stay informed and stay ahead with the latest updates, announcements, events and more.
Since November 2024, Mandiant Threat Defense has been investigating an UNC6032 campaign that weaponizes the interest around AI tools, in particular those tools which can be used to generate videos based on user prompts. UNC6032 utilizes fake “AI video generator” websites to distribute malware leading to the deployment of payloads such as Python-based infostealers and several backdoors. Victims are typically directed to these fake websites via malicious social media ads that masquerade as legitimate AI video generator tools like Luma AI, Canva Dream Lab, and Kling AI, among others. Mandiant Threat Defense has identified thousands of UNC6032-linked ads that have collectively reached millions of users across various social media platforms like Facebook and LinkedIn. We suspect similar campaigns are active on other platforms as well, as cybercriminals consistently evolve tactics to evade detection and target multiple platforms to increase their chances of success.
Mandiant Threat Defense has observed UNC6032 compromises culminating in the exfiltration of login credentials, cookies, credit card data, and Facebook information through the Telegram API. This campaign has been active since at least mid-2024 and has impacted victims across different geographies and industries. Google Threat Intelligence Group (GTIG) assesses UNC6032 to have a Vietnam nexus.
Mandiant Threat Defense acknowledges Meta’s collaborative and proactive threat hunting efforts in removing the identified malicious ads, domains, and accounts. Notably, a significant portion of Meta’s detection and removal began in 2024, prior to Mandiant alerting them of additional malicious activity we identified.
Threat actors haven’t wasted a moment capitalizing on the global fascination with Artificial Intelligence. As AI’s popularity surged over the past couple of years, cybercriminals quickly moved to exploit the widespread excitement. Their actions have fueled a massive and rapidly expanding campaign centered on fraudulent websites masquerading as cutting-edge AI tools. These websites have been promoted by a large network of misleading social media ads, similar to the ones shown in Figure 1 and Figure 2.
Figure 1: Malicious Facebook ads
Figure 2: Malicious LinkedIn ads
As part of Meta’s implementation of the Digital Services Act, the Ad Library displays additional information (ad campaign dates, targeting parameters and ad reach) on all ads that target people from the European Union. LinkedIn has also implemented a similar transparency tool.
Our research through both Ad Library tools identified over 30 different websites, mentioned across thousands of ads, active since mid-2024, all displaying similar ad content. The majority of the ads we found ran on Facebook, with only a handful also advertised on LinkedIn. The ads were published using both attacker-created Facebook pages and compromised Facebook accounts. Mandiant Threat Defense performed further analysis of a sample of over 120 malicious ads and, from the EU transparency section of the ads, found their total reach for EU countries was over 2.3 million users. Table 1 displays the top 5 Facebook ads by reach. It should be noted that reach does not equate to the number of victims. According to Meta, the reach of an ad is an estimated number of how many Account Center accounts saw the ad at least once.
| Ad Library ID | Ad Start Date | Ad End Date | EU Reach |
| --- | --- | --- | --- |
| 1589369811674269 | 14.12.2024 | 18.12.2024 | 300,943 |
| 559230916910380 | 04.12.2024 | 09.12.2024 | 298,323 |
| 926639029419602 | 07.12.2024 | 09.12.2024 | 270,669 |
| 1097376935221216 | 11.12.2024 | 12.12.2024 | 124,103 |
| 578238414853201 | 07.12.2024 | 10.12.2024 | 111,416 |

Table 1: Top 5 Facebook ads by reach
The threat actor constantly rotates the domains mentioned in the Facebook ads, likely to avoid detection and account bans. We noted that once a domain is registered, it will be referenced in ads within a few days if not the same day. Moreover, most of the ads are short lived, with new ones being created on a daily basis.
On LinkedIn, we identified roughly 10 malicious ads, each directing users to hxxps://klingxai[.]com. This domain was registered on September 19, 2024, and the first ad appeared just a day later. These ads have a total impression estimate of 50k-250k. For each ad, the United States was the region with the highest percentage of impressions, although the targeting included other regions such as Europe and Australia.
| Ad Library ID | Ad Start Date | Ad End Date | Total Impressions | % Impressions in the US |
| --- | --- | --- | --- | --- |
| 490401954 | 20.09.2024 | 20.09.2024 | <1k | 22 |
| 508076723 | 27.09.2024 | 28.09.2024 | 10k-50k | 68 |
| 511603353 | 30.09.2024 | 01.10.2024 | 10k-50k | 61 |
| 511613043 | 30.09.2024 | 01.10.2024 | 10k-50k | 40 |
| 511613633 | 30.09.2024 | 01.10.2024 | 10k-50k | 54 |
| 511622353 | 30.09.2024 | 01.10.2024 | 10k-50k | 36 |

Table 2: LinkedIn ads
The websites Mandiant Threat Defense investigated share similar interfaces and offer purported functionality such as text-to-video or image-to-video generation. Once the user provides a prompt to generate a video, regardless of the input, the website serves one of the static payloads hosted on the same (or related) infrastructure.
The downloaded payload is the STARKVEIL malware. It drops three different modular malware families, primarily designed for information theft and capable of downloading plugins to extend their functionality. The presence of multiple, similar payloads suggests a fail-safe mechanism, allowing the attack to persist even if some payloads are detected or blocked by security defenses.
In the next section, we will delve deeper into one particular compromise Mandiant Threat Defense responded to.
Luma AI Investigation
Infection Chain
Figure 3: Infection chain lifecycle
This blog post provides a detailed analysis of our findings on the key components of this campaign:
Lure: The threat actors leverage social networks to push AI-themed ads that direct users to fake AI websites, resulting in malware downloads.
Malware: It contains several malware components, including the STARKVEIL dropper, which deploys the XWORM and FROSTRIFT backdoors and the GRIMPULL downloader.
Execution: The malware makes extensive use of DLL side-loading, in-memory droppers, and process injection to execute its payloads.
Persistence: It uses an AutoRun registry key for its two backdoors (XWORM and FROSTRIFT).
Anti-VM and Anti-analysis: GRIMPULL checks for commonly used artifacts from known sandbox and analysis tools.
Reconnaissance
Host reconnaissance: XWORM and FROSTRIFT survey the host by collecting information, including OS, username, role, hardware identifiers, and installed AV.
Software reconnaissance: FROSTRIFT checks the existence of certain messaging applications and browsers.
Command-and-control (C2)
Tor: GRIMPULL utilizes a Tor Tunnel to fetch additional .NET payloads.
Telegram: XWORM sends victim notifications via Telegram, including information gathered during host reconnaissance.
TCP: The malware connects to its C2 using ports 7789, 25699, and 56001.
Information stealer
Keylogger: XWORM logs keystrokes from the host.
Browser extensions: FROSTRIFT scans for 48 browser extensions related to password managers, authenticators, and digital wallets, potentially for data theft.
Backdoor Commands: XWORM supports multiple commands for further compromise.
The Lure
This particular case began from a Facebook Ad for “Luma Dream AI Machine”, masquerading as a well-known text-to-video AI tool – Luma AI. The ad, as seen in Figure 4, redirected the user to an attacker-created website hosted at hxxps://lumalabsai[.]in/.
Figure 4: The ad the victim clicked on
Once on the fake Luma AI website, the user can click the “Start Free Now” button and choose from various video generation functionalities. Regardless of the selected option, the same prompt is displayed, as shown in the GIF in Figure 5.
This multi-step process, made to resemble any other legitimate text-to-video or image-to-video generation tool website, creates a sense of familiarity to the user and does not give any immediate indication of malicious intent. Once the user hits the generate button, a loading bar appears, mimicking an AI model hard at work. After a few seconds, when the new video is supposedly ready, a Download button is displayed. This leads to the download of a ZIP archive file on the victim host.
Figure 5: Fake AI video generation website
Unsurprisingly, the ready-to-download archive is one of many payloads already hosted on the same server, with no connection to the user input. In this case, several archives were hosted at the path hxxps://lumalabsai[.]in/complete/. Mandiant determined that the website will serve the archive file with the most recent “Last Modified” value, indicating continuous updates by the threat actor. Mandiant compared some of these payloads and found them to be functionally similar, with different obfuscation techniques applied, thus resulting in different sizes.
Figure 6: Payloads hosted at hxxps://lumalabsai[.]in/complete
Execution
The previously downloaded ZIP archive contains an executable with a double extension (.mp4 and .exe) in its name, separated by thirteen Braille Pattern Blank (Unicode: U+2800, UTF-8: E2 A0 80) characters. This is a special blank character from the Braille Patterns block in Unicode.
Figure 7: Braille Pattern Blank characters in the file name
The resulting file name, Lumalabs_1926326251082123689-626.mp4⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀.exe, aims to make the binary less suspicious by pushing the .exe extension out of the user’s view. The number of Braille Pattern Blank characters varies across the samples served, ranging from 13 to more than 30. To further hide the true purpose of this binary, the malicious file uses the default .mp4 Windows icon.
Figure 8 shows how the file looks on Windows 11, compared to a legitimate .mp4 file.
Figure 8: Malicious binary vs legitimate .mp4 file
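A short Python sketch shows why the trick works: U+2800 renders like a space but is not ordinary whitespace, so filename displays and trimming routines leave it intact. The filename below mirrors the sample served in this campaign; the checks are illustrative.

```python
# The double-extension trick: pad with Braille Pattern Blank (U+2800),
# which looks like a space but is not classified as whitespace.
BRAILLE_BLANK = "\u2800"
name = "Lumalabs_1926326251082123689-626.mp4" + BRAILLE_BLANK * 13 + ".exe"

print(name.endswith(".exe"))    # True: the real extension
print(".mp4" in name)           # True: the decoy extension
print(BRAILLE_BLANK.isspace())  # False: trimming won't remove the padding
```

Because `.isspace()` is False for U+2800, `str.strip()` and similar sanitizers leave the padding in place, keeping the `.exe` pushed out of view.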
STARKVEIL
The binary Lumalabs_1926326251082123689-626.mp4⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀.exe, tracked by Mandiant as STARKVEIL, is a dropper written in Rust. Once executed, it extracts an embedded archive containing benign executables and its malware components. These are later utilized to inject malicious code into several legitimate processes.
Executing the malware displays an error window, as seen in Figure 9, to trick the user into trying to execute it again and into believing that the file is corrupted.
Figure 9: Error window displayed when executing STARKVEIL
For a successful compromise, the executable needs to run twice; the initial execution results in the extraction of all the embedded files under the C:\winsystem directory.
Figure 10: Files in the winsystem directory
During the second execution, the main executable spawns the Python launcher, py.exe, with an obfuscated Python command as an argument. The Python command decodes embedded Python code, which Mandiant tracks as the COILHATCH dropper. COILHATCH performs the following actions (note that the script has been deobfuscated and renamed for improved readability):
The command takes a Base85-encoded string, decodes it, decompresses the result using zlib, deserializes the resulting data using the marshal module, and then executes the final deserialized data as Python code.
Figure 11: Python command
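The decode-and-execute chain can be reproduced with a few standard-library calls. The payload below is a harmless stand-in built the same way the dropper packs its second stage; it is illustrative, not the actual dropper string.

```python
import base64
import marshal
import zlib

# Pack a benign stand-in payload the way the dropper does:
# compile -> marshal.dumps -> zlib.compress -> b85encode.
stage2 = compile("result = 6 * 7", "<stage2>", "exec")
blob = base64.b85encode(zlib.compress(marshal.dumps(stage2)))

# The dropper's one-liner runs the chain in reverse and executes the result.
code = marshal.loads(zlib.decompress(base64.b85decode(blob)))
namespace = {}
exec(code, namespace)
print(namespace["result"])  # 42
```

Note that marshal output is tied to the Python version that produced it, which is one reason the campaign ships its own Python runtime alongside the payload.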
The decompiled first-stage Python code combines RSA, AES, RC4, and XOR techniques to decrypt the second stage Python bytecode.
Figure 12: First-stage Python
The decrypted second-stage Python script executes C:\winsystem\heif\heif.exe, a legitimate, digitally signed executable used to side-load a malicious DLL. This serves as the launcher that executes the other malware components.
As mentioned, the STARKVEIL malware drops its components during its first execution and executes a launcher on its second execution. The complete analysis of all the malware components and their roles is provided in the next sections.
Each of these DLLs operates as an in-memory dropper and spawns a new victim process to perform code injection through process replacement.
Launcher
The execution of C:\winsystem\heif\heif.exe results in the side-loading of the malicious heif.dll, located in the same directory. This DLL is an in-memory dropper that spawns a legitimate Windows process (which may vary) and performs code injection through process replacement.
The injected code is a .NET executable that acts as a launcher and performs the following:
Moves multiple folders from C:\winsystem to %APPDATA%. The destination folders are:
%APPDATA%\python
%APPDATA%\pythonw
%APPDATA%\ffplay
%APPDATA%\Launcher
Launches three legitimate processes to side-load associated malicious DLLs. The malicious DLLs for each process are:
python.exe: %APPDATA%\python\avcodec-61.dll
pythonw.exe: %APPDATA%\pythonw\heif.dll
ffplay.exe: %APPDATA%\ffplay\libde265.dll
Establishes persistence via AutoRun registry key.
value: Dropbox
key: SOFTWARE\Microsoft\Windows\CurrentVersion\Run
root: HKCU
value data: "cmd.exe /c "cd /d "<exePath>" && "Launcher.exe""
Figure 14: Main function of launcher
The AutoRun key executes %APPDATA%\Launcher\Launcher.exe, which side-loads the DLL file libde265.dll. This DLL spawns and injects its payload into AddInProcess32.exe via PE hollowing. The injected code’s main purpose is to execute the legitimate binaries C:\winsystem\heif2rgb\heif2rgb.exe and C:\winsystem\heif-info\heif-info.exe, which, in turn, side-load the backdoors XWORM and FROSTRIFT, respectively.
GRIMPULL
Of the three executables, the launcher first executes %APPDATA%\python\python.exe, which side-loads the DLL avcodec-61.dll and injects the malware GRIMPULL into a legitimate Windows process.
GRIMPULL is a .NET-based downloader that incorporates anti-VM capabilities and utilizes Tor for C2 server connections.
Anti-VM and Anti-Analysis
GRIMPULL begins by checking for the presence of the mutex value aff391c406ebc4c3 and terminates itself if it is found. Otherwise, the malware proceeds to perform further anti-VM checks, exiting if any of them succeeds.
| Check | Details |
| --- | --- |
| Module detection | Checks for sandbox/analysis tool DLLs: SbieDll.dll (Sandboxie), cuckoomon.dll (Cuckoo Sandbox) |
| BIOS information | Queries Win32_BIOS via WMI and checks the version and serial number for: VMware, VIRTUAL, A M I (AMI BIOS), Xen |
| Parent process | Checks if the parent process is cmd (command line) |
| VM file detection | Checks for the existence of vmGuestLib.dll in the System folder |
| System manufacturer | Queries Win32_ComputerSystem via WMI and checks the manufacturer and model for: Microsoft (Hyper-V), VMWare, Virtual |
| Display and system configuration | Checks for screen resolutions 1440×900, 1024×768, or 1280×1024; checks if the OS is 32-bit |
| Username | Checks for common analysis environment usernames: john, anna, or any username containing xxxxxxxx |

Table 4: Anti-VM and Anti-analysis checks
Download Function
GRIMPULL verifies the presence of a Tor process. If one is not detected, it proceeds to download, decompress, and execute Tor from the following URL:
GRIMPULL then attempts to connect to the following C2 server via the Tor tunnel over TCP.
strokes[.]zapto[.]org:7789
The malware maintains this connection and periodically checks for .NET payloads. Fetched payloads are decrypted using TripleDES in ECB mode with the MD5 hash of the campaign ID aff391c406ebc4c3 as the decryption key, decompressed with GZip (using a 4-byte length prefix), reversed, and then loaded into memory as .NET assemblies.
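Most of this unpacking chain can be sketched with the standard library (the TripleDES step itself needs a third-party cipher library and is omitted). The MD5-of-campaign-ID key derivation and the GZip-then-reverse order come from the analysis above; the little-endian, decompressed-length semantics of the 4-byte prefix are assumptions for illustration, and the payload bytes are stand-ins.

```python
import gzip
import hashlib

# Key derivation: MD5 of the campaign ID yields the 16-byte TripleDES key.
campaign_id = b"aff391c406ebc4c3"
key = hashlib.md5(campaign_id).digest()

# After 3DES decryption, the payload is a 4-byte length prefix followed by
# GZip data, and the decompressed bytes are reversed before being loaded
# as a .NET assembly. Stand-in bytes; prefix assumed little-endian.
assembly = b"MZ...fake-assembly"
payload = len(assembly).to_bytes(4, "little") + gzip.compress(assembly[::-1])

declared = int.from_bytes(payload[:4], "little")
recovered = gzip.decompress(payload[4:])[::-1]
print(declared == len(recovered), recovered == assembly)  # True True
```

The byte reversal is a cheap obfuscation step: it breaks the `MZ` magic at the start of the compressed stream, which can defeat naive signature scans on the wire.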
Malware Configuration
The configuration elements are encoded as base64 strings, as shown in Figure 16.
Figure 16: Encoded malware configuration
Table 5 shows the extracted malware configuration.
| Field | Value |
| --- | --- |
| C2 domain/server | strokes[.]zapto[.]org |
| Port number | 7789 |
| Unique identifier/campaign ID | aff391c406ebc4c3 |
| Configuration profile name | Default |

Table 5: GRIMPULL configuration
XWORM
Second, the launcher executes the file %APPDATA%\pythonw\pythonw.exe, which side-loads the DLL heif.dll and injects XWORM into a legitimate Windows process.
XWORM is a .NET-based backdoor that communicates using a custom binary protocol over TCP. Its core functionality involves expanding its capabilities through a plugin management system. Downloaded plugins are written to disk and executed. Supported capabilities include keylogging, command execution, screen capture, and spreading to USB drives.
XWORM Configuration
The malware begins by decrypting its configuration using the AES algorithm.
Figure 17: Decryption of configuration
Table 6 shows the extracted malware configuration.
| Field | Value |
| --- | --- |
| Host | artisanaqua[.]ddnsking[.]com |
| Port number | 25699 |
| KEY | <123456789> |
| SPL | <Xwormmm> |
| Version | XWorm V5.2 |
| USBNM | USB.exe |
| Telegram Token | 8060948661:AAFwePyBCBu9X-gOemLYLlv1owtgo24fcO0 |
| Telegram ChatID | -1002475751919 |
| Mutex | ZMChdfiKw2dqF51X |

Table 6: XWORM configuration
Host Reconnaissance
The malware then performs a system survey to gather the following information:
Bot ID
Username
OS Name
If it’s running on USB
CPU Name
GPU Name
Ram Capacity
AV Products list
Sample of collected information:
☠ [KW-2201]
New Clinet : <client_id_from_machine_info_hash>
UserName : <victim_username>
OSFullName : <victim_OS_name>
USB : <is_sample_name_USB.exe>
CPU : <cpu_description>
GPU : <gpu_description>
RAM : <ram_size_in_GBs>
Groub : <installed_av_solutions>
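A rough standard-library approximation of this survey is shown below; the field names mirror the beacon format above, while the real bot also pulls CPU, GPU, RAM, and installed-AV details via WMI, which has no portable stdlib equivalent.

```python
import getpass
import platform

# Minimal stand-in for XWORM's host survey.
survey = {
    "UserName": getpass.getuser(),
    "OSFullName": f"{platform.system()} {platform.release()}",
    "Machine": platform.machine(),
}
for field, value in survey.items():
    print(f"{field} : {value}")
```

Every value here is available to any unprivileged process, which is why this kind of survey reliably succeeds on first beacon.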
Then the sample waits for any of the following supported commands:
pong: Echo back to the server
StartDDos: Spam HTTP requests over TCP to the target
rec: Restart the bot
StopDDos: Kill DDoS threads
CLOSE: Shut down the bot
StartReport: List running processes continuously
uninstall: Self-delete
StopReport: Kill process-monitoring threads
update: Uninstall and execute the received new version
Xchat: Send a C2 message
DW: Execute a file on disk via PowerShell
Hosts: Get hosts file contents
FM: Execute a .NET file in memory
Shosts: Write to a file, likely to overwrite hosts file contents
LN: Download a file from a supplied URL and execute it on disk
DDos: Unimplemented
Urlopen: Perform a network request via the browser
ngrok: Unimplemented
Urlhide: Perform a network request in process
plugin: Load a bot plugin
PCShutdown: Shut down the PC now
savePlugin: Save a plugin to the registry (HKCU\Software\<victim_id>\<plugin_name> = <plugin_bytes>) and load it
PCRestart: Restart the PC now
RemovePlugins: Delete all plugins in the registry
PCLogoff: Log off
OfflineGet: Read the keylog
RunShell: Execute a command via the CMD shell
$Cap: Get a screen capture
Table 7: Supported commands
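A command loop of this kind amounts to a lookup table mapping command strings to handlers. A minimal sketch follows, with hypothetical stand-in handlers rather than the bot's real logic:

```python
from typing import Callable

# Hypothetical stand-in handlers; the real bot performs the actions
# described in Table 7 (shutdown, process listing, plugin loading, ...).
def handle_pong() -> str:
    return "pong"  # echo back to the server

def handle_close() -> str:
    return "shutting down"

HANDLERS: dict[str, Callable[[], str]] = {
    "pong": handle_pong,
    "CLOSE": handle_close,
}

def dispatch(command: str) -> str:
    """Look up a received C2 command and run its handler."""
    handler = HANDLERS.get(command)
    return handler() if handler else "unsupported"
```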
FROSTRIFT
Lastly, the launcher executes the file %APPDATA%\ffplay\ffplay.exe to side-load the DLL %APPDATA%\ffplay\libde265.dll and inject FROSTRIFT into a legitimate Windows process.
FROSTRIFT is a .NET backdoor that collects system information, installed applications, and crypto wallets. Instead of receiving C2 commands, it receives .NET modules that are stored in the registry to be loaded in-memory. It communicates with the C2 server using GZIP-compressed protobuf messages over TCP/SSL.
Malware Configuration
The malware starts by decoding its configuration, which is a Base64-encoded and GZIP-compressed protobuf message embedded within the strings table.
Figure 18: FROSTRIFT configuration
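The decoding chain described above (Base64 text, then a GZIP stream, then a raw protobuf message) can be reproduced with the Python standard library; protobuf parsing itself is omitted, since it requires the message schema:

```python
import base64
import gzip

def decode_config(blob: str) -> bytes:
    """Reverse the observed FROSTRIFT encoding: Base64-decode the
    embedded string, then decompress the GZIP stream. The result is
    the raw protobuf message bytes."""
    return gzip.decompress(base64.b64decode(blob))
```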
Table 8 shows the extracted malware configuration.
Protobuf Tag: 38
C2 Domain: strokes.zapto[.]org
C2 Port: 56001
SSL Certificate: <Base64-encoded SSL certificate>
Unknown: Default
Installation folder: APPDATA
Mutex: 7d9196467986
Table 8: FROSTRIFT configuration
Persistence
To achieve persistence, FROSTRIFT copies itself to %APPDATA% and adds a new registry value under HKCU\SOFTWARE\Microsoft\Windows\CurrentVersion\Run with the new file path as data, ensuring it runs at each system startup.
Host Reconnaissance
The following information is initially collected and submitted by the malware to the C2:
Host information: Installed anti-virus, web camera, hostname, username and role, OS name, local time
Victim ID: Hex digest of the MD5 hash over the following, combined: sample process ID, disk drive serial number, physical memory serial number, victim username
Malware Version: 4.1.8
Software Applications: com.liberty.jaxx, Foxmail, Telegram, browsers (see Table 10)
Standalone Crypto Wallets: Atomic, Bitcoin-Qt, Dash-Qt, Electrum, Ethereum, Exodus, Litecoin-Qt, Zcash, Ledger Live
Browser Extensions: Password managers, authenticators, and digital wallets (see Table 11)
Others: 5th entry from the config ("Default" in this sample), malware full file path
Table 9: Collected information
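The victim ID construction in Table 9 can be sketched as follows. The concatenation order and the absence of a separator are assumptions for illustration; the report specifies only which values are combined before hashing.

```python
import hashlib

def victim_id(pid: int, disk_serial: str, mem_serial: str, username: str) -> str:
    """Hex digest of the MD5 hash over the combined host attributes
    from Table 9. The exact concatenation (order, separators) is an
    assumption, not taken from the sample."""
    combined = f"{pid}{disk_serial}{mem_serial}{username}".encode()
    return hashlib.md5(combined).hexdigest()
```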
FROSTRIFT checks for the existence of the following browsers:
FROSTRIFT also checks for the existence of 48 browser extensions related to Password managers, Authenticators, and Digital wallets. The full list is provided in Table 11.
ibnejdfjmmkpcnlpebklmnkoeoihofec: TronLink
nkbihfbeogaeaoehlefnkodbefgpgknn: MetaMask
fhbohimaelbohpjbbldcngcnapndodjp: Binance Chain Wallet
ffnbelfdoeiohenkjibnmadjiehjhajb: Yoroi
cjelfplplebdjjenllpjcblmjkfcffne: Jaxx Liberty
fihkakfobkmkjojpchpfgcmhfjnmnfpi: BitApp Wallet
kncchdigobghenbbaddojjnnaogfppfj: iWallet
aiifbnbfobpmeekipheeijimdpnlpgpp: Terra Station
ijmpgkjfkbfhoebgogflfebnmejmfbml: BitClip
blnieiiffboillknjnepogjhkgnoapac: EQUAL Wallet
amkmjjmmflddogmhpjloimipbofnfjih: Wombat
jbdaocneiiinmjbjlgalhcelgbejmnid: Nifty Wallet
afbcbjpbpfadlkmhmclhkeeodmamcflc: Math Wallet
hpglfhgfnhbgpjdenjgmdgoeiappafln: Guarda
aeachknmefphepccionboohckonoeemg: Coin98 Wallet
imloifkgjagghnncjkhggdhalmcnfklk: Trezor Password Manager
oeljdldpnmdbchonielidgobddffflal: EOS Authenticator
gaedmjdfmmahhbjefcbgaolhhanlaolb: Authy
ilgcnhelpchnceeipipijaljkblbcobl: GAuth Authenticator
bhghoamapcdpbohphigoooaddinpkbai: Authenticator
mnfifefkajgofkcjkemidiaecocnkjeh: TezBox
dkdedlpgdmmkkfjabffeganieamfklkm: Cyano Wallet
aholpfdialjgjfhomihkjbmgjidlcdno: Exodus Web3
jiidiaalihmmhddjgbnbgdfflelocpak: BitKeep
hnfanknocfeofbddgcijnmhnfnkdnaad: Coinbase Wallet
egjidjbpglichdcondbcbdnbeeppgdph: Trust Wallet
hmeobnfnfcmdkdcmlblgagmfpfboieaf: XDEFI Wallet
bfnaelmomeimhlpmgjnjophhpkkoljpa: Phantom
fcckkdbjnoikooededlapcalpionmalo: MOBOX WALLET
bocpokimicclpaiekenaeelehdjllofo: XDCPay
flpiciilemghbmfalicajoolhkkenfel: ICONex
hfljlochmlccoobkbcgpmkpjagogcgpk: Solana Wallet
cmndjbecilbocjfkibfbifhngkdmjgog: Swash
cjmkndjhnagcfbpiemnkdpomccnjblmj: Finnie
knogkgcdfhhbddcghachkejeap: Keplr
kpfopkelmapcoipemfendmdcghnegimn: Liquality Wallet
hgmoaheomcjnaheggkfafnjilfcefbmo: Rabet
fnjhmkhhmkbjkkabndcnnogagogbneec: Ronin Wallet
klnaejjgbibmhlephnhpmaofohgkpgkd: ZilPay
ejbalbakoplchlghecdalmeeeajnimhm: MetaMask
ghocjofkdpicneaokfekohclmkfmepbp: Exodus Web3
heaomjafhiehddpnmncmhhpjaloainkn: Trust Wallet
hkkpjehhcnhgefhbdcgfkeegglpjchdc: Braavos Smart Wallet
akoiaibnepcedcplijmiamnaigbepmcb: Yoroi
djclckkglechooblngghdinmeemkbgci: MetaMask
acdamagkdfmpkclpoglgnbddngblgibo: Guarda Wallet
okejhknhopdbemmfefjglkdfdhpfmflg: BitKeep
mijjdbgpgbflkaooedaemnlciddmamai: Waves Keeper
Table 11: List of browser extensions
C2 Communication
The malware expects the C2 to respond by sending GZIP-compressed Protobuf messages with the following fields:
registry_val: A registry value under HKCU\Software\<victim_id> used to store loader_bytes.
loader_bytes: Assembly module that loads loaded_bytes (stored in the registry in reverse order).
loaded_bytes: GZIP-compressed assembly module to be loaded in memory.
The sample receives loader_bytes only in the first message and stores it under the registry value HKCU\Software\<victim_id>\registry_val. In subsequent messages, it receives only registry_val, which it uses to fetch loader_bytes from the registry.
The sample sends empty GZIP-compressed Protobuf messages as a keep-alive mechanism until the C2 sends another assembly module to be loaded.
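This registry round-trip (loader bytes stored in reverse order, modules arriving GZIP-compressed) can be modeled with a dictionary standing in for HKCU\Software\<victim_id>:

```python
import gzip

# In-memory stand-in for HKCU\Software\<victim_id>; the real sample
# writes to the Windows registry instead.
registry: dict[str, bytes] = {}

def store_loader(registry_val: str, loader_bytes: bytes) -> None:
    """First message: persist the loader, reversed, under registry_val."""
    registry[registry_val] = loader_bytes[::-1]

def fetch_loader(registry_val: str) -> bytes:
    """Subsequent messages: recover the loader by reversing again."""
    return registry[registry_val][::-1]

def unpack_module(loaded_bytes: bytes) -> bytes:
    """loaded_bytes arrives GZIP-compressed; decompress before loading."""
    return gzip.decompress(loaded_bytes)
```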
The malware has the ability to download and execute extra payloads from the following hardcoded URLs (this feature is not enabled in this sample):
The files are WebDrivers for browsers that can be used for testing, automation, and interacting with the browser. They can also be used by attackers for malicious purposes, such as deploying additional payloads.
Conclusion
As AI has gained tremendous momentum recently, our research highlights some of the ways in which threat actors have taken advantage of it. Although our investigation was limited in scope, we discovered that well-crafted fake “AI websites” pose a significant threat to both organizations and individual users. These AI tools no longer target just graphic designers; anyone can be lured in by a seemingly harmless ad. The temptation to try the latest AI tool can lead to anyone becoming a victim. We advise users to exercise caution when engaging with AI tools and to verify the legitimacy of the website’s domain.
Acknowledgements
Special thanks to Stephen Eckels, Muhammad Umair, and Mustafa Nasser for their assistance in analyzing the malware samples; to Richmond Liclican for his inputs and attribution; and to Ervin Ocampo, Swapnil Patil, Muhammad Umer Khan, and Muhammad Hasib Latif for providing the detection opportunities.
Detection Opportunities
The following indicators of compromise (IOCs) and YARA rules are also available as a collection and rule pack in Google Threat Intelligence (GTI).
rule G_Backdoor_FROSTRIFT_1 {
    meta:
        author = "Mandiant"
    strings:
        $guid = "$23e83ead-ecb2-418f-9450-813fb7da66b8"
        $r1 = "IdentifiableDecryptor.DecryptorStack"
        $r2 = "$ProtoBuf.Explorers.ExplorerDecryptor"
        $s1 = "\\User Data\\" wide
        $s2 = "SELECT * FROM AntiVirusProduct" wide
        $s3 = "Telegram.exe" wide
        $s4 = "SELECT * FROM Win32_PnPEntity WHERE (PNPClass = 'Image' OR PNPClass = 'Camera')" wide
        $s5 = "Litecoin-Qt" wide
        $s6 = "Bitcoin-Qt" wide
    condition:
        uint16(0) == 0x5a4d and (all of ($s*) or $guid or all of ($r*))
}
YARA-L Rules
Mandiant has made the relevant rules available in the Google SecOps Mandiant Intel Emerging Threats curated detections rule set. The activity discussed in the blog post is detected under the rule names:
At Google Cloud, we’re committed to providing the most open and flexible AI ecosystem for you to build solutions best suited to your needs. Today, we’re excited to announce our expanded AI offerings with Mistral AI on Google Cloud:
Le Chat Enterprise on Google Cloud Marketplace: An AI assistant that offers enterprise search, agent builders, custom data and tool connectors, custom models, document libraries, and more in a unified platform.
Available today on Google Cloud Marketplace, Mistral AI’s Le Chat Enterprise is a generative AI work assistant designed to connect tools and data in a unified platform for enhanced productivity.
Use cases include:
Building agents: With Le Chat Enterprise, you can customize and deploy a variety of agents that understand and synchronize with your unique context, including no-code agents.
Accelerating research and analysis: With Le Chat Enterprise, you can quickly summarize lengthy reports, extract key data from documents, and perform rapid web searches to gather information efficiently.
Generating actionable insights: With Le Chat Enterprise, industries — like finance — can convert complex data into actionable insights, generate text-to-SQL queries for financial analysis, and automate financial report generation.
Accelerating software development: With Le Chat Enterprise, you can debug and optimize existing code, generate and review code, or create technical documentation.
Enhancing content creation: With Le Chat Enterprise, you can help marketers generate and refine marketing copy across channels, analyze campaign performance data, and collaborate on visual content creation through Canvas.
By deploying Le Chat Enterprise through Google Cloud Marketplace, organizations can leverage the scalability and security of Google Cloud’s infrastructure, while also benefiting from a simplified procurement process and integrations with existing Google Cloud services such as BigQuery and Cloud SQL.
Mistral OCR 25.05 excels in document understanding and can comprehend elements of content-rich papers—like media, text, charts, tables, graphs, and equations—with powerful accuracy and cognition. More example use cases include:
Digitizing scientific research: Research institutions can use Mistral OCR 25.05 to accelerate scientific workflows by converting scientific papers and journals into AI-ready formats, making them accessible to downstream intelligence engines.
Preserving historical and cultural heritage: Digitizing historical documents and artifacts to assist with preservation and making them more accessible to a broader audience.
Streamlining customer service: Customer service departments can reduce response times and improve customer satisfaction by using Mistral OCR 25.05 to transform documentation and manuals into indexed knowledge.
Making literature across design, education, legal, etc. AI ready: Mistral OCR 25.05 can discover insights and accelerate productivity across a large volume of documents by helping companies convert technical literature, engineering drawings, lecture notes, presentations, regulatory filings and more into indexed, answer-ready formats.
When building with Mistral OCR 25.05 as a Model-as-a-Service (MaaS) on Vertex AI, you get a comprehensive AI platform to scale with fully managed infrastructure and build confidently with enterprise-grade security and compliance. Mistral OCR 25.05 joins a curated selection of over 200 foundation models in Vertex AI Model Garden, empowering you to choose the ideal solution for your specific needs.
To start building with Mistral OCR 25.05 on Vertex AI, visit the Mistral OCR 25.05 model card in Vertex AI Model Garden, select “Enable”, and follow the subsequent instructions.
Amazon Elastic Container Service (Amazon ECS) has extended the length of the container exit reason message from 255 to 1024 characters. The enhancement helps you debug more effectively by providing more complete error messages when containers fail.
Amazon ECS customers use container exit reason messages to troubleshoot their running or stopped tasks. Error messages can be accessed through the “reason” field in the DescribeTasks API response, which is a short, human-readable string that provides details about a running or stopped container. Previously, error messages beyond 255 characters were truncated. With the increased limit to 1024 characters, customers can now surface and view richer error details, making troubleshooting faster.
Customers can access longer container exit reason messages through the AWS Management Console and the DescribeTasks API. This improvement is available in all AWS regions for tasks deployed on Fargate Platform 1.4.0 or container instances with ECS Agent v1.92.0 or later. To learn more, refer to the documentation and release notes.
Starting today, Route 53 Profiles is available in Asia Pacific (Thailand), Mexico (Central), and Asia Pacific (Malaysia) Regions.
Route 53 Profiles allows you to define a standard DNS configuration (Profile), that may include Route 53 private hosted zone (PHZ) associations, Route 53 Resolver rules, and Route 53 Resolver DNS Firewall rule groups, and apply this configuration to multiple VPCs in your account. Route 53 Profiles can also be used to enforce DNS settings for your VPCs, with configurations for DNSSEC validations, Resolver reverse DNS lookups, and the DNS Firewall failure mode. You can share Profiles with AWS accounts in your organization using AWS Resource Access Manager (RAM). Route 53 Profiles simplifies the association of Route 53 resources and VPC-level settings for DNS across VPCs and AWS accounts in a Region with a single configuration, minimizing the complexity of having to manage each resource association and setting per VPC.
Route 53 Profiles is available in the AWS Regions mentioned here. To get started with this feature, visit the Route 53 documentation. To learn more about pricing, you can visit the Route 53 pricing page.
CloudWatch Database Insights announces support for Amazon Aurora PostgreSQL Limitless databases. Database Insights is a database observability solution that provides a curated experience designed for DevOps engineers, application developers, and database administrators (DBAs) to expedite database troubleshooting and gain a holistic view into their database fleet health.
Database Insights consolidates logs and metrics from your applications, your databases, and the operating systems on which they run into a unified view in the console. Using its pre-built dashboards, recommended alarms, and automated telemetry collection, you can monitor the health of your database fleets and use a guided troubleshooting experience to drill down to individual instances for root-cause analysis. You can now enable Database Insights on Aurora Limitless databases and start monitoring how database load is spread across your Limitless shard groups.
You can get started with Database Insights for Aurora Limitless by enabling it on your Limitless databases using the Aurora service console, AWS APIs, and SDKs.
Database Insights for Aurora Limitless is available in all regions where Aurora Limitless is available and applies a new ACU-based pricing – see pricing page for details. For further information, visit the Database Insights documentation.
Anthropic’s Claude 3.5 Sonnet v1 and Claude 3 Haiku, and Meta’s Llama 3 8B and 70B models are now FedRAMP High and Department of Defense Cloud Computing Security Requirements Guide (DoD CC SRG) Impact Level (IL) 4 and 5 approved within Amazon Bedrock in the AWS GovCloud (US) Regions. Additionally, Amazon Bedrock features including Agents, Guardrails, Knowledge Bases, and Model Evaluation are now approved.
Federal agencies, public sector organizations, and other enterprises with FedRAMP High compliance requirements can now use Amazon Bedrock to access high-performing foundation models (FMs) from Anthropic and Meta.
AWS Deadline Cloud Monitor now supports multiple languages, allowing you to view critical job information using an expanded selection of languages. Supported languages include Chinese (Traditional), Chinese (Simplified), English, French, German, Indonesian, Italian, Japanese, Korean, Portuguese (Brazil), and Turkish. AWS Deadline Cloud is a fully managed service that simplifies render management for teams creating computer-generated graphics and visual effects for films, television, broadcasting, web content, and design.
This new localization feature gives you the ability to manage and monitor information about rendering jobs in your preferred language, reducing complexity and improving workflow efficiency. The Deadline Cloud Monitor automatically matches your system language in both the desktop and web applications, but can also be configured manually.
Multi-language support for AWS Deadline Cloud Monitor is available in all AWS Regions where the service is offered. To learn more about AWS Deadline Cloud Monitor and its new localization feature, see the AWS Deadline Cloud documentation.
Today, we’re expanding the choice of third-party models available in Vertex AI Model Garden with the addition of Anthropic’s newest generation of the Claude model family: Claude Opus 4 and Claude Sonnet 4. Both Claude Opus 4 and Claude Sonnet 4 are hybrid reasoning models, meaning they offer modes for near-instant responses and extended thinking for deeper reasoning.
Claude Opus 4 is Anthropic’s most powerful model to date. Claude Opus 4 excels at coding, with sustained performance on complex, long-running tasks and agent workflows. Use cases include advanced coding work, autonomous AI agents, agentic search and research, tasks that require complex problem solving, and long-running tasks that require precise content management.
Claude Sonnet 4 is Anthropic’s mid-size model that balances performance with cost. It surpasses its predecessor, Claude Sonnet 3.7, across coding and reasoning while responding more precisely to steering. Use cases include coding tasks such as code reviews and bug fixes, AI assistants, efficient research, and large-scale content generation and analysis.
Claude Opus 4 and Claude Sonnet 4 are generally available as a Model-as-a-Service (MaaS) offering on Vertex AI. For more information on the newest Claude models, visit Anthropic’s blog.
Build advanced agents on Vertex AI
Vertex AI is Google Cloud’s comprehensive platform for orchestrating your production AI workflows across three pillars: data, models, and agents—a combination that would otherwise require multiple fragmented solutions. A key component of the model pillar is Vertex AI Model Garden, which offers a curated selection of over 200 foundation models, including Google’s models, third-party models, and open models—empowering you to choose the ideal solution for your specific needs.
You can leverage Vertex AI’s Model-as-a-Service (MaaS) to rapidly deploy and scale Claude-powered intelligent agents and applications, benefiting from integrated agentic tooling, fully managed infrastructure, and enterprise-grade security.
By building on Vertex AI, you can:
Orchestrate sophisticated multi-agent systems: Build agents with an open approach using Google’s Agent Development Kit (ADK) or your preferred framework. Deploy your agents to production with enterprise-grade controls directly in Agent Engine.
Harness the power of Google Cloud integrations: You can connect Claude directly within BigQuery ML to facilitate functions like text generation, summarization, translation, and more.
Optimize performance with provisioned throughput: Reserve dedicated capacity and prioritized processing for critical production workloads with Claude models at a fixed fee. To get started with provisioned throughput, contact your Google Cloud sales representative.
Maximize Claude model utilization: Reduce latency and costs while increasing throughput by employing Vertex AI’s advanced features for Claude models such as batch predictions, prompt caching, token counting, and citations. For detailed information, refer to our documentation.
Scale with fully managed infrastructure: Vertex AI’s fully managed and AI-optimized infrastructure simplifies how you deploy your AI workloads in production. Additionally, Vertex AI’s new global endpoints for Claude (public preview) enhance availability by dynamically serving traffic from the nearest available region.
Build confidently with enterprise-grade security and compliance: Benefit from Vertex AI’s built-in security and compliance measures that satisfy stringent enterprise requirements.
Customers achieving real impact with Claude on Vertex AI
To date, more than 4,000 customers have started using Anthropic’s Claude models on Vertex AI. Here’s a look at how top organizations are driving impactful results with this powerful integration:
Augment Code is running its AI coding assistant, which specializes in helping developers navigate and contribute to production-grade codebases, with Anthropic’s Claude models on Vertex AI.
“What we’re able to get out of Anthropic is truly extraordinary, but all of the work we’ve done to deliver knowledge of customer code, used in conjunction with Anthropic and the other models we host on Google Cloud, is what makes our product so powerful.” – Scott Dietzen, CEO, Augment Code
Palo Alto Networks is accelerating software development and security by deploying Claude on Vertex AI.
“With Claude running on Vertex AI, we saw a 20% to 30% increase in code development velocity. Running Claude on Google Cloud’s Vertex AI not only accelerates development projects, it enables us to hardwire security into code before it ships.” – Gunjan Patel, Director of Engineering, Office of the CPO, Palo Alto Networks
Replit leverages Claude on Vertex AI to power Replit Agent, which empowers people across the world to use natural language prompts to turn their ideas into applications, regardless of coding experience.
“Our AI agent is made more powerful through Anthropic’s Claude models running on Vertex AI. This integration allows us to easily connect with other Google Cloud services, like Cloud Run, to work together behind the scenes to help customers turn their ideas into apps.” – Amjad Masad, Founder and CEO, Replit
Get started
To get started with the new Claude models on Vertex AI, navigate to the Claude Opus 4 or the Claude Sonnet 4 model card in Vertex AI Model Garden, select “Enable”, and follow the subsequent instructions.
EC2 Public DNS names can now resolve to the IPv6 Global Unicast Addresses (AAAA records) associated with your EC2 instances and Elastic Network Interfaces (ENIs). This allows customers to publicly access their IPv6-enabled Amazon EC2 instances over IPv6 using EC2 Public DNS names.
Prior to this, the EC2 Public DNS name resolved to the public IPv4 address (A record) associated with the primary ENI of the instance, so customers adopting IPv6 used the specific IPv6 address instead of a DNS name to access an IPv6-only Amazon EC2 instance, or used a custom domain by creating a hosted zone in Amazon Route 53. IPv6 support for EC2 Public DNS names allows customers to easily access their IPv6-only Amazon EC2 instances, or to formulate a migration plan for accessing a dual-stack instance via IPv6 with a simple DNS cutover.
This feature is available in all AWS commercial and AWS GovCloud (US) Regions, and customers can set IPv6 support for EC2 Public DNS using the same VPC settings that customers use to enable IPv4-only EC2 Public DNS name today. To learn more about using IPv6 support for EC2 Public DNS name, please refer to our documentation.
Amazon Aurora for MySQL and Amazon Aurora for PostgreSQL now offer faster Global Database cross-Region switchover, reducing recovery time for read/write operations to typically under 30 seconds and enhancing availability for applications operating at a global scale.
With Global Database, a single Aurora cluster can span multiple AWS Regions, providing disaster recovery from Region-wide outages and enabling fast local reads for globally distributed applications. Global Database cross-Region switchover is a fully managed process designed for planned events such as regional rotations. This launch optimizes the duration during which a writer in your global cluster is unavailable, improving recovery time and business continuity for your applications following cross-Region switchover operations. See documentation to learn more about Global Database Switchover.
Amazon Aurora combines the performance and availability of high-end commercial databases with the simplicity and cost-effectiveness of open-source databases. To get started with Amazon Aurora, take a look at our getting started page.
AWS HealthImaging announces rich hierarchical search per the DICOMweb QIDO-RS standard as well as an improved data management experience. With this launch, HealthImaging automatically organizes image sets into DICOM Study and Series resources. Incoming DICOM SOP instances are automatically merged to the same DICOM Series.
Rich DICOMweb QIDO-RS search capabilities make it easier to find and retrieve data, enabling customers to focus more on empowering end users and less on infrastructure management. HealthImaging’s automatic organization of data by DICOM Studies and Series makes it easier for healthcare and life sciences customers to manage their data at scale by eliminating the need for post-import workflows, saving time and reducing complexity. This helps customers more efficiently organize data and better resolve any inconsistencies. This launch also delivers significant reductions in the last byte latency of DICOMweb WADO-RS APIs, and faster import of large instances (such as digital pathology whole slide imaging).
AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland).
AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers, life sciences researchers, and their software partners to store, analyze, and share medical images at petabyte scale. To learn more, see the AWS HealthImaging Developer Guide.
Today, AWS HealthImaging announces support for retrieving the metadata for all DICOM instances in a series via a single API action. This new feature extends HealthImaging’s support for the DICOMweb standard, simplifying integrations and improving interoperability with existing applications.
This launch significantly reduces the cost and complexity of retrieving series level metadata, especially when DICOM series contain hundreds or even thousands of instances. With this enhancement, it is easier than ever to retrieve instance metadata with consistent low latency, enabling clinical, AI, and research use cases.
AWS HealthImaging is generally available in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), and Europe (Ireland).
AWS HealthImaging is a HIPAA-eligible service that empowers healthcare providers, life sciences researchers, and their software partners to store, analyze, and share medical images at petabyte scale. To learn more, see the AWS HealthImaging Developer Guide.
Today, AWS Control Tower introduces a new ‘Enabled controls’ page, helping customers track, filter, and manage their enabled controls across their AWS Control Tower organization. This enhancement significantly improves visibility and streamlines the management of your AWS Control Tower controls, saving valuable time and reducing the complexity of managing enabled controls. For organizations managing hundreds or thousands of AWS accounts, this feature provides a centralized view of control coverage, making it easier to maintain consistent governance at scale.
Previously, to assess the enabled controls coverage, you had to navigate to the organizational unit (OU) or account details page in the console to track the controls deployed per target. With this release, the Enabled controls view centralizes all the enabled controls across your AWS Control Tower environment, giving you a single, unified location to track, filter, and manage enabled controls. With this new feature, you can now more easily identify gaps in your control coverage. For instance, you can quickly search and filter for all enabled preventive controls and verify if they’re applied consistently across critical OUs.
You can drill down by organizational units, behavior, severity and implementation to see exactly which controls are enabled, giving you a targeted visibility into your governance posture across your environment. Lastly, you can also get a pre-filtered list of enabled controls by behavior from the AWS Control Tower dashboard’s Controls summary page.
To benefit from the new Enabled controls view page, navigate to the Controls section in your AWS Control Tower console. To learn more, visit the AWS Control Tower homepage or see the AWS Control Tower User Guide. For a full list of AWS Regions where AWS Control Tower is available, see the AWS Region Table.
Amazon Relational Database Service (Amazon RDS) Custom for Oracle now supports R7i and M7i instances. These instances are powered by custom 4th Generation Intel Xeon Scalable processors, available only on AWS. R7i and M7i instances are available in sizes up to 48xlarge, 50% larger than the previous-generation R6i and M6i instances.
M7i and R7i instances are available for Amazon RDS Custom for Oracle under the Bring Your Own License model for Oracle Database Enterprise Edition (EE) and Oracle Database Standard Edition 2 (SE2). You can modify your existing RDS instance or create a new instance with just a few clicks in the Amazon RDS Management Console, or by using the AWS SDK or CLI. Visit the Amazon RDS Custom pricing page for pricing details and Region availability.
Amazon RDS Custom for Oracle is a managed database service for legacy, custom, and packaged applications that require access to the underlying operating system and database environment. To get started with Amazon RDS Custom for Oracle, refer to the User Guide.
The next generation of Anthropic’s Claude models, Claude Opus 4 and Claude Sonnet 4, are now available in Amazon Bedrock, representing significant advancements in AI capabilities. These models excel at coding, enable AI agents to analyze thousands of data sources, execute long-running tasks, write high-quality content, and perform complex actions. Both Opus 4 and Sonnet 4 are hybrid reasoning models offering two modes: near-instant responses and extended thinking for deeper reasoning.
Claude Opus 4: Opus 4 is Anthropic’s most powerful Claude model to date and Anthropic’s benchmarks show it is the best coding model available, excelling at autonomously managing complex, multi-step tasks with accuracy. It can independently break down abstract projects, plan architectures, and maintain high code quality throughout extended tasks. Opus 4 is ideal for powering agentic AI applications that require uncompromising intelligence for orchestrating cross-functional enterprise workflows or handling a major code migration for a large codebase.
Claude Sonnet 4: Sonnet 4 is a midsize model designed for high-volume use cases and can function effectively as a task-specific sub-agent within broader AI systems. It efficiently handles specific tasks like code generation, search, data analysis, and content synthesis, making it well suited for production AI applications requiring a balance of quality, cost-effectiveness, and responsiveness.
You can now use both Claude 4 models in Amazon Bedrock. To get started, visit the Amazon Bedrock console. Integrate it into your applications using the Amazon Bedrock API or SDK. For more information including region availability, see the AWS News Blog, Anthropic’s Claude in Amazon Bedrock product page, and the Amazon Bedrock pricing page.
Amazon Managed Service for Prometheus, a fully managed Prometheus-compatible monitoring service, now provides the capability to identify expensive PromQL queries, and limit their execution. This enables customers to monitor and control the types of queries being issued against their Amazon Managed Service for Prometheus workspaces.
Customers have highlighted the need for tighter governance controls for queries, specifically around high cost queries. You can now monitor queries above a certain Query Samples Processed (QSP) threshold, and log those queries to Amazon CloudWatch. The information in the vended logs allows you to identify expensive queries. The vended logs contain the PromQL query and metadata about where it originated from, such as from Grafana dashboard IDs or alerting rules. In addition, you can now set warning or error thresholds for query execution. To control query cost, you can pre-empt the execution of expensive queries by providing an error threshold in the HTTP headers to the QueryMetrics API. Alternatively, by setting a warning threshold, we return the query results, charge you for the QSP, and return a warning to the end-user that the query is more expensive than the limit set by your workspace administrator.
This feature is now available in all regions where Amazon Managed Service for Prometheus is generally available.
To learn more about Amazon Managed Service for Prometheus collector, visit the user guide or product page.