In an industry generating vast volumes of streaming data every day, ensuring precision, speed, and transparency in royalty tracking is a constant and evolving priority. For music creators, labels, publishers, and rights holders, even small gaps in data clarity can influence how and when income is distributed — making innovation in data processing and anomaly detection essential.
To stay ahead of these challenges, BMG partnered with Google Cloud to develop StreamSight, an AI-driven application that enhances digital royalty forecasting and detection of reporting anomalies. The tool uses machine learning models to analyze historical data and flag patterns that help predict future revenue — and catch irregularities that might otherwise go unnoticed.
The collaboration combines Google Cloud’s scalable technology, such as BigQuery, Vertex AI, and Looker, with BMG’s deep industry expertise. Together, they’ve built an application that demonstrates how cloud-based AI can help modernize royalty processing and further BMG’s and Google’s commitment to fairer and faster payout of artist share of label and publisher royalties.
“At BMG, we’re accelerating our use of AI and other technologies to continually push the boundaries of how we best serve our artists, songwriters, and partners. StreamSight reflects this commitment — setting a new standard for data clarity and confidence in digital reporting and monetization. Our partnership with Google Cloud has played a key role in accelerating our AI and data strategy.” – Sebastian Hentzschel, Chief Operating Officer, BMG
From Data to Insights: How StreamSight Works
At its core, StreamSight utilizes several machine learning models within Google BigQuery ML for its analytical power:
For Revenue Forecasting:
ARIMA_PLUS: This model is a primary tool for forecasting revenue patterns. It excels at capturing underlying sales trends over time and is well-suited for identifying and interpreting long-term sales trajectories rather than reacting to short-term volatility.
BOOSTED_TREE: This model is valuable for the exploratory analysis of past sales behavior. It can effectively capture past patterns, short-term fluctuations and seasonality, helping to understand historical dynamics and how sales responded to recent changes.
For Anomaly Detection & Exploratory Analysis:
K-means and the ML.DETECT_ANOMALIES function: These are highly effective for identifying various anomaly types in datasets, such as sudden spikes, country-based deviations, missing sales periods, or sales reported without corresponding rights.
Together, these models provide a comprehensive approach: ARIMA_PLUS offers robust future trend predictions, while other models contribute to a deeper understanding of past performance and the critical detection of anomalies. This combination supports proactive financial planning and helps safeguard royalty revenues.
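The post doesn't publish StreamSight's SQL, but a minimal BigQuery ML sketch of this pattern (hypothetical project, dataset, and column names) could look like the following, training an ARIMA_PLUS model, forecasting future periods, and flagging anomalous historical points:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Train a time-series model on historical royalty revenue (hypothetical schema).
client.query("""
CREATE OR REPLACE MODEL `my_project.royalties.revenue_arima`
OPTIONS (
  model_type = 'ARIMA_PLUS',
  time_series_timestamp_col = 'report_date',
  time_series_data_col = 'net_revenue',
  time_series_id_col = 'dsp'
) AS
SELECT report_date, dsp, net_revenue
FROM `my_project.royalties.monthly_statements`
""").result()

# Forecast the next 12 reporting periods with 90% prediction intervals.
forecast = client.query("""
SELECT dsp, forecast_timestamp, forecast_value,
       prediction_interval_lower_bound, prediction_interval_upper_bound
FROM ML.FORECAST(MODEL `my_project.royalties.revenue_arima`,
                 STRUCT(12 AS horizon, 0.9 AS confidence_level))
""").result()

# Flag historical points the model considers anomalous (spikes, gaps, etc.).
anomalies = client.query("""
SELECT dsp, report_date, net_revenue, anomaly_probability
FROM ML.DETECT_ANOMALIES(MODEL `my_project.royalties.revenue_arima`,
                         STRUCT(0.95 AS anomaly_prob_threshold))
WHERE is_anomaly
""").result()

for row in anomalies:
    print(row)
```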
Data Flow in BigQuery:
Finding the Gaps: Smarter Anomaly Detection
StreamSight doesn’t just forecast earnings — it also flags when things don’t look right. Whether it’s a missing sales period; unexpected spikes or dips in specific markets; or mismatches between reported revenue and rights ownership, the system can highlight problems that would normally require hours of manual review. And now it’s done at the click of a button.
For example:
Missing sales periods: Gaps in data that could mean missing money.
Sales mismatched with rights: Revenue reported from a region where rights aren’t properly registered.
Global irregularities: Sudden increases in streams or sales that suggest a reporting error or unusual promotional impact.
With StreamSight, these issues are detected at scale, allowing teams to take faster and more consistent action.
The StreamSight Dashboard:
Built on Google Cloud for Scale and Simplicity
The technology behind StreamSight is just as innovative as its mission. Developed on Google Cloud, it uses:
BigQuery ML to run machine learning models directly on large datasets using SQL.
Vertex AI and Python for advanced analysis and model training.
Looker Studio to create dashboards that make results easy to interpret and share across teams.
This combination of tools made it possible to move quickly from concept to implementation, while keeping the system scalable and cost-effective.
A Foundation for the Future
While StreamSight is currently a proof of concept, its early success points to vast potential. Future enhancements could include:
Adding data from concert tours and marketing campaigns to refine predictions.
Including more Digital Service Providers (DSPs) that provide access to digital music, such as Amazon, Apple Music, or Spotify, to allow for better cross-platform comparisons.
Factoring in social media trends or fan engagement as additional inputs.
Segmenting analysis by genre, region, music creator type, or release format.
By using advanced technology for royalty processing, we’re not just solving problems — we’re building a more transparent ecosystem for the future, one that supports our shared commitment to the fairer and faster payout of the artist’s share of label and publisher royalties.
The collaboration between BMG and Google Cloud demonstrates the music industry’s potential to use advanced technology to create a future where data drives smarter decisions and where everyone involved can benefit from a clearer picture of where music earns its value.
The Amazon ECS console now supports ECS Exec, enabling you to open secure, interactive shell access directly from the AWS Management Console to any running container.
ECS customers often need to access running containers to debug applications and examine running processes. ECS Exec provides easy and secure access to running containers without requiring inbound ports or SSH key management. Previously, ECS Exec was only accessible through the AWS API, CLI, or SDKs, requiring customers to switch interfaces when troubleshooting in the console. With this new feature, customers can now connect to running containers directly from the AWS Management Console, streamlining troubleshooting workflows.
To get started, you can turn on ECS Exec directly in the console when creating or updating services and standalone tasks. Additional settings like encryption and logging can also be configured at the cluster level through the console. Once enabled, simply navigate to a task details page, select a container, and click “Connect” to open an interactive session through CloudShell. The console also displays the underlying AWS CLI command, which you can customize or copy to use in your local terminal.
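For teams that prefer the API, a minimal boto3 sketch of the same workflow (cluster, service, task, and container names are placeholders) could look like the following; note that the console and AWS CLI wrap the returned session in an SSM Session Manager connection for you:

```python
import boto3

ecs = boto3.client("ecs")

# Enable ECS Exec on an existing service (placeholder names).
ecs.update_service(
    cluster="demo-cluster",
    service="demo-service",
    enableExecuteCommand=True,
    forceNewDeployment=True,  # tasks must be relaunched to pick up the setting
)

# Once a new task is running, request an interactive shell in a container.
session = ecs.execute_command(
    cluster="demo-cluster",
    task="arn:aws:ecs:us-east-1:123456789012:task/demo-cluster/0123456789abcdef",  # placeholder task ARN
    container="app",
    interactive=True,
    command="/bin/sh",
)

# The console/CLI turn this session description into a live terminal via Session Manager.
print(session["session"]["sessionId"])
```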
ECS Exec console support is now available in all AWS commercial regions. To learn more, visit the ECS developer guide.
Amazon Connect now offers expanded disconnect reasons to help you better understand why outbound calls failed to connect in your contact center. These enhanced reasons are based on standard telecom error codes that provide deeper call insights and enable faster troubleshooting, reducing the need to create support tickets to understand failure reasons. You’ll benefit from improved reporting capabilities with granular disconnection data and real-time visibility through Contact Trace Records, allowing you to monitor call disconnection patterns more effectively.
Amazon Elastic Container Registry (ECR) now supports repository creation templates in the AWS GovCloud (US) Regions. Repository creation templates allow you to configure the settings for the new repositories that Amazon ECR creates on your behalf during pull through cache and replication operations. These settings include encryption, lifecycle policies, access permissions, and tag immutability. Each template uses a prefix to match and apply configurations to new repositories automatically, enabling you to maintain consistent settings across your container registries.
To learn more about ECR repository creation templates, see our documentation.
PostgreSQL 18 includes “skip scan” support for multicolumn B-tree indexes and improves WHERE clause handling for OR and IN conditions. It introduces parallel GIN (Generalized Inverted Index) builds and updates join operations. Observability improvements show buffer usage counts and index lookups during query execution, along with per-connection I/O utilization metrics. Please refer to the RDS PostgreSQL release documentation for more details.
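As a small illustration of the multicolumn-index case that skip scan targets (hypothetical table and connection string; whether the planner actually chooses a skip scan depends on the data and statistics):

```python
import psycopg2

# Placeholder DSN for an RDS for PostgreSQL 18 preview instance.
conn = psycopg2.connect(
    "host=mydb.xxxxxx.us-east-2.rds.amazonaws.com dbname=test user=postgres password=CHANGE_ME"
)
cur = conn.cursor()

cur.execute("CREATE TABLE IF NOT EXISTS events (tenant_id int, event_type int, payload text)")
cur.execute("CREATE INDEX IF NOT EXISTS events_tenant_type_idx ON events (tenant_id, event_type)")
conn.commit()

# A predicate on the *second* index column only; PostgreSQL 18 can consider a
# skip scan over the multicolumn B-tree index for this shape of query.
cur.execute("EXPLAIN SELECT count(*) FROM events WHERE event_type = 42")
for row in cur.fetchall():
    print(row[0])

conn.close()
```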
Amazon RDS Database Preview Environment database instances are retained for a maximum period of 60 days and are automatically deleted after the retention period. Amazon RDS database snapshots that are created in the preview environment can only be used to create or restore database instances within the preview environment. You can use the PostgreSQL dump and load functionality to import or export your databases from the preview environment.
Amazon RDS Database Preview Environment database instances are priced as per the pricing in the US East (Ohio) Region.
AWS Clean Rooms now supports configurable compute size for PySpark, offering customers the flexibility to customize and allocate resources to run PySpark jobs based on their performance, scale, and cost requirements. With this launch, customers can specify the instance type and cluster size at job runtime for each analysis that uses PySpark, the Python API for Apache Spark. For example, customers can use large instance configurations to achieve the performance needed for their complex data sets and analyses, or smaller instances to optimize costs.
AWS Clean Rooms helps companies and their partners easily analyze and collaborate on their collective datasets without revealing or copying one another’s underlying data. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
Today, AWS announces that AWS HealthOmics private workflows are now available in the Asia Pacific (Seoul) Region, expanding access to fully managed bioinformatics workflows for healthcare and life sciences customers in Korea. AWS HealthOmics is a HIPAA-eligible service that helps healthcare and life sciences customers accelerate scientific breakthroughs with fully managed bioinformatics workflows. HealthOmics enables customers to focus on scientific discovery rather than infrastructure management, reducing time to value for research, drug discovery, and agriculture science initiatives.
With private workflows, customers can build and scale genomics data analysis pipelines using familiar domain-specific languages including Nextflow, WDL, and CWL. The service provides built-in features such as call caching to resume runs, dynamic run storage that automatically scales with run storage needs, Git integrations for version-controlled workflow development, and third-party container registry support through Amazon ECR pull-through cache. These capabilities make it easier to migrate existing pipelines and accelerate development of new genomics workflows while maintaining full data provenance and compliance requirements.
Private workflows are now available in all regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Israel (Tel Aviv), Asia Pacific (Singapore), and Asia Pacific (Seoul).
To get started, see the AWS HealthOmics documentation.
Amazon MQ now supports OAuth 2.0 authentication and authorization for RabbitMQ brokers with public identity providers in both single instance and highly available Multi-AZ cluster deployments. This feature enables RabbitMQ brokers to authenticate clients and users using JWT-encoded OAuth 2.0 access tokens, providing enhanced security and flexibility in access management.
You can configure OAuth 2.0 on your RabbitMQ broker on Amazon MQ using the AWS Console, AWS CloudFormation, AWS Command Line Interface (CLI), or the AWS Cloud Development Kit (CDK). This feature is available in all AWS regions where Amazon MQ is available. To get started, create a new RabbitMQ broker with OAuth 2.0 authentication or update your existing broker’s configuration to enable OAuth 2.0 support. This feature maintains compatibility with standard RabbitMQ OAuth 2.0 implementations, ensuring seamless migration for existing OAuth 2.0 enabled brokers. For detailed configuration options and steps, refer to the Amazon MQ documentation page.
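On the client side, RabbitMQ's OAuth 2.0 support generally expects the JWT access token to be presented in the password field of the AMQP connection. A minimal, hedged sketch with the pika client (broker endpoint, queue name, and token acquisition are placeholders) might look like this:

```python
import ssl
import pika

# Placeholder values: broker endpoint and a JWT obtained from your identity provider.
BROKER_HOST = "b-xxxxxxxx.mq.us-east-1.amazonaws.com"
ACCESS_TOKEN = "eyJhbGciOi..."  # OAuth 2.0 access token (JWT)

# RabbitMQ's OAuth 2.0 backend typically accepts the token as the password;
# the username is ignored by the plugin (an assumption to verify for your setup).
credentials = pika.PlainCredentials("", ACCESS_TOKEN)
params = pika.ConnectionParameters(
    host=BROKER_HOST,
    port=5671,  # AMQPS
    virtual_host="/",
    credentials=credentials,
    ssl_options=pika.SSLOptions(ssl.create_default_context()),
)

connection = pika.BlockingConnection(params)
channel = connection.channel()
channel.queue_declare(queue="demo-queue")
channel.basic_publish(exchange="", routing_key="demo-queue", body=b"hello")
connection.close()
```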
We introduced Cross-Cloud Network to help organizations transform hybrid and multicloud connectivity, and today, many customers are using it to build distributed applications across multiple clouds, on-premises networks, and the internet. A key aspect of this evolution is the ability to scale with IPv6 addressing. However, the transition from IPv4 to IPv6 is a gradual process creating a coexistence challenge: How do IPv6-only devices reach services and content that still resides on IPv4 networks?
To ensure a smooth transition to IPv6, we’re expanding our toolkit. After launching IPv6 Private Service Connect endpoints that connect to IPv4 published services, we are now introducing DNS64 and NAT64. Together, DNS64 and NAT64 form a robust mechanism that intelligently translates communication, allowing IPv6-only environments in Google Cloud to interact with the legacy IPv4 applications on the internet. In this post, we explore the vital role DNS64 and NAT64 play in making IPv6 adoption practical and efficient, removing the dependency on migrating legacy IPv4 services to IPv6.
The importance of DNS64 and NAT64
While dual-stack networking assigns both IPv4 and IPv6 addresses to a network interface, it doesn’t solve the pressing issues of private IPv4 address exhaustion or the increasing push for native IPv6 compliance. For major enterprises, the path toward widespread IPv6 adoption of cloud workloads involves creating new single-stack IPv6 workloads without having to migrate legacy IPv4 applications and services to IPv6. Together, DNS64 and NAT64 directly address this requirement, facilitating IPv6-to-IPv4 communication while maintaining access to existing IPv4 infrastructure.
This IPv6-to-IPv4 translation mechanism supports several critical use cases.
Enabling IPv6-only networks: As IPv4 addresses become increasingly scarce and costly, organizations can build future-proof IPv6-only environments, with DNS64 and NAT64 providing the essential translation to access remaining IPv4 services on the internet.
Gradual migration to IPv6: This allows organizations to gradually phase out IPv4 while guaranteeing their IPv6-only clients can still reach vital IPv4-only services.
Supporting legacy applications: Many critical business applications still rely solely on IPv4; these new services ensure they remain accessible to IPv6-only clients, safeguarding ongoing business operations during the transition.
How does it work?
An IPv6-only workload begins communication by performing a DNS lookup for the specific service URL. If a AAAA record exists, then an IPv6 address is returned and the connection proceeds directly using IPv6.
However, if DNS64 is enabled but a AAAA record cannot be found, the system instead queries for an A record. Once an A record is found, DNS64 constructs a unique synthesized IPv6 address by combining the well-known 64:ff9b::/96 prefix with the IPv4 address obtained from the A record.
The NAT64 gateway recognizes that the destination address is a part of the 64:ff9b::/96 range. It extracts the original IPv4 address from the latter part of the IPv6 address and initiates a new IPv4 connection to the destination, using the NAT64 gateway’s own IPv4 address as the source. Upon receiving a response, the NAT64 gateway prepends the 64:ff9b::/96 prefix to the response packet’s source IP, providing communication back to the IPv6-only client.
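The synthesized address construction is simple to reproduce. Here is a small illustrative Python sketch (not part of any Google Cloud API) that embeds and recovers an IPv4 address using the well-known 64:ff9b::/96 prefix:

```python
import ipaddress

NAT64_PREFIX = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize(ipv4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the low 32 bits of the well-known NAT64 prefix,
    which is what DNS64 returns to an IPv6-only client when only an A record exists."""
    v4 = ipaddress.IPv4Address(ipv4)
    return ipaddress.IPv6Address(int(NAT64_PREFIX.network_address) | int(v4))

def extract(ipv6: str) -> ipaddress.IPv4Address:
    """Recover the original IPv4 destination, as a NAT64 gateway does."""
    v6 = ipaddress.IPv6Address(ipv6)
    return ipaddress.IPv4Address(int(v6) & 0xFFFFFFFF)

print(synthesize("203.0.113.10"))     # 64:ff9b::cb00:710a
print(extract("64:ff9b::cb00:710a"))  # 203.0.113.10
```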
Here’s a diagram of the above-mentioned scenario:
Getting started with DNS64 and NAT64
You can set up IPv6-only VMs with DNS64 and NAT64 in three steps:
Create VPC, subnets, VMs and firewall rules
Create a DNS64 server policy
Create a NAT64 gateway
Step 1: Create VPC, subnets, VMs, and firewall rules
And with that, we hope that you now understand how to connect your IPv6-only workloads to IPv4 destinations by using DNS64 and NAT64. To learn more about enabling DNS64 and NAT64 for IPv6-only workloads, check out the documentation.
Most businesses with mission-critical workloads have a two-fold disaster recovery solution in place that 1) replicates data to a secondary location, and 2) enables failover to that location in the event of an outage. For BigQuery, that solution takes the shape of BigQuery Managed Disaster Recovery. But the risk of data loss while testing a disaster recovery event remains a primary concern. Like traditional “hard failover” solutions, it forces a difficult choice: promote the secondary immediately and risk losing any data within the Recovery Point Objective (RPO), or delay recovery while you wait for a primary region that may never come back online.
Today, we’re addressing this directly with the introduction of soft failover in BigQuery Managed Disaster Recovery. Soft failover logic promotes the secondary region’s compute and datasets only after replication has been confirmed to be complete, providing you with full control over disaster recovery transitions, and minimizing the risk of data loss during a planned failover.
Figure 1: Comparing hard vs. soft failover
Summary of differences between hard failover and soft failover:

| | Hard failover | Soft failover |
| --- | --- | --- |
| Use case | Unplanned outages, region down | Failover testing; requires both primary and secondary to be available |
| Failover timing | As soon as possible, ignoring any pending replication between primary and secondary; data loss possible | Subject to the primary and secondary quiescing, minimizing potential for data loss |
| RPO/RTO | 15 minutes / 5 minutes* | N/A |

*Supported objective depending on configuration
BigQuery soft failover in action
Imagine a large financial services company, “SecureBank,” which uses BigQuery for its mission-critical analytics and reporting. SecureBank requires a reliable Recovery Time Objective (RTO) and a 15-minute Recovery Point Objective (RPO) for its primary BigQuery datasets, as robust disaster recovery is a top priority. They regularly conduct DR drills with BigQuery Managed DR to ensure compliance and readiness for unforeseen outages.
Before the introduction of soft failover in BigQuery Managed DR, SecureBank faced a dilemma on how to perform their DR drills. While BigQuery Managed DR handled the failover of compute and associated datasets, conducting a full “hard failover” drill meant accepting the risk of up to 15 minutes of data loss if replication wasn’t complete when the failover was initiated — or significant operational disruption if they first manually verified data synchronization across regions. This often led to less realistic or more complex drills, consuming valuable engineering time and causing anxiety.
New solution:
With soft failover in BigQuery Managed DR, administrators have several options for failover procedures. Unlike hard failover for unplanned outages, soft failover initiates failover only after all data is replicated to the secondary region, to help guarantee data integrity.
Figure 2: Soft Failover Mode Selection
Figure 3: Disaster recovery reservations
Figure 4: Replication status / Failover details
The BigQuery soft failover feature is available today via the BigQuery UI, DDL, and CLI, providing enterprise-grade control for disaster recovery, confident simulations, and compliance — without risking data loss during testing. Get started today to maintain uptime, prevent data loss, and test scenarios safely.
Amazon CloudWatch now allows you to query metrics data up to two weeks in the past using the Metrics Insights query source. CloudWatch Metrics Insights offers fast, flexible, SQL-based queries. This new capability allows you to display, aggregate, or slice and dice metrics data older than 3 hours, for enhanced visualization and investigation.
Previously, customers creating dashboards and alarms to monitor dynamic groups of metrics over their resources and applications could visualize only up to 3 hours of data when using Metrics Insights SQL queries. This enhancement helps customers identify trends and investigate impact over a longer period of time, even days after an event, improving teams’ operational awareness and reducing the chance that impacts are missed.
Querying metrics data up to two weeks old with Metrics Insights is now available in commercial AWS regions.
The ability to query metrics data up to 2 weeks old is automatically available at no additional cost. Standard pricing applies for alarms, dashboards or API usage on Metrics Insights, see CloudWatch pricing for details. To learn more about metrics queries with Metrics Insights, visit the CloudWatch documentation.
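As a hedged illustration (metric, namespace, and period are examples), the following boto3 sketch runs a Metrics Insights query over a 14-day window, which previously would have been limited to 3 hours:

```python
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch")

# Metrics Insights SQL query evaluated over the last 14 days.
response = cloudwatch.get_metric_data(
    MetricDataQueries=[
        {
            "Id": "top_cpu",
            "Expression": 'SELECT AVG(CPUUtilization) FROM SCHEMA("AWS/EC2", InstanceId) '
                          "GROUP BY InstanceId ORDER BY AVG() DESC LIMIT 10",
            "Period": 3600,
        }
    ],
    StartTime=datetime.now(timezone.utc) - timedelta(days=14),
    EndTime=datetime.now(timezone.utc),
)

for result in response["MetricDataResults"]:
    print(result["Label"], result["Values"][:3])
```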
Amazon CloudWatch now allows you to monitor multiple individual metrics via a single alarm. By dynamically including metrics to monitor via a query, this new capability eliminates the need to manually manage separate alarms for dynamic resource fleets.
As customers rely more on autonomous teams and autoscaled resources, they face a choice between maintenance-free aggregated monitoring and the operational cost of maintaining per-resource alarming. Alarms that evaluate multiple metrics provide granular monitoring with individual actions through an alarm that automatically adjusts in real time as resources get created or deleted. This reduces operational efforts, allowing customers to focus on the value of their observability while ensuring no resources go unmonitored.
Monitoring multiple metrics with a single alarm is now available in all commercial AWS regions, the AWS GovCloud (US) Regions, and the China Regions.
To start alarming on multiple metrics, create an alarm on a Metrics Insights (SQL) metrics query using GROUP BY and ORDER BY conditions. The alarm automatically updates the query results with each evaluation, and matches corresponding metrics as resources change. You can configure alarms through the CloudWatch console, AWS CLI, CloudFormation, or CDK. Metrics Insights query alarms’ pricing applies, see CloudWatch pricing for details. To learn more about monitoring multiple metrics with query alarms and improving your monitoring efficiency, visit the CloudWatch alarms documentation.
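As a rough sketch of the pattern described above (the exact GROUP BY alarm configuration may differ by region and console workflow; names and thresholds are placeholders), a boto3 call might look like this:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# One alarm that evaluates every instance returned by the query; assumes the
# GROUP BY alarm form described in the announcement is accepted in your region.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-any-instance",
    Metrics=[
        {
            "Id": "q1",
            "Expression": 'SELECT MAX(CPUUtilization) FROM SCHEMA("AWS/EC2", InstanceId) '
                          "GROUP BY InstanceId",
            "Period": 300,
            "ReturnData": True,
        }
    ],
    EvaluationPeriods=3,
    Threshold=90.0,
    ComparisonOperator="GreaterThanThreshold",
    TreatMissingData="notBreaching",
)
```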
Today, AWS announced the opening of a new AWS Direct Connect location within East African Data Centres NBO1 near Nairobi, Kenya. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location. This site is the first AWS Direct Connect location in Kenya. This Direct Connect location offers dedicated 10 Gbps and 100 Gbps connections with MACsec encryption available.
The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet.
For more information on the over 145 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
Amazon RDS for Oracle and Amazon RDS Custom for Oracle now support bare metal instances. You can use M7i, R7i, X2iedn, X2idn, X2iezn, M6i, M6id, M6in, R6i, R6id, and R6in bare metal instances at a 25% lower price than equivalent virtualized instances.
With bare metal instances, you can combine multiple databases onto a single bare metal instance to reduce cost by using the Multi-tenant feature. For example, databases running on a db.r7i.16xlarge instance and a db.r7i.8xlarge instance can be consolidated to individual pluggable databases on a single db.r7i.metal-24xl instance. Furthermore, you may be able to reduce your commercial database license and support costs by using bare metal instances since they provide full visibility into the number of CPU cores and sockets of the underlying server. Refer to Oracle Cloud Policy and Oracle Core Factor Table, and consult your licensing partner to determine if you can reduce license and support costs.
AWS Config now supports 5 additional AWS resource types. This expansion provides greater coverage over your AWS environment, enabling you to more effectively discover, assess, audit, and remediate an even broader range of resources.
With this launch, if you have enabled recording for all resource types, then AWS Config will automatically track these new additions. The newly supported resource types are also available in Config rules and Config aggregators.
You can now use AWS Config to monitor the following newly supported resource types in all AWS Regions where the supported resources are available.
Written by: Rommel Joven, Josh Fleischer, Joseph Sciuto, Andi Slok, Choon Kiat Ng
In a recent investigation, Mandiant Threat Defense discovered an active ViewState deserialization attack affecting Sitecore deployments leveraging sample machine keys that had been exposed in Sitecore deployment guides from 2017 and earlier. An attacker leveraged the exposed ASP.NET machine keys to perform remote code execution.
Mandiant worked directly with Sitecore to address this issue. Sitecore tracks this vulnerable configuration as CVE-2025-53690, which affects customers who deployed any version of multiple Sitecore products using sample keys exposed in publicly available deployment guides (specifically Sitecore XP 9.0 and Active Directory 1.4 and earlier versions). Sitecore has confirmed that its updated deployments automatically generate unique machine keys and that affected customers have been notified.
Refer to Sitecore’s advisory for more information on which products are potentially impacted.
Summary
Mandiant successfully disrupted the attack shortly after initiating rapid response, which ultimately prevented us from observing the full attack lifecycle. However, our investigation still provided insights into the adversary’s activity. The attacker’s deep understanding of the compromised product and the exploited vulnerability was evident in their progression from initial server compromise to privilege escalation. Key events in this attack chain included:
Initial compromise was achieved by exploiting the ViewState deserialization vulnerability CVE-2025-53690 on the affected internet-facing Sitecore instance, resulting in remote code execution.
A decrypted ViewState payload contained WEEPSTEEL, malware designed for internal reconnaissance.
Leveraging this access, the threat actor archived the root directory of the web application, indicating an intent to obtain sensitive files such as web.config. This was followed by host and network reconnaissance.
The threat actor staged tooling in a public directory, which included:
Open-source network tunnel tool, EARTHWORM
Open-source remote access tool, DWAGENT
Open-source Active Directory (AD) reconnaissance tool, SHARPHOUND
Local administrator accounts were created and used to dump SAM/SYSTEM hives in an attempt to compromise cached administrator credentials. The compromised credentials then enabled lateral movement via RDP.
DWAgent provided persistent remote access and was used for Active Directory reconnaissance.
Figure 1: Attack lifecycle
Initial Compromise
External Reconnaissance
The threat actor began their operation by probing the victim’s web server with HTTP requests to various endpoints before ultimately shifting their attention to the /sitecore/blocked.aspx page. This page is a legitimate Sitecore component that simply returns a message if a request was blocked due to licensing issues. The page’s use of a hidden ViewState form (a standard ASP.NET feature), combined with being accessible without authentication, made it a potential target for ViewState deserialization attacks.
ViewState Deserialization Attack
ViewStates are an ASP.NET feature designed to persist the state of webpages by storing it in a hidden HTML field named __VIEWSTATE. ViewState deserialization attacks exploit the server’s willingness to deserialize ViewState messages when validation mechanisms are either absent or circumvented. When machine keys (which protect ViewState integrity and confidentiality) are compromised, the application effectively loses its ability to differentiate between legitimate and malicious ViewState payloads sent to the server.
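To illustrate why leaked machine keys are so damaging, here is a conceptual Python sketch of MAC-protected state validation; it is not ASP.NET's exact algorithm (which also derives purpose-specific keys and typically encrypts the payload), but it shows that whoever holds the validation key can forge a payload the server will accept:

```python
import hashlib
import hmac
import os

# Conceptual illustration only: ASP.NET computes a MAC over the serialized
# ViewState using the machineKey validationKey.
validation_key = os.urandom(64)          # per-application secret in web.config
view_state = b"serialized-page-state"

mac = hmac.new(validation_key, view_state, hashlib.sha256).digest()
protected = view_state + mac             # what the hidden __VIEWSTATE field carries

# Server-side check on postback: reject anything whose MAC doesn't verify.
received_state, received_mac = protected[:-32], protected[-32:]
expected = hmac.new(validation_key, received_state, hashlib.sha256).digest()
assert hmac.compare_digest(received_mac, expected)

# If validation_key leaks (e.g., sample keys copied from documentation), an
# attacker can compute a valid MAC for an arbitrary malicious payload, and the
# server will happily deserialize it.
```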
Local web server (IIS) logs recorded that the threat actor’s attack began by sending an HTTP POST request to the blocked.aspx endpoint, which was met with an HTTP 302 “Found” response. This web request coincided with a “ViewState verification failed” message in Windows application event logs (Event ID 1316) containing the crafted ViewState payload sent by the threat actor:
Log: Application
Source: ASP.NET 4.0.30319.0
EID: 1316
Type: Information
Event code: 4009
Event message: Viewstate verification failed. Reason: Viewstate was invalid.
<truncated>
ViewStateException information:
Exception message: Invalid viewstate.
Client IP: <redacted>
User-Agent: Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1;
Trident/5.0) chromeframe/10.0.648.205 Mozilla/5.0
(Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/121.0.0.0 Safari/537.36
PersistedState: <27760 byte encrypted + base64 encoded payload>
Referer:
Path: /sitecore/blocked.aspx
Mandiant recovered a copy of the server’s machine keys, which were stored in the ASP.NET configuration file web.config. Like many other ViewState deserialization attacks, this particular Sitecore instance used compromised machine keys. Knowledge of these keys enabled the threat actor to craft malicious ViewStates that were accepted by the server by using tools like the public ysoserial.net project.
Initial Host Reconnaissance
Mandiant decrypted the threat actor’s ViewState payload using the server’s machine keys and found it contained an embedded .NET assembly named Information.dll. This assembly, which Mandiant tracks as WEEPSTEEL, functions as an internal reconnaissance tool and has similarities to the GhostContainer backdoor and an information-gathering payload previously observed in the wild.
About WEEPSTEEL
WEEPSTEEL is a reconnaissance tool designed to gather system, network, and user information. This data is then encrypted and exfiltrated to the attacker by disguising it as a benign __VIEWSTATE response.
The payload is designed to exfiltrate the following system information for reconnaissance:
// Code Snippet from Host Reconnaissance Function
Information.BasicsInfo basicsInfo = new Information.BasicsInfo
{
Directories = new Information.Directories
{
CurrentWebDirectory = HostingEnvironment.MapPath("~/")
},
// Gather system information
OperatingSystemInformation = Information.GetOperatingSystemInformation(),
DiskInformation = Information.GetDiskInformation(),
NetworkAdapterInformation = Information.GetNetworkAdapterInformation(),
Process = Information.GetProcessInformation()
};
// Serialize the 'basicsInfo' object into a JSON string
JavaScriptSerializer javaScriptSerializer = new JavaScriptSerializer();
text = javaScriptSerializer.Serialize(basicsInfo);
WEEPSTEEL appears to borrow some functionality from ExchangeCmdPy.py, a public tool tailored for similar ViewState-related intrusions. This comparison was originally noted in Kaspersky’s write-up on the GhostContainer backdoor. Like ExchangeCmdPy, WEEPSTEEL sends its output through a hidden HTML field masquerading as a legitimate __VIEWSTATE parameter.
Subsequent HTTP POST requests to the blocked.aspx endpoint from the threat actor would result in HTTP 200 “OK” responses, which Mandiant assesses would have contained an output in the aforementioned format. As the threat actor continued their hands-on interaction with the server, Mandiant observed repeated HTTP POST requests with successful responses to the blocked.aspx endpoint.
Establish Foothold
Following successful exploitation, the threat actor gained NETWORK SERVICE privileges, equivalent to those of the IIS worker process w3wp.exe. This access provided the actor a starting point for further malicious activities.
Config Extraction
The threat actor then exfiltrated critical configuration files by archiving the contents of \inetpub\sitecore\SitecoreCD\Website, a Sitecore Content Delivery (CD) instance’s web root. This directory contained sensitive files, such as web.config, that reveal details about the application’s backend and its dependencies, which would help enable post-exploitation activities.
Host Reconnaissance
After obtaining the key server configuration files, the threat actor proceeded to fingerprint the compromised server through host and network reconnaissance, including but not limited to enumerating running processes, services, user accounts, TCP/IP configurations, and active network connections.
whoami
hostname
net user
tasklist
ipconfig /all
tasklist /svc
netstat -ano
nslookup <domain>
net group domain admins
net localgroup administrators
Staging Directory
The threat actor leveraged public directories such as Music and Video for staging and deploying their tooling. Files written into the Public directory include:
File: C:\Users\Public\Music\7za.exe
Description: command-line executable for the 7-Zip file archiver
EARTHWORM is an open-source tunneler that allows attackers to create a covert channel to and from a victim system over a separate protocol to avoid detection and network filtering, or to enable access to otherwise unreachable systems.
During our investigation, EARTHWORM was executed to initiate a reverse SOCKS proxy connection back to the following command-and-control (C2) servers:
130.33.156[.]194:443
103.235.46[.]102:80
File: C:\Users\Public\Music\1.vbs
Description: Attack VBScript used to execute threat actor commands; its content varies based on the desired actions.
SHA-256: <hash varies>
In one instance where the file 1.vbs was retrieved, it contained simple VBScript code to launch EARTHWORM.
Following initial compromise, the threat actor elevated their access from NETWORK SERVICE privileges to the SYSTEM or ADMINISTRATOR level.
This involved creating local administrator accounts and obtaining access to domain administrator accounts. The threat actor was observed using additional tools to escalate privileges.
Adding Local Administrators
asp$: The threat actor leveraged a privilege escalation tool to create the local administrator account asp$. The naming convention, which mimics an ASP.NET service account with a common $ suffix, suggests an attempt to blend in and evade detection.
sawadmin: At a later stage, the threat actor established a DWAGENT remote session to create a second local administrator account.
net user sawadmin {REDACTED} /add
net localgroup administrators sawadmin /add
Credential Dumping
The threat actor established RDP access to the host using the two newly created accounts and proceeded to dump the SYSTEM and SAM registry hives from both accounts. While redundant, this gave the attacker the information necessary to extract password hashes of local user accounts on the system. The activities associated with each account are as follows:
asp$
reg save HKLM\SYSTEM c:\users\public\system.hive
reg save HKLM\SAM c:\users\public\sam.hive
sawadmin: Prior to dumping the registry hives, the threat actor executed GoToken.exe. Unfortunately, the binary was not available for analysis.
GoToken.exe -h
GoToken.exe -l
GoToken.exe -ah
GoToken.exe -t
reg save HKLM\SYSTEM SYSTEM.hiv
reg save HKLM\SAM SAM.hiv
Maintain Presence
The threat actor maintained persistence through a combination of methods, leveraging both created and compromised administrator credentials for RDP access. Additionally, the threat actor issued commands to maintain long-term access to accounts. This included modifying settings to disable password expiration for administrative accounts of interest:
net user <AdminUser> /passwordchg:no /expires:never
wmic useraccount where name='<AdminUser>' set PasswordExpires=False
For redundancy and continued remote access, the DWAGENT tool was also installed.
Remote Desktop Protocol
The actor used the Remote Desktop Protocol extensively. The traffic was routed through a reverse SOCKS proxy created by EARTHWORM to bypass security controls and obscure their activities. In one RDP session, the threat actor, under the context of the account asp$, downloaded additional attacker tooling, dwagent.exe and main.exe, into C:\Users\asp$\Downloads.
| File Path | MD5 | Description |
| --- | --- | --- |
| C:\Users\asp$\Downloads\dwagent.exe | n/a | DWAgent installer |
| C:\Users\asp$\Downloads\main.exe | be7e2c6a9a4654b51a16f8b10a2be175 | Downloaded from hxxp://130.33.156[.]194/main.exe |

Table 1: Files written in the RDP session
Remote Access Tool: DWAGENT
DWAGENT is a legitimate remote access tool that enables remote control over the host. DWAGENT operates as a service with SYSTEM privileges and starts automatically, ensuring elevated and persistent access. During the DWAGENT remote session, the attacker wrote the file GoToken.exe. The commands executed suggest that the tool was used to aid in extracting the registry hives.
| File Path | MD5 | Description |
| --- | --- | --- |
| C:\Users\Public\Music\GoToken.exe | 62483e732553c8ba051b792949f3c6d0 | Binary executed prior to dumping of the SAM/SYSTEM hives |

Table 2: File written in the DWAgent remote session
Internal Reconnaissance
Active Directory Reconnaissance
During a DWAGENT remote session, the threat actor executed commands to identify Domain Controllers within the target network. The actor then accessed the SYSVOL share on these identified DCs to search for cpassword within Group Policy Object (GPO) XML files. This is a well-known technique attackers employ to discover privileged credentials mistakenly stored in a weakly encrypted format within the domain.
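Defenders can hunt for the same exposure. The following hedged Python sketch (hypothetical SYSVOL path) flags Group Policy Preference XML files that still contain the legacy cpassword attribute attackers search for:

```python
import re
from pathlib import Path

# Defensive sweep (hypothetical share path): flag Group Policy Preference XML
# files in SYSVOL that still contain the legacy "cpassword" attribute.
SYSVOL = Path(r"\\corp.example.com\SYSVOL\corp.example.com\Policies")
pattern = re.compile(r'cpassword="[^"]+"', re.IGNORECASE)

for xml_file in SYSVOL.rglob("*.xml"):
    try:
        text = xml_file.read_text(errors="ignore")
    except OSError:
        continue
    if pattern.search(text):
        print(f"[!] cpassword found in {xml_file}")
```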
The threat actor then transitioned to a new RDP session using a legitimate administrator account. From this session, SHARPHOUND, the data collection component for the Active Directory security analysis platform BLOODHOUND, was downloaded via a browser and saved to C:\Users\Public\Music\sh.exe.
Following the download, the threat actor returned to the DWAGENT remote session and executed sh.exe, performing extensive Active Directory reconnaissance.
sh.exe -c all
Once the reconnaissance concluded, the threat actor switched back to the RDP session (still using the compromised administrator account) to archive the SharpHound output, preparing it for exfiltration.
With administrator accounts compromised, the earlier created asp$ and sawadmin accounts were removed, signaling a shift to more stable and covert access methods.
Move Laterally
The compromised administrator accounts were used to RDP to other hosts. On these systems, the threat actor executed commands to continue their reconnaissance and deploy EARTHWORM.
On one host, the threat actor logged in via RDP using a compromised admin account. Under the context of this account, the threat actor then continued to perform internal reconnaissance commands such as:
quser
whoami
net user <AdminUser> /domain
nltest /DCLIST:<domain>
nslookup <domain-controller>
Recommendations
Mandiant recommends implementing security best practices in ASP.NET, including automated machine key rotation, enabling ViewState Message Authentication Code (MAC), and encrypting any plaintext secrets within the web.config file.
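As a starting point for finding at-risk configurations, here is a hedged Python sketch (hypothetical paths; any known sample keys must be supplied) that flags web.config files pinning static <machineKey> values rather than autogenerated or protected ones:

```python
import xml.etree.ElementTree as ET
from pathlib import Path

# Hypothetical set of known sample/default validation keys (e.g., values copied
# from old deployment guides) to compare against.
KNOWN_SAMPLE_KEYS = {
    # "B0D2...",
}

def check_web_config(path: Path) -> None:
    """Flag web.config files that pin static <machineKey> values."""
    try:
        root = ET.parse(path).getroot()
    except ET.ParseError:
        return
    for mk in root.iter("machineKey"):
        validation_key = mk.get("validationKey", "AutoGenerate")
        decryption_key = mk.get("decryptionKey", "AutoGenerate")
        if "AutoGenerate" not in validation_key or "AutoGenerate" not in decryption_key:
            flag = "KNOWN SAMPLE KEY" if validation_key in KNOWN_SAMPLE_KEYS else "static key"
            print(f"[!] {path}: {flag} (rotate, then move to autogenerated or protected keys)")

for cfg in Path(r"C:\inetpub").rglob("web.config"):
    check_web_config(cfg)
```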
Google Security Operations Enterprise and Enterprise+ customers can leverage the following product threat detections and content updates to help identify and remediate threats. All detections have been automatically delivered to Google Security Operations tenants within the Mandiant Frontline Threats curated detections ruleset. To leverage these updated rules, access Content Hub and search for any of the rule names listed below, then View and Manage each rule you wish to implement or modify.
Earthworm Tunneling Indicators
User Account Created By Web Server Process
Cmd Launching Process From Users Music
Sharphound Recon
User Created With No Password Expiration Execution
Discovery of Privileged Permission Groups by Web Server Process
We would like to extend our gratitude to the Sitecore team for their support throughout this investigation. Additionally, we are grateful to Tom Bennett and Nino Isakovic for their assistance with the payload analysis. We also appreciate the valuable input and technical review provided by Richmond Liclican and Tatsuhiko Ito.
Anthropic’s Claude Sonnet 4 is now available with Global cross-Region inference in Amazon Bedrock, so you can use the Global Claude Sonnet 4 inference profile to route your inference requests to any supported commercial AWS Region for processing, optimizing available resources and enabling higher model throughput.
Amazon Bedrock is a comprehensive, secure, and flexible service for building generative AI applications and agents. When using on-demand and batch inference in Amazon Bedrock, your requests may be restricted by service quotas or during peak usage times. Cross-region inference enables you to seamlessly manage unplanned traffic bursts by utilizing compute across different AWS Regions. With cross-region inference, you can distribute traffic across multiple AWS Regions, enabling higher throughput. Previously, you were able to choose cross-region inference profiles tied to a specific geography such as the US, EU, or APAC, which automatically selected the optimal commercial AWS Region within that geography to process your inference requests. For your generative AI use cases that do not require you to choose inference profiles tied to a specific geography, you can now use the Global cross-region inference profile to further increase your model throughput.
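As a hedged example, invoking the model through the global profile with boto3 might look like the following; the profile ID shown is a placeholder following the documented "global." naming, so check the Bedrock console or documentation for the exact identifier in your account:

```python
import boto3

bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Placeholder global inference profile ID; confirm the exact value in the
# Bedrock console or documentation before use.
MODEL_ID = "global.anthropic.claude-sonnet-4-20250514-v1:0"

response = bedrock_runtime.converse(
    modelId=MODEL_ID,
    messages=[
        {"role": "user", "content": [{"text": "Summarize cross-Region inference in one sentence."}]}
    ],
    inferenceConfig={"maxTokens": 256},
)

print(response["output"]["message"]["content"][0]["text"])
```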
AWS Clean Rooms now supports the ability to add data provider members to an existing collaboration, offering customers enhanced flexibility as they iterate on and develop new use cases with their partners. With this launch, you can collaborate with new data providers without having to set up a new collaboration. Collaboration owners can configure an existing Clean Rooms collaboration to add new members that only contribute data, while benefiting from the privacy controls existing members already configured within the collaboration. New data providers invited to an existing collaboration can be reviewed in the change history, enhancing transparency across members. For example, when a publisher creates a Clean Rooms collaboration with an advertiser, they can enable adding new data providers such as a measurement company, which allows the advertiser to enrich their audience segments with third-party data before activating an audience with the publisher. This approach reduces onboarding time while maintaining the existing privacy controls for you and your partners.
AWS Clean Rooms helps companies and their partners easily analyze and collaborate on their collective datasets without revealing or copying one another’s underlying data. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
AWS Clean Rooms ML custom modeling enables you and your partners to train and run inference on custom ML models using collective datasets at scale without having to share your sensitive data or intellectual property. With today’s launch, collaborators can configure a new privacy control that sends redacted error log summaries to specified collaboration members. Error log summaries include the exception type, error message, and line in the code where the error occurred. When associating the model to the collaboration, collaborators can decide and agree which members will receive error log summaries and whether detectable Personally Identifiable Information (PII), numbers, or custom strings will be redacted from those summaries.
AWS Clean Rooms ML helps you and your partners apply privacy-enhancing controls to safeguard your proprietary data and ML models while generating predictive insights—all without sharing or copying one another’s raw data or models. For more information about the AWS Regions where AWS Clean Rooms ML is available, see the AWS Regions table. To learn more, visit AWS Clean Rooms ML.
Amazon SageMaker Catalog now supports governed classification through Restricted Classification Terms, allowing catalog administrators to control which users and projects can apply sensitive glossary terms to their assets. This new capability is designed to help organizations enforce metadata standards and ensure classification consistency across teams and domains.
With this launch, glossary terms can be marked as “restricted”, and only authorized users or groups—defined through explicit policies—can use them to classify data assets. For example, a centralized data governance team may define terms like “Seller-MCF” or “PII” that reflect data handling policies. These terms can now be governed so only specific project members (e.g., trusted admin groups) can apply them, which helps support proper control over how sensitive classifications are assigned.
This feature is now available in all AWS regions where Amazon SageMaker Unified Studio is supported.
To get started and learn more about this feature, see SageMaker Unified Studio user guide.