Amazon Bedrock Data Automation (BDA) now supports enhanced transcription output for audio files, providing the option to distinguish between speakers and to process audio from each channel separately. Additionally, BDA extends blueprint creation, a guided, natural language-based interface for extracting custom insights, to the audio modality. BDA is a feature of Amazon Bedrock that automates the generation of insights from unstructured multimodal content such as documents, images, audio, and videos for your generative AI-powered applications. With this launch, developers can enable speaker diarization and channel identification in standard output. Speaker diarization detects each unique speaker and tracks speaker changes in a multi-party audio conversation. Channel identification enables separate processing of audio from each channel; for example, a customer and a sales agent can be separated into unique channels, making the transcript easier to analyze.
Speaker diarization and channel identification make it easier to read transcripts and extract custom insights from a variety of multi-party voice conversations, such as customer calls, education sessions, public safety calls, clinical discussions, and meetings. This helps customers identify ways to improve employee productivity, add subtitles to webinars, enhance customer experience, or strengthen regulatory compliance. For example, telehealth customers can summarize a doctor's recommendations by assigning doctors and patients to pre-identified channels.
Amazon Bedrock Data Automation is available in the following AWS Regions: US West (Oregon), US East (N. Virginia), AWS GovCloud (US-West), Europe (Frankfurt), Europe (London), Europe (Ireland), Asia Pacific (Mumbai), and Asia Pacific (Sydney). To learn more, visit the Bedrock Data Automation page, the Amazon Bedrock Pricing page, or view the documentation.
Amazon CloudWatch now helps you monitor large-scale distributed applications by automatically discovering and organizing services into groups based on configurations and their relationships. SREs and DevOps teams can identify critical dependencies and blast radius impacts to remediate issues faster. You get an always-on, out-of-the-box catalog and map that visualizes services and dependencies across AWS accounts and regions, organizing them into logical groups that align with how customers think about their systems—without manual configurations. You can also apply dynamic grouping based on how you organize applications—by teams, business units, criticality tiers, or other attributes.
With this new application performance monitoring (APM) capability, customers can quickly visualize which applications and dependencies to focus on while troubleshooting their distributed applications. For example, SRE and DevOps teams can now accelerate root cause analysis and reduce mean-time-to-resolution (MTTR) through high-level operational signals such as SLOs, health indicators, changes, and top observations. The application map integrates with a contextual troubleshooting drawer that surfaces relevant metrics and actionable insights to accelerate triage. When deeper investigation is needed, teams can pivot to an application-specific dashboard tailored for troubleshooting. The map, drawer, and dashboard dynamically update as new services are discovered or as customers adjust how their environments are grouped—ensuring the view is always accurate and aligned with how teams operate.
This new capability is now available in all AWS commercial Regions where CloudWatch Application Signals has launched, at no additional cost. To learn more, please visit the CloudWatch Application Signals documentation.
Amazon Detective now supports Amazon Virtual Private Cloud (VPC) endpoints via AWS PrivateLink, enabling you to securely initiate API calls to Detective from within your VPC without requiring Internet traversal. AWS PrivateLink support for Detective is available in all AWS Regions where Detective is available (see the AWS Region table). To try the new feature, you can create a VPC endpoint for Detective through the VPC console, API, or SDK. This creates an elastic network interface in your specified subnets. The interface has a private IP address that serves as an entry point for traffic destined for Detective. You can read more about Detective’s integration with PrivateLink here.
Amazon Detective automatically collects log data from your AWS resources and uses machine learning, statistical analysis, and graph theory to build interactive visualizations that enable you to conduct faster and more efficient security investigations. Detective analyzes trillions of events from multiple data sources like Amazon Virtual Private Cloud (Amazon VPC) Flow Logs, AWS CloudTrail logs, Amazon Elastic Kubernetes Service (Amazon EKS) audit logs, and findings from multiple AWS security services to create a unified, interactive view of security events. Detective also automatically groups related findings from Amazon GuardDuty, AWS Security Hub and Amazon Inspector to show you combined threats and vulnerabilities to help security analysts identify and prioritize potential high-severity security risks.
Today, AWS announces the v1.0.0 release of the AWS API Model Context Protocol (MCP) Server, which enables foundation models (FMs) to interact with any AWS API through natural language by creating and executing syntactically correct CLI commands.
The v1.0.0 release of the AWS API MCP Server contains many enhancements that make the server easier to configure, use, and integrate with MCP clients and agentic frameworks. This release reduces startup time and removes several dependencies by converting the suggest_aws_command tool to a remote service rather than relying on a local installation. Security enhancements include improved file system access controls and better input validation. Customers using the Amazon CloudWatch agent can now collect logs from the API MCP Server for improved observability. To support more hosting and configuration options, the AWS API MCP Server now offers streamable HTTP transport in addition to the existing stdio transport. To make human-in-the-loop workflows requiring iterative inputs more reliable, the AWS API MCP Server now supports elicitation in compatible MCP clients. To provide additional safeguards, the API MCP Server can be configured to deny certain types of actions or to require human oversight and consent for mutating actions. This release also includes a new experimental tool called get_execution_plan that provides prescriptive workflows for common AWS tasks. The tool can be enabled by setting the EXPERIMENTAL_AGENT_SCRIPTS flag to true.
Customers can configure the AWS API MCP Server for use with their MCP-compatible clients from several popular MCP registries. The AWS API MCP Server is also available packaged as a container in the Amazon ECR Public Gallery.
The AWS API MCP Server is open-source and available now. Visit the AWS Labs GitHub repository to view the source, download, and start experimenting with natural language interaction with AWS APIs today.
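As a rough illustration, the sketch below connects an MCP client to a locally run AWS API MCP Server over stdio using the MCP Python SDK and lists the available tools. The uvx launcher, the awslabs.aws-api-mcp-server package name, and the environment variables shown are assumptions based on this announcement; verify them against the GitHub repository before use.

```python
# Minimal sketch: connect to the AWS API MCP Server over stdio and list its tools.
# Assumes the MCP Python SDK ("mcp" package) and uv/uvx are installed; the server
# package name and flags below are assumptions to verify against the repository.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

server = StdioServerParameters(
    command="uvx",
    args=["awslabs.aws-api-mcp-server@latest"],  # assumed package name
    env={
        "AWS_REGION": "us-east-1",
        # Flag from the announcement: enables the experimental get_execution_plan tool.
        "EXPERIMENTAL_AGENT_SCRIPTS": "true",
    },
)

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            for tool in tools.tools:
                print(tool.name)

if __name__ == "__main__":
    asyncio.run(main())
```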
Today, AWS announces the general availability (GA) of the AWS Knowledge Model Context Protocol (MCP) Server. The AWS Knowledge server gives AI agents and MCP clients access to authoritative knowledge, including documentation, blog posts, What’s New announcements, and Well-Architected best practices, in an LLM-compatible format. With this release, the server also includes knowledge about the regional availability of AWS APIs and CloudFormation resources.
The AWS Knowledge MCP Server enables MCP clients and agentic frameworks supporting MCP to anchor their responses in trusted AWS context, guidance, and best practices. Customers can now benefit from more accurate reasoning, increased consistency of execution, and reduced manual context management, so they can focus on business problems rather than MCP configurations.
The server is publicly accessible at no cost and does not require an AWS account. Usage is subject to rate limits. Give your developers and agents access to the most up-to-date AWS information today by configuring your MCP clients to use the AWS Knowledge MCP Server endpoint, and follow the Getting Started guide for setup instructions. The AWS Knowledge MCP Server is available globally.
Starting today, AWS Cloud WAN is available in the AWS GovCloud (US-West) and AWS GovCloud (US-East) regions.
With AWS Cloud WAN, you can use a central dashboard and network policies to create a global network that spans multiple locations and networks, removing the need to configure and manage different networks using different technologies. You can use network policies to specify the Amazon Virtual Private Clouds, AWS Transit Gateways, and on-premises locations you want to connect to using an AWS Site-to-Site VPN, AWS Direct Connect, or third-party software-defined WAN (SD-WAN) products. The AWS Cloud WAN central dashboard generates a comprehensive view of the network to help you monitor network health, security, and performance. In addition, AWS Cloud WAN automatically creates a global network across AWS Regions by using Border Gateway Protocol (BGP) so that you can easily exchange routes worldwide.
AWS DataSync now supports virtual private cloud (VPC) endpoint policies, allowing you to control access to DataSync API operations through DataSync VPC service endpoints and Federal Information Processing Standard (FIPS) 140-3 enabled VPC service endpoints. This new feature helps organizations strengthen their security posture and meet compliance requirements when accessing DataSync API operations through VPC endpoints.
VPC endpoint policies allow you to restrict specific DataSync API actions accessed through your VPC endpoints. For example, you can control which AWS principals can access DataSync operations such as CreateTask, StartTaskExecution, or ListAgents. These policies work in conjunction with identity-based policies and resource-based policies to secure access in your AWS environment.
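For illustration, here is a hedged sketch that uses boto3 to create a DataSync interface endpoint with an endpoint policy attached; the account ID, role name, VPC, subnet, and security group IDs are placeholders, and the allowed actions are just an example set.

```python
# Sketch: create a DataSync interface VPC endpoint whose policy only allows a
# specific role to run a narrow set of DataSync actions. All IDs are placeholders.
import json

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

endpoint_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/DataSyncOperators"},
            "Action": [
                "datasync:StartTaskExecution",
                "datasync:ListAgents",
                "datasync:DescribeTask",
            ],
            "Resource": "*",
        }
    ],
}

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",           # placeholder
    ServiceName="com.amazonaws.us-east-1.datasync",
    SubnetIds=["subnet-0123456789abcdef0"],   # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PolicyDocument=json.dumps(endpoint_policy),
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```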
This feature is available in all AWS Regions where AWS DataSync is available. For more information about FIPS 140-3 at AWS, visit FIPS 140-3 Compliance. To learn more about VPC endpoint policies for AWS DataSync, see the AWS DataSync User Guide.
Amazon SageMaker managed MLflow is now available in both AWS GovCloud (US-West) and AWS GovCloud (US-East) Regions.
Amazon SageMaker managed MLflow streamlines AI experimentation and accelerates your generative AI journey from idea to production. MLflow is a popular open-source tool that helps customers manage everything from experiment tracking to end-to-end observability, reducing time-to-market for generative AI development.
Amazon CloudWatch and OpenSearch Service integrated analytics experience is now available in 5 additional commercial regions: Asia Pacific (Osaka), Asia Pacific (Seoul), Europe (Milan), Europe (Spain), and US West (N. California).
With this integration, CloudWatch Logs customers have two more query languages for log analytics, in addition to CloudWatch Logs Insights QL. Customers can use SQL to analyze data, correlate logs using JOINs and sub-queries, and apply SQL functions, such as JSON, mathematical, datetime, and string functions, for intuitive log analytics. They can also use OpenSearch Piped Processing Language (PPL) to filter, aggregate, and analyze their data. With a few clicks, CloudWatch Logs customers can create OpenSearch dashboards for VPC, WAF, and CloudTrail logs to monitor, analyze, and troubleshoot using visualizations derived from the logs. OpenSearch customers no longer have to copy logs from CloudWatch for analysis or create ETL pipelines. Now, they can use OpenSearch Discover to analyze CloudWatch logs in place, and build indexes and dashboards on CloudWatch Logs.
For AI agents to be really useful, they need to be able to securely interact with enterprise data. In July, we introduced a toolset to help AI agents interact with and analyze business data in BigQuery through natural language, and with just a few lines of code. Today, we're taking the next step with "Ask data insights" for Conversational Analytics and "BigQuery Forecast" for time-series predictions, going beyond fetching metadata and executing queries to full-scale data analysis and predictions. Both tools are available today in the MCP Toolbox as well as the Agent Development Kit's built-in toolset.
Let’s dive into what you can do with these new tools.
ask_data_insights: Converse with BigQuery
With the ask_data_insights tool, you can now answer complex questions of your structured data in BigQuery using plain English.
Built on top of the powerful Conversational Analytics API, ask_data_insights enables an agent to utilize the API to offload the task of understanding the user’s question, pulling in relevant context, formulating and executing the queries, and summarizing the answer in plain English. Along the way, the ask_data_insights tool shows its work, returning a detailed step-by-step log of its process, so you have full transparency into how it arrived at the answer.
Predict the future with BigQuery Forecast
Information without insights is just noise. The ability to predict future trends, whether sales, user traffic, or inventory needs, is critical for any business. BigQuery Forecast simplifies time-series forecasting using BigQuery ML’s AI.FORECAST function based on the built-in TimesFM model.
With BigQuery Forecast, the agent can run the forecasting job directly within BigQuery, without you having to set up machine learning infrastructure. Point the tool at your data, specify what you want to predict and a time horizon, and the agent will make its predictions using TimesFM.
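For context, here is a hedged sketch of the underlying operation run directly with the BigQuery Python client against the sample daily-visits table used in the demo below. The column names (date, total_visits) and the AI.FORECAST argument names are assumptions for illustration; check them against the current BigQuery ML reference.

```python
# Sketch: call BigQuery ML's AI.FORECAST (backed by TimesFM) directly -- the same
# operation the BigQuery Forecast tool runs on the agent's behalf.
# Table and column names below are assumed for illustration.
from google.cloud import bigquery

client = bigquery.Client()

sql = """
SELECT *
FROM AI.FORECAST(
  TABLE `<my-project>.google_analytics_sample.daily_total_visits`,
  data_col => 'total_visits',        -- assumed metric column
  timestamp_col => 'date',           -- assumed date column
  horizon => 30,                     -- predict the next 30 periods
  confidence_level => 0.95
)
"""

for row in client.query(sql).result():
    print(dict(row))
```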
New tools in action: Building a Google Analytics Data Agent
Let's explore how to build a simple agent to answer questions about Google Analytics 360 data using ask_data_insights and BigQuery Forecast. For this demo:
The data is stored in BigQuery tables. Users of this agent only require read access to these tables, which are available under the BigQuery public dataset bigquery-public-data.google_analytics_sample.
We will use ADK to build this agent and “adk web” to test it.
We are using one tool from the ADK’s built-in tools and one from the MCP toolbox. You can choose to use either option depending on your agent architecture and needs.
This diagram shows the architecture of this simple agent:
And here is the agent code:
```python
import asyncio
import google.auth
from google.adk.agents import Agent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.adk.tools.bigquery import BigQueryCredentialsConfig
from google.adk.tools.bigquery import BigQueryToolset
from google.adk.tools.bigquery.config import BigQueryToolConfig
from google.adk.tools.bigquery.config import WriteMode
from google.genai import types
from toolbox_core import ToolboxSyncClient

# Constants for this example agent
AGENT_NAME = "bigquery_agent"
APP_NAME = "bigquery_app"
USER_ID = "user1234"
SESSION_ID = "1234"
GEMINI_MODEL = "gemini-2.5-pro"

# A tool configuration to block any write operations
tool_config = BigQueryToolConfig(write_mode=WriteMode.BLOCKED)

# We are using application default credentials
application_default_credentials, _ = google.auth.default()
credentials_config = BigQueryCredentialsConfig(
    credentials=application_default_credentials
)

# Instantiate the built-in BigQuery toolset with a single tool.
# Use "ask_data_insights" for deeper insights.
bigquery_toolset = BigQueryToolset(
    credentials_config=credentials_config,
    bigquery_tool_config=tool_config,
    tool_filter=['ask_data_insights'],
)

# Instantiate a Toolbox toolset with only the forecasting tool.
# Make sure the Toolbox MCP server is already running locally.
toolbox = ToolboxSyncClient("http://127.0.0.1:5000")
mcp_tools = toolbox.load_toolset('bq-mcp-toolset')

# Agent definition
root_agent = Agent(
    model=GEMINI_MODEL,
    name=AGENT_NAME,
    description=(
        "Agent to answer questions about Google Analytics data stored in BigQuery"
    ),
    instruction="""\
    You are a Google Analytics agent that can answer questions on Google Analytics data.
    Important context:
        You have access to Google Analytics data in BigQuery which you should use to answer the questions.
        Tables available to you are -
        1. `<my-project>.google_analytics_sample.daily_total_visits`
        2. `<my-project>.google_analytics_sample.ga_sessions_20170801`
    Automatically choose the table based on the user question. When either can be used, use the first one.
    """,
    tools=mcp_tools + [bigquery_toolset],
)
```
Using the agent code above, let’s turn to the ADK’s developer UI, i.e., adk web, to test the agent and see it in action.
First, let’s use the tools to understand our data…
Agent using the insights tool to summarize the data
Then, let’s see if the agent can answer a business question.
As you can see above, the Conversational Analytics API is equipped with the ability to perform deep thinking, so it can provide rich insights into our question.
Now, let’s see if the agent can predict the future.
Short answer, yes, yes it can, with a 95% confidence level. With these tools, the power of the TimesFM model is finally available to business users, regardless of their technical skill level.
Bring analysis and forecasts to your data
These new BigQuery capabilities will help developers reimagine how they build data-driven applications and agents. Together, we believe the combination of AI-powered Conversational Analytics and powerful, built-in forecasting capabilities will make performing sophisticated data analysis easier than ever.
Today, AWS announced the opening of a new AWS Direct Connect location within the Digital Realty MAD3 data center near Madrid, Spain. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location. This site is the third site in Madrid and the fourth AWS Direct Connect location within Spain. This Direct Connect location offers dedicated 10 Gbps and 100 Gbps connections with MACsec encryption available.
The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet.
For more information on the over 146 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
Today, AWS announced the expansion of 10 Gbps and 100 Gbps dedicated connections with MACsec encryption capabilities at the existing AWS Direct Connect location in the Equinix BG1 data center near Bogota, Colombia. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location.
The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet.
For more information on the over 146 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
Amazon Simple Notification Service (Amazon SNS) now allows customers to make API requests over Internet Protocol version 6 (IPv6) in the AWS GovCloud (US) Regions. The new endpoints have also been validated under the Federal Information Processing Standard (FIPS) 140-3 program.
Amazon SNS is a fully managed messaging service that enables publish/subscribe messaging between distributed systems, microservices, and event-driven serverless applications. With this update, customers have the option of using either IPv6 or IPv4 when sending requests over dual-stack public or VPC endpoints.
SNS now supports IPv6 in all Regions where the service is available, including AWS Commercial, AWS GovCloud (US), and China Regions. For more information on using IPv6 with Amazon SNS, please refer to our developer guide.
Amazon Simple Notification Service (Amazon SNS) now supports additional endpoints that have been validated under the Federal Information Processing Standard (FIPS) 140-3 program in AWS Regions in the United States and Canada.
FIPS compliant endpoints help companies contracting with the US federal government meet the FIPS security requirement to encrypt sensitive data in supported regions. With this expansion, you can use Amazon SNS for workloads that require a FIPS 140-3 validated cryptographic module when sending requests over dual-stack public or VPC endpoints.
Amazon SNS FIPS compliant endpoints are now available in the following regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Canada West (Calgary) and AWS GovCloud (US). To learn more about FIPS 140-3 at AWS, visit FIPS 140-3 Compliance.
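As a quick illustration, here is a hedged sketch of pointing boto3 at an SNS FIPS endpoint; the hostname follows the standard sns-fips.<region>.amazonaws.com pattern and the topic ARN is a placeholder, so confirm the exact endpoint for your Region against the AWS endpoint list.

```python
# Sketch: publish to an SNS topic through a FIPS 140-3 validated endpoint.
# Endpoint hostname and topic ARN are illustrative; verify against the AWS endpoint list.
import boto3

sns = boto3.client(
    "sns",
    region_name="us-east-1",
    endpoint_url="https://sns-fips.us-east-1.amazonaws.com",
)

response = sns.publish(
    TopicArn="arn:aws:sns:us-east-1:111122223333:example-topic",  # placeholder
    Message="Hello over a FIPS endpoint",
)
print(response["MessageId"])
```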
AWS Transform now offers Terraform as an additional option to generate network infrastructure code automatically from VMware environments. The service converts your source network definitions into reusable Terraform modules, complementing current AWS CloudFormation and AWS Cloud Development Kit (CDK) support.
AWS Transform for VMware is an agentic AI service that automates the discovery, planning, and migration of VMware workloads, accelerating infrastructure modernization with increased speed and confidence. These migrations require recreating network configurations while maintaining operational consistency. The service now generates Terraform modules alongside CDK and AWS CloudFormation templates. This addition enables organizations to maintain existing deployment pipelines while using preferred tools for modular, customizable network configurations.
Written by: Omar ElAhdan, Matthew McWhirt, Michael Rudden, Aswad Robinson, Bhavesh Dhake, Laith Al
Background
Protecting software-as-a-service (SaaS) platforms and applications requires a comprehensive security strategy. Drawing from analysis of UNC6040’s specific attack methodologies, this guide presents a structured defensive framework encompassing proactive hardening measures, comprehensive logging protocols, and advanced detection capabilities. While emphasizing Salesforce-specific security recommendations, these strategies provide organizations with actionable approaches to safeguard their SaaS ecosystem against current threats.
Google Threat Intelligence Group (GTIG) is tracking UNC6040, a financially motivated threat cluster that specializes in voice phishing (vishing) campaigns specifically designed to compromise organizations' Salesforce instances for large-scale data theft and subsequent extortion. Over the past several months, UNC6040 has demonstrated repeated success in breaching networks by having its operators impersonate IT support personnel in convincing telephone-based social engineering engagements. This approach has proven particularly effective in tricking employees, often within English-speaking branches of multinational corporations, into actions that grant the attackers access or lead to the sharing of sensitive credentials, ultimately facilitating the theft of the organization's Salesforce data. In all observed cases, attackers relied on manipulating end users, not exploiting any vulnerability inherent to Salesforce.
A prevalent tactic in UNC6040’s operations involves deceiving victims into authorizing a malicious connected app to their organization’s Salesforce portal. This application is often a modified version of Salesforce’s Data Loader, not authorized by Salesforce. During a vishing call, the actor guides the victim to visit Salesforce’s connected app setup page to approve a version of the Data Loader app with a name or branding that differs from the legitimate version. This step inadvertently grants UNC6040 significant capabilities to access, query, and exfiltrate sensitive information directly from the compromised Salesforce customer environments. This methodology of abusing Data Loader functionalities via malicious connected apps is consistent with recent observations detailed by Salesforce in their guidance on protecting Salesforce environments from such threats.
In some instances, extortion activities haven’t been observed until several months after the initial UNC6040 intrusion activity, which could suggest that UNC6040 has partnered with a second threat actor that monetizes access to the stolen data. During these extortion attempts, the actor has claimed affiliation with the well-known hacking group ShinyHunters, likely as a method to increase pressure on their victims.
Figure 1: Data Loader attack flow
We have observed the following patterns in UNC6040 victimology:
Motive: UNC6040 is a financially motivated threat cluster that accesses victim networks by vishing social engineering.
Focus: Upon obtaining access, UNC6040 has been observed immediately exfiltrating data from the victim’s Salesforce environment using Salesforce’s Data Loader application. Following this initial data theft, UNC6040 was observed leveraging end-user credentials obtained through credential harvesting or vishing to move laterally through victim networks, accessing and exfiltrating data from the victim’s accounts on other cloud platforms such as Okta and Microsoft 365.
Attacker infrastructure: UNC6040 primarily used Mullvad VPN IP addresses to access and perform the data exfiltration on the victim’s Salesforce environments and other services of the victim’s network.
Proactive Hardening Recommendations
The following section provides prioritized recommendations to protect against tactics utilized by UNC6040. It is broken down into two categories: identity and SaaS applications.
Note: While the following recommendations include strategies to protect SaaS applications, they also cover identity security controls and detections applicable at the Identity Provider (IdP) layer and security enhancements for existing processes, such as the help desk.
1. Identity
Positive Identity Verification
To protect against increasingly sophisticated social engineering and credential compromise attacks, organizations must adopt a robust, multilayered process for identity verification. This process moves beyond outdated, easily compromised methods and establishes a higher standard of assurance for all support requests, especially those involving account modifications (e.g., password resets or multi-factor authentication modifications).
Guiding Principles
Assume nothing: Do not inherently trust the caller’s stated identity. Verification is mandatory for all security-related requests.
Defense-in-depth: Rely on a combination of verification methods. No single factor should be sufficient for high-risk actions.
Reject unsafe identifiers: Avoid relying on publicly available or easily discoverable data, such as:
Date of birth
Last four digits of a Social Security number
High school names
Supervisor names
This data should not be used as primary verification factors, as it’s often compromised through data breaches or obtainable via open source intelligence (OSINT).
Standard Verification Procedures
Live Video Identity Proofing (Primary Method)
This is the most reliable method for identifying callers. The help desk agent must:
Initiate a video call with the user
Require the user to present a valid corporate badge or government-issued photo ID (e.g., driver’s license) on camera next to their face
Visually confirm that the person on the video call matches the photograph on the ID
Cross-reference the user’s face with their photo in the internal corporate identity system
Verify that the name on the ID matches the name in the employee's corporate record
Contingency for No Video: If a live video call is not possible, the user must provide a selfie showing their face, their photo ID, and a piece of paper with the current date and time written on it.
Additionally, before proceeding with any request, help desk personnel must check the user's calendar for Out of Office (OOO) or vacation status. All requests from users who are marked as OOO should be presumptively denied until they have officially returned.
For high-risk changes like multi-factor authentication (MFA) resets or password changes for privileged accounts, an additional out-of-band (OOB) verification step is required after the initial ID proofing. This can include:
Call-back: Placing a call to the user’s registered phone number on file
Manager approval: Sending a request for confirmation to the user’s direct manager via a verified corporate communication channel
Special Handling for Third-Party Vendor Requests
Mandiant has observed incidents where attackers impersonate support personnel from third-party vendors to gain access. In these situations, the standard verification principles may not be applicable.
Under no circumstances should the Help Desk move forward with allowing access. The agent must halt the request and follow this procedure:
End the inbound call without providing any access or information
Independently contact the company’s designated account manager for that vendor using trusted, on-file contact information
Require explicit verification from the account manager before proceeding with any request
Outreach to End Users
Mandiant has observed the threat actor UNC6040 targeting end-users who have elevated access to SaaS applications. Posing as vendors or support personnel, UNC6040 contacts these users and provides a malicious link. Once the user clicks the link and authenticates, the attacker gains access to the application to exfiltrate data.
To mitigate this threat, organizations should rigorously communicate to all end-users the importance of verifying any third-party requests. Verification procedures should include:
Hanging up and calling the official account manager using a phone number on file
Requiring the requester to submit a ticket through the official company support portal
Asking for a valid ticket number that can be confirmed in the support console
Organizations should also provide a clear and accessible process for end-users to report suspicious communications and ensure this reporting mechanism is included in all security awareness outreach.
Since access to SaaS applications is typically managed by central identity providers (e.g., Entra ID, Okta), Mandiant recommends that organizations enforce unified identity security controls directly within these platforms.
Guiding Principles
Mandiant’s approach focuses on the following core principles:
Authentication boundary: This principle establishes a foundational layer of trust based on network context. Access to sensitive resources should be confined within a defined boundary, primarily allowing connections from trusted corporate networks and VPNs to create a clear distinction between trusted and untrusted locations.
Defense-in-depth: This principle dictates that security cannot rely on a single control. Organizations should layer multiple security measures, such as strong authentication, device compliance checks, and session controls.
Identity detection and response: Organizations must continuously integrate real-time threat intelligence into access decisions. This ensures that if an identity is compromised or exhibits risky behavior, its access is automatically contained or blocked until the threat has been remediated.
Identity Security Controls
The following controls are essential for securing access to SaaS applications through a central identity provider.
Utilize Single Sign-On (SSO)
Ensure that all users accessing SaaS applications are accessing via a corporate-managed SSO provider (e.g., Microsoft Entra ID or Okta), rather than through platform-native accounts. A platform-native break glass account should be created and vaulted for use only in the case of an emergency.
In the event that SSO through a corporate-managed provider is not available, refer to the content specific to the applicable SaaS application (e.g., Salesforce) rather than Microsoft Entra ID or Okta.
Mandate Phishing-Resistant MFA
Phishing-resistant MFA must be enforced for all users accessing SaaS applications. This is a foundational requirement to defend against credential theft and account takeovers. Consider enforcing physical FIDO2 keys for accounts with privileged access. Ensure that no MFA bypasses exist in authentication policies tied to business critical applications.
Access to corporate applications must be limited to devices that are either domain-joined or verified as compliant with the organization’s security standards. This policy ensures that a device meets a minimum security baseline before it can access sensitive data.
Key device posture checks should include:
Valid host certificate: The device must present a valid, company-issued certificate
Approved operating system: The endpoint must run an approved OS that meets current version and patch requirements
Active EDR agent: The corporate Endpoint Detection and Response (EDR) solution must be installed, active, and reporting a healthy status
Mandiant recommends that organizations implement dynamic authentication policies that respond to threats in real time. By integrating identity threat intelligence feeds—from both native platform services and third-party solutions—into the authentication process, organizations can automatically block or challenge access when an identity is compromised or exhibits risky behavior.
This approach primarily evaluates two categories of risk:
Risky sign-ins: The probability that an authentication request is illegitimate due to factors like atypical travel, a malware-linked IP address, or password spray activity
Risky users: The probability that a user’s credential has been compromised or leaked online
Based on the detected risk level, Mandiant recommends that organizations apply a tiered approach to remediation.
Recommended Risk-Based Actions
For high-risk events: Organizations should apply the most stringent security controls. This includes blocking access entirely.
For medium-risk events: Access should be granted only after a significant step-up in verification. This typically means requiring proof of both the user’s identity (via strong MFA) and the device’s integrity (by verifying its compliance and security posture).
For low-risk events: Organizations should still require a step-up authentication challenge, such as standard MFA, to ensure the legitimacy of the session and mitigate low-fidelity threats.
Event monitoring: Provides detailed logs of user actions—such as data access, record modifications, and login origins—and allows these logs to be exported for external analysis
Transaction security policies: Monitors for specific user activities, such as large data downloads, and can be configured to automatically trigger alerts or block the action when it occurs
2. SaaS Applications
Salesforce Targeted Hardening Controls
This section details specific security controls applicable for Salesforce instances. These controls are designed to protect against broad access, data exfiltration, and unauthorized access to sensitive data within Salesforce.
Network and Login Controls
Restrict logins to only originate from trusted network locations.
Threat actors often bypass interactive login controls by leveraging generic API clients and stolen OAuth tokens. This policy flips the model from “allow by default” to “deny by default,” to ensure that only vetted applications can connect.
Enable a “Deny by Default” API policy: Navigate to API Access Control and enable “For admin-approved users, limit API access to only allowed connected apps.” This blocks all unapproved clients.
Maintain a minimal application allowlist: Explicitly approve only essential Connected Apps. Regularly review this allowlist to remove unused or unapproved applications.
Enforce strict OAuth policies per app: For each approved app, configure granular security policies, including restricting access to trusted IP ranges, enforcing MFA, and setting appropriate session and refresh token timeouts.
Revoke sessions when removing apps: When revoking an app’s access, ensure all active OAuth tokens and sessions associated with it are also revoked to prevent lingering access.
Organizational process and policy: Create policies governing application integrations with third parties. Perform Third-Party Risk Management reviews of all integrations with business-critical applications (e.g., Salesforce, Google Workspace, Workday).
Users should only be granted the absolute minimum permissions required to perform their job functions.
Use a “Minimum Access” profile as a baseline: Configure a base profile with minimal permissions and assign it to all new users by default. Limit the assignment of “View All” and “Modify All” permissions.
Grant privileges via Permission Sets: Grant all additional access through well-defined Permission Sets based on job roles, rather than creating numerous custom profiles.
Disable API access for non-essential users: The “API Enabled” permission is required for tools like Data Loader. Remove this permission from all user profiles and grant it only via a controlled Permission Set to a small number of justified users.
Hide the ‘Setup’ menu from non-admin users: For all non-administrator profiles, remove access to the administrative “Setup” menu to prevent unauthorized configuration changes.
Enforce high-assurance sessions for sensitive actions: Configure session settings to require a high-assurance session for sensitive operations such as exporting reports.
Set the internal and external Organization-Wide Defaults (OWD) to “Private” for all sensitive objects.
Use strategic Sharing Rules or other sharing mechanisms to grant wider data access, rather than relying on broad access via the Role Hierarchy.
Leverage Restriction Rules for Row-Level Security
Restriction Rules act as a filter that is applied on top of all other sharing settings, allowing for fine-grained control over which records a user can see.
Ensure that any users with access to sensitive data or with privileged access to the underlying Salesforce instance are setting strict timeouts on any Salesforce support access grants.
Revoke any standing requests and only re-enable with strict time limits for specific use cases. Be wary of enabling these grants from administrative accounts.
Salesforce Targeted Logging and Detections Controls
This section outlines key logging and detection strategies for Salesforce instances. These controls are essential for identifying and responding to advanced threats within the SaaS environment.
SaaS Applications Logging
To gain visibility into the tactics, techniques, and procedures (TTPs) used by threat actors against SaaS Applications, Mandiant recommends enabling critical log types in the organization’s Salesforce environment and ingesting the logs into their Security Information and Event Management (SIEM).
What You Need in Place Before Logging
Before you turn on collection or write detections, make sure your organization is actually entitled to the logs you are planning to use – and that the right features are enabled.
Entitlement check (must-have)
Most security logs/features are gated behind Event Monitoring via Salesforce Shield or the Event Monitoring Add-On. This applies to Real-Time Event Monitoring (RTEM) streaming and viewing.
Pick your data model per use case
RTEM – Streams (near real-time alerting): Available in Enterprise/Unlimited/Developer subscriptions; streaming events retained ~3 days.
RTEM – Storage: Many are Big Objects (native storage); some are standard objects (e.g. Threat Detection stores)
Event Log Files (ELF) – CSV model (batch exports): Available in Enterprise/Performance/Unlimited editions.
Use Event Manager to enable or disable streaming and storage per event, and to view RTEM events.
Grant access to RTEM and the Threat Detection UI via profiles and permission sets.
Threat Detection & ETS
Threat Detection events are viewed in UI with Shield/add-on; stored in corresponding EventStore objects.
Enhanced Transaction Security (ETS) is included with RTEM for block/MFA/notify actions on real-time events.
Recommended Log Sources to Monitor
Login History (LoginHistory): Tracks all login attempts, including username, time, IP address, status (successful/failed), and client type. This allows you to identify unusual login times, unknown locations, or repeated failures, which could indicate credential stuffing or account compromise.
Login Events (LoginEventStream): LoginEvent tracks the login activity of users who log in to Salesforce.
Setup Audit Trail (SetupAuditTrail): Records administrative and configuration changes within your Salesforce environment. This helps track changes made to permissions, security settings, and other critical configurations, facilitating auditing and compliance efforts.
API Calls (ApiEventStream): Monitors API usage and potential misuse by tracking calls made by users or connected apps.
Report Exports (ReportEventStream): Provides insights into report downloads, helping to detect potential data exfiltration attempts.
List View Events (ListViewEventStream): Tracks user interaction with list views, including access and manipulation of data within those views.
Bulk API Events (BulkApiResultEvent): Track when a user downloads the results of a Bulk API request.
Permission Changes (PermissionSetEvent): Tracks changes to permission sets and permission set groups. This event initiates when a permission is added to, or removed from a permission set.
API Anomaly (ApiAnomalyEvent): Track anomalies in how users make API calls.
Unique Query Event Type: Unique Query events capture specific search queries (SOQL), filter IDs, and report IDs that are processed, along with the underlying database queries (SQL).
External Identity Provider Event Logs: Track information from login attempts using SSO. (Please follow the guidance provided by your Identity Provider for monitoring and collecting IdP event logs.)
These log sources will provide organizations with the logging capabilities to properly collect and monitor the common TTPs used by threat actors. The key log sources to monitor and observable Salesforce activities for each TTP are as follows:
TTP: Vishing
Observable Salesforce Activities: Suspicious login attempts (rapid failures); logins from unusual IPs/ASNs (e.g., Mullvad/Tor); OAuth ("Remote Access 2.0") from unrecognized clients.
Log Sources: Login History; LoginEventStream/LoginEvent; Setup Audit Trail

TTP: Malicious Connected App Authorization (e.g., Data Loader, custom scripts)
Observable Salesforce Activities: New Connected App creation/modification (broad scopes: api, refresh_token, offline_access); policy relaxations (Permitted Users, IP restrictions); granting of API Enabled / "Manage Connected Apps" via permissions.
Log Sources: Setup Audit Trail; PermissionSetEvent; LoginEventStream/LoginEvent (OAuth)

TTP: Data Exfiltration (via API, Data Loader, reports)
Observable Salesforce Activities: High-rate Query/QueryMore/QueryAll bursts; large RowsProcessed/RecordCount in reports and list views (chunked); bulk job result downloads; file/attachment downloads at scale.
Log Sources: ApiEventStream/ApiEvent; ReportEventStream/ReportEvent; ListViewEventStream/ListViewEvent; BulkApiResultEvent; FileEvent/FileEventStore; ApiAnomalyEvent/ReportAnomalyEvent; Unique Query Event Type

TTP: Lateral Movement/Persistence (within Salesforce or to other cloud platforms)
Observable Salesforce Activities: Permissions elevated (e.g., View/Modify All Data, API Enabled); new user/service accounts; LoginAs activity; logins from VPN/Tor after Salesforce OAuth; pivots to Okta/M365, then Graph data pulls.
Log Sources: Setup Audit Trail; PermissionSetEvent; LoginAsEventStream
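For teams starting with the batch Event Log Files model described above, the following hedged sketch shows one way to pull a day's ReportExport events for SIEM ingestion using the simple_salesforce library; the credentials, event type, and field handling are illustrative and should be checked against your org's entitlements and API version.

```python
# Sketch: download yesterday's ReportExport Event Log Files (ELF, CSV model) for SIEM ingestion.
# Requires the simple-salesforce package and API access; credential values are placeholders.
import requests
from simple_salesforce import Salesforce

sf = Salesforce(
    username="security-reader@example.com",  # placeholder
    password="********",
    security_token="********",
)

soql = (
    "SELECT Id, EventType, LogDate, LogFile "
    "FROM EventLogFile "
    "WHERE EventType = 'ReportExport' AND LogDate = LAST_N_DAYS:1"
)

for record in sf.query_all(soql)["records"]:
    # LogFile is a relative REST path to the CSV payload for this log file.
    url = f"https://{sf.sf_instance}{record['LogFile']}"
    resp = requests.get(url, headers={"Authorization": f"Bearer {sf.session_id}"})
    resp.raise_for_status()
    csv_payload = resp.text
    # Forward csv_payload to your SIEM ingestion pipeline here.
    print(record["EventType"], record["LogDate"], len(csv_payload), "bytes")
```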
SaaS Applications Detections
While native SIEM threat detections provide some protection, they often lack the centralized visibility needed to connect disparate events across a complex environment. By developing custom targeted detection rules, organizations can proactively detect malicious activities.
Data Exfiltration & Cross-SaaS Lateral Movement (Post-Authorization)
MITRE Mapping: TA0010 – Exfiltration & TA0008 – Lateral Movement
Scenario & Objectives
After a user authorizes a (malicious or spoofed) Connected App, UNC6040 typically:
Performs data exfiltration quickly (REST pagination bursts, Bulk API downloads, large/sensitive report exports).
Pivots to Okta/Microsoft 365 from the same risky egress IP to expand access and steal more data.
The objective here is to detect Salesforce OAuth → Exfil within ≤10 minutes, and Salesforce OAuth → Okta/M365 login within ≤60 minutes (same risky IP), plus single-signal, low-noise exfil patterns.
Baseline & Allowlist
Re-use the lists you already maintain for the vishing phase and add two regex helpers for content focus.
STRING
ALLOWLIST_CONNECTED_APP_NAMES
KNOWN_INTEGRATION_USERS (user ids/emails that legitimately use OAuth)
VPN_TOR_ASNS (ASNs as strings)
CIDR
ENTERPRISE_EGRESS_CIDRS (your corporate/VPN public egress)
$oauth.metadata.product_name = "SALESFORCE"
$oauth.metadata.log_type = "SALESFORCE"
$oauth.extracted.fields["LoginType"] = "Remote Access 2.0"
($oauth.extracted.fields["Status"] = "Success" or $oauth.security_result.action_details = "Success")
( not ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
or ( ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
and ( not ($ip in cidr %ENTERPRISE_EGRESS_CIDRS)
or strings.concat(ip_to_asn($ip), "") in %VPN_TOR_ASNS ) ) )
$uid = coalesce($oauth.principal.user.userid, $oauth.extracted.fields["UserId"])
$bulk.metadata.product_name = "SALESFORCE"
$bulk.metadata.log_type = "SALESFORCE"
$bulk.metadata.product_event_type = "BulkApiResultEvent"
$uid = coalesce($bulk.principal.user.userid, $bulk.extracted.fields["UserId"])
match:
$uid over 10m
Or
$oauth.metadata.product_name = "SALESFORCE"
$oauth.metadata.log_type = "SALESFORCE"
$oauth.extracted.fields["LoginType"] = "Remote Access 2.0"
($oauth.extracted.fields["Status"] = "Success" or $oauth.security_result.action_details = "Success")
( not ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
or ( ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
and ( not ($ip in cidr %ENTERPRISE_EGRESS_CIDRS)
or strings.concat(ip_to_asn($ip), "") in %VPN_TOR_ASNS ) ) )
$uid = coalesce($oauth.principal.user.userid, $oauth.extracted.fields["UserId"])
$api.metadata.product_name = "SALESFORCE"
$api.metadata.log_type = "SALESFORCE"
$api.metadata.product_event_type = "ApiEventStream"
$uid = coalesce($api.principal.user.userid, $api.extracted.fields["UserId"])
match:
$uid over 10m
Or
$oauth.metadata.product_name = "SALESFORCE"
$oauth.metadata.log_type = "SALESFORCE"
$oauth.extracted.fields["LoginType"] = "Remote Access 2.0"
($oauth.extracted.fields["Status"] = "Success" or $oauth.security_result.action_details = "Success")
( not ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
or ( ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
and ( not ($ip in cidr %ENTERPRISE_EGRESS_CIDRS)
or strings.concat(ip_to_asn($ip), "") in %VPN_TOR_ASNS ) ) )
$uid = coalesce($oauth.principal.user.userid, $oauth.extracted.fields["UserId"])
$report.metadata.product_name = "SALESFORCE"
$report.metadata.log_type = "SALESFORCE"
$report.metadata.product_event_type = "ReportEventStream"
strings.to_lower(coalesce($report.extracted.fields["ReportName"], "")) in regex %SENSITIVE_REPORT_REGEX
$uid = coalesce($report.principal.user.userid, $report.extracted.fields["UserId"])
match:
$uid over 10m
Note: A single-event rule can also be used instead of the multi-event rules above, monitoring only a Product Event Type such as ApiEventStream, BulkApiResultEvent, or ReportEventStream. However, take care when establishing a single-event rule, as these can be very noisy; the reference lists should be actively monitored.
Bulk API Large Result Download (Non-Integration User)
Bulk API/Bulk v2 result download above threshold by a human user.
Why high-fidelity: Clear exfil artifact.
Key signals: BulkApiResultEvent, user not in KNOWN_INTEGRATION_USERS.
$e.metadata.product_name = "SALESFORCE"
$e.metadata.log_type = "SALESFORCE"
$e.metadata.product_event_type = "ReportEventStream"
not (coalesce($e.principal.user.userid, $e.extracted.fields["UserId"]) in %KNOWN_INTEGRATION_USERS)
strings.to_lower(coalesce($e.extracted.fields["ReportName"], "")) in regex %SENSITIVE_REPORT_REGEX
Salesforce OAuth → Okta/M365 Login From Same Risky IP in ≤60 Minutes (Multi-Event)
Suspicious Salesforce OAuth followed within 60m by Okta or Entra ID login from the same public IP, where the IP is off-corp or VPN/Tor ASN.
Why high-fidelity: Ties the attacker’s egress IP across SaaS within a tight window.
Key signals:
Salesforce OAuth posture (unknown app OR allowlisted+risky egress)
OKTA* or OFFICE_365 USER_LOGIN from the same IP
Lists/knobs: ENTERPRISE_EGRESS_CIDRS, VPN_TOR_ASNS. (Optional sibling rule binding by user email if identities are normalized.)
$oauth.metadata.product_name = "SALESFORCE"
$oauth.metadata.log_type = "SALESFORCE"
$oauth.extracted.fields["LoginType"] = "Remote Access 2.0"
($oauth.extracted.fields["Status"] = "Success" or $oauth.security_result.action_details = "Success")
( not ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
or ( ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
and ( not ($ip in cidr %ENTERPRISE_EGRESS_CIDRS)
or strings.concat(ip_to_asn($ip), "") in %VPN_TOR_ASNS ) ) )
$ip = coalesce($oauth.principal.asset.ip, $oauth.principal.ip)
$okta.metadata.log_type in "OKTA"
$okta.metadata.event_type = "USER_LOGIN"
$ip = coalesce($okta.principal.asset.ip, $okta.principal.ip)
$o365.metadata.log_type = "OFFICE_365"
$o365.metadata.event_type = "USER_LOGIN"
$ip = coalesce($o365.principal.asset.ip, $o365.principal.ip)
match:
$ip over 10m
M365 Graph Data-Pull After Risky Login
Entra ID login from risky egress followed by Microsoft Graph endpoints that pull mail/files/reports.
Why high-fidelity: Captures post-login data access typical in account takeovers.
Key signals: OFFICE_365 USER_LOGIN with off-corp IP or VPN/Tor ASN, then HTTP to URLs matching M365_SENSITIVE_GRAPH_REGEX by the same account within hours.
$login.metadata.log_type = "OFFICE_365"
$login.metadata.event_type = "USER_LOGIN"
$ip = coalesce($login.principal.asset.ip, $login.principal.ip)
( not ($ip in cidr %ENTERPRISE_EGRESS_CIDRS)
or strings.concat(ip_to_asn($ip), "") in %VPN_TOR_ASNS )
$acct = coalesce($login.principal.user.userid, $login.principal.user.email_addresses)
$http.metadata.product_name in ("Entra ID","Microsoft")
($http.metadata.event_type = "NETWORK_HTTP" or $http.target.url != "")
$acct = coalesce($http.principal.user.userid, $http.principal.user.email_addresses)
strings.to_lower(coalesce($http.target.url, "")) in regex %M365_SENSITIVE_GRAPH_REGEX
match:
$acct over 30m
Tuning & Exceptions
Identity joins – The lateral rule groups by IP for robustness. If you have strong identity normalization (Salesforce <-> Okta <-> M365), clone it and match on user email instead of IP.
Change windows – Suppress time-bound rules during approved data migrations/Connected App onboarding (temporarily add vendor app to ALLOWLIST_CONNECTED_APP_NAMES)
Integration accounts – Keep KNOWN_INTEGRATION_USERS current; most noise in exfil rules comes from scheduled ETL.
Streaming vs stored – The aforementioned rules assume Real-Time Event Monitoring Stream objects (e.g., ApiEventStream, ReportEventStream, ListViewEventStream, BulkApiResultEvent). For historical hunts, query the stored equivalents (e.g., ApiEvent, ReportEvent, ListViewEvent) with the same logic.
IOC-Based Detections
Scenario & Objectives
A malicious threat actor has either successfully accessed or attempted to access an organization’s network.
The objective is to detect the presence of known UNC6040 IOCs in the environment based on all of the available logs.
Reference Lists
Reference lists organizations should maintain:
STRING
UNC6040_IOC_LIST (IP addresses from threat intel sources, e.g., VirusTotal)
AWS Firewall Manager announces that it is now available in AWS Asia Pacific (Taipei) Region. AWS Firewall Manager helps cloud security administrators and site reliability engineers protect applications while reducing the operational overhead of manually configuring and managing rules.
Working with AWS Firewall Manager, customers hosting applications and workloads in the Asia Pacific (Taipei) Region can deploy defense-in-depth policies that span the full range of supported AWS security services. Customers wishing to secure assets using AWS WAF can create and maintain those security policies centrally with AWS Firewall Manager.
To learn more about how AWS Firewall Manager works, see the AWS Firewall Manager documentation for more details and the AWS Region Table for the list of regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, its features, and its pricing, visit the AWS Firewall Manager website.
Starting today, customers can use boot and data volumes backed by Dell PowerStore and HPE Alletra Storage MP B10000 storage arrays with Amazon Elastic Compute Cloud (Amazon EC2) instances on AWS Outposts, including authenticated and encrypted volumes. This enhancement extends our existing support for boot and data volumes to include Dell and HPE storage arrays, alongside our current support for NetApp® on-premises enterprise storage arrays and Pure Storage® FlashArray™. Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises or edge location for a truly consistent hybrid experience.
With Outposts, customers can maximize the value of their on-premises storage investments by leveraging their existing enterprise storage arrays for both boot and data volumes, complementing managed Amazon EBS and Local Instance Store options. This provides significant operational benefits, including streamlined operating system (OS) management via centralized boot volumes and advanced data management features through high-performance data volumes. By integrating their own storage, organizations can also satisfy data residency requirements and benefit from a consistent cloud operational model for their hybrid environments.
To simplify the process, AWS offers automation scripts through AWS Samples to help customers easily set up and use external block volumes with EC2 instances on Outposts. Customers can use the AWS Management Console or CLI to utilize third-party block volumes with EC2 instances on Outposts.
Third-party storage integration for Outposts with all compatible storage vendors is available on Outposts 2U servers and Outposts racks at no additional charge in all AWS Regions where Outposts is supported. See the FAQs for Outposts servers and Outposts racks for the latest list of supported Regions.
AWS Storage Gateway now supports Virtual Private Cloud (VPC) endpoint policies for your VPC endpoints. With this feature, administrators can attach endpoint policies to VPC endpoints, allowing granular access control over Storage Gateway direct APIs for improved data protection and security posture.
AWS Storage Gateway is a hybrid cloud storage service that provides on-premises applications access to virtually unlimited storage in the cloud. You can use AWS Storage Gateway for backing up and archiving data to AWS, providing on-premises file shares backed by cloud storage, and providing on-premises applications low latency access to data in the cloud.
AWS Storage Gateway support for VPC endpoint policies is available in all AWS Regions where Storage Gateway is available. To learn more, visit our documentation.
AWS Transfer Family now supports four new service-specific condition keys for Identity and Access Management (IAM). With this feature, administrators can create more granular IAM policies and service control policies (SCPs) to restrict configurations for Transfer Family resources, enhancing security controls and compliance management.
IAM condition keys allow you to author policies that enforce access control based on API request context. With these new condition keys, you can now author policies based on Transfer Family context to control which protocols, endpoint types, and storage domains can be configured through policy conditions. For example, you can use transfer:RequestServerEndpointType to prevent the creation of public servers, or transfer:RequestServerProtocols to ensure only SFTP servers can be created, enabling you to define additional permission guardrails for Transfer Family actions.
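To make this concrete, here is a hedged sketch of a service control policy that uses two of the new condition keys to block creation of public, non-SFTP servers. The statement structure follows standard SCP syntax and the PUBLIC/SFTP values mirror Transfer Family's endpoint type and protocol names, but validate the key names and values against the Transfer Family documentation before deploying.

```python
# Sketch: an SCP (expressed as a Python dict) that denies CreateServer/UpdateServer
# when the endpoint type is PUBLIC or any requested protocol is not SFTP.
# Condition values are assumptions for illustration.
import json

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPublicEndpoints",
            "Effect": "Deny",
            "Action": ["transfer:CreateServer", "transfer:UpdateServer"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"transfer:RequestServerEndpointType": "PUBLIC"}
            },
        },
        {
            "Sid": "DenyNonSftpProtocols",
            "Effect": "Deny",
            "Action": ["transfer:CreateServer", "transfer:UpdateServer"],
            "Resource": "*",
            "Condition": {
                "ForAnyValue:StringNotEquals": {
                    "transfer:RequestServerProtocols": "SFTP"
                }
            },
        },
    ],
}

print(json.dumps(scp, indent=2))
```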