Today, AWS announced the expansion of 10 Gbps and 100 Gbps dedicated connections with MACsec encryption capabilities at the existing AWS Direct Connect location in the Equinix BG1 data center near Bogota, Colombia. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location.
The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet.
For more information on the over 146 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
Amazon Simple Notification Service (Amazon SNS) now allows customers to make API requests over Internet Protocol version 6 (IPv6) in the AWS GovCloud (US) Regions. The new endpoints have also been validated under the Federal Information Processing Standard (FIPS) 140-3 program.
Amazon SNS is a fully managed messaging service that enables publish/subscribe messaging between distributed systems, microservices, and event-driven serverless applications. With this update, customers have the option of using either IPv6 or IPv4 when sending requests over dual-stack public or VPC endpoints.
SNS now supports IPv6 in all Regions where the service is available, including AWS Commercial, AWS GovCloud (US), and China Regions. For more information on using IPv6 with Amazon SNS, please refer to our developer guide.
Amazon Simple Notification Service (Amazon SNS) now supports additional endpoints that have been validated under the Federal Information Processing Standard (FIPS) 140-3 program in AWS Regions in the United States and Canada.
FIPS compliant endpoints help companies contracting with the US federal government meet the FIPS security requirement to encrypt sensitive data in supported regions. With this expansion, you can use Amazon SNS for workloads that require a FIPS 140-3 validated cryptographic module when sending requests over dual-stack public or VPC endpoints.
Amazon SNS FIPS compliant endpoints are now available in the following regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), Canada West (Calgary) and AWS GovCloud (US). To learn more about FIPS 140-3 at AWS, visit FIPS 140-3 Compliance.
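As an illustration of how the dual-stack and FIPS endpoints mentioned in these announcements are addressed, the sketch below builds SNS endpoint hostnames following AWS's general endpoint naming conventions. The exact hostnames for a given Region should be confirmed against the AWS service endpoints reference before use.

```python
# Sketch: constructing Amazon SNS endpoint URLs for FIPS and dual-stack
# (IPv6-capable) requests, based on AWS's general naming patterns:
#   standard:   sns.<region>.amazonaws.com
#   FIPS:       sns-fips.<region>.amazonaws.com
#   dual-stack: sns.<region>.api.aws

def sns_endpoint(region: str, fips: bool = False, dualstack: bool = False) -> str:
    """Build an SNS endpoint URL per AWS's documented naming conventions."""
    service = "sns-fips" if fips else "sns"
    # Dual-stack endpoints use the *.api.aws domain and accept IPv4 or IPv6.
    domain = "api.aws" if dualstack else "amazonaws.com"
    return f"https://{service}.{region}.{domain}"

# A boto3 client could then be pointed at the chosen endpoint, e.g.:
# sns = boto3.client("sns", region_name="us-east-1",
#                    endpoint_url=sns_endpoint("us-east-1", fips=True))
```

The endpoint_url override is only needed to force a specific endpoint; the AWS SDKs can also select FIPS or dual-stack endpoints through their own configuration settings.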
AWS Transform now offers Terraform as an additional option to generate network infrastructure code automatically from VMware environments. The service converts your source network definitions into reusable Terraform modules, complementing current AWS CloudFormation and AWS Cloud Development Kit (CDK) support.
AWS Transform for VMware is an agentic AI service that automates the discovery, planning, and migration of VMware workloads, accelerating infrastructure modernization with increased speed and confidence. These migrations require recreating network configurations while maintaining operational consistency. The service now generates Terraform modules alongside CDK and AWS CloudFormation templates. This addition enables organizations to maintain existing deployment pipelines while using preferred tools for modular, customizable network configurations.
Written by: Omar ElAhdan, Matthew McWhirt, Michael Rudden, Aswad Robinson, Bhavesh Dhake, Laith Al
Background
Protecting software-as-a-service (SaaS) platforms and applications requires a comprehensive security strategy. Drawing from analysis of UNC6040’s specific attack methodologies, this guide presents a structured defensive framework encompassing proactive hardening measures, comprehensive logging protocols, and advanced detection capabilities. While emphasizing Salesforce-specific security recommendations, these strategies provide organizations with actionable approaches to safeguard their SaaS ecosystem against current threats.
Google Threat Intelligence Group (GTIG) is tracking UNC6040, a financially motivated threat cluster that specializes in voice phishing (vishing) campaigns specifically designed to compromise organizations’ Salesforce instances for large-scale data theft and subsequent extortion. Over the past several months, UNC6040 has demonstrated repeated success in breaching networks by having its operators impersonate IT support personnel in convincing telephone-based social engineering engagements. This approach has proven particularly effective in tricking employees, often within English-speaking branches of multinational corporations, into actions that grant the attackers access or lead to the sharing of sensitive credentials, ultimately facilitating the theft of organizations’ Salesforce data. In all observed cases, attackers relied on manipulating end users, not exploiting any vulnerability inherent to Salesforce.
A prevalent tactic in UNC6040’s operations involves deceiving victims into authorizing a malicious connected app to their organization’s Salesforce portal. This application is often a modified version of Salesforce’s Data Loader, not authorized by Salesforce. During a vishing call, the actor guides the victim to visit Salesforce’s connected app setup page to approve a version of the Data Loader app with a name or branding that differs from the legitimate version. This step inadvertently grants UNC6040 significant capabilities to access, query, and exfiltrate sensitive information directly from the compromised Salesforce customer environments. This methodology of abusing Data Loader functionalities via malicious connected apps is consistent with recent observations detailed by Salesforce in their guidance on protecting Salesforce environments from such threats.
In some instances, extortion activities haven’t been observed until several months after the initial UNC6040 intrusion activity, which could suggest that UNC6040 has partnered with a second threat actor that monetizes access to the stolen data. During these extortion attempts, the actor has claimed affiliation with the well-known hacking group ShinyHunters, likely as a method to increase pressure on their victims.
Figure 1: Data Loader attack flow
We have observed the following patterns in UNC6040 victimology:
Motive: UNC6040 is a financially motivated threat cluster that accesses victim networks by vishing social engineering.
Focus: Upon obtaining access, UNC6040 has been observed immediately exfiltrating data from the victim’s Salesforce environment using Salesforce’s Data Loader application. Following this initial data theft, UNC6040 was observed leveraging end-user credentials obtained through credential harvesting or vishing to move laterally through victim networks, accessing and exfiltrating data from the victim’s accounts on other cloud platforms such as Okta and Microsoft 365.
Attacker infrastructure: UNC6040 primarily used Mullvad VPN IP addresses to access and perform the data exfiltration on the victim’s Salesforce environments and other services of the victim’s network.
Proactive Hardening Recommendations
The following section provides prioritized recommendations to protect against tactics utilized by UNC6040. It is broken down into the following categories:
Note: While the following recommendations include strategies to protect SaaS applications, they also cover identity security controls and detections applicable at the Identity Provider (IdP) layer and security enhancements for existing processes, such as the help desk.
1. Identity
Positive Identity Verification
To protect against increasingly sophisticated social engineering and credential compromise attacks, organizations must adopt a robust, multilayered process for identity verification. This process moves beyond outdated, easily compromised methods and establishes a higher standard of assurance for all support requests, especially those involving account modifications (e.g., password resets or multi-factor authentication modifications).
Guiding Principles
Assume nothing: Do not inherently trust the caller’s stated identity. Verification is mandatory for all security-related requests.
Defense-in-depth: Rely on a combination of verification methods. No single factor should be sufficient for high-risk actions.
Reject unsafe identifiers: Avoid relying on publicly available or easily discoverable data. Information such as:
Date of birth
Last four digits of a Social Security number
High school names
Supervisor names
This data should not be used as primary verification factors, as it’s often compromised through data breaches or obtainable via open source intelligence (OSINT).
Standard Verification Procedures
Live Video Identity Proofing (Primary Method)
This is the most reliable method for identifying callers. The help desk agent must:
Initiate a video call with the user
Require the user to present a valid corporate badge or government-issued photo ID (e.g., driver’s license) on camera next to their face
Visually confirm that the person on the video call matches the photograph on the ID
Cross-reference the user’s face with their photo in the internal corporate identity system
Verify that the name on the ID matches the name in the employee’s corporate record
Contingency for no video: If a live video call is not possible, the user must provide a selfie showing their face, their photo ID, and a piece of paper with the current date and time written on it.
Additionally, before proceeding with any request, help desk personnel must check the user’s calendar for Out of Office (OOO) or vacation status. All requests from users who are marked as OOO should be presumptively denied until they have officially returned.
For high-risk changes like multi-factor authentication (MFA) resets or password changes for privileged accounts, an additional out-of-band (OOB) verification step is required after the initial ID proofing. This can include:
Call-back: Placing a call to the user’s registered phone number on file
Manager approval: Sending a request for confirmation to the user’s direct manager via a verified corporate communication channel
Special Handling for Third-Party Vendor Requests
Mandiant has observed incidents where attackers impersonate support personnel from third-party vendors to gain access. In these situations, the standard verification principles may not be applicable.
Under no circumstances should the help desk move forward with allowing access. The agent must halt the request and follow this procedure:
End the inbound call without providing any access or information
Independently contact the company’s designated account manager for that vendor using trusted, on-file contact information
Require explicit verification from the account manager before proceeding with any request
Outreach to End Users
Mandiant has observed the threat actor UNC6040 targeting end-users who have elevated access to SaaS applications. Posing as vendors or support personnel, UNC6040 contacts these users and provides a malicious link. Once the user clicks the link and authenticates, the attacker gains access to the application to exfiltrate data.
To mitigate this threat, organizations should rigorously communicate to all end-users the importance of verifying any third-party requests. Verification procedures should include:
Hanging up and calling the official account manager using a phone number on file
Requiring the requester to submit a ticket through the official company support portal
Asking for a valid ticket number that can be confirmed in the support console
Organizations should also provide a clear and accessible process for end-users to report suspicious communications and ensure this reporting mechanism is included in all security awareness outreach.
Since access to SaaS applications is typically managed by central identity providers (e.g., Entra ID, Okta), Mandiant recommends that organizations enforce unified identity security controls directly within these platforms.
Guiding Principles
Mandiant’s approach focuses on the following core principles:
Authentication boundary: This principle establishes a foundational layer of trust based on network context. Access to sensitive resources should be confined within a defined boundary, primarily allowing connections from trusted corporate networks and VPNs to create a clear distinction between trusted and untrusted locations.
Defense-in-depth: This principle dictates that security cannot rely on a single control. Organizations should layer multiple security measures, such as strong authentication, device compliance checks, and session controls.
Identity detection and response: Organizations must continuously integrate real-time threat intelligence into access decisions. This ensures that if an identity is compromised or exhibits risky behavior, its access is automatically contained or blocked until the threat has been remediated.
Identity Security Controls
The following controls are essential for securing access to SaaS applications through a central identity provider.
Utilize Single Sign-On (SSO)
Ensure that all users accessing SaaS applications are accessing via a corporate-managed SSO provider (e.g., Microsoft Entra ID or Okta), rather than through platform-native accounts. A platform-native break glass account should be created and vaulted for use only in the case of an emergency.
In the event that SSO through a corporate-managed provider is not available, refer to the content specific to the applicable SaaS application (e.g., Salesforce) rather than Microsoft Entra ID or Okta.
Mandate Phishing-Resistant MFA
Phishing-resistant MFA must be enforced for all users accessing SaaS applications. This is a foundational requirement to defend against credential theft and account takeovers. Consider enforcing physical FIDO2 keys for accounts with privileged access. Ensure that no MFA bypasses exist in authentication policies tied to business critical applications.
Require Compliant, Corporate-Managed Devices
Access to corporate applications must be limited to devices that are either domain-joined or verified as compliant with the organization’s security standards. This policy ensures that a device meets a minimum security baseline before it can access sensitive data.
Key device posture checks should include:
Valid host certificate: The device must present a valid, company-issued certificate
Approved operating system: The endpoint must run an approved OS that meets current version and patch requirements
Active EDR agent: The corporate Endpoint Detection and Response (EDR) solution must be installed, active, and reporting a healthy status
Mandiant recommends that organizations implement dynamic authentication policies that respond to threats in real time. By integrating identity threat intelligence feeds—from both native platform services and third-party solutions—into the authentication process, organizations can automatically block or challenge access when an identity is compromised or exhibits risky behavior.
This approach primarily evaluates two categories of risk:
Risky sign-ins: The probability that an authentication request is illegitimate due to factors like atypical travel, a malware-linked IP address, or password spray activity
Risky users: The probability that a user’s credential has been compromised or leaked online
Based on the detected risk level, Mandiant recommends that organizations apply a tiered approach to remediation.
Recommended Risk-Based Actions
For high-risk events: Organizations should apply the most stringent security controls. This includes blocking access entirely.
For medium-risk events: Access should be granted only after a significant step-up in verification. This typically means requiring proof of both the user’s identity (via strong MFA) and the device’s integrity (by verifying its compliance and security posture).
For low-risk events: Organizations should still require a step-up authentication challenge, such as standard MFA, to ensure the legitimacy of the session and mitigate low-fidelity threats.
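The tiered model above can be expressed as a small policy function. This is an illustrative sketch: the risk levels and action names are stand-ins, not tied to any specific identity provider's API.

```python
# Sketch of the tiered, risk-based remediation described above.
# Risk levels and requirement names are illustrative placeholders.

def access_decision(risk_level: str) -> dict:
    """Map a detected risk level to an access action and step-up requirements."""
    policy = {
        "high":   {"action": "block",   "requirements": []},
        "medium": {"action": "step_up", "requirements": ["phishing_resistant_mfa", "compliant_device"]},
        "low":    {"action": "step_up", "requirements": ["mfa"]},
    }
    if risk_level not in policy:
        # Fail closed on unexpected input rather than granting access.
        return {"action": "block", "requirements": []}
    return policy[risk_level]
```

Failing closed on unknown risk input mirrors the deny-by-default posture recommended throughout this guide.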
Event monitoring: Provides detailed logs of user actions—such as data access, record modifications, and login origins—and allows these logs to be exported for external analysis
Transaction security policies: Monitors for specific user activities, such as large data downloads, and can be configured to automatically trigger alerts or block the action when it occurs
2. SaaS Applications
Salesforce Targeted Hardening Controls
This section details specific security controls applicable for Salesforce instances. These controls are designed to protect against broad access, data exfiltration, and unauthorized access to sensitive data within Salesforce.
Network and Login Controls
Restrict logins to only originate from trusted network locations.
Threat actors often bypass interactive login controls by leveraging generic API clients and stolen OAuth tokens. This policy flips the model from “allow by default” to “deny by default,” to ensure that only vetted applications can connect.
Enable a “Deny by Default” API policy: Navigate to API Access Control and enable “For admin-approved users, limit API access to only allowed connected apps.” This blocks all unapproved clients.
Maintain a minimal application allowlist: Explicitly approve only essential Connected Apps. Regularly review this allowlist to remove unused or unapproved applications.
Enforce strict OAuth policies per app: For each approved app, configure granular security policies, including restricting access to trusted IP ranges, enforcing MFA, and setting appropriate session and refresh token timeouts.
Revoke sessions when removing apps: When revoking an app’s access, ensure all active OAuth tokens and sessions associated with it are also revoked to prevent lingering access.
Organizational process and policy: Create policies governing application integrations with third parties. Perform Third-Party Risk Management reviews of all integrations with business-critical applications (e.g., Salesforce, Google Workspace, Workday).
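As a simple illustration of the allowlist review recommended above, the sketch below flags installed Connected Apps that are missing from the approved list. The app names are hypothetical; in practice the installed-app inventory would be exported from Salesforce Setup or the Tooling API.

```python
# Hypothetical allowlist, mirroring the "deny by default" posture above.
APPROVED_CONNECTED_APPS = {"Corporate Data Loader", "HR Sync Integration"}

def unapproved_apps(installed_apps):
    """Return installed Connected Apps that are not on the allowlist."""
    return sorted(a for a in installed_apps if a not in APPROVED_CONNECTED_APPS)

# "Data Loadr Pro" mimics the spoofed Data Loader branding described earlier.
installed = ["Corporate Data Loader", "Data Loadr Pro", "HR Sync Integration"]
```

Any name surfaced by this review should be investigated and, if unapproved, revoked along with its active OAuth tokens and sessions.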
Users should only be granted the absolute minimum permissions required to perform their job functions.
Use a “Minimum Access” profile as a baseline: Configure a base profile with minimal permissions and assign it to all new users by default. Limit the assignment of “View All” and “Modify All” permissions.
Grant privileges via Permission Sets: Grant all additional access through well-defined Permission Sets based on job roles, rather than creating numerous custom profiles.
Disable API access for non-essential users: The “API Enabled” permission is required for tools like Data Loader. Remove this permission from all user profiles and grant it only via a controlled Permission Set to a small number of justified users.
Hide the ‘Setup’ menu from non-admin users: For all non-administrator profiles, remove access to the administrative “Setup” menu to prevent unauthorized configuration changes.
Enforce high-assurance sessions for sensitive actions: Configure session settings to require a high-assurance session for sensitive operations such as exporting reports.
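A quick way to audit the "API Enabled" recommendation is to flag any user whose base profile grants the permission, since it should be granted only through the controlled Permission Set. The record shape below is hypothetical; real data would come from Profile and PermissionSetAssignment queries.

```python
# Hypothetical export rows; in a live org these would be derived from
# Profile metadata and PermissionSetAssignment queries.
users = [
    {"name": "alice", "profile_api_enabled": False},
    {"name": "bob",   "profile_api_enabled": True},   # violates the baseline
]

def flag_profile_api_grants(rows):
    """Users whose base profile grants 'API Enabled' (the list should be empty)."""
    return [u["name"] for u in rows if u["profile_api_enabled"]]
```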
Set the internal and external Organization-Wide Defaults (OWD) to “Private” for all sensitive objects.
Use strategic Sharing Rules or other sharing mechanisms to grant wider data access, rather than relying on broad access via the Role Hierarchy.
Leverage Restriction Rules for Row-Level Security
Restriction Rules act as a filter that is applied on top of all other sharing settings, allowing for fine-grained control over which records a user can see.
Ensure that any users with access to sensitive data or with privileged access to the underlying Salesforce instance are setting strict timeouts on any Salesforce support access grants.
Revoke any standing requests and only re-enable with strict time limits for specific use cases. Be wary of enabling these grants from administrative accounts.
Salesforce Targeted Logging and Detections Controls
This section outlines key logging and detection strategies for Salesforce instances. These controls are essential for identifying and responding to advanced threats within the SaaS environment.
SaaS Applications Logging
To gain visibility into the tactics, techniques, and procedures (TTPs) used by threat actors against SaaS Applications, Mandiant recommends enabling critical log types in the organization’s Salesforce environment and ingesting the logs into their Security Information and Event Management (SIEM).
What You Need in Place Before Logging
Before you turn on collection or write detections, make sure your organization is actually entitled to the logs you plan to use, and that the right features are enabled.
Entitlement check (must-have)
Most security logs/features are gated behind Event Monitoring via Salesforce Shield or the Event Monitoring Add-On. This applies to Real-Time Event Monitoring (RTEM) streaming and viewing.
Pick your data model per use case
RTEM – Streams (near real-time alerting): Available in Enterprise/Unlimited/Developer subscriptions; streaming events retained ~3 days.
RTEM – Storage: Many events are stored as Big Objects (native storage); some are standard objects (e.g., the Threat Detection event stores)
Event Log Files (ELF) – CSV model (batch exports): Available in Enterprise/Performance/Unlimited editions.
Use Event Manager to enable or disable streaming and storage per event, and to view RTEM events.
Grant access via profiles/permission sets for RTEM and the Threat Detection UI.
Threat Detection & ETS
Threat Detection events are viewed in the UI with Shield or the add-on, and are stored in the corresponding EventStore objects.
Enhanced Transaction Security (ETS) is included with RTEM for block/MFA/notify actions on real-time events.
Recommended Log Sources to Monitor
Login History (LoginHistory): Tracks all login attempts, including username, time, IP address, status (successful/failed), and client type. This allows you to identify unusual login times, unknown locations, or repeated failures, which could indicate credential stuffing or account compromise.
Login Events (LoginEventStream): LoginEvent tracks the login activity of users who log in to Salesforce.
Setup Audit Trail (SetupAuditTrail): Records administrative and configuration changes within your Salesforce environment. This helps track changes made to permissions, security settings, and other critical configurations, facilitating auditing and compliance efforts.
API Calls (ApiEventStream): Monitors API usage and potential misuse by tracking calls made by users or connected apps.
Report Exports (ReportEventStream): Provides insights into report downloads, helping to detect potential data exfiltration attempts.
List View Events (ListViewEventStream): Tracks user interaction with list views, including access and manipulation of data within those views.
Bulk API Events (BulkApiResultEvent): Tracks when a user downloads the results of a Bulk API request.
Permission Changes (PermissionSetEvent): Tracks changes to permission sets and permission set groups. This event initiates when a permission is added to, or removed from a permission set.
API Anomaly (ApiAnomalyEvent): Tracks anomalies in how users make API calls.
Unique Query Event Type: Unique Query events capture specific search queries (SOQL), filter IDs, and report IDs that are processed, along with the underlying database queries (SQL).
External Identity Provider Event Logs: Tracks information from login attempts using SSO. (Follow the guidance provided by your identity provider for monitoring and collecting IdP event logs.)
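For batch collection, login activity can be pulled with a SOQL query against the standard LoginHistory object. The sketch below shows an illustrative query string plus a small helper that surfaces the rapid-failure pattern called out later in this guide; how the query is executed (simple-salesforce, the Salesforce CLI, etc.) depends on your tooling, and the sample rows are hypothetical.

```python
# Illustrative SOQL using standard LoginHistory fields.
LOGIN_HISTORY_SOQL = (
    "SELECT UserId, LoginTime, SourceIp, Status, Application "
    "FROM LoginHistory WHERE LoginTime = LAST_N_DAYS:7"
)

def failed_login_counts(rows):
    """Count non-successful logins per user to surface rapid-failure patterns."""
    counts = {}
    for r in rows:
        if r["Status"] != "Success":
            counts[r["UserId"]] = counts.get(r["UserId"], 0) + 1
    return counts

# Hypothetical query results:
sample = [
    {"UserId": "005A", "Status": "Success"},
    {"UserId": "005B", "Status": "Invalid Password"},
    {"UserId": "005B", "Status": "Invalid Password"},
]
```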
These log sources give organizations the coverage needed to collect and monitor for the common TTPs used by threat actors. The key log sources to monitor and the observable Salesforce activities for each TTP are as follows:
TTP: Vishing
Observable Salesforce activities: Suspicious login attempts (rapid failures); logins from unusual IPs/ASNs (e.g., Mullvad/Tor); OAuth (“Remote Access 2.0”) from unrecognized clients.
Log sources: Login History; LoginEventStream/LoginEvent; Setup Audit Trail

TTP: Malicious Connected App Authorization (e.g., Data Loader, custom scripts)
Observable Salesforce activities: New Connected App creation/modification (broad scopes: api, refresh_token, offline_access); policy relaxations (Permitted Users, IP restrictions); granting of API Enabled / “Manage Connected Apps” via permission sets.
Log sources: Setup Audit Trail; PermissionSetEvent; LoginEventStream/LoginEvent (OAuth)

TTP: Data Exfiltration (via API, Data Loader, reports)
Observable Salesforce activities: High-rate Query/QueryMore/QueryAll bursts; large RowsProcessed/RecordCount in reports and list views (chunked); bulk job result downloads; file/attachment downloads at scale.
Log sources: ApiEventStream/ApiEvent; ReportEventStream/ReportEvent; ListViewEventStream/ListViewEvent; BulkApiResultEvent; FileEvent/FileEventStore; ApiAnomalyEvent/ReportAnomalyEvent; Unique Query Event Type

TTP: Lateral Movement/Persistence (within Salesforce or to other cloud platforms)
Observable Salesforce activities: Permissions elevated (e.g., View/Modify All Data, API Enabled); new user/service accounts; LoginAs activity; logins from VPN/Tor after Salesforce OAuth; pivots to Okta/M365, then Graph data pulls.
Log sources: Setup Audit Trail; PermissionSetEvent; LoginAsEventStream
SaaS Applications Detections
While native platform threat detections provide some protection, they often lack the centralized visibility needed to connect disparate events across a complex environment. By developing custom, targeted SIEM detection rules, organizations can proactively detect malicious activity.
Data Exfiltration & Cross-SaaS Lateral Movement (Post-Authorization)
MITRE Mapping: TA0010 – Exfiltration & TA0008 – Lateral Movement
Scenario & Objectives
After a user authorizes a (malicious or spoofed) Connected App, UNC6040 typically:
Performs data exfiltration quickly (REST pagination bursts, Bulk API downloads, large or sensitive report exports).
Pivots to Okta/Microsoft 365 from the same risky egress IP to expand access and steal more data.
The objective here is to detect Salesforce OAuth → exfiltration within 10 minutes, and Salesforce OAuth → Okta/M365 login within 60 minutes (same risky IP), plus single-signal, low-noise exfiltration patterns.
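As a language-agnostic sketch of these correlation windows (outside any particular SIEM), the Python below joins first-stage OAuth events to second-stage events by user ID or source IP within a time window. The event shapes and epoch-second timestamps are simplified stand-ins; a SIEM rule engine performs this joining natively.

```python
# Correlate two event streams on a shared key within a time window,
# mirroring the OAuth -> exfil (10m, by user) and OAuth -> IdP login
# (60m, by IP) patterns described above.

def correlate(first_events, second_events, key, window_seconds):
    """Return (a, b) pairs where b follows a, shares a[key], and lands in the window."""
    hits = []
    for a in first_events:
        for b in second_events:
            if a[key] == b[key] and 0 <= b["ts"] - a["ts"] <= window_seconds:
                hits.append((a, b))
    return hits

oauth = [{"uid": "u1", "ip": "198.51.100.7", "ts": 1000}]
bulk  = [{"uid": "u1", "ip": "198.51.100.7", "ts": 1400}]   # 400s after OAuth
okta  = [{"uid": "u1", "ip": "198.51.100.7", "ts": 4000}]   # 50 min after OAuth

exfil_hits = correlate(oauth, bulk, "uid", 600)    # 10-minute window
pivot_hits = correlate(oauth, okta, "ip", 3600)    # 60-minute window
```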
Baseline & Allowlist
Re-use the lists you already maintain for the vishing phase and add two regex helpers for content focus.
STRING
ALLOWLIST_CONNECTED_APP_NAMES
KNOWN_INTEGRATION_USERS (user ids/emails that legitimately use OAuth)
VPN_TOR_ASNS (ASNs as strings)
CIDR
ENTERPRISE_EGRESS_CIDRS (your corporate/VPN public egress)
REGEX
SENSITIVE_REPORT_REGEX (report names considered sensitive)
M365_SENSITIVE_GRAPH_REGEX (Microsoft Graph URLs that pull mail, files, or reports)
$oauth.metadata.product_name = "SALESFORCE"
$oauth.metadata.log_type = "SALESFORCE"
$oauth.extracted.fields["LoginType"] = "Remote Access 2.0"
($oauth.extracted.fields["Status"] = "Success" or $oauth.security_result.action_details = "Success")
( not ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
or ( ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
and ( not ($ip in cidr %ENTERPRISE_EGRESS_CIDRS)
or strings.concat(ip_to_asn($ip), "") in %VPN_TOR_ASNS ) ) )
$uid = coalesce($oauth.principal.user.userid, $oauth.extracted.fields["UserId"])
$bulk.metadata.product_name = "SALESFORCE"
$bulk.metadata.log_type = "SALESFORCE"
$bulk.metadata.product_event_type = "BulkApiResultEvent"
$uid = coalesce($bulk.principal.user.userid, $bulk.extracted.fields["UserId"])
match:
$uid over 10m
Or
$oauth.metadata.product_name = "SALESFORCE"
$oauth.metadata.log_type = "SALESFORCE"
$oauth.extracted.fields["LoginType"] = "Remote Access 2.0"
($oauth.extracted.fields["Status"] = "Success" or $oauth.security_result.action_details = "Success")
( not ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
or ( ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
and ( not ($ip in cidr %ENTERPRISE_EGRESS_CIDRS)
or strings.concat(ip_to_asn($ip), "") in %VPN_TOR_ASNS ) ) )
$uid = coalesce($oauth.principal.user.userid, $oauth.extracted.fields["UserId"])
$api.metadata.product_name = "SALESFORCE"
$api.metadata.log_type = "SALESFORCE"
$api.metadata.product_event_type = "ApiEventStream"
$uid = coalesce($api.principal.user.userid, $api.extracted.fields["UserId"])
match:
$uid over 10m
Or
$oauth.metadata.product_name = "SALESFORCE"
$oauth.metadata.log_type = "SALESFORCE"
$oauth.extracted.fields["LoginType"] = "Remote Access 2.0"
($oauth.extracted.fields["Status"] = "Success" or $oauth.security_result.action_details = "Success")
( not ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
or ( ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
and ( not ($ip in cidr %ENTERPRISE_EGRESS_CIDRS)
or strings.concat(ip_to_asn($ip), "") in %VPN_TOR_ASNS ) ) )
$uid = coalesce($oauth.principal.user.userid, $oauth.extracted.fields["UserId"])
$report.metadata.product_name = "SALESFORCE"
$report.metadata.log_type = "SALESFORCE"
$report.metadata.product_event_type = "ReportEventStream"
strings.to_lower(coalesce($report.extracted.fields["ReportName"], "")) in regex %SENSITIVE_REPORT_REGEX
$uid = coalesce($report.principal.user.userid, $report.extracted.fields["UserId"])
match:
$uid over 10m
Note: Single-event rules can be used here instead of multi-event rules, keying on a single product event type such as ApiEventStream, BulkApiResultEvent, or ReportEventStream. However, single-event rules can be very noisy, so the supporting reference lists should be actively maintained.
Bulk API Large Result Download (Non-Integration User)
Bulk API/Bulk v2 result download above threshold by a human user.
Why high-fidelity: Clear exfil artifact.
Key signals: BulkApiResultEvent, user not in KNOWN_INTEGRATION_USERS.
$e.metadata.product_name = "SALESFORCE"
$e.metadata.log_type = "SALESFORCE"
$e.metadata.product_event_type = "BulkApiResultEvent"
not (coalesce($e.principal.user.userid, $e.extracted.fields["UserId"]) in %KNOWN_INTEGRATION_USERS)
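The SENSITIVE_REPORT_REGEX reference list used above must be populated by each organization. Below is a hypothetical starting point, expressed in Python so the patterns can be tested locally before loading them into the SIEM; tune the terms to your own report naming conventions.

```python
import re

# Illustrative patterns only; match against lowercased report names.
SENSITIVE_REPORT_PATTERNS = [
    r"customer",
    r"contact.?export",
    r"all.?accounts",
    r"pipeline",
    r"ssn|social.?security",
]
SENSITIVE_REPORT_REGEX = re.compile("|".join(SENSITIVE_REPORT_PATTERNS))

def is_sensitive_report(name: str) -> bool:
    """True when a report name matches any sensitive-content pattern."""
    return bool(SENSITIVE_REPORT_REGEX.search(name.lower()))
```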
Salesforce OAuth → Okta/M365 Login From Same Risky IP in ≤60 Minutes (Multi-Event)
Suspicious Salesforce OAuth followed within 60m by Okta or Entra ID login from the same public IP, where the IP is off-corp or VPN/Tor ASN.
Why high-fidelity: Ties the attacker’s egress IP across SaaS within a tight window.
Key signals:
Salesforce OAuth posture (unknown app OR allowlisted+risky egress)
OKTA* or OFFICE_365 USER_LOGIN from the same IP
Lists/knobs: ENTERPRISE_EGRESS_CIDRS, VPN_TOR_ASNS. (Optional sibling rule binding by user email if identities are normalized.)
$oauth.metadata.product_name = "SALESFORCE"
$oauth.metadata.log_type = "SALESFORCE"
$oauth.extracted.fields["LoginType"] = "Remote Access 2.0"
($oauth.extracted.fields["Status"] = "Success" or $oauth.security_result.action_details = "Success")
( not ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
or ( ($app in %ALLOWLIST_CONNECTED_APP_NAMES)
and ( not ($ip in cidr %ENTERPRISE_EGRESS_CIDRS)
or strings.concat(ip_to_asn($ip), "") in %VPN_TOR_ASNS ) ) )
$ip = coalesce($oauth.principal.asset.ip, $oauth.principal.ip)
re.regex($okta.metadata.log_type, `OKTA.*`)
$okta.metadata.event_type = "USER_LOGIN"
$ip = coalesce($okta.principal.asset.ip, $okta.principal.ip)
$o365.metadata.log_type = "OFFICE_365"
$o365.metadata.event_type = "USER_LOGIN"
$ip = coalesce($o365.principal.asset.ip, $o365.principal.ip)
match:
$ip over 60m
M365 Graph Data-Pull After Risky Login
Entra ID login from risky egress followed by Microsoft Graph endpoints that pull mail/files/reports.
Why high-fidelity: Captures post-login data access typical in account takeovers.
Key signals: OFFICE_365 USER_LOGIN with off-corp IP or VPN/Tor ASN, then HTTP requests to URLs matching M365_SENSITIVE_GRAPH_REGEX by the same account shortly after
$login.metadata.log_type = "OFFICE_365"
$login.metadata.event_type = "USER_LOGIN"
$ip = coalesce($login.principal.asset.ip, $login.principal.ip)
( not ($ip in cidr %ENTERPRISE_EGRESS_CIDRS)
or strings.concat(ip_to_asn($ip), "") in %VPN_TOR_ASNS )
$acct = coalesce($login.principal.user.userid, $login.principal.user.email_addresses)
($http.metadata.product_name = "Entra ID" or $http.metadata.product_name = "Microsoft")
($http.metadata.event_type = "NETWORK_HTTP" or $http.target.url != "")
$acct = coalesce($http.principal.user.userid, $http.principal.user.email_addresses)
strings.to_lower(coalesce($http.target.url, "")) in regex %M365_SENSITIVE_GRAPH_REGEX
match:
$acct over 30m
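The two predicates this rule combines, risky egress and a sensitive Graph URL, can be sketched in Python as follows. The CIDR list, ASN set, and regex below are made-up stand-ins for the %ENTERPRISE_EGRESS_CIDRS, %VPN_TOR_ASNS, and %M365_SENSITIVE_GRAPH_REGEX reference lists; populate the real lists from your own environment.

```python
import ipaddress
import re

# Stand-ins for the reference lists; real values come from your environment.
ENTERPRISE_EGRESS_CIDRS = ["198.51.100.0/24"]
VPN_TOR_ASNS = {"AS64500"}
M365_SENSITIVE_GRAPH_REGEX = re.compile(
    r"graph\.microsoft\.com/v1\.0/(me|users)/(messages|drive|mailfolders)"
)

def risky_egress(ip: str, asn: str) -> bool:
    """Off-corp IP, or a corp IP egressing through a VPN/Tor ASN."""
    addr = ipaddress.ip_address(ip)
    on_corp = any(addr in ipaddress.ip_network(c) for c in ENTERPRISE_EGRESS_CIDRS)
    return (not on_corp) or (asn in VPN_TOR_ASNS)

def sensitive_graph_pull(url: str) -> bool:
    """True when the URL matches a mail/files/reports Graph pattern."""
    return bool(M365_SENSITIVE_GRAPH_REGEX.search(url.lower()))

print(risky_egress("203.0.113.9", "AS13335"))                      # True: off-corp IP
print(sensitive_graph_pull("https://graph.microsoft.com/v1.0/me/messages"))  # True: mail pull
```

A production rule would additionally join the two signals on the same account within the match window, as the YARA-L above does.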
Tuning & Exceptions
Identity joins – The lateral rule groups by IP for robustness. If you have strong identity normalization (Salesforce <-> Okta <-> M365), clone it and match on user email instead of IP.
Change windows – Suppress time-bound rules during approved data migrations/Connected App onboarding (temporarily add the vendor app to ALLOWLIST_CONNECTED_APP_NAMES).
Integration accounts – Keep KNOWN_INTEGRATION_USERS current; most noise in exfil rules comes from scheduled ETL.
Streaming vs stored – The aforementioned rules assume Real-Time Event Monitoring Stream objects (e.g., ApiEventStream, ReportEventStream, ListViewEventStream, BulkApiResultEvent). For historical hunts, query the stored equivalents (e.g., ApiEvent, ReportEvent, ListViewEvent) with the same logic.
IOC-Based Detections
Scenario & Objectives
A malicious threat actor has either successfully accessed or attempted to access an organization’s network.
The objective is to detect the presence of known UNC6040 IOCs in the environment based on all of the available logs.
Reference Lists
Reference lists organizations should maintain:
STRING
UNC6040_IOC_LIST (IP addresses from threat intel sources, e.g., VirusTotal)
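Conceptually the IOC sweep is a set-membership check across all available log sources. Here is a minimal Python sketch; the field names are simplified assumptions, and the IP values are documentation ranges used as placeholders, not real UNC6040 indicators.

```python
# Placeholder IOC list (documentation ranges, not real indicators);
# in practice this mirrors the UNC6040_IOC_LIST reference list.
UNC6040_IOC_LIST = {"192.0.2.10", "203.0.113.77"}

def sweep(events, ioc_ips=UNC6040_IOC_LIST):
    """Return events whose source or destination IP is on the IOC list."""
    return [
        e for e in events
        if {e.get("principal_ip"), e.get("target_ip")} & ioc_ips
    ]

events = [
    {"principal_ip": "192.0.2.10", "target_ip": "10.0.0.5"},  # IOC hit
    {"principal_ip": "10.0.0.6", "target_ip": "10.0.0.7"},    # clean
]
print(len(sweep(events)))  # 1
```

Keeping the list in a single reference list means every rule that checks it picks up new indicators as threat intel is updated, without rule changes.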
AWS Firewall Manager announces that it is now available in the AWS Asia Pacific (Taipei) Region. AWS Firewall Manager helps cloud security administrators and site reliability engineers protect applications while reducing the operational overhead of manually configuring and managing rules.
With AWS Firewall Manager, customers hosting their applications and workloads in AWS Taipei can deploy defense-in-depth policies across the full range of AWS security services. Customers wishing to secure assets using AWS WAF can create and maintain security policies centrally with AWS Firewall Manager.
To learn more about how AWS Firewall Manager works, see the AWS Firewall Manager documentation, and see the AWS Region Table for the list of Regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, its features, and its pricing, visit the AWS Firewall Manager website.
Starting today, customers can use boot and data volumes backed by Dell PowerStore and HPE Alletra Storage MP B10000 storage arrays with Amazon Elastic Compute Cloud (Amazon EC2) instances on AWS Outposts, including authenticated and encrypted volumes. This enhancement extends our existing support for boot and data volumes to include Dell and HPE storage arrays, alongside our current support for NetApp® on-premises enterprise storage arrays and Pure Storage® FlashArray™. Outposts is a fully managed service that extends AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises or edge location for a truly consistent hybrid experience.
With Outposts, customers can maximize the value of their on-premises storage investments by leveraging their existing enterprise storage arrays for both boot and data volumes, complementing managed Amazon EBS and Local Instance Store options. This provides significant operational benefits, including streamlined operating system (OS) management via centralized boot volumes and advanced data management features through high-performance data volumes. By integrating their own storage, organizations can also satisfy data residency requirements and benefit from a consistent cloud operational model for their hybrid environments.
To simplify the process, AWS offers automation scripts through AWS Samples to help customers easily set up and use external block volumes with EC2 instances on Outposts. Customers can use the AWS Management Console or CLI to utilize third-party block volumes with EC2 instances on Outposts.
Third-party storage integration for Outposts with all compatible storage vendors is available on Outposts 2U servers and Outposts racks at no additional charge in all AWS Regions where Outposts is supported. See the FAQs for Outposts servers and Outposts racks for the latest list of supported Regions.
AWS Storage Gateway now supports Virtual Private Cloud (VPC) endpoint policies for your VPC endpoints. With this feature, administrators can attach endpoint policies to VPC endpoints, allowing granular access control over Storage Gateway direct APIs for improved data protection and security posture.
AWS Storage Gateway is a hybrid cloud storage service that provides on-premises applications access to virtually unlimited storage in the cloud. You can use AWS Storage Gateway for backing up and archiving data to AWS, providing on-premises file shares backed by cloud storage, and providing on-premises applications low latency access to data in the cloud.
AWS Storage Gateway support for VPC endpoint policies is available in all AWS Regions where Storage Gateway is available. To learn more, visit our documentation.
AWS Transfer Family now supports four new service-specific condition keys for Identity and Access Management (IAM). With this feature, administrators can create more granular IAM policies and service control policies (SCPs) to restrict configurations for Transfer Family resources, enhancing security controls and compliance management.
IAM condition keys allow you to author policies that enforce access control based on API request context. With these new condition keys, you can now author policies based on Transfer Family context to control which protocols, endpoint types, and storage domains can be configured through policy conditions. For example, you can use transfer:RequestServerEndpointType to prevent the creation of public servers, or transfer:RequestServerProtocols to ensure only SFTP servers can be created, enabling you to define additional permission guardrails for Transfer Family actions.
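As an illustration of the guardrail described above, a service control policy using one of the new condition keys might look like the following sketch, which denies creation of public-facing Transfer Family servers. The condition key transfer:RequestServerEndpointType is from the announcement; the exact key values and policy shape should be confirmed against the Transfer Family documentation.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyPublicTransferServers",
      "Effect": "Deny",
      "Action": "transfer:CreateServer",
      "Resource": "*",
      "Condition": {
        "StringEquals": { "transfer:RequestServerEndpointType": "PUBLIC" }
      }
    }
  ]
}
```

Attached at the organization level, this denies any CreateServer request whose endpoint type is public, regardless of the caller's own IAM permissions.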
Today, AWS announces the general availability of AWS Service Quotas integration with AWS Step Functions, enabling customers to monitor and manage their Step Functions quotas directly from the Service Quotas console. AWS Service Quotas is a service that helps you view and manage your AWS service quotas from a central location. AWS Step Functions is a visual workflow service that helps customers orchestrate AWS services, automate business processes, and build serverless applications. This integration improves service quota visibility and management for AWS Step Functions users.
With this launch, you can now view your AWS Step Functions account-level quota values through the Service Quotas console and monitor quota utilization through Amazon CloudWatch metrics. This enhanced visibility is particularly valuable for customers running high-volume workflow operations at scale, helping them proactively monitor resource usage and avoid potential service disruptions. Additionally, you can now request quota increases directly from the Service Quotas console. For eligible requests, quota changes are automatically updated without manual intervention, streamlining the quota management process.
Service Quotas console integration for AWS Step Functions is available in all commercial AWS Regions and the AWS GovCloud (US) Regions where AWS Step Functions is available.
To learn more about managing AWS Step Functions quotas, visit the AWS Step Functions documentation. You can access this feature through the Service Quotas console or through the CLI.
Today, we’re announcing that Amazon Elastic VMware Service (Amazon EVS) is now available in all availability zones in the Asia Pacific (Singapore) and Europe (London) Regions. This expansion provides more options to leverage AWS scale and flexibility for running your VMware workloads in the cloud.
Amazon EVS lets you run VMware Cloud Foundation (VCF) directly within your Amazon Virtual Private Cloud (VPC) on EC2 bare-metal instances, powered by AWS Nitro. Using either our step-by-step configuration workflow or the AWS Command Line Interface (CLI) with automated deployment capabilities, you can set up a complete VCF environment in just a few hours. This rapid deployment enables faster workload migration to AWS, helping you eliminate aging infrastructure, reduce operational risks, and meet critical timelines for exiting your data center.
The added availability in the Asia Pacific (Singapore) and Europe (London) Regions gives your VMware workloads lower latency through closer proximity to your end users, compliance with data residency or sovereignty requirements, and additional high availability and resiliency options for your enhanced redundancy strategy.
You can now deploy AWS IAM Identity Center in 36 AWS Regions, including Asia Pacific (Bangkok) and Mexico Central (Querétaro).
IAM Identity Center is the recommended service for managing workforce access to AWS applications. It enables you to connect your existing source of workforce identities to AWS once and offer your users a single sign-on experience across AWS. It powers the personalized experiences offered by AWS applications, such as Amazon Q, and the ability to define and audit user-aware access to data in AWS services, such as Amazon Redshift. It can also help you manage access to multiple AWS accounts from a central place. IAM Identity Center is available at no additional cost in these AWS Regions.
AWS Transfer Family now supports Virtual Private Cloud (VPC) endpoint policies for your VPC endpoints. With this feature, administrators can attach an endpoint policy to an interface VPC endpoint, allowing granular access control over Transfer Family APIs for improved data protection and security posture. Additionally, Transfer Family now supports Federal Information Processing Standards (FIPS) 140-3 enabled VPC endpoints.
Previously, customers had full access to Transfer Family APIs through an interface VPC endpoint, powered by AWS PrivateLink. With this launch, you can now manage which Transfer Family API actions (CreateServer, StartServer, DeleteServer, etc.) can be performed, which principals can perform them, and which resources they can act upon. These policies work with existing IAM user and role policies and organizational service control policies.
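For example, an endpoint policy along these lines could limit the endpoint to read-only Transfer Family calls for a specific role. This is a hedged sketch: the action names follow the Transfer Family API naming used in the announcement, and the account ID and role ARN are placeholders.

```json
{
  "Statement": [
    {
      "Sid": "ReadOnlyTransferViaThisEndpoint",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/transfer-operator" },
      "Action": [
        "transfer:DescribeServer",
        "transfer:ListServers"
      ],
      "Resource": "*"
    }
  ]
}
```

Because the endpoint policy is evaluated together with the caller's IAM policies, the effective permissions are the intersection of the two.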
Today, AWS announces the launch of Amazon Elastic Container Service (Amazon ECS) Managed Instances, a new fully managed compute option designed to eliminate infrastructure management overhead while giving you access to the full capabilities of Amazon EC2. By offloading infrastructure operations to AWS, ECS Managed Instances helps you quickly launch and scale your workloads, while enhancing performance and reducing your total cost of ownership.
With ECS Managed Instances, you get the application performance you want and the simplicity you need. Simply define your task requirements, such as the number of vCPUs, memory size, and CPU architecture, and Amazon ECS automatically provisions, configures, and operates the most optimal EC2 instances within your AWS account using AWS-controlled access. You can also specify desired instance types in the Managed Instances capacity provider configuration, including GPU-accelerated, network-optimized, and burstable performance instances, to run your workloads on the instance families you prefer.
ECS Managed Instances dynamically scales EC2 instances to match your workload requirements and continuously optimizes task placement to reduce infrastructure costs. It also enhances your security posture through regular security patching initiated every 14 days. You can use EC2 event windows to schedule patching to occur within weekly maintenance windows, minimizing the risk of interruptions during critical hours.
ECS Managed Instances is now available in six AWS Regions: US East (N. Virginia), US West (Oregon), Europe (Dublin), Africa (Cape Town), Asia Pacific (Singapore), and Asia Pacific (Tokyo). To get started with ECS Managed Instances, use the AWS Console, Amazon ECS MCP Server, or your favorite infrastructure-as-code tooling to enable it in a new or existing Amazon ECS cluster. You will be charged for the management of compute provisioned, in addition to your regular Amazon EC2 costs. To learn more about ECS Managed Instances, visit the feature page, documentation, and AWS News launch blog.
Amazon EC2 Auto Scaling (ASG) now supports Internet Protocol version 6 (IPv6), enabling dual-stack (IPv4 and IPv6) connectivity for your Auto Scaling groups. IPv6 provides an expanded address space, letting you scale your application on AWS beyond the typical constraints of the number of IPv4 addresses in your VPC.
With IPv6, you can assign easy-to-manage, contiguous IP ranges to microservices and get virtually unlimited scale for your applications. Moreover, with support for both IPv4 and IPv6, you can gradually transition applications from IPv4 to IPv6, enabling safer migration. IPv6 support is available in all commercial AWS Regions (except New Zealand) and GovCloud Regions where ASG is available. To learn more about configuring your network to use IPv6 endpoints, see the documentation.
Starting today, Amazon EC2 Auto Scaling (ASG) supports Federal Information Processing Standard (FIPS) 140-3 validated VPC endpoints. With this launch, you can use AWS PrivateLink with ASG for regulated workloads that require secure connections using FIPS 140-3 validated cryptographic modules.
FIPS-compliant endpoints help organizations contracting with the U.S. federal government meet FIPS security requirements for encrypting sensitive data in supported regions. To create a VPC endpoint that connects to an ASG endpoint, see Setting up a VPC endpoint for Amazon EC2 Auto Scaling.
This capability is available in the following regions: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Canada (Central), and Canada West (Calgary).
For more information about FIPS 140-3 at AWS, visit FIPS 140-3 Compliance. To learn more about Amazon EC2 Auto Scaling, visit the ASG product page.
Amazon Elastic Container Service (Amazon ECS) now supports running tasks in IPv6-only subnets. With this launch, Amazon ECS tasks and services can run using only IPv6 addresses, without requiring IPv4. This enables customers to deploy containerized applications in IPv6-only environments, scale without being limited by IPv4 address availability, and meet IPv6 compliance requirements through native IPv6 support in Amazon ECS.
Previously, Amazon ECS tasks always required an IPv4 address, even when launched in dual-stack subnets. This requirement could create scaling and management challenges for customers operating large fleets of containerized applications, where IPv4 address space became a bottleneck. With IPv6-only support, Amazon ECS tasks launched in IPv6-only subnets use only IPv6 addresses. This removes IPv4 as a dependency and helps organizations that must meet IPv6 adoption or regulatory mandates.
The feature works across all Amazon ECS launch types and can be used with awsvpc, bridge, and host networking modes. To get started, create IPv6-only subnets in your VPC and launch Amazon ECS services or tasks in those subnets. Amazon ECS automatically detects the configuration and provisions the appropriate networking. To learn more about IPv6-only task networking and supported AWS Regions, see the Amazon ECS task networking documentation for AWS Fargate launch type and EC2 launch type. You can also read our blog post for a detailed walkthrough and migration strategies.
Today, we’re announcing the general availability of Claude Sonnet 4.5, Anthropic’s most intelligent model and its best-performing model for complex agents, coding, and computer use, on Vertex AI.
Claude Sonnet 4.5 is built to work independently for hours, maintaining clarity while orchestrating tools and coordinating multiple agents to solve complex problems. It’s designed to excel at long-running tasks with enhanced domain knowledge in coding, finance, research, and cybersecurity. Key use cases include:
Coding: Autonomously complete long-horizon coding tasks. Plan and execute software projects that span hours or days, as well as everyday development tasks.
Cybersecurity: Deploy agents that autonomously patch vulnerabilities before exploitation, shifting from reactive detection to proactive defense.
Financial analysis: Handle everything from entry-level financial analysis to advanced predictive analysis like continuously monitoring global regulatory changes and preemptively adapting compliance systems.
Research: Orchestrate tools and context, and deliver ready-to-go office files that turn expert analysis into final deliverables and actionable insights.
We’re also announcing Vertex AI support for Anthropic’s upgrades to Claude Code—including a VS Code extension and the next version (2.0) of the terminal interface, complete with checkpoints for more autonomous operation. Powered by Claude Sonnet 4.5, Claude Code now handles longer, more complex development tasks than the previous version.
Get started
Start building with Claude Sonnet 4.5 today by following these instructions:
Navigate to the Claude Sonnet 4.5 model card in the Vertex AI Model Garden, select “Enable”, and follow the subsequent instructions. You can also find and easily procure Claude Sonnet 4.5 on Google Cloud Marketplace.
Visit our documentation for pricing and regional support details.
The Vertex AI advantage: production-ready agents and applications
Bringing powerful models like Claude Sonnet 4.5 to production quickly requires more than just API access; it requires a unified AI platform. Vertex AI gives you an all-in-one platform to build, deploy, and manage your AI with confidence:
Orchestrate multi-agent systems: Build agents with a flexible, open approach through Agent Development Kit (ADK), then scale them in production with Agent Engine.
Get committed capacity: Reserve dedicated capacity and prioritized processing for your Claude Sonnet 4.5 applications at a fixed cost. This is made possible with provisioned throughput. To get started, contact your Google Cloud sales representative.
Boost model performance and efficiency: Get the most out of Claude with supported features. Run large-scale jobs with batch predictions, analyze codebases with a 1 million token context window, reduce costs with prompt caching, and ground responses with citation support. For detailed information on Claude-supported features, refer to our documentation.
Deploy with highly efficient infrastructure: Deploying on Vertex AI means running on infrastructure that is purpose-built for AI and designed to provide optimal performance and cost across your workloads. The global endpoint for Claude also enhances availability and reduces latency by dynamically serving traffic from the nearest available region.
Operate securely, by design: Build securely from day one, knowing your data is protected and you can easily manage compliance. This is provided by Vertex AI and Google Cloud’s robust, built-in security and data governance controls.
How customers are building with Claude on Vertex AI
Leading organizations are already leveraging the powerful combination of Claude and Google Cloud to drive significant business impact.
Augment Code uses Claude on Vertex AI to power its AI coding assistant, which helps developers navigate and contribute to production-grade codebases.
“What we’re able to get out of Claude is truly extraordinary, but all of the work we’ve done to deliver knowledge of customer code, used in conjunction with Claude and the other models we host on Google Cloud, is what makes our product so powerful.” – Scott Dietzen, CEO, Augment Code
spring.new is helping users build custom business applications and tools in hours using natural language prompts.
“Customers tell us that our platform, powered by Claude models and Google Cloud, enables one person to create applications in one to two hours that previously took up to three months.” – Amitay Gilboa, CEO, spring.new
TELUS built its generative AI platform, Fuel iX™, on Google Cloud to give its team members a choice of curated AI models, like Claude, inspiring engineering excellence and enterprise-wide productivity.
“Getting a model as powerful as Claude on Vertex AI is a win-win that makes life so much easier. We get a model that excels at tool calling on a comprehensive platform that integrates with our core Google Cloud workloads like GKE and Cloud Run — that’s the magic.” – Justin Watts, Distinguished Engineer, TELUS
Welcome to the second Cloud CISO Perspectives for September 2025. Today, Google Cloud COO Francis deSouza offers his insights on how boards of directors and CISOs can thrive with a good working relationship, adapted from a recent episode of the Cyber Savvy Boardroom podcast.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
Boards should be ‘bilingual’ in AI and security to gain a competitive advantage
By Francis deSouza, chief operating officer, Google Cloud
AI is one of the fastest, most impactful technology shifts I’ve seen in my career. As adoption continues to surge, companies are facing complex and often technical questions about how AI intersects with corporate governance and strategy. One way forward is for boards of directors and cybersecurity teams to become “bilingual” in how AI and cybersecurity affect each other — to understand how AI needs to be secured against threats, how AI can be used to empower defenders, and how both needs affect business outcomes.
Organizations that adopt AI should evolve their cybersecurity posture, because AI models and agents expand the surface area that needs to be protected. That requires hardening existing data infrastructure, developing access controls for agents, and understanding how those changes affect governance and risk management.
Cybersecurity should be a core duty of every board member, not just those serving on audit and risk committees. Becoming bilingual in AI can help board members focus on why they should understand their organization’s security posture, and be prepared for potential breaches. But there’s much more that boards can do — here are four steps leaders can take to drive effective change in today’s dynamic environment.
1. Integrate cybersecurity into business strategy
What used to be a landscape dominated by individual hackers has now dramatically expanded to sophisticated groups that have been specifically formed to extract value from organizations by stealing and ransoming their data.
While it’s important to be fluent in business strategy, boards should also work with security leaders towards integrating cybersecurity into their overall roadmap. Boards can encourage a collaborative approach to align cybersecurity with critical business services, which can help strengthen security posture, protect critical assets, and enhance resilience against evolving and emerging threats.
2. Develop a framework for cybersecurity investments
Boards should ask questions to ensure cybersecurity investments deliver real business value — beyond compliance. Key areas for boards to investigate include identifying and understanding the protection of critical digital and physical assets with software components, assessing the maturity level of protection, and knowing the potential cost of different types of breaches.
Here’s where boards should encourage third-party assessments, running simulations, and tabletop exercises to help prepare an organization for breach responses. It’s also important for boards to develop a framework for cybersecurity investments to help them benchmark spending against industry data, and assess the effectiveness of that investment.
When boards understand the risks and costs associated with different types of breaches, including remediation and reputational damage, they are better positioned to help assess the actual value of cybersecurity investments.
3. Prioritize cybersecurity in mergers and acquisitions
One area cybersecurity becomes especially critical is in mergers and acquisitions. Assessing a target company’s security posture is a critical component of due diligence, and can help create a roadmap for integrating the target company into the acquirer’s security and compliance posture.
This approach includes non-negotiables for day one, such as issuing new, compliant laptops, planning network segregation, and a remediation roadmap for any existing vulnerabilities. Third-party assessments also have a role to play here to help inform post-acquisition plans.
4. Create a cyber-aware culture from the top down
We’ve been vocal about how creating a cyber-aware culture starts at the top. Boards should set the tone by placing cybersecurity on the main board’s agenda regularly, at least once a year.
They can also review internal and third-party attestations, and examine breach action plans to encourage a holistic approach to cybersecurity. Executive leadership must champion the security-first mindset, setting clear expectations, allocating necessary resources, and holding teams accountable. This top-down approach sends a powerful message that security is a non-negotiable priority.
Why boards should have more AI cyber-awareness
Cybersecurity has emerged as a board-level issue because of digital transformation and the emergence of AI, and this presents an opportunity and a challenge. By becoming bilingual in AI and security, boards can ensure their companies are moving decisively to not only improve efficiency and security, but to redefine what’s possible in their industries.
For more on Google Cloud’s cybersecurity guidance for boards of directors, you can check out the resources at our insights hub.
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
Blocking shadow agents won’t work. Here’s a more secure way forward: Shadow IT. Shadow AI. It’s human nature to use technology in the most expedient way possible, but shadow agents pose great risks. Here’s how to secure them, and your business. Read more.
How to combat bucket-squatting in five steps: Threat actors target cloud storage buckets to intercept your data and impersonate your business. Here are five steps you can take to make them more secure. Read more.
How to secure your remote MCP server on Google Cloud: Here are five key MCP deployment risks you should be aware of, and how using a centralized proxy architecture on Google Cloud can help mitigate them. Read more.
The global harms of restrictive cloud licensing, one year later: Microsoft’s restrictive cloud licensing has harmed the global economy, but ending it could help supercharge Europe’s economic engine. Read more.
Introducing DNS Armor to mitigate domain name system risks: Google Cloud is partnering with Infoblox to deliver Google Cloud DNS Armor, a cloud-native DNS security service available now in preview. Read more.
Solve security operations challenges with expertise and speed: At Google Cloud, we understand the value that MSSPs can bring, so we’ve built a robust ecosystem of MSSP partners, specifically empowered to help you modernize security operations and achieve better security outcomes, faster. Read more.
New GCE and GKE dashboards strengthen security posture: We’ve introduced new, integrated security dashboards in GCE and GKE consoles, powered by Security Command Center, to provide critical insights. Read more.
Please visit the Google Cloud blog for more security stories published this month.
Threat Intelligence news
Backdoor BRICKSTORM enabling espionage into tech and legal sectors: Google Threat Intelligence Group (GTIG) is tracking BRICKSTORM malware activity, which is being used to maintain persistent access to victim organizations in the U.S. across a range of industry verticals, including legal services, software as a service (SaaS) providers, business process outsourcers (BPOs), and technology companies. The value of these targets extends beyond typical espionage missions, potentially providing data to feed development of zero-days and establishing pivot points for broader access to downstream victims. Read more.
Widespread data theft targets Salesforce instances via Salesloft Drift: An investigation into Salesloft Drift has led Google Threat Intelligence Group (GTIG) to issue an advisory to alert organizations about widespread data theft from Salesloft Drift customer integrations, affecting Salesforce and others. The campaign is carried out by the actor tracked as UNC6395. We are advising Salesloft Drift customers to treat all authentication tokens stored in or connected to the Drift platform as potentially compromised. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Podcasts from Google Cloud
The AI future of SOAPA: Jon Oltsik, who coined Security Operations and Analytics Platform Architecture (SOAPA), gives hosts Anton Chuvakin and Tim Peacock an update on the ongoing debate between consolidating security around a single platform versus a more disaggregated, best-of-breed approach — including how agentic AI has changed the conversation. Listen here.
The AI-fueled arms race for email security: Email security is a settled matter, right? Not if AI has anything to say about it. AegisAI CEO Cy Khormaee and CTO Ryan Luo chat with Anton and Tim on how AI has upended email security best practices. Listen here.
Cyber Savvy Boardroom: Enterprise cyber leadership: Francis deSouza, chief operating officer, Google Cloud, joins Office of the CISO’s Nick Godfrey and David Homovich to talk about the biggest challenge facing boards in the next three to five years: governing agentic AI. Listen here.
Defender’s Advantage: How vSphere became a target for adversaries: Mandiant Consulting’s Stuart Carrera joins host Luke McNamara to discuss how threat actors are increasingly targeting the VMware vSphere estate, and leveraging their access to this environment to conduct extortion and data theft. Listen here.
Behind the Binary: Inside the FLARE-On reverse-engineering gauntlet: Host Josh Stroschein is joined by FLARE-On challenge host and author Nick Harbour, and regular challenge author Blas Kojusner, for an in-depth tour of the challenge’s history and a discussion of how it has grown into a must-do event for malware analysts and reverse engineers. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.
Broadcom recently announced a change to its VMware licensing model for hyperscalers, moving to an exclusive “bring your own” subscription model for VMware Cloud Foundation (VCF) starting on November 1, 2025. This means that in the future, Google Cloud VMware Engine (GCVE) customers will need to purchase portable VCF subscriptions directly from Broadcom to use with Google Cloud VMware Engine instead of buying VCF-included subscriptions of GCVE.
Google Cloud already offers a Bring Your Own License (BYOL) option for GCVE. In fact, we were the first hyperscaler to do so, in 2024.
Google and Broadcom plan for the following:
Customers with a committed use discount (CUD)
If you purchased a 1- or 3-year GCVE CUD on or before October 15, 2025, you can continue to use GCVE with the VCF license included for the remainder of your term.
For any new CUDs purchased after October 15, 2025, you’ll need to purchase the VCF licenses directly from Broadcom and the BYOL option of the GCVE service from Google Cloud.
Customers using on-demand
You can continue to operate VCF license-included, on-demand nodes that exist as of October 15, 2025 until June 30, 2027.
What’s not changing
The core capabilities of VMware Engine remain the same; it is still a managed Google service that provides a dedicated VMware Cloud Foundation environment running natively on Google Cloud’s infrastructure.
Helpful resources
We stand ready to help you navigate these changes. Here are some additional resources to guide you:
Google Cloud VMware Engine: Learn more about the service and its options.
Your Google Cloud Account Team: Please reach out to your Google Cloud account team, who can help review your existing commitments, discuss the implications of these changes for your organization, and help you plan for a smooth transition.