Amazon Textract is a managed machine learning service that automatically extracts text, handwriting, and data from any document or image. We regularly improve the underlying machine learning models based on customer feedback to provide even better accuracy. Today, we are pleased to announce feature and accuracy updates to the text detection model used in Textract DetectDocumentText and AnalyzeDocument APIs.
This update adds support for superscripts, subscripts, and rotated text in documents. The update also includes accuracy improvements for text detection in box forms, extraction of visually similar character sets (e.g., ‘0’ vs. ‘O’), and lower-resolution documents such as faxes.
This update is now available in US East (Ohio, N. Virginia), US West (N. California, Oregon), Asia Pacific (Mumbai, Seoul, Singapore, Sydney), Canada (Central), Europe (Frankfurt, Ireland, London, Paris, Spain), and AWS GovCloud (US-East, US-West) Regions.
For over two decades, Google has been a pioneer in AI, laying groundwork that has shaped the industry. Concurrently, in the Web3 space, Google focuses on empowering the developer community by providing public goods resources like BigQuery blockchain datasets and testnet faucets, as well as the cloud infrastructure builders will need to bring their decentralized applications to life.
AI x Web3 Landscape
AI for Web3 encompasses the practical ways AI can be applied as a tool to improve the efficiency and effectiveness of Web3 companies and projects – from analytics to market research to chatbots. But one of the most powerful synergies is Web3 AI agents. These autonomous agents leverage AI’s intelligence to operate within the Web3 ecosystem, and they rely on Web3’s principles of decentralization and provenance to operate in a trustworthy manner, for use cases ranging from cross-border payments to trust and provenance tracking.
AI agents – autonomous software systems, often powered by Large Language Models (LLMs) – are set to revolutionize Web3 interactions. They can execute complex tasks, manage DeFi portfolios, enhance gaming, analyze data, and interact with blockchains or even other agents without direct human intervention. Imagine agents equipped with crypto wallets engaging in transactions with each other using the A2A protocol and facilitating economic activities with stablecoins, simplifying complex transactions.
Key applications of AI for Web3
Sophisticated libraries now equip developers with the tools to build and deploy these agents. These libraries often come with ready-to-use “skills” or “tools” that grant agents immediate capabilities, such as executing swaps on a DEX, posting to decentralized social media, or fetching and interpreting on-chain data. A key innovation is the ability to understand natural language instructions and take action on them. For example, an agent can “swap 1 ETH for USDC on the most liquid exchange” without manual intervention. To function, these agents must be provisioned with access to essential Web3 components: RPC nodes to read and write to the blockchain, indexed datasets for efficient querying, and dedicated crypto wallets to hold and transact with digital assets.
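As a toy illustration of the natural-language-to-action step (not any particular library’s API – the function and field names here are hypothetical), an agent’s tooling might first parse an instruction like the swap example above into a structured intent before invoking a wallet or DEX tool:

```python
import re

def parse_swap_intent(instruction):
    """Parse a 'swap <amount> <token> for <token>' instruction into a
    structured action. A production agent would delegate this to an LLM
    with tool/function calling; a regex keeps the sketch self-contained."""
    match = re.search(
        r"swap\s+([\d.]+)\s+([A-Za-z]+)\s+for\s+([A-Za-z]+)",
        instruction,
        re.IGNORECASE,
    )
    if not match:
        return None
    amount, sell, buy = match.groups()
    return {
        "action": "swap",
        "amount": float(amount),
        "sell_token": sell.upper(),
        "buy_token": buy.upper(),
    }

intent = parse_swap_intent("Swap 1 ETH for USDC on the most liquid exchange")
print(intent)
```

The structured intent would then be handed to a DEX or wallet tool for execution.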
How to build Web3 AI Agents with Google Cloud
Google Cloud provides a flexible, end-to-end suite of tools for building Web3 AI Agents, allowing you to start simple and scale to highly complex, customized solutions:
1. For rapid prototyping and no-code development: Vertex AI Agent Builder Conversational Agents allows for rapid prototyping and deployment of agents through a user-friendly interface, making it accessible even for non-technical users (refer to the Agent Builder codelab for a quick start). To facilitate this simplicity and speed, the platform provides a focused set of foundational tools. Agents can be easily augmented with standard capabilities like leveraging datastores, performing Google searches, or accessing websites and files. However, for more advanced functionalities—such as integrating crypto wallets, ensuring MCP compatibility, or implementing custom models and orchestration—custom development is the recommended path.
2. For full control and custom agent architecture: Open-source frameworks on Vertex AI. For highly customized needs, developers can build their own agent architecture using open-source frameworks (Agent Development Kit, LangGraph, CrewAI) powered by state-of-the-art LLMs like Gemini (including Gemini 2.5 Pro, which leads the Chatbot Arena at the time of publication) and Claude, which are available through Vertex AI. A typical Web3 agent architecture (shown below) involves a user interface, an agent runtime orchestrating tasks, an LLM for reasoning, memory for state management, and various tools/plugins (blockchain connectors, wallet managers, search, etc.) connected via adapters.
Example of a Web3 agent architecture
Some of the key features when using Agent Development Kit are as follows:
Easily define and orchestrate workflows across many agents and tools – For example, you can use sub-agents, each handling part of the logic. In the crypto agent example above, one agent can find trending projects or tokens on Twitter/X, while another researches those projects via Google Search, and a third takes actions on the user’s behalf using the crypto wallet.
Model agnostic – you can use any model from Google or other providers and switch between them easily
Intuitive local development for fast iteration – You can visualize the agent topology and trace an agent’s actions easily. Just run the ADK agent locally and start testing by chatting with the agent.
Screenshot of ADK Dev UI used for testing and developing agents
Supports MCP and A2A (agent-to-agent standard) out of the box: your agents can communicate with other services and other agents seamlessly using standardized protocols
Deployment agnostic: Agents can be containerized and deployed on Agent Engine, Cloud Run or GKE easily. Vertex AI Agent Engine offers a managed runtime environment, where Google Cloud handles scaling, security, infrastructure management, as well as providing easy tools for evaluating and testing the agents. This abstracts away deployment and scaling complexities, letting developers focus on agent functionality.
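The multi-agent delegation pattern described above can be sketched in plain Python. This is an illustration of the pattern only, not actual ADK code – ADK’s own agent and tool classes replace these stand-ins:

```python
from dataclasses import dataclass

@dataclass
class SubAgent:
    """Minimal stand-in for a sub-agent: a name plus a handler function."""
    name: str
    handle: callable

# Hypothetical sub-agents mirroring the crypto-agent example in the text.
trend_agent = SubAgent("trends", lambda task: f"trending tokens for: {task}")
research_agent = SubAgent("research", lambda prior: f"research on ({prior})")
wallet_agent = SubAgent("wallet", lambda prior: f"wallet action based on ({prior})")

def coordinator(task):
    """Route the task through each sub-agent in turn, passing results along,
    the way a root agent orchestrates its sub-agents."""
    trail, result = [], task
    for agent in (trend_agent, research_agent, wallet_agent):
        result = agent.handle(result)
        trail.append((agent.name, result))
    return trail

steps = coordinator("find and act on a trending token")
for name, output in steps:
    print(f"{name}: {output}")
```

In a real ADK application, each lambda would be a full agent with its own model, instructions, and tools.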
Get started
We are always looking for Web3 companies to build with us. If this is an area you want to explore, please express your interest here.
For more details on how Web3 customers are leveraging Google Cloud, refer to this webinar on the Intersection of AI and Web3.
Thank you to Pranav Mehrotra, Web3 Strategic Pursuit Lead, for his help writing and reviewing this article.
Amazon Simple Email Service (Amazon SES) is now available in the Asia Pacific (Hyderabad), Middle East (UAE), and Europe (Zurich) Regions. Customers can now use these new Regions to leverage Amazon SES to send emails and, if needed, to help manage data sovereignty requirements.
Amazon SES is a scalable, cost-effective, and flexible cloud-based email service that allows digital marketers and application developers to send marketing, notification, and transactional emails from within any application. To learn more about Amazon SES, visit this page.
With this launch, Amazon SES is available in 27 AWS Regions globally: US East (N. Virginia, Ohio), US West (N. California, Oregon), AWS GovCloud (US-West, US-East), Asia Pacific (Osaka, Mumbai, Hyderabad, Sydney, Singapore, Seoul, Tokyo, Jakarta), Canada (Central), Europe (Ireland, Frankfurt, London, Paris, Stockholm, Milan, Zurich), Israel (Tel Aviv), Middle East (Bahrain, UAE), South America (São Paulo), and Africa (Cape Town).
For a complete list of all of the regional endpoints for Amazon SES, see AWS Service Endpoints in the AWS General Reference.
Zonal autoshift practice runs take place once a week to ensure your application is ready for a zonal shift. Now with on-demand practice runs, you can trigger a practice run anytime you want to and validate your application’s preparedness. When a practice run is started, a pre-check will be performed to ensure your application has balanced capacity across AZs. This check is done for Application Load Balancers, Network Load Balancers, and EC2 Auto Scaling groups.
To get started, you can initiate an on-demand practice run in the ARC console, API, or CLI. This allows you to test your application’s practice run configuration to ensure the alarms are configured correctly and your application behaves as expected. For both automated and on-demand practice, pre-checks for balanced capacity will validate your resource’s capacity and ensure it’s safe to start the practice. If the pre-check fails, you’ll be alerted, so you can take corrective action.
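The balanced-capacity idea behind the pre-check can be illustrated with a small sketch. The actual pre-check logic ARC applies is not public; the 20% tolerance here is an arbitrary assumption for illustration only:

```python
from collections import Counter

def is_capacity_balanced(instance_azs, tolerance=0.2):
    """Hypothetical pre-check: capacity counts as 'balanced' if no AZ
    deviates from the mean instance count by more than `tolerance`.
    Illustrative only; not the actual ARC pre-check implementation."""
    counts = Counter(instance_azs)
    if len(counts) < 2:
        return False  # a single-AZ deployment cannot absorb a zonal shift
    mean = sum(counts.values()) / len(counts)
    return all(abs(c - mean) <= tolerance * mean for c in counts.values())

# Roughly even spread across two AZs passes; a lopsided spread fails.
print(is_capacity_balanced(["us-east-1a"] * 10 + ["us-east-1b"] * 9))
print(is_capacity_balanced(["us-east-1a"] * 10 + ["us-east-1b"] * 2))
```

If an application fails a check like this, shifting traffic away from one AZ would overload the remaining capacity, which is exactly what the pre-check protects against.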
Amazon Connect can now integrate agent activities from third-party applications as Connect Tasks, which can be evaluated alongside work completed in Connect, providing managers with a unified application for quality management. You can programmatically ingest activities from third-party applications (such as application processing, social media posts, etc.) as completed Tasks within Connect, capturing details relevant for performance evaluation as Task attributes. Managers can then evaluate these external activities, alongside native Connect interactions to get a unified view of agent performance within Connect dashboards.
This feature is available in all regions where Contact Lens performance evaluations are already available. To learn more, please visit our documentation and our webpage. For information about Contact Lens pricing, please visit our pricing page.
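Programmatic ingestion of an external activity as a Task typically goes through Connect’s StartTaskContact API (for example, boto3’s `connect.start_task_contact`). A sketch of the request body, with entirely hypothetical field values, might look like:

```python
# Sketch of a StartTaskContact request body for ingesting a third-party
# activity as a Connect Task. All identifiers and attribute names below
# are hypothetical placeholders.
task_request = {
    "InstanceId": "REPLACE_WITH_CONNECT_INSTANCE_ID",
    "ContactFlowId": "REPLACE_WITH_CONTACT_FLOW_ID",
    "Name": "Loan application review",
    "Description": "Application processed in an external system",
    "Attributes": {  # details captured for performance evaluation
        "source_system": "loan-origination",
        "activity_type": "application_processing",
        "agent_id": "agent-1234",
    },
    "References": {
        "case": {"Type": "URL", "Value": "https://example.com/case/42"}
    },
}

print(sorted(task_request["Attributes"]))
```

The `Attributes` map is where evaluation-relevant details from the external system land, so managers can score the activity alongside native Connect contacts.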
We are excited to announce that Amazon Athena is now available in Asia Pacific (Taipei).
Athena is a serverless, interactive query service that makes it simple to analyze petabytes of data using SQL, without requiring infrastructure setup or management. Athena is built on open-source Trino and Presto query engines, providing powerful and flexible interactive query capabilities, and supports popular data formats such as Apache Parquet and Apache Iceberg.
For more information about the AWS Regions where Athena is available, see the AWS Region table. To learn more, see Amazon Athena.
Amazon Elastic Container Service (Amazon ECS) now makes it easier to troubleshoot unhealthy tasks by adding the Task ID in service action events generated due to health check failures.
Amazon ECS is designed to help easily launch and scale your applications. When your Amazon ECS task fails Elastic Load Balancing (ELB) health checks, Amazon ECS produces an unhealthy service action event. With today’s launch, the Task ID is also included as part of the generated event, so you can quickly pinpoint the Task in question for faster troubleshooting.
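For example, an EventBridge rule matching ECS service action events could now pull the Task ID straight out of the event detail. The payload below is illustrative only – consult the ECS documentation for the exact event fields and eventName values:

```python
import json

# Hypothetical example of an ECS service action event; field names and the
# eventName value are illustrative, not authoritative.
event_json = """
{
  "detail-type": "ECS Service Action",
  "detail": {
    "eventType": "WARN",
    "eventName": "SERVICE_TASK_UNHEALTHY",
    "clusterArn": "arn:aws:ecs:us-east-1:111122223333:cluster/prod",
    "taskId": "0123456789abcdef0123456789abcdef"
  }
}
"""

event = json.loads(event_json)
task_id = event["detail"].get("taskId")
print(f"Unhealthy task: {task_id}")
```

Having the Task ID in the event means an alerting pipeline can link directly to the failing task instead of listing all tasks in the service.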
You can now use Amazon EBS General Purpose SSD volumes (gp3) volumes with the second-generation AWS Outposts racks for your workloads that require local data processing and data residency. The latest generation of gp3 enables you to provision performance independently of storage capacity, delivering a baseline performance of 3,000 IOPS and 125 MB/s at any volume size. With gp3 volumes, you can scale up to 16,000 IOPS and 1,000 MB/s, delivering 4x the maximum throughput of the previously supported gp2 volumes.
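A small sketch of validating a gp3 request against these limits before calling CreateVolume. The 3,000–16,000 IOPS and 125–1,000 MB/s bounds come from the figures above; the 500-IOPS-per-GiB ratio is the documented gp3 constraint, included here as an assumption:

```python
def validate_gp3(size_gib, iops, throughput_mibps):
    """Return a list of problems with a gp3 volume request; empty means OK.
    Bounds reflect gp3's published baseline and maximum performance."""
    problems = []
    if not 3000 <= iops <= 16000:
        problems.append("iops must be between 3,000 and 16,000")
    if not 125 <= throughput_mibps <= 1000:
        problems.append("throughput must be between 125 and 1,000 MB/s")
    if iops > 500 * size_gib:
        problems.append("iops may not exceed 500 per GiB of volume size")
    return problems

# A 100 GiB volume can reach the gp3 maximums; an 8 GiB volume cannot.
print(validate_gp3(size_gib=100, iops=16000, throughput_mibps=1000))
print(validate_gp3(size_gib=8, iops=16000, throughput_mibps=1000))
```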
EBS gp3 volumes on second-generation AWS Outposts are ideal for a wide variety of performance-intensive applications, including MySQL, Cassandra, virtual desktops, and Hadoop analytics clusters. AWS Outposts racks offer the same AWS infrastructure, AWS services, APIs, and tools to virtually any on-premises data center or colocation space for a truly consistent hybrid experience. Second-generation AWS Outposts racks support the latest generation of x86-powered Amazon Elastic Compute Cloud (Amazon EC2) instances, starting with C7i, M7i, and R7i instances. These instances deliver twice the vCPU, memory, and network bandwidth, as well as up to 40% better performance compared to C5, M5, and R5 instances on first-generation AWS Outposts racks.
You can manage gp3 volumes using the AWS Management Console, the AWS Command Line Interface (CLI), or the AWS SDKs in all Regions and countries/territories where second-generation AWS Outposts racks are supported. For more information on gp3 volumes, see the product overview page. For a current list of AWS Regions and countries/territories where second-generation AWS Outposts racks are supported, check out the AWS Outposts racks FAQs page.
Welcome to the second Cloud CISO Perspectives for June 2025. Today, Thiébaut Meyer and Bhavana Bhinder from Google Cloud’s Office of the CISO discuss our work to help defend European healthcare against cyberattacks.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
The global threats facing European hospitals and health organizations
By Thiébaut Meyer, director, Office of the CISO, and Bhavana Bhinder, European healthcare and life sciences lead, Office of the CISO
As the global threat landscape continues to evolve, hospitals and healthcare organizations remain primary targets for cyber threat actors. To help healthcare organizations defend themselves so they can continue to provide critical, life-saving patient care — even while facing cyberattacks — the European Commission has initiated the European Health Security Action Plan to improve the cybersecurity of hospitals and healthcare providers.
There are two imperative steps that would both support Europe’s plan and bolster resilience in our broader societal fabric: Prioritizing healthcare as a critical domain for cybersecurity investment, and emphasizing collaboration with the private sector. This approach, acknowledging the multifaceted nature of cyber threats and the interconnectedness of healthcare systems, is precisely what is required to secure public health in an increasingly digitized world. It’s great to see the European Commission has recently announced funding to improve cybersecurity, including for European healthcare entities.
At Google, we have cultivated extensive industry partnerships across the European Union to help healthcare organizations of all levels of digital sophistication and capability be more resilient in the face of cyberattacks.
Collaboration across healthcare organizations, regulators, information sharing bodies and technology providers like Google is essential to get and stay ahead of these attacks.
Cyberattacks targeting the healthcare domain, especially those that leverage ransomware, can take over healthcare systems and completely upend their operations: stopping life-saving medical procedures, disrupting critical scheduling and payment activities, halting delivery of critical supplies like blood and tissue donations, and even rendering care facilities physically unsafe. In some cases, these cyberattacks have contributed to patient mortality. The statistics paint a grim picture:
Ransomware attacks accounted for 54% of analyzed cybersecurity incidents in the EU health sector between 2021 and 2023, with 83% financially motivated.
71% of ransomware attacks impacted patient care and were often coupled with patient data breaches, according to a 2024 European Commission report.
Healthcare’s share of posts on data leak sites has doubled over the past three years, even as the number of data leak sites tracked by Google Threat Intelligence Group increased by nearly 50% in 2024. In one example, a malicious actor targeting European organizations said that they were willing to pay 2% to 5% more for hospitals — particularly ones with emergency services.
In-hospital mortality rises by 35% to 41% among patients already admitted to a hospital when a ransomware attack takes place.
The U.K.’s National Health Service (NHS) has confirmed that a major cyberattack harmed 170 patients in 2024.
“Achieving resilience necessitates a holistic and adaptive approach, encompassing proactive prevention that uses modern, secure-by-design technologies paired with robust detection and incident response, stringent supply chain management, comprehensive human factor mitigation, strategic utilization of artificial intelligence, and targeted investment in securing unique healthcare vulnerabilities,” said Google Cloud’s Taylor Lehmann, director, Healthcare and Life Sciences, Office of the CISO. “Collaboration across healthcare organizations, regulators, information sharing bodies and technology providers like Google is essential to get and stay ahead of these attacks.”
Bold action is needed to combat this scourge, and that action should include helping healthcare providers migrate to modern technology that has been built securely by design and stays secure in use. We believe security must be embedded from the outset — not as an afterthought — and continuously thereafter. Google’s secure-by-design products and services have helped support hospitals and health organizations across Europe in addressing the pervasive risks posed by cyberattacks, including ransomware.
Secure-by-design is a proactive approach that ensures core technologies like Google Cloud, Google Workspace, Chrome, and ChromeOS are built with inherent protections, such as:
Encrypting Google Cloud customer data at rest by default and data in transit across its physical boundaries, offering multiple options for encryption key management and key access justification.
Building security and compliance into ChromeOS, which powers Chromebooks, to help protect against ransomware attacks. ChromeOS boasts a record of no reported ransomware attacks. Its architecture includes capabilities such as Verified Boot, sandboxing, blocked executables, and user space isolation, along with automatic, seamless updates that proactively patch vulnerabilities.
Providing health systems with a secure alternative through Chrome Enterprise Browser and ChromeOS for accessing internet-based and internal IT resources crucial for patient care.
Committing explicitly in our contracts to implementing and maintaining robust technical, organizational, and physical security measures, and supporting NIS2 compliance efforts for Google Cloud and Workspace customers.
Our products and services are already helping modernize and secure European healthcare organizations, including:
In Germany, healthcare startup Hypros has been collaborating with Google Cloud to help hospitals detect health incidents without compromising patient privacy. Hypros’ innovative patient monitoring system uses our AI and cloud computing capabilities to detect and alert staff to in-hospital patient emergencies, such as out-of-bed falls, delirium onset, and pressure ulcers. They’ve tested the technology in real-world trials at leading institutions including the University Hospital Schleswig-Holstein, one of the largest medical care centers in Europe.
CUF, Portugal’s largest healthcare provider with 19 hospitals and clinics, has embraced Google Chrome and cloud applications to enhance energy efficiency and streamline IT operations. ChromeOS is noted in the industry for its efficiency, enabling operations on machines that consume less energy and simplifying IT management by reducing the need for on-site hardware maintenance.
The Canary Islands 112 Emergency and Safety Coordination Center is migrating to Google Cloud. Led by the public company Gestión de Servicios para la Salud y Seguridad en Canarias (GCS) and developed in conjunction with Google Cloud, this migration is one of the first in which a public emergency services administration has moved to the public cloud. They’re also using Google Cloud’s sovereign cloud solutions to help securely share critical information, such as call recordings and personal data, with law enforcement and judicial bodies.
We believe that information sharing must extend beyond threat intelligence to encompass data-supported conclusions regarding effective practices, counter-measures, and successes. Reducing barriers to sophisticated and rapid intelligence-sharing, coupled with verifiable responses, can be the decisive factor between a successful defense and a vulnerable one.
Our engagement with organizations including the international Health-ISAC and ENISA underscores our commitment to building trust across many communities, a concept highly pertinent to the EU’s objective of supporting the European Health ISAC and the U.S.-based Health-ISAC’s EU operations.
Protecting European health data with Sovereign Cloud and Confidential Computing
We’re committed to digital sovereignty for the EU and to helping healthcare organizations take advantage of the transformative potential of cloud and AI without compromising on security or patient privacy.
We’ve embedded our secure-by-design principles in our approach to our digital sovereignty solutions. By enabling granular control over data location, processing, and access, European healthcare providers can confidently adopt scalable cloud infrastructure and deploy advanced AI solutions, secure in the knowledge that their sensitive patient data remains protected and compliant with European regulations like GDPR, the European Health Data Space (EHDS), and the Network and Information Systems Directive.
Additionally, Confidential Computing, technology that we helped pioneer, has helped narrow that critical security gap by protecting data in use.
Google Cloud customers such as AiGenomix leverage Confidential Computing to deliver infectious disease surveillance and early cancer detection. Confidential Computing helps them ensure privacy and security for genomic and related health data assets, and also align with the EHDS’s vision for data-driven improvements in healthcare delivery and outcomes.
Building trust in global healthcare resilience
We believe that these insights and capabilities offered by Google can significantly contribute to the successful implementation of the European Health Security Action Plan. We are committed to continued collaboration with the European Commission, EU member states, and all stakeholders to build a more secure and resilient digital future for healthcare.
To learn more about Google’s efforts to secure and support healthcare organizations around the world, contact our Office of the CISO.
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
Securing open-source credentials at scale: We’ve developed a powerful tool to scan open-source package and image files by default for leaked Google Cloud credentials. Here’s how to use it. Read more.
Audit smarter: Introducing our Recommended AI Controls framework: How can we make AI audits more effective? We’ve developed an improved approach that’s scalable and evidence-based: the Recommended AI Controls framework. Read more.
Google named a Strong Performer in The Forrester Wave for security analytics platforms: Google has been named a Strong Performer in The Forrester Wave™: Security Analytics Platforms, Q2 2025, in our first year of participation. Read more.
Mitigating prompt injection attacks with a layered defense strategy: Our prompt injection security strategy is comprehensive, and strengthens the overall security framework for Gemini. We found that model training with adversarial data significantly enhanced our defenses against indirect prompt injection attacks in Gemini 2.5 models. Read more.
Just say no: Build defense in depth with IAM Deny and Org Policies: IAM Deny and Org Policies provide a vital, scalable layer of security. Here’s how to use them to boost your IAM security. Read more.
Please visit the Google Cloud blog for more security stories published this month.
What’s in an ASP? Creative phishing attack on prominent academics and critics of Russia: We detail two distinct threat actor campaigns based on research from Google Threat Intelligence Group (GTIG) and external partners, who observed a Russia state-sponsored cyber threat actor targeting prominent academics and critics of Russia and impersonating the U.S. Department of State. The threat actor often used extensive rapport building and tailored lures to convince the target to set up application-specific passwords (ASPs). Read more.
Remote Code Execution on Aviatrix Controller: A Mandiant Red Team case study simulated an “Initial Access Brokerage” approach and discovered two vulnerabilities on Aviatrix Controller, a software-defined networking utility that allows for the creation of links between different cloud vendors and regions. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Podcasts from Google Cloud
AI red team surprises, strategies, and lessons: Daniel Fabian joins hosts Anton Chuvakin and Tim Peacock to talk about lessons learned from two years of AI red teaming at Google. Listen here.
Practical detection-as-code in the enterprise: Is detection-as-code just another meme phrase? Google Cloud’s David French, staff adoption engineer, talks with Anton and Tim about how detection-as-code can help security teams. Listen here.
Cyber-Savvy Boardroom: What Phil Venables hears on the street: Phil Venables, strategic security adviser for Google Cloud, joins Office of the CISO’s Alicja Cade and David Homovich to discuss what he’s hearing directly from boards and executives about the latest in cybersecurity, digital transformation, and beyond. Listen here.
Beyond the Binary: Attributing North Korean cyber threats: Who names the world’s most notorious APTs? Google reverse engineer Greg Sinclair shares with host Josh Stroschein how he hunts down and names malware and threat actors, including Lazarus Group, the North Korean APT. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.
Written by: Seemant Bisht, Chris Sistrunk, Shishir Gupta, Anthony Candarini, Glen Chason, Camille Felx Leduc
Introduction — Why Securing Protection Relays Matters More Than Ever
Substations are critical nexus points in the power grid, transforming high-voltage electricity to ensure its safe and efficient delivery from power plants to millions of end-users. At the core of a modern substation lies the protection relay: an intelligent electronic device (IED) that plays a critical role in maintaining the stability of the power grid by continuously monitoring voltage, current, frequency, and phase angle. Upon detecting a fault, it instantly isolates the affected zone by tripping circuit breakers, thus preventing equipment damage, fire hazards, and cascading power outages.
As substations become more digitized, incorporating IEC 61850, Ethernet, USB, and remote interfaces, relays are no longer isolated devices, but networked elements in a broader SCADA network. While this enhances visibility and control, it also exposes relays to digital manipulation and cyber threats. If compromised, a relay can be used to issue false trip commands, alter breaker logic, and disable fault zones. Attackers can stealthily modify vendor-specific logic, embed persistent changes, and even erase logs to avoid detection. A coordinated attack against multiple critical relays can lead to a cascading failure across the grid, potentially causing a large-scale blackout.
This threat is not theoretical. State-sponsored adversaries have repeatedly demonstrated their capability to cause widespread blackouts, as seen in the INDUSTROYER (2016), INDUSTROYER.V2 (2022), and novel living-off-the-land technique (2022) attacks in Ukraine, where they issued unauthorized commands over standard grid protocols. The attack surface extends beyond operational protocols to the very tools engineers rely on, as shown when Claroty’s Team82 revealed a denial-of-service vulnerability in Siemens DIGSI 4 configuration software. Furthermore, the discovery of malware toolkits like INCONTROLLER shows attackers are developing specialized capabilities to map, manipulate, and disable protection schemes across multiple vendors.
Recent events have further underscored the reality of these threats, with heightened risks of Iranian cyberattacks targeting vital networks in the wake of geopolitical tensions. Iran-nexus threat actors such as UNC5691 (aka CyberAv3ngers) have a history of targeting operational technology, in some cases including U.S. water facilities. Similarly, persistent threats from China, such as UNC5135, which at least partially overlaps with publicly reported Volt Typhoon activity, demonstrate a strategic effort to embed within U.S. critical infrastructure for potential future disruptive or destructive cyberattacks. The tactics of these adversaries, which range from exploiting weak credentials to manipulating the very logic of protection devices, make the security of protection relays a paramount concern.
These public incidents mirror the findings from our own Operational Technology (OT) Red Team simulations, which consistently reveal accessible remote pathways into local substation networks and underscore the potential for adversaries to manipulate protection relays within national power grids.
Protection relays are high-value devices, and prime targets for cyber-physical attacks targeting substation automation systems and grid management systems. Securing protection relays is no longer just a best practice; it’s absolutely essential for ensuring the resilience of both transmission and distribution power grids.
Inside a Substation — Components and Connectivity
To fully grasp the role of protection relays within the substation, it’s important to understand the broader ecosystem they operate in. Modern substations are no longer purely electrical domains. They are cyber-physical environments where IEDs, deterministic networking, and real-time data exchange work in concert to deliver grid reliability, protection, and control.
Core Components
Protection & Control Relays (IEDs): Devices such as the SEL-451, ABB REL670, GE D60, and Siemens 7SJ85 serve as the brains of both protection and control. They monitor current, voltage, frequency, and phase angle, and execute protection schemes like:
Overcurrent (ANSI 50/51)
Distance protection (ANSI 21)
Differential protection (ANSI 87)
Under/over-frequency (ANSI 81)
Synch-check (ANSI 25)
Auto-reclose (ANSI 79)
Breaker failure protection (ANSI 50BF)
Logic-based automation and lockout (e.g., ANSI 94)
(Note: These ANSI function numbers follow the IEEE Standard C37.2 and are universally used across vendors to denote protective functions.)
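As a concrete example of one of these functions, the time-delayed overcurrent element (ANSI 51) is commonly implemented with an IEC 60255 inverse-time (IDMT) characteristic. A minimal sketch of the standard-inverse curve, with illustrative settings:

```python
def idmt_trip_time(current_a, pickup_a, tms=0.1):
    """IEC 60255 standard-inverse (IDMT) overcurrent curve, the classic
    time-delayed element behind ANSI 51: t = TMS * 0.14 / ((I/Is)^0.02 - 1).
    Returns the trip delay in seconds, or None if the current is at or
    below pickup (no trip)."""
    ratio = current_a / pickup_a
    if ratio <= 1.0:
        return None
    return tms * 0.14 / (ratio ** 0.02 - 1.0)

# Higher fault current -> faster trip, the defining IDMT property.
t_low = idmt_trip_time(current_a=800.0, pickup_a=400.0)    # 2x pickup
t_high = idmt_trip_time(current_a=4000.0, pickup_a=400.0)  # 10x pickup
print(round(t_low, 2), round(t_high, 2))
```

In a real relay, the pickup current and time multiplier setting (TMS) are coordination settings chosen so that downstream relays trip before upstream ones.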
Circuit Breakers & Disconnectors: High-voltage switching devices operated by relays to interrupt fault current or reconfigure line sections. Disconnectors provide mechanical isolation and are often interlocked with breaker status to prevent unsafe operation.
Current Transformers (CTs) & Potential Transformers (PTs): Instrument transformers that step down high voltage and current for safe and precise measurement. These form the primary sensing inputs for protection and metering functions.
Station Human-Machine Interfaces (HMIs): Provide local visualization and control for operators. HMIs typically connect to relay networks via the station bus, offering override, acknowledgment, and command functions without needing SCADA intervention.
Remote Terminal Units (RTUs) or Gateway Devices: In legacy or hybrid substations, RTUs aggregate telemetry from field devices and forward it to control centers. In fully digital substations, this function may be handled by SCADA gateways or station-level IEDs that natively support IEC 61850 or legacy protocol bridging.
Time Synchronization Devices: GPS clocks or PTP servers are deployed to maintain time alignment across relays, sampled value streams, and event logs. This is essential for fault location, waveform analysis, and sequence of events (SoE) correlation.
Network Architecture
Modern digital substations are engineered with highly segmented network architectures to ensure deterministic protection, resilient automation, and secure remote access. These systems rely on fiber-based Ethernet communication and time-synchronized messaging to connect physical devices, intelligent electronics, SCADA systems, and engineering tools across three foundational layers.
Figure 1: Substation Network Architecture
Network Topologies: Substations employ redundant Ethernet designs to achieve high availability and zero-packet-loss communication, especially for protection-critical traffic.
Common topologies include:
RSTP (Rapid Spanning Tree Protocol) – Basic redundancy by blocking loops in switched networks
PRP (Parallel Redundancy Protocol) – Simultaneous frame delivery over two independent paths
HSR (High-availability Seamless Redundancy) – Ring-based protocol that allows seamless failover for protection traffic
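The redundancy mechanism behind PRP and HSR is conceptually simple: every frame is sent twice, and the receiver discards the duplicate. A minimal sketch of that duplicate-discard logic (illustrative only; real IEC 62439-3 implementations handle this in hardware using the Redundancy Control Trailer and sequence-number windows):

```python
# Simplified sketch of PRP duplicate discard (illustrative only).
# Real PRP (IEC 62439-3) appends a Redundancy Control Trailer (RCT)
# carrying a 16-bit sequence number; the receiving node forwards the
# first copy of each (source, sequence) pair and silently drops the second.

def prp_receive(frames):
    """frames: iterable of (source_mac, seq_no, lan_id) tuples.
    Returns the frames delivered to the upper layer."""
    seen = set()
    delivered = []
    for src, seq, lan in frames:
        key = (src, seq)
        if key in seen:
            continue  # duplicate arriving from the other LAN: discard
        seen.add(key)
        delivered.append((src, seq, lan))
    return delivered

frames = [
    ("aa:bb", 1, "A"), ("aa:bb", 1, "B"),  # same frame on both LANs
    ("aa:bb", 2, "B"),                      # LAN A copy was lost
    ("aa:bb", 2, "A"),
]
print(prp_receive(frames))  # only the two unique frames survive
```

The same idea underpins HSR, except the two copies travel in opposite directions around a ring instead of over two parallel LANs.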
Communication Layers: Zones and Roles
Modern substations are structured into distinct functional network layers, each responsible for different operations, timing profiles, and security domains. Understanding this layered architecture is critical to both operational design and cyber risk modeling.
Process Bus / Bay Level Communication
This is the most time-sensitive layer in the substation. It handles deterministic, peer-to-peer communication between IEDs (Intelligent Electronic Devices), Merging Units (MUs), and digital I/O modules that directly interact with primary field equipment.
Includes:
Protection and Control IEDs – Relay logic for fault detection and breaker actuation
MUs – Convert CT/PT analog inputs into digitized Sampled Values (SV)
IED I/O Modules – Digitally interface with trip coils and status contacts on breakers
Circuit Breakers, CTs, and PTs – Primary electrical equipment connected through MUs and I/O
Master clock or time source – Ensures time-aligned SV and event data using PTP (IEEE 1588) or IRIG-B
Key protocols:
IEC 61850-9-2 (SV) – Real-time sampled analog measurements
Time Sync (PTP/IRIG-B) – Sub-millisecond alignment across protection systems
Station Bus / Substation Automation LAN (Supervisory and Control Layer)
The Station Bus connects IEDs, local operator systems, SCADA gateways, and the Substation Automation System (SAS). It is responsible for coordination, data aggregation, event recording, and forwarding data to control centers.
Includes:
SAS – Central event and logic manager
HMIs – Local operator access
Engineering Workstation (EWS) – Access point for authorized relay configuration and diagnostics
RTUs / SCADA Gateways – Bridge to EMS/SCADA networks
Managed Ethernet Switches (PRP/HSR) – Provide reliable communication paths
Key protocols:
IEC 60870-5-104 / DNP3 – Upstream telemetry to control center
Modbus (legacy) – Field device communication
SNMP (secured) – Network health monitoring
Engineering Access (Role-Based, Cross-Layer): Engineering access is not a stand-alone communication layer but a privileged access path used by protection engineers and field technicians to perform maintenance, configuration, and diagnostics.
Access Components:
EWS – Direct relay interface via MMS or console
Jump Servers / VPNs – Controlled access to remote or critical segments
Terminal/Serial Consoles – Used for maintenance and troubleshooting
What Protection Relays Really Do
In modern digital substations, protection relays—more accurately referred to as IEDs—have evolved far beyond basic trip-and-alarm functions. These devices now serve as cyber-physical control points, responsible not only for detecting faults in real time but also for executing programmable logic, recording event data, and acting as communication intermediaries between digital and legacy systems.
At their core, IEDs monitor electrical parameters, such as voltage, current, frequency, and phase angle, and respond to conditions like overcurrent, ground faults, and frequency deviations. Upon fault detection, they issue trip commands to circuit breakers—typically within one power cycle (e.g., 4–20 ms)—to safely isolate the affected zone and prevent equipment damage or cascading outages.
Beyond traditional protection: Modern IEDs provide a rich set of capabilities that make them indispensable in fully digitized substations.
Trip Logic Processing: Integrated logic engines (e.g., SELogic, FlexLogic, CFC) evaluate multiple real-time conditions to determine if, when, and how to trip, block, or permit operations.
Event Recording and Fault Forensics: Devices maintain Sequence of Events (SER) logs and capture high-resolution oscillography (waveform data), supporting post-event diagnostics and root-cause analysis.
Local Automation Capabilities: IEDs can autonomously execute transfer schemes, reclose sequences, interlocking, and alarm signaling, often without intervention from SCADA or a centralized controller.
Protocol Bridging and Communication Integration: Most modern relays support and translate between multiple protocols, including IEC 61850, DNP3, Modbus, and IEC 60870-5-104, enabling them to function as data gateways or edge translators in hybrid communication environments.
Application across the grid: These devices ensure rapid fault isolation, coordinated protection, and reliable operation across transmission, distribution, and industrial networks.
Transmission and distribution lines (e.g., SIPROTEC)
Power Transformers (e.g., ABB RET615)
Feeders, Motors and industrial loads (e.g., GE D60)
How Attackers Can Recon and Target Protection Relays
As substations evolve into digital control hubs, their critical components, particularly protection relays, are no longer isolated devices. These IEDs are now network-connected through Ethernet, serial-to-IP converters, USB interfaces, and in rare cases, tightly controlled wireless links used for diagnostics or field tools.
While this connectivity improves maintainability, remote engineering access, and real-time visibility, it also expands the cyberattack surface, exposing relays to risks of unauthorized logic modification, protocol exploitation, or lateral movement from compromised engineering assets.
Reconnaissance From the Internet
Attackers often begin with open-source intelligence (OSINT), building a map of the organization’s digital and operational footprint. They aren’t initially looking for IEDs or substations; they’re identifying the humans who manage them.
Social Recon: Using LinkedIn, engineering forums, or vendor webinars, attackers look for job titles like “Substation Automation Engineer,” “Relay Protection Specialist,” or “SCADA Administrator.”
OSINT Targeting: Public resumes and RFI documents may reference software like DIGSI, PCM600, or AcSELerator. Even PDF metadata from utility engineering documents can reveal usernames, workstation names, or VPN domains.
Infrastructure Scanning: Tools like Shodan or Censys help identify exposed VPNs, engineering portals, and remote access gateways. If these systems support weak authentication or use outdated firmware, they become initial entry points.
Exploitation of Weak Vendor Access: Many utilities still use stand-alone VPN credentials for contractors and OEM vendors. These accounts often bypass centralized identity systems, lack 2FA, and are reused across projects.
Reconnaissance in IT — Mapping the Path to OT
Once an attacker gains a foothold within the IT network—typically through phishing, credential theft, or exploiting externally exposed services—their next objective shifts toward internal reconnaissance. The target is not just domain dominance, but lateral movement toward OT-connected assets such as substations or Energy Management Systems (EMS).
Domain Enumeration: Using tools like BloodHound, attackers map Active Directory for accounts, shares, and systems tagged with OT context (e.g., usernames like scada_substation_admin, and groups like scada_project and scada_communication).
This phase allows the attacker to pinpoint high-value users and their associated devices, building a shortlist of engineering staff, contractors, or control center personnel who likely interface with OT assets.
Workstation & Server Access: Armed with domain privileges and OT-centric intelligence, the attacker pivots to target the workstations or terminal servers used by the identified engineers. These endpoints are rich in substation-relevant data, such as:
Relay configuration files (.cfg, .prj, .set)
VPN credentials or profiles for IDMZ access
Passwords embedded in automation scripts or connection managers
Access logs or RDP histories indicating commonly used jump hosts
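Sweeping a workstation for these artifacts takes only a few lines of code. The sketch below (for illustration; the file extensions come from the list above) shows how quickly a compromised engineering host can be inventoried — and defenders can run the same sweep proactively to find unprotected relay configuration files:

```python
# Illustrative sketch: inventory relay-related artifacts on an
# engineering workstation by walking the filesystem for the
# configuration-file extensions mentioned above.
import os

RELAY_EXTENSIONS = {".cfg", ".prj", ".set"}

def find_relay_artifacts(root):
    """Return paths of files whose extension suggests relay configs."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            _, ext = os.path.splitext(name)
            if ext.lower() in RELAY_EXTENSIONS:
                hits.append(os.path.join(dirpath, name))
    return hits
```

Defenders running this same inventory on file shares and engineering laptops can identify configuration files that should be encrypted or access-restricted before an attacker finds them.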
At this stage, the attacker is no longer scanning blindly; they’re executing highly contextual moves to identify paths from IT into OT.
IDMZ Penetration — Crossing the Last Boundary
Using gathered VPN credentials, hard-coded SSH keys, or jump host details, the attacker attempts to cross into the IDMZ. This zone typically mediates communication between IT and OT, and may be accessed via:
Engineering jump hosts (dual-homed systems, often less monitored)
Poorly segmented RDP gateways with reused credentials
Exposed management ports on firewalls or remote access servers
Once in the IDMZ, attackers map accessible subnets and identify potential pathways into live substations.
Substation Discovery and Technical Enumeration
Once an attacker successfully pivots into the substation network, often via compromised VPN credentials, engineering jump hosts, or dual-homed assets bridging corporate and OT domains, the next step is to quietly enumerate the substation landscape. At this point, they are no longer scanning broadly but conducting targeted reconnaissance to identify and isolate high-value assets, particularly protection relays.
Rather than using noisy tools like nmap with full port sweeps, attackers rely on stealthier techniques tailored for industrial networks. These include passive traffic sniffing and protocol-specific probing to avoid triggering intrusion detection systems or log correlation engines. For example, using custom Python or Scapy scripts, the attacker might issue minimal handshake packets for protocols such as IEC 61850 MMS, DNP3, or Modbus, observing how devices respond to crafted requests. This helps fingerprint device types and capabilities without sending bulk probes.
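As a concrete illustration of such a minimal probe, the sketch below builds a single Modbus/TCP Read Device Identification request (function 0x2B, MEI type 0x0E), which elicits vendor and model strings from many devices with one packet instead of a port sweep. The target address is hypothetical, and this should only ever be run against equipment you are authorized to test:

```python
# Sketch of a minimal, low-noise Modbus/TCP probe: a single
# Read Device Identification request (function 0x2B, MEI type 0x0E)
# returns vendor name, product code, and revision from many devices
# without a port sweep. Target address below is hypothetical.
import socket
import struct

def build_device_id_request(transaction_id, unit_id=1):
    # MBAP header: transaction id, protocol id (0), length, unit id.
    # Length = unit-id byte + 4-byte PDU = 5.
    mbap = struct.pack(">HHHB", transaction_id, 0, 5, unit_id)
    # PDU: function 0x2B, MEI type 0x0E, ReadDevId code 0x01 (basic),
    # starting object id 0x00 (VendorName).
    pdu = bytes([0x2B, 0x0E, 0x01, 0x00])
    return mbap + pdu

def probe(host, port=502, timeout=3.0):
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(build_device_id_request(transaction_id=1))
        return s.recv(1024)  # raw response; parse identification objects as needed

# Example (hypothetical address):
# print(probe("10.20.30.40").hex())
```

A single well-formed request like this blends into legitimate engineering traffic far better than a full TCP sweep, which is exactly why signature-based detection struggles with this phase.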
Simultaneously, MAC address analysis plays a crucial role in identifying vendors. Many industrial devices use identifiable prefixes unique to specific power control system manufacturers. Attackers often leverage this to differentiate protection relays from HMIs, RTUs, or gateways with a high degree of accuracy.
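In code, OUI-based vendor classification is a simple prefix lookup. The OUI values below are placeholders rather than real vendor assignments; an actual analysis would consult the IEEE OUI registry:

```python
# Sketch of vendor fingerprinting from MAC address prefixes (OUIs).
# The OUI-to-vendor table below is ILLUSTRATIVE ONLY -- a real analysis
# would use the IEEE OUI registry, not these placeholder values.

OUI_TABLE = {
    "00:11:22": "Vendor A (protection relays)",
    "00:33:44": "Vendor B (RTUs/gateways)",
}

def vendor_for_mac(mac):
    """Normalize a MAC address and look up its 3-byte OUI prefix."""
    oui = mac.lower().replace("-", ":")[:8]
    return OUI_TABLE.get(oui, "unknown")

print(vendor_for_mac("00:11:22:AB:CD:EF"))  # Vendor A (protection relays)
```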
Additionally, by observing mirrored traffic on span ports or through passive sniffing on switch trunks, attackers can detect GOOSE messages, Sampled Values (SV), or heartbeat signals indicative of live relay communication. These traffic patterns confirm the presence of active IEDs, and in some cases, help infer the device’s operational role or logical zone.
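Because GOOSE and SV are raw layer-2 protocols with dedicated EtherTypes (0x88B8 and 0x88BA respectively), a passive sniffer can confirm live protection traffic from the Ethernet header alone. A minimal classification sketch:

```python
# Sketch: classifying passively captured frames by EtherType.
# GOOSE (0x88B8) and Sampled Values (0x88BA) are layer-2 multicast,
# so their EtherTypes alone reveal live protection traffic on a link.
import struct

ETHERTYPES = {0x88B8: "GOOSE", 0x88BA: "Sampled Values", 0x88F7: "PTP"}

def classify_frame(frame):
    """frame: raw Ethernet frame bytes. Returns protocol name or None."""
    if len(frame) < 14:
        return None
    (ethertype,) = struct.unpack(">H", frame[12:14])
    if ethertype == 0x8100 and len(frame) >= 18:  # skip an 802.1Q VLAN tag
        (ethertype,) = struct.unpack(">H", frame[16:18])
    return ETHERTYPES.get(ethertype)

goose = b"\x01\x0c\xcd\x01\x00\x01" + b"\x00" * 6 + b"\x88\xb8" + b"\x00" * 8
print(classify_frame(goose))  # GOOSE
```

Feeding mirrored span-port traffic through a filter like this is enough to enumerate active IEDs without sending a single packet.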
Once relays, protocol gateways, and engineering HMIs have been identified, the attacker begins deeper technical enumeration. At this stage, they analyze which services are exposed on each device, such as Telnet, HTTP, FTP, or MMS, and gather banner information or port responses that reveal firmware versions, relay models, or serial numbers. Devices with weak authentication or legacy configurations are prioritized for exploitation.
The attacker may next attempt to log in using factory-set or default credentials, which are often easily obtainable from device manuals. Alarmingly, these credentials are often still active in many substations due to lax commissioning processes. If login is successful, the attacker escalates from passive enumeration to active control—gaining the ability to view or modify protection settings, trip logic, and relay event logs.
If the relays are hardened with proper credentials or access controls, attackers might try other methods, such as accessing rear-panel serial ports via local connections or probing serial-over-IP bridges linked to terminal servers. Some adversaries have even used vendor software (e.g., DIGSI, AcSELerator, PCM600) found on compromised engineering workstations to open relay configuration projects, review programmable logic (e.g., SELogic or FlexLogic), and make changes through trusted interfaces.
Another critical risk in substation environments is the presence of undocumented or hidden device functionality. As highlighted in CISA advisory ICSA-24-095-02, SEL 700-series protection relays were found to contain undocumented capabilities accessible to privileged users.
Separately, some relays may expose backdoor Telnet access through hard-coded or vendor diagnostic accounts. These interfaces are often enabled by default and left undocumented, giving attackers an opportunity to inject firmware, wipe configurations, or issue commands that can directly trip or disable breakers.
By the end of this quiet but highly effective reconnaissance phase, the attacker has mapped out the protection relay landscape, assessed device exposure, and identified access paths. They now shift from understanding the network to understanding what each relay actually controls, entering the next phase: process-aware enumeration.
Process-Aware Enumeration
Once attackers have quietly mapped out the substation network (identifying protection relays, protocol gateways, engineering HMIs, and confirming which devices expose insecure services) their focus shifts from surface-level reconnaissance to gaining operational context. Discovery alone isn’t enough. For any compromise to deliver strategic impact, adversaries must understand how these devices interact with the physical power system.
This is where process-aware enumeration begins. The attacker is no longer interested in controlling just any relay; they want to control the right relay. That means understanding what each device protects, how it’s wired into the breaker scheme, and what its role is within the substation topology.
Armed with access to engineering workstations or backup file shares, the attacker reviews substation single-line diagrams (SLDs), often from SCADA HMI screens or documentation from project folders. These diagrams reveal the electrical architecture—transformers, feeders, busbars—and show exactly where each relay fits. Identifiers like “BUS-TIE PROT” or “LINE A1 RELAY” are matched against configuration files to determine their protection zone.
By correlating relay names with breaker control logic and protection settings, the attacker maps out zone hierarchies: primary and backup relays, redundancy groups, and dependencies between devices. They identify which relays are linked to auto-reclose logic, which ones have synch-check interlocks, and which outputs are shared across multiple feeders.
This insight enables precise targeting. For example, instead of blindly disabling protection across the board, which would raise immediate alarms, the attacker may suppress tripping on a backup relay while leaving the primary untouched. Or, they might modify logic in such a way that a fault won’t be cleared until the disturbance propagates, creating the conditions for a wider outage.
At this stage, the attacker is not just exploiting the relay as a networked device. They’re treating it as a control surface for the substation itself. With deep process context in hand, they move from reconnaissance to exploitation: manipulating logic, altering protection thresholds, injecting malicious firmware, or spoofing breaker commands. Because their changes are aligned with system topology, they maximize impact while minimizing detection.
Practical Examples of Exploiting Protection Relays
The fusion of network awareness and electrical process understanding makes modern substation attacks particularly dangerous—and why protection relays, when compromised, represent one of the highest-value cyber-physical targets in the grid.
To illustrate how such knowledge is operationalized by attackers, let’s examine a practical example involving the SEL-311C relay, a device widely deployed across substations. Note: While this example focuses on SEL, the tactics described here apply broadly to comparable relays from other major OEM vendors such as ABB, GE, Siemens, and Schneider Electric. In addition, the information presented in this section does not constitute any unknown vulnerabilities or proprietary information, but instead demonstrates the potential for an attacker to use built-in device features to achieve adversarial objectives.
Figure 2: Attack Vectors for a SEL-311C Protection Relay
Physical Access
If an attacker gains physical access to a protection relay, either through the front panel or by opening the enclosure, they can trigger a hardware override by toggling the internal access jumper, typically located on the relay’s main board. This bypasses all software-based authentication, granting unrestricted command-level access without requiring a login. Once inside, the attacker can modify protection settings, reset passwords, disable alarms, or issue direct breaker commands, effectively assuming full control of the relay.
However, such intrusions can be detected if the right safeguards are in place. Most modern substations incorporate electronic access control systems (EACS) and SCADA-integrated door alarms. If a cabinet door is opened without an authorized user logged as onsite (via badge entry or operator check-in), alerts can be escalated to dispatch field response teams or security personnel.
Relays themselves provide telemetry for physical access events. For instance, SEL relays pulse the ALARM contact output upon use of the 2ACCESS command, even when the correct password is entered. Failed authentication attempts assert the BADPASS logic bit, while SETCHG flags unauthorized setting modifications. These SEL WORDs can be continuously monitored through SCADA or security detection systems for evidence of tampering.
Toggling the jumper to bypass relay authentication typically requires power-cycling the device, a disruptive action that can itself trigger alarms or be flagged during operational review.
To further harden the environment, utilities increasingly deploy centralized relay management suites (e.g., SEL Grid Configurator, GE Cyber Asset Protection, or vendor-neutral tools like Subnet PowerSystem Center) that track firmware integrity, control logic uploads, and enforce version control tied to access control mechanisms.
In high-assurance deployments, relay configuration files are often encrypted, access-restricted, and protected by multi-factor authentication, reducing the risk of rollback attacks or lateral movement even if the device is physically compromised.
Command Interfaces and Targets
With access established, whether through credential abuse, exposed network services, or direct hardware bypass, the attacker is now in a position to issue live commands to the relay. At this stage, the focus shifts from reconnaissance to manipulation, leveraging built-in interfaces to override protection logic and directly influence power system behavior.
Here’s how these attacks unfold in a real-world scenario:
Manual Breaker Operation: An attacker can directly issue control commands to the relay to simulate faults or disrupt operations.
Example commands include:
==>PUL OUT101 5 ; Pulse output OUT101 for 5 seconds to trip breaker
==>CLO ; Force close breaker
==>OPE ; Force open breaker
These commands bypass traditional protection logic, allowing relays to open or close breakers on demand. This can isolate critical feeders, create artificial faults, or induce overload conditions—all without triggering standard fault detection sequences.
Programmable Trip Logic Manipulation
Modern protection relays, such as those from SEL (SELogic), GE (FlexLogic), ABB (CAP tools), and Siemens (CFC), support customizable trip logic through embedded control languages. These programmable logic engines enable utilities to tailor protection schemes to site-specific requirements. However, this powerful feature also introduces a critical attack surface. If an adversary gains privileged access, they can manipulate core logic equations to suppress legitimate trips, trigger false operations, or embed stealthy backdoors that evade normal protection behavior.
One of the most critical targets in this logic chain is the Trip Request (TR) output, the internal control signal that determines whether the relay sends a trip command to the circuit breaker.
This equation specifies the fault conditions under which the relay should initiate a trip. Each element represents a protection function or status input, such as zone distance, overcurrent detection, or breaker position, and collectively they form the basis of coordinated relay response.
In the relay operation chain, the TR equation is at the core of the protection logic.
Figure 3: TR Logic Evaluation within the Protection Relay Operation Chain
In SEL devices, for example, this TR logic is typically defined using a SELogic control equation. A representative version might look like this:

TR = M1P + Z1G + M2PT + Z2GT + 51GT + 51QT + 50P1 * SH0

M1P – Zone 1 Phase distance element, Phase Trip
Z1G – Zone 1 Ground distance element, trips on ground faults within Zone 1
M2PT – Phase distance element from Channel M2, Phase Trip (could be Zone 2)
Z2GT – Zone 2 Ground distance Trip element, for ground faults in Zone 2
51GT – Time-overcurrent element for ground faults (ANSI 51G)
51QT – Time-overcurrent element for negative-sequence current (unbalanced faults)
50P1 – Instantaneous phase overcurrent element (ANSI 50P) for Zone 1
SH0 – Breaker status input, logic 1 when breaker is closed

Table 1: Elements of TR
In the control equation, the + operator means logical OR, and * means logical AND. Therefore, the logic asserts TR if:
Any of the listed fault elements (distance, overcurrent) are active, or
An instantaneous overcurrent occurs while the breaker is closed.
In effect, the breaker is tripped:
If a phase or ground fault is detected in Zone 1 or Zone 2
If a time-overcurrent condition develops
Or if there’s an instantaneous spike while the breaker is in service
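The OR/AND semantics of the control equation can be mirrored in a few lines of Python (a sketch of the logic only, with the Zone 1 elements omitted for brevity; a real relay evaluates these Relay Word bits in deterministic protection firmware, not a script):

```python
# Sketch of the TR control-equation semantics ('+' = OR, '*' = AND),
# using a subset of the word bits described above. Illustrative only.

def evaluate_tr(bits):
    """bits: dict mapping word-bit name -> bool."""
    fault_elements = bits["M2PT"] or bits["Z2GT"] or bits["51GT"] or bits["51QT"]
    instantaneous = bits["50P1"] and bits["SH0"]  # spike only counts while breaker closed
    return fault_elements or instantaneous

bits = {"M2PT": False, "Z2GT": False, "51GT": False, "51QT": False,
        "50P1": True, "SH0": False}
print(evaluate_tr(bits))  # False: spike detected, but breaker is already open
```

This also makes the "impossible condition" attack below easy to see: AND-ing a trip element with the inverse of breaker status produces an equation that can never assert while the breaker is in service.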
How Attackers Can Abuse the TR Logic
With editing access, attackers can rewrite this logic to suppress protection, force false trips, or inject stealthy backdoors.
Table 2 shows common logic manipulation variants.
Disable All Trips (TR = 0) – Relay never trips, even during major faults. Allows sustained short circuits, potentially leading to fires or equipment failure.
Force Constant Tripping (TR = 1, TRQUAL = 0) – Relay constantly asserts trip, disrupting power regardless of fault status.
Impossible Condition (TR = 50P1 * !SH0) – Breaker only trips when already open, a condition that never occurs.
Remove Ground Fault Detection (TR = M1P + M2PT + 50P1 * SH0) – Relay ignores ground faults entirely, a dangerous and hard-to-detect attack.
Hidden Logic Backdoor (TR = original + RB15) – Attacker can trigger a trip remotely via RB15 (a Remote Bit), even without a real fault.

Table 2: TR logic bombs
Disable Trip Unlatching (ULTR)
ULTR = 0
Impact: Prevents the relay from resetting after a trip. The breaker stays open until manually reset, which delays recovery and increases outage durations.
Reclose Logic Abuse
79RI = 1 ; Reclose immediately
79STL = 0 ; Skip supervision logic
Impact: Forces breaker to reclose repeatedly, even into sustained faults. Can damage transformer windings, burn breaker contacts, or create oscillatory failures.
LED Spoofing
LED12 = !TRIP
Impact: Relay front panel shows a “healthy” status even while tripped. Misleads field technicians during visual inspections.
Event Report Tampering
=>EVE; View latest event
=>TRI; Manually trigger report
=>SER C; Clear Sequential Event Recorder
Impact: Covers attacker footprints by erasing evidence. Removes Sequential Event Recorder (SER) logs and trip history. Obstructs post-event forensics.
Change Distance Protection Settings
In the relay protection sequence, distance protection operates earlier in the decision chain, evaluating fault conditions based on impedance before the trip logic is executed to issue breaker commands.
Figure 4: Distance protection settings in a Relay Operation Chain
Impact: Distance protection relies on accurately configured impedance reach (Z1MAG) and impedance angle (Z1ANG) to detect faults within a predefined section of a transmission line (typically 80–100% of line length for Zone 1). Manipulating these values can have the following consequences:
Under-Reaching: Reducing Z1MAG to 0.3 causes the relay to detect faults only within 30% of the line length, making it blind to faults in the remaining 70% of the protected zone. This can result in missed trips, delayed fault clearance, and cascading failures if the backup protection does not act in time.
Impedance Angle Misalignment: Changing Z1ANG affects the directional sensitivity and fault classification. If the angle deviates from system characteristics, the relay may misclassify faults or fail to identify high-resistance faults, particularly on complex line configurations like underground cables or series-compensated lines.
False Trips: In certain conditions, especially with heavy load or load encroachment, a misconfigured distance zone may interpret normal load flow as a fault, resulting in nuisance tripping and unnecessary outages.
Compromised Selectivity & Coordination: The distance element’s coordination with other relays (e.g., Zone 2 or remote end Zone 1) becomes unreliable, leading to overlapping zones or gaps in coverage, defeating the core principle of selective protection.
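The under-reaching effect is easy to demonstrate numerically. The sketch below uses a deliberately simplified impedance check (magnitude plus an angle tolerance) rather than a true mho or quadrilateral characteristic, and it ignores directional and load-encroachment supervision:

```python
# Simplified sketch of a Zone 1 reach check: a fault is "in zone" when
# the measured impedance magnitude is below Z1MAG (per unit of line
# impedance here) and its angle is near Z1ANG. Real distance elements
# use mho/quadrilateral characteristics with directional supervision.
import cmath
import math

def in_zone1(z_measured, z1mag, z1ang_deg, ang_tol_deg=30.0):
    mag, ang = cmath.polar(z_measured)
    ang_deg = math.degrees(ang)
    return mag <= z1mag and abs(ang_deg - z1ang_deg) <= ang_tol_deg

fault = cmath.rect(0.55, math.radians(75))  # fault at 55% of the line

print(in_zone1(fault, z1mag=0.8, z1ang_deg=75))  # True: healthy setting trips
print(in_zone1(fault, z1mag=0.3, z1ang_deg=75))  # False: tampered reach is blind
```

With the legitimate reach of 0.8 per unit, a fault at 55% of the line is cleared; after tampering the reach down to 0.3, the same fault goes unseen and clearance is left to slower backup protection.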
Restore Factory Defaults
=>>R_S
Impact: Wipes all hardened settings, password protections, and customized logic. Resets the relay to an insecure factory state.
Password Modification for Persistence
=>>PAS 1 <newpass>
Impact: Locks out legitimate users. Maintains long-term attacker access. Prevents operators from reversing changes quickly during incident response.
What Most Environments Still Get Wrong
Despite increasing awareness, training, and incident response playbooks, many substations and critical infrastructure sites continue to exhibit foundational security weaknesses. These are not simply oversights—they’re systemic, shaped by the realities of substation lifecycle management, legacy system inertia, and the operational constraints of critical grid infrastructure.
Modernizing substation cybersecurity is not as simple as issuing new policies or buying next-generation tools. Substations typically undergo major upgrades on decade-long cycles, often limited to component replacement rather than full network redesigns. Integrating modern security features like encrypted protocols, central access control, or firmware validation frequently requires adding computers, increasing bandwidth, and introducing centralized key management systems. These changes are non-trivial in bandwidth-constrained environments built for deterministic, low-latency communication—not IT-grade flexibility.
Further complicating matters, vendor product cycles move faster than infrastructure refresh cycles. It’s not uncommon for new protection relays or firmware platforms to be deprecated or reworked before they’re fully deployed across even one utility’s fleet, let alone hundreds of substations.
The result? A patchwork of legacy protocols, brittle configurations, and incomplete upgrades that adversaries continue to exploit. In the following section, we examine some of the most critical and persistent gaps, why they still exist, and what can realistically be done to address them.
This section highlights the most common and dangerous security gaps observed in real-world environments.
Legacy Protocols Left Enabled
Relays often come with older communication protocols such as:
Telnet (unencrypted remote access)
FTP (insecure file transfer)
Modbus RTU/TCP (lacks authentication or encryption)
These are frequently left enabled by default, exposing relays to:
Credential sniffing
Packet manipulation
Unauthorized control commands
Recommendation: Where possible, disable legacy services and transition to secure alternatives (e.g., SSH, SFTP, or IEC 62351 for secured GOOSE/MMS). If older services must be retained, tightly restrict access via VLANs, firewalls, and role-based control.
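Auditing for these legacy services can be as simple as attempting TCP connections on their well-known ports. A sketch (host addresses are hypothetical; run only against assets you are authorized to test):

```python
# Sketch of a legacy-service audit: attempt TCP connects to ports
# associated with the insecure protocols listed above. Addresses are
# hypothetical; run only against assets you are authorized to test.
import socket

LEGACY_PORTS = {23: "Telnet", 21: "FTP", 502: "Modbus/TCP"}

def audit_host(host, ports=LEGACY_PORTS, timeout=2.0):
    """Return a finding line for each legacy service that accepts a connection."""
    findings = []
    for port, service in ports.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                findings.append(f"{host}:{port} {service} reachable")
        except OSError:
            pass  # closed or filtered
    return findings

# for line in audit_host("10.20.30.40"):  # hypothetical relay IP
#     print(line)
```

Even this crude sweep, run from inside each protection zone, surfaces the Telnet and FTP services that commissioning left enabled by default.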
IT/OT Network Convergence Without Isolation
Modern substations may share network infrastructure with enterprise IT environments:
VPN access to substation networks
Shared switches or VLANs between SCADA systems and relay networks
Lack of firewalls or access control lists (ACLs)
This exposes protection relays to malware propagation, ransomware, or lateral movement from compromised IT assets.
Recommendation: Establish strict network segmentation using firewalls, ACLs, and dedicated protection zones. All remote access should be routed through Privileged Access Management (PAM) platforms with MFA, session recording, and Just-In-Time access control.
Default or Weak Relay Passwords
In red team and audit exercises, default credentials are still found in the field, sometimes printed on the relay chassis itself.
Factory-level passwords like LEVEL2, ADMIN, or OPERATOR remain unchanged.
Passwords are physically labeled on devices.
Password sharing among field teams compromises accountability.
These practices persist due to operational convenience, lack of centralized credential management, and difficulty updating devices in the field.
Recommendation: Mandate site-specific, role-based credentials with regular rotation, enforced via centralized relay management tools. Ensure audit logging of all access attempts and password changes.
Built-in Security Features Left Unused
OEM vendors already provide a suite of built-in security features, yet these are rarely configured in production environments. Security features such as role-based access control (RBAC), secure protocol enforcement (e.g., HTTPS, SSH), user-level audit trails, password retry lockouts, and alert triggers (e.g., BADPASS or SETCHG bits) are typically disabled or ignored during commissioning. In many cases, these features are not even evaluated due to time constraints, lack of policy enforcement, or insufficient familiarity among field engineers.
These oversight patterns are particularly common in environments that inherit legacy commissioning templates, where security features are left in their default or least-restrictive state for the sake of expediency or compatibility.
Recommendation: Security configurations must be explicitly reviewed during commissioning and validated periodically. At a minimum:
Enable RBAC and enforce user-level permission tiers.
Configure BADPASS, ALARM, SETCHG, and similar relay logic bits to generate real-time telemetry.
Use secure protocols (HTTPS, SSH, IEC 62351) where supported.
Integrate security bit changes and access logs into central SIEM or NMS platforms for correlation and alerting.
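As an illustration of the telemetry-integration step above, the sketch below maps asserted relay security bits to alert events suitable for forwarding to a SIEM. The bit names follow the SEL-style examples mentioned earlier, but the event format and severity mapping are assumptions, not any vendor's actual schema.

```python
# Hypothetical mapping from relay security status bits to SIEM alert events.
# Bit names (BADPASS, SETCHG, ALARM) mirror the SEL-style examples above;
# the alert structure and severities are illustrative only.

SECURITY_BITS = {
    "BADPASS": "failed password attempt",
    "SETCHG": "settings change detected",
    "ALARM": "relay alarm asserted",
}

def bits_to_alerts(relay_id, bit_states):
    """Return one alert dict per asserted security-relevant bit."""
    alerts = []
    for bit, asserted in bit_states.items():
        if asserted and bit in SECURITY_BITS:
            alerts.append({
                "relay": relay_id,
                "bit": bit,
                "description": SECURITY_BITS[bit],
                # Settings changes are treated as higher severity here,
                # since they can alter protection behavior directly.
                "severity": "high" if bit == "SETCHG" else "medium",
            })
    return alerts
```

In practice the bit states would be polled over the relay's protocol (e.g., DNP3 or IEC 61850) and the resulting events shipped to the SIEM for correlation.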
Engineering Laptops with Stale Firmware Tools
OEM vendors also release firmware updates to fix any known security vulnerabilities and bugs. However:
Engineering laptops often use outdated configuration software
Old firmware loaders may upload legacy or vulnerable versions
Security patches are missed entirely
Recommendation: Maintain hardened engineering baselines with validated firmware signing, trusted toolchains, and controlled USB/media usage. Track firmware versions across the fleet for vulnerability exposure.
No Alerting on Configuration or Logic Changes
Protection relays support advanced logic and automation features like SELogic and FlexLogic, but in many environments no alerting is configured for changes to them. This makes it easy for attackers (or even insider threats) to silently:
Modify protection logic
Switch setting groups
Suppress alarms or trips
Recommendation: Enable relay-side event-based alerting for changes to settings, logic, or outputs. Forward logs to a central SIEM or security operations platform capable of detecting unauthorized logic uploads or suspicious relay behavior.
Relays Not Included in Security Audits or Patch Cycles
Relays are often excluded from regular security practices:
Not scanned for vulnerabilities
Not included in patch management systems
No configuration integrity monitoring or version tracking
This blind spot leaves highly critical assets unmanaged and potentially exploitable.
Recommendation: Bring protection relays into the fold of cybersecurity governance, with scheduled audits, patch planning, and configuration monitoring. Use tools that can validate settings integrity and detect tampering, whether via vendor platforms or third-party relay management suites.
Physical Tamper Detection Features Not Monitored
Many modern protection relays include hardware-based tamper detection features designed to alert operators when the device enclosure is opened or physically manipulated. These may include:
Chassis tamper switches that trigger digital inputs or internal flags when the case is opened.
Access jumper position monitoring, which can be read via relay logic or status bits.
Power cycle detection, especially relevant when jumpers are toggled (e.g., SEL relays require a power reset to apply jumper changes).
Relay watchdog or system fault flags, indicating unexpected reboots or logic resets post-manipulation.
Despite being available, these physical integrity indicators are rarely wired into the SCADA system or included in alarm logic. As a result, an attacker could open a relay, trigger the access jumper, or insert a rogue SD card—and leave no real-time trace unless other controls are in place.
Recommendation: Utilities should enable and monitor all available hardware tamper indicators:
Wire tamper switches or digital input changes into RTUs or SCADA for immediate alerts.
Monitor ALARM, TAMPER, SETCHG, or similar logic bits in relays that support them (e.g., SEL WORD bits).
Configure alert logic to correlate with badge access logs or keycard systems—raising a flag if physical access occurs outside scheduled maintenance windows.
Include physical tamper status as a part of substation security monitoring dashboards or intrusion detection platforms.
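The badge-correlation idea above reduces to a simple rule: flag any tamper indication whose timestamp falls outside every scheduled maintenance window. The window format and timestamps below are invented for illustration.

```python
from datetime import datetime

# Hypothetical correlation rule: a physical tamper event is suspicious when
# no scheduled maintenance window covers its timestamp. Windows are (start,
# end) datetime pairs; real deployments would pull these from a work-order
# or badge-access system.

def tamper_out_of_window(event_time, maintenance_windows):
    """True if the tamper event falls outside every scheduled window."""
    return not any(start <= event_time <= end
                   for start, end in maintenance_windows)

windows = [(datetime(2025, 6, 1, 8), datetime(2025, 6, 1, 17))]
night_event = datetime(2025, 6, 1, 23, 40)   # tamper switch asserted at night
assert tamper_out_of_window(night_event, windows)
```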
From Oversights to Action — A New Baseline for Relay Security
The previously outlined vulnerabilities aren’t limited to isolated cases; they reflect systemic patterns across substations, utilities, and industrial sites worldwide. As the attack surface expands with increased connectivity, and as adversaries become more sophisticated in targeting protection logic, these security oversights can no longer be overlooked.
But securing protection relays doesn’t require reinventing the wheel. It begins with the consistent application of fundamental security practices, drawn from real-world incidents, red-team assessments, and decades of power system engineering wisdom.
While these practices can be retrofitted into existing environments, it’s critical to emphasize that security is most effective when it’s built in by design, not bolted on later. Retrofitting controls in fragile operational environments often introduces more complexity, risk, and room for error. For long-term resilience, security considerations must be embedded into system architecture from the initial design and commissioning stages.
To help asset owners, engineers, and cybersecurity teams establish a defensible and vendor-agnostic baseline, Mandiant has compiled the “Top 10 Security Practices for Substation Relays,” a focused and actionable framework applicable across protocols, vendors, and architectures.
In developing this list, Mandiant has drawn inspiration from the broader ICS security community—particularly initiatives like the “Top 20 Secure PLC Coding Practices” developed by experts in the field of industrial automation and safety instrumentation. While protection relays are not the same as PLCs, they share many characteristics: firmware-driven logic, critical process influence, and limited error tolerance.
The Top 20 Secure PLC Coding Practices have shaped secure programming conversations for logic-bearing control systems and Mandiant aims for this “Top 10 Security Practices for Substation Relays” list to serve a similar purpose for the protection engineering domain.
Top 10 Security Practices for Substation Relays
| # | Practice | What It Protects | Explanation |
| --- | --- | --- | --- |
| 1 | Authentication & Role Separation | Prevents unauthorized relay access and privilege misuse | Ensure each user has their own account with only the permissions they need (e.g., Operator, Engineer). Remove default or unused credentials. |
| 2 | Secure Firmware & Configuration Updates | Prevents unauthorized or malicious software uploads | Only allow firmware/configuration updates using verified, signed images through secure tools or physical access. Keep update logs. |
| 3 | | | Disable unused services like HTTP, Telnet, or FTP. Use authenticated communication for SCADA protocols (IEC 61850, DNP3). Whitelist IPs. |
| 4 | Time Synchronization & Logging Protection | Ensures forensic accuracy and prevents log tampering or replay attacks | Use authenticated SNTP or IRIG-B for time. Protect event logs (SER, fault records) from unauthorized deletion or overwrite. |
| 5 | Custom Logic Integrity Protection | Prevents logic-based sabotage or backdoors in protection schemes | Monitor and restrict changes to programmable logic (trip equations, control rules). Maintain version history and hash verification. |
| 6 | Physical Interface Hardening | Blocks unauthorized access via debug ports or jumpers | Disable, seal, or password-protect physical interfaces like USB, serial, or Ethernet service ports. Protect access jumpers. |
| 7 | Redundancy and Failover Readiness | Ensures protection continuity during relay failure or communication outage | Test pilot schemes (POTT, DCB, 87L). Configure redundant paths and relays with identical settings and failover behavior. |
| 8 | Remote Access Restrictions & Monitoring | Prevents dormant vendor backdoors and insecure remote control | Disable remote services when not needed. Remove unused vendor/service accounts. Alert on all remote access attempts. |
| 9 | Command Supervision & Breaker Output Controls | Prevents unauthorized tripping or closing of breakers | Add logic constraints (status checks, delays, dual-conditions) to all trip/close outputs. Log all manual commands. |
| 10 | Centralized Log Forwarding & SIEM Integration | Enables detection of attacks and misconfigurations across systems | Relay logs and alerts should be sent to a central monitoring system (SIEM or historian) for correlation, alerts, and audit trails. |
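Several practices in the table, notably the hash verification in practice #5 and the centralized monitoring in practice #10, rest on comparing current relay settings against a known-good baseline digest. A minimal sketch of that comparison, with invented relay IDs and settings contents:

```python
import hashlib

# Minimal sketch of settings-integrity checking: keep a baseline SHA-256
# digest per relay settings file and flag any drift. Relay IDs and settings
# text here are invented for illustration.

def settings_digest(settings_text: str) -> str:
    """Digest of a relay settings file's contents."""
    return hashlib.sha256(settings_text.encode("utf-8")).hexdigest()

def detect_drift(baseline: dict, current: dict) -> list:
    """Return relay IDs whose current settings digest differs from baseline.

    baseline: relay_id -> expected digest
    current:  relay_id -> current settings text
    """
    return [rid for rid, text in current.items()
            if settings_digest(text) != baseline.get(rid)]
```

In a real deployment the baseline digests would be stored centrally, and any non-empty drift list would raise a SIEM alert for investigation.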
Call to Action
In an era of increasing digitization and escalating cyber threats, the integrity of our power infrastructure hinges on the security of its most fundamental guardians: protection relays. The focus of this analysis is to highlight the criticality of enabling existing security controls and incorporating security as a core design principle for every new substation and upgrade. As sophisticated threat actors, including nation-state-sponsored groups from countries like Russia, China and Iran, actively target critical infrastructure, the need to secure these devices has never been more urgent.
Mandiant recommends that all asset owners prioritize auditing remote access paths to substation automation systems and investigate the feasibility of implementing the “Top 10 Security Practices for Substation Relays” highlighted in this document. Defenders should also consider building a test relay lab or a relay digital twin, a cloud-based replica of their physical systems offered by some relay vendors, for robust security and resilience testing in a safe environment. By using real-time data, organizations can use test relay labs or digital twins to test, among other things, essential subsystem interactions and the repercussions of their systems transitioning from a secure state to an insecure state, all without disrupting production. To validate these security controls against a realistic adversary, a Mandiant OT Red Team exercise can safely simulate the tactics, techniques, and procedures used in real-world attacks and assess your team’s detection and response capabilities. By taking proactive steps to harden these vital components, we can collectively enhance the resilience of the grid against a determined and evolving threat landscape.
In the world of Google, networking is the invisible backbone supporting everything from traditional applications to cutting-edge AI-driven workloads. If you’re a developer navigating this complex landscape, understanding the underlying network infrastructure is no longer optional—it’s essential.
This guide cuts through the complexity, offering short, easy-to-digest explanations of core networking terms you need to know. But we don’t stop there. We also dive into the specialized networking concepts crucial for AI Data Centers, including terms like RDMA, InfiniBand, RoCE, NVLink, GPU, and TPU. Plus, we tackle common questions and answers to solidify your understanding.
Whether you’re working on-premises or leveraging the vast power of Google Cloud, mastering these fundamental networking concepts will empower you to build, deploy, and optimize your applications with confidence.
Networking categories and definitions
We would like to announce the general availability of Amazon Q Developer Java upgrade transformation CLI (command line interface). Using the CLI, customers can invoke Q Developer’s transformation capabilities from the command line and perform Java upgrades at scale.
The following capabilities are available:
Java application upgrades from source versions 8, 11, 17, or 21 to target versions 17 or 21 (now available in CLI in addition to IDE)
Selective transformation with options to choose steps from transformation plans, and libraries and versions to upgrade
The ability to convert embedded SQL to complete Oracle-to-PostgreSQL database migration with AWS Database Migration Service (AWS DMS)
With this launch, the capabilities are now available in the AWS Regions US East (N. Virginia) and Europe (Frankfurt). They can be accessed in the command line, on Linux and Mac OS. For more details, please visit the documentation page.
Today we announce Research and Engineering Studio (RES) on AWS Version 2025.06, which introduces significant improvements to instance bootstrapping, security configurations, and logging capabilities. This release streamlines the RES deployment process, enhances security controls for infrastructure hosts, adds Amazon CloudWatch logging for virtual desktop instances (VDI), and provides new customization options.
RES 2025.06 features a streamlined bootstrapping process that accelerates infrastructure and VDI launch times. The improved process also enables customers to create RES-ready Amazon Machine Images (AMIs) without requiring an active RES deployment, making it easier to apply patches and customizations. Enhanced security configurations for infrastructure hosts now have more granular permissions, helping reduce security risks from compromised hosts. Additionally, a new Amazon CloudWatch Logs integration, enabled by default, centralizes VDI logs to simplify troubleshooting and monitoring.
RES 2025.06 adds support for Amazon Linux 2023 for both infrastructure hosts and VDIs, while also introducing support for Rocky Linux 9 for VDIs. Customers can now specify prefixes on the AWS Identity and Access Management (IAM) roles used by RES, providing greater control over IAM resource naming conventions. The release also introduces the ability to delete or remove mounted file systems directly from the RES user interface, simplifying storage management. Furthermore, RES 2025.06 expands regional availability to include AWS GovCloud (US-East), offering an additional deployment option for government customers.
AWS Firewall Manager announces security policy support for enhanced application layer (L7) DDoS protection within AWS WAF. The application layer (L7) DDoS protection is an AWS Managed Rule group that automatically detects and mitigates DDoS events for applications on Amazon CloudFront, Application Load Balancer (ALB), and other AWS services supported by WAF. AWS Firewall Manager helps cloud security administrators and site reliability engineers protect applications while reducing the operational overhead of manually configuring and managing rules.
Working with AWS Firewall Manager, customers can deploy defense-in-depth policies that address the full range of website protections, from the newly released AWS WAF application layer (L7) DDoS protections to non-HTTP threats to website infrastructure. By looking at the totality of a website’s technology stack, customers can define and deploy all the needed protections.
AWS Firewall Manager support for application layer (L7) DDoS protection can be enabled for all AWS WAF and AWS Shield users. Customers can add this specialized Amazon Managed Rule set to a new or existing AWS Firewall Manager policy. AWS Firewall Manager supports this Amazon Managed Rule set in all regions where WAF offers the feature which means all Advanced subscribers in all supported AWS Regions, except Asia Pacific (Thailand), Mexico (Central), and China (Beijing and Ningxia). You can deploy this AWS Managed Rule group for your Amazon CloudFront, ALB, and other supported AWS resources.
To learn more about how AWS Firewall Manager works with WAF’s new Managed Rules, see the AWS Firewall Manager documentation for more details and the AWS Region Table for the list of regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, its features, and its pricing, visit the AWS Firewall Manager website.
Amazon Web Services (AWS) announces the availability of Amazon EC2 I7ie instances in the AWS Asia Pacific (Sydney), Asia Pacific (Malaysia) and AWS GovCloud (US-East) Regions. Designed for large storage I/O intensive workloads, these new instances are powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances.
I7ie instances offer up to 120TB local NVMe storage density—the highest available in the cloud for storage optimized instances—and deliver up to twice as many vCPUs and memory compared to prior generation instances. Powered by 3rd generation AWS Nitro SSDs, these instances achieve up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to existing I3en instances. Additionally, the 16KB torn write prevention feature enables customers to eliminate performance bottlenecks for database workloads.
I7ie instances are high-density storage-optimized instances for workloads that demand rapid local storage with high random read/write performance and consistently low latency for accessing large data sets. These instances are offered in eleven different sizes, including 2 metal sizes, providing flexibility for customers’ computational needs. They deliver up to 100 Gbps of network performance bandwidth, and 60 Gbps of dedicated bandwidth for Amazon Elastic Block Store (EBS), ensuring fast and efficient data transfer for applications.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Middle East (UAE) Region. C7i instances are supported by custom Intel processors, available only on AWS, and offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.
C7i instances deliver up to 15% better price-performance versus C6i instances and are a great choice for all compute-intensive workloads, such as batch processing, distributed analytics, ad-serving, and video encoding. C7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology that are used to facilitate efficient offload and acceleration of data operations and optimize performance for workloads.
C7i instances support new Intel Advanced Matrix Extensions (AMX) that accelerate matrix multiplication operations for applications such as CPU-based ML. Customers can attach up to 128 EBS volumes to a C7i instance vs. up to 28 EBS volumes to a C6i instance. This allows processing of larger amounts of data, scale workloads, and improved performance over C6i instances.
Starting today, you can enable the Amazon CloudWatch metric (ResolverEndpointCapacityStatus) to monitor the status of the query capacity for Elastic Network Interfaces (ENIs) associated with your Route 53 Resolver endpoint in Amazon Virtual Private Cloud (VPC). The new metric enables you to quickly view whether the Resolver endpoint is at risk of reaching the service limit for query capacity, and take remediation steps like instantiating additional ENIs to meet the capacity needs.
Before today, you could enable CloudWatch to monitor the number of DNS queries that were forwarded by Route 53 Resolver endpoints, over a default five-minute interval, and make further estimations on when your endpoints will meet the query limits. With this launch, you can now enable the new metric to get direct alerts on the current status of your Resolver endpoint capacity, without requiring you to make additional estimations for calculating capacity of each endpoint. The status is reported for each Resolver endpoint, indicating whether the endpoint is operating within the normal capacity limit (0 – OK), has at least one ENI exceeding 50% capacity utilization (1 – Warning), or has at least one ENI exceeding 75% capacity utilization (2 – Critical). The new metric simplifies capacity management for Route 53 Resolver endpoints by providing clear, actionable signals for scaling decisions, without requiring additional analysis on the query volume.
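As a minimal sketch of acting on the documented status values (0 = OK, 1 = Warning at over 50% utilization on any ENI, 2 = Critical at over 75%), the snippet below maps a reported status to an operator action. The action strings are illustrative, not AWS guidance.

```python
# Illustrative handling of ResolverEndpointCapacityStatus datapoints.
# The 0/1/2 semantics come from the announcement above; the recommended
# actions are invented examples of what an operator runbook might say.

STATUS_ACTIONS = {
    0: "no action",                                  # OK: within normal limits
    1: "review query growth; plan additional ENIs",  # Warning: >50% on an ENI
    2: "add ENIs to the Resolver endpoint now",      # Critical: >75% on an ENI
}

def capacity_action(status_value: int) -> str:
    """Map a reported capacity status to a runbook action."""
    return STATUS_ACTIONS.get(int(status_value), "unknown status; investigate")

def worst_status(datapoints) -> int:
    """Worst (highest) status seen across a series of metric datapoints."""
    return max(int(v) for v in datapoints)
```

In practice these datapoints would come from a CloudWatch alarm or a `GetMetricData` query, with the Critical action wired to paging.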
To learn more about the launch, read the documentation or visit the Route 53 Resolver page. There is no charge for the metric, although you will incur charges for usage of Resolver endpoints.
Today, AWS HealthOmics introduces automatic interpolation of input parameters for Nextflow private workflows, eliminating the need for manual parameter template creation. This enhancement intelligently identifies and extracts both required and optional input parameters directly from workflow definitions, along with their descriptions.
AWS HealthOmics is a HIPAA-eligible service that helps healthcare and life sciences customers accelerate scientific breakthroughs with fully managed biological data stores and workflows.
With this new feature, customers can launch bioinformatics workflows more quickly since they no longer need to manually identify, define, and validate each workflow parameter. This also helps reduce configuration errors that can occur when parameters are incorrectly specified or omitted. For specialized requirements, customers can still provide custom parameter templates to override the automatically generated configurations.
Input parameter interpolation for Nextflow workflows is now supported in all regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Asia Pacific (Singapore), and Israel (Tel Aviv). Automatic parameter interpolation is already supported for WDL and CWL workflows today.
To learn more about automatic parameter interpolation and how to build private workflows, see the AWS HealthOmics documentation.
In today’s cloud landscape, safeguarding your cloud environment requires bolstering your Identity and Access Management (IAM) approach with more than allow policies and the principle of least privilege. To bolster your defenses, we offer a powerful tool: IAM Deny Policies.
Relying only on IAM Allow policies leaves room for potential over-permissioning, and can make it challenging for security teams to consistently enforce permission-level restrictions at scale. This is where IAM Deny comes in.
IAM Deny provides a vital, scalable layer of security that allows you to explicitly define which actions principals cannot take, regardless of the roles they have been assigned. This proactive approach can help prevent unauthorized access and strengthens your overall security posture, giving admin teams overriding guardrail policies throughout their environment.
Understanding IAM Deny
The foundation of IAM Deny is built on IAM Allow policies. Allow policies define who can do what and where in a Google Cloud organization, binding principals (users, groups, service accounts) to roles that grant access to resources at various levels (organization, folder, project, resource).
IAM Deny, conversely, defines restrictions. While it also targets principals, the binding occurs at the organization, folder, or project level — not at the resource level.
Key differences between Allow and Deny Policies:
IAM Allow: Focuses on granting permissions through role bindings to principals.
IAM Deny: Focuses on restricting permissions by overriding role bindings given by IAM Allow, at a hierarchical level.
IAM Deny acts as a guardrail for your Google Cloud environment, helping to centralize the management of administrative privileges, reduce the need for numerous custom roles, and ultimately enhance the security of your organization.
How IAM Deny works
IAM Deny policies use several key components to build restrictions.
Denied Principals (Who): The users, groups, or service accounts you want to restrict. This can be everyone in your organization, or even any principal regardless of organization (denoted by the allUsers identifier).
Denied Permissions (What): The specific actions or permissions that the denied principals cannot use. Most Google Cloud services support IAM Deny, but it’s important to verify support for new services.
Attachment Points (Where): The organization, folder, or project where the deny policy is applied. Deny policies cannot be attached directly to individual resources.
Conditions (How): While optional, these allow for more granular control over when a deny policy is enforced. Conditions are set with Resource Tags using Common Expression Language (CEL) expressions, enabling you to apply deny policies conditionally (such as only in specific environments or unless a certain tag is present).
IAM Deny core components.
Start with IAM Deny
A crucial aspect of IAM Deny is its evaluation order. Deny policies are evaluated first, before any Allow policies. If a Deny policy applies to a principal’s action, the request is explicitly denied, regardless of any roles the principal might have. Only if no Deny policy applies does the system then evaluate Allow policies to determine if the action is permitted.
There are built-in ways you can configure exceptions to this rule, however. Deny policies can specify principals who are exempt from certain restrictions. This can provide flexibility to allow necessary actions for specific administrative or break-glass accounts.
Deny policies always evaluate before IAM Allow policies.
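The deny-first evaluation order, including exempted break-glass principals, can be modeled in a few lines. This toy model ignores the resource hierarchy and conditions, and the principal names and permission strings are invented for illustration.

```python
# Toy model of the evaluation order described above: deny policies are
# checked first (honoring exempted principals), and allow bindings are only
# consulted if no deny matches. Real IAM policies are far richer than this.

def is_permitted(principal, permission, deny_policies, allow_bindings):
    """Return True if the action is permitted under deny-first evaluation."""
    for policy in deny_policies:
        if (permission in policy["denied_permissions"]
                and principal in policy["denied_principals"]
                and principal not in policy.get("exempted_principals", set())):
            return False  # a matching deny wins before allow is considered
    # Only reached when no deny policy applies.
    return (principal, permission) in allow_bindings
```

Note how an allow binding alone is not enough: a developer denied `iam.roles.create` is blocked even with the role granted, while an exempted break-glass account passes through to the allow check.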
When you can use IAM Deny
IAM Deny policies can be used to implement common security guardrails. These include:
Restricting high-privilege permissions: Prevent developers from creating or managing IAM roles, modifying organization policies, or accessing sensitive billing information in development environments.
Enforcing organizational standards: By limiting a set of permissions no roles can use, you can do things like prevent the misuse of overly-permissive Basic Roles, or restrict the ability to enable Google Cloud services in certain folders.
Implementing security profiles: Define sets of denied permissions for different teams (including billing, networking, and security) to enforce separation of duties.
Securing tagged resources: Apply organization-level deny policies to resources with specific tags (such as iam_deny=enabled).
Creating folder-level restrictions: Deny broad categories of permissions (including billing, networking, and security) on resources within a specific folder, unless they have any tag applied.
Complementary security layers
IAM Deny is most effective when used in conjunction with other security controls. Google Cloud provides several tools that complement IAM Deny:
Organization Policies: Allow you to centrally configure and manage organizational constraints across your Google Cloud hierarchy, such as restricting which APIs are available in your organization with Resource Usage Restriction policies. You can even define IAM Custom Constraints to limit which roles can be granted.
Policy troubleshooter: Can help you understand why a principal has access or has been denied access to a resource. It allows you to analyze both Allow and Deny policies to pinpoint the exact reason for an access outcome.
Policy Simulator: Enables you to simulate the impact of changes to your deny policies before applying them in your live environment. It can help you identify potential disruptions and refine your policies. Our Deny Simulator is now available in preview.
IAM Recommender: Uses machine learning to analyze how you’ve applied IAM permissions, and provide recommendations for reducing overly permissive role assignments. It can help you move towards true least privilege.
Privileged Access Management (PAM): Can manage temporary, just-in-time elevated access for principals who might need exceptions to deny policies. PAM solutions provide auditing and control over break-glass accounts and other privileged access scenarios.
Principal Access Boundaries: Lets you define the resources that principals in your organization can access. For example, you can use these to prevent your principals from accessing resources in other organizations, which can help prevent phishing attacks or data exfiltration.
Implementing IAM Deny with Terraform
The provided GitHub repository offers a Terraform configuration to help you get started with implementing IAM Deny and Organization Policies. This configuration includes:
An organization-level IAM Deny Policy targeting specific administrative permissions on tagged resources.
A folder-level IAM Deny Policy restricting Billing, Networking, and Security permissions on untagged resources.
A Custom Organization Policy Constraint to prevent the use of the roles/owner role.
An Organization Policy restricting the usage of specific Google Cloud services within a designated folder.
3. Prepare terraform.tfvars: Copy terraform.tfvars.example to terraform.tfvars and edit it to include your Organization ID, Target Folder ID, and principal group emails for exceptions.
You can name these whatever you want, but for our example you can use tag key (iamdeny) and tag value (enabled).
5. Update `main.tf` Tag IDs: Replace placeholder tag key and value IDs with your actual tag IDs in the denial_condition section for each policy.
```hcl
denial_condition {
  title      = "Match IAM Deny Tag"
  expression = "resource.matchTagId('tagKeys/*', 'tagValues/*')" # Tag=iam_deny, value=enabled
}
```
a. NOTE: This is optional; you can also use this expression to deny all resources when the policy is applied:
```hcl
denial_condition {
  title      = "deny all"
  expression = "!resource.matchTag('*/\\*', '\\*')"
}
```
Remember to review the predefined denied permissions in files like `billing.json`, `networking.json`, and `securitycenter.json` (located in the `/terraform/profiles/` directory) and the `denied_perms.tf` file to align them with your organization’s security requirements.
Implementing IAM Deny policies is a crucial step in enhancing your Google Cloud security posture. By explicitly defining what principals cannot do, you add a powerful layer of defense against both accidental misconfigurations and malicious actors.
When combined with Organization Policies, Policy Troubleshooter, Policy Simulator, and IAM Recommender, IAM Deny empowers you to enforce least privilege more effectively and build a more secure cloud environment. Start exploring the provided Terraform example and discover the Power of No in your Google Cloud security strategy.
This content was created from learnings gathered from work by Google Cloud Consulting with enterprise Google Cloud Customers. If you would like to accelerate your Google Cloud journey with our best experts and innovators, contact us at Google Cloud Consulting to get started.
In today’s fast-paced digital landscape, businesses are choosing to build their networks alongside various networking and network security vendors on Google Cloud – and it’s not hard to see why. Google Cloud has not only partnered with best-of-breed service vendors; it has built an ecosystem that allows its customers to plug in and readily use these services.
Cloud WAN: Global connectivity with a best-in-class ISV ecosystem
This year, we launched Cloud WAN, a key use case of Cross-Cloud Network, that provides a fully managed global WAN solution built on Google’s Premium Tier – planet-scale infrastructure, which spans over 200 countries and 2 million miles of subsea and terrestrial cables — a robust foundation for global connectivity. Cloud WAN provides up to a 40% TCO savings over a customer-managed global WAN leveraging colocation facilities1, while Cross-Cloud Network provides up to 40% improved performance compared to the public internet2.
The ISV Ecosystem advantage
Beyond global connectivity, Cloud WAN also offers customers a robust and adaptable ecosystem that includes market-leading SD-WAN partners, managed SSE vendors integrated via NCC Gateway, DDI solutions from Infoblox, and network automation and intelligence solutions from Juniper Mist. These partners are integrated into the networking fabric using Cloud WAN architecture components such as Network Connectivity Center for centralized hub architecture, and Cloud VPN and Cloud Interconnect for high-bandwidth connectivity to campus and data center networks. You can learn more about our Cloud WAN partners here.
In this post, we explore Google Cloud’s enhanced networking capabilities like multi-tenant, high-scale network address translation (NAT) and zonal affinity that allow ISVs to integrate their offerings natively with the networking fabric – giving Google Cloud customers a plug-and-play solution for cloud network deployments.
1. Cloud NAT source-based rules for multi-tenancy
As ISVs scale and expand their services to customers around the globe, infrastructure management can become challenging. When an ISV builds a service for their customers across multiple regions and languages, a single-tenant infrastructure becomes costly, prompting ISVs to build shared infrastructure to handle multi-tenancy. But multi-tenancy on shared infrastructure brings complexities of its own, especially around network address translation (NAT) and post-service processing. Tenant traffic needs to be translated to the correct allowlisted IP based on region, tenant, and language markers. Unfortunately, most NAT solutions don’t handle multi-tenant infrastructure complexity and bandwidth load very well.
Source-based NAT rules in Google Cloud’s Cloud NAT service allow ISVs to NAT their traffic on a granular, per-tenant level, using the tenant and regional context to apply a public NAT IP to traffic after processing it. ISVs can assign IP markers to tenant traffic after they process it through their virtual appliances; Cloud NAT then uses rules to match IP markers and allocates the tenant’s allowlisted public NAT IPs for address translation before sending the traffic to its destination on the internet. This approach to multi-tenant IP management provides a scalable way to handle address translation in a service-chaining environment.
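To make the per-tenant flow concrete, here is a minimal sketch of how such a rule might be configured. Note the hedges: source-based rules are a preview feature, so the `source.ip` match expression below is an assumption about the preview syntax (generally available Cloud NAT rules today match on `destination.ip`), and all resource names and the tenant IP-marker range are hypothetical:

```shell
# Reserve an allowlisted public NAT IP for tenant A (hypothetical names).
gcloud compute addresses create tenant-a-nat-ip --region=us-central1

# Create a NAT rule that translates tenant A's marked source range using
# its allowlisted IP. The 'source.ip' match is an assumed preview syntax.
gcloud compute routers nats rules create 100 \
    --router=tenant-router \
    --region=us-central1 \
    --nat=tenant-nat \
    --match="inIpRange(source.ip, '10.10.0.0/24')" \
    --source-nat-active-ips=tenant-a-nat-ip
```

One rule per tenant (or per tenant-region pair) keeps each tenant’s egress traffic pinned to its own allowlisted IPs while sharing a single NAT gateway.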
Source-based NAT rules will be available for preview in Q3’25.
2. Zonal affinity keeps traffic local to the zone
Another key Cloud WAN advance is zonal affinity for Google Cloud’s internal passthrough Network Load Balancer. This feature minimizes cross-zone traffic, keeping your data local, for improved performance and lower cost of operations. By configuring zonal affinity, you direct client traffic to the managed instance group (MIG) or network endpoint group (NEG) within the same zone. If the number of healthy backends in the local zone dips below your set threshold, the load balancer smartly reverts to distributing traffic across all healthy endpoints in the region. You can control whether traffic spills over to other zones and set the spillover ratio. For an ISV’s network deployment on Google Cloud, zonal affinity helps ensure their applications run smoothly and at a lower TCO, while making the most of a multi-zonal architecture.
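The spillover behavior described above is a property of the internal passthrough Network Load Balancer’s backend service. The sketch below updates that configuration via export and import; the `zonalAffinity` field names and enum values are assumptions based on the feature description, and the backend service name is hypothetical:

```shell
# Export the internal passthrough NLB backend service config (name hypothetical).
gcloud compute backend-services export ilb-backend \
    --region=us-central1 --destination=ilb-backend.yaml

# Append a zonal-affinity policy. Field names and values are assumed:
# spillover allows falling back to other zones, and spilloverRatio sets
# the healthy-local-backend threshold below which spillover kicks in.
cat >> ilb-backend.yaml <<'EOF'
networkPassThroughLbTrafficPolicy:
  zonalAffinity:
    spillover: ZONAL_AFFINITY_SPILL_OVER
    spilloverRatio: 0.8
EOF

# Re-import the updated configuration.
gcloud compute backend-services import ilb-backend \
    --region=us-central1 --source=ilb-backend.yaml
```

With a policy like this, clients in a zone are served by same-zone backends in normal operation, and the load balancer only distributes region-wide when local capacity degrades past the configured ratio.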
Learn more
With its simplicity, high performance, wide range of service options, and cost-efficiency, Cloud WAN is revolutionizing global enterprise connectivity and security. And with source-based NAT rules and zonal affinity, ISVs and Google Cloud customers can more easily adopt multi-tenant architectures without increasing their operational burden. Visit the Cloud WAN Partners page to learn more about how to integrate your solution as part of Cloud WAN.
1. Architecture includes SD-WAN and 3rd-party firewalls, and compares a customer-managed WAN using multi-site colocation facilities to a WAN managed and hosted by Google Cloud.
2. During testing, network latency was more than 40% lower when traffic to a target traveled over the Cross-Cloud Network compared to when traffic to the same target traveled across the public internet.