We would like to announce the general availability of Amazon Q Developer Java upgrade transformation CLI (command line interface). Using the CLI, customers can invoke Q Developer’s transformation capabilities from the command line and perform Java upgrades at scale.
The following capabilities are available:
Java application upgrades from source versions 8, 11, 17, or 21 to target versions 17 or 21 (now available in CLI in addition to IDE)
Selective transformation with options to choose steps from transformation plans, and libraries and versions to upgrade
Embedded SQL conversion as part of complete Oracle-to-PostgreSQL database migrations performed with AWS Database Migration Service (AWS DMS)
With this launch, these capabilities are now available in the AWS Regions US East (N. Virginia) and Europe (Frankfurt). They can be accessed from the command line on Linux and macOS. For more details, please visit the documentation page.
Today we announce Research and Engineering Studio (RES) on AWS Version 2025.06, which introduces significant improvements to instance bootstrapping, security configurations, and logging capabilities. This release streamlines the RES deployment process, enhances security controls for infrastructure hosts, adds Amazon CloudWatch logging for virtual desktop instances (VDI), and provides new customization options.
RES 2025.06 features a streamlined bootstrapping process that accelerates infrastructure and VDI launch times. The improved process also enables customers to create RES-ready Amazon Machine Images (AMIs) without requiring an active RES deployment, making it easier to apply patches and customizations. Enhanced security configurations for infrastructure hosts now have more granular permissions, helping reduce security risks from compromised hosts. Additionally, a new Amazon CloudWatch Logs integration, enabled by default, centralizes VDI logs to simplify troubleshooting and monitoring.
RES 2025.06 adds support for Amazon Linux 2023 for both infrastructure hosts and VDIs, while also introducing support for Rocky Linux 9 for VDIs. Customers can now specify prefixes on the AWS Identity and Access Management (IAM) roles used by RES, providing greater control over IAM resource naming conventions. The release also introduces the ability to delete or remove mounted file systems directly from the RES user interface, simplifying storage management. Furthermore, RES 2025.06 expands regional availability to include AWS GovCloud (US-East), offering an additional deployment option for government customers.
AWS Firewall Manager announces security policy support for enhanced application layer (L7) DDoS protection within AWS WAF. The application layer (L7) DDoS protection is an AWS Managed Rules rule group that automatically detects and mitigates DDoS events for applications on Amazon CloudFront, Application Load Balancer (ALB), and other AWS services supported by AWS WAF. AWS Firewall Manager helps cloud security administrators and site reliability engineers protect applications while reducing the operational overhead of manually configuring and managing rules.
Working with AWS Firewall Manager, customers can deploy defense-in-depth policies that address the full range of website protections, from the newly released AWS WAF application layer (L7) DDoS protections to non-HTTP threats against website infrastructure. By looking at the totality of a website’s technology stack, customers can define and deploy all the needed protections.
AWS Firewall Manager support for application layer (L7) DDoS protection can be enabled by all AWS WAF and AWS Shield users. Customers can add this specialized AWS Managed Rules rule group to a new or existing AWS Firewall Manager policy. AWS Firewall Manager supports the rule group in all Regions where AWS WAF offers the feature, which means all Shield Advanced subscribers in all supported AWS Regions, except Asia Pacific (Thailand), Mexico (Central), and China (Beijing and Ningxia). You can deploy this AWS Managed Rules rule group for your Amazon CloudFront distributions, ALBs, and other supported AWS resources.
To learn more about how AWS Firewall Manager works with AWS WAF’s new managed rules, see the AWS Firewall Manager documentation, and see the AWS Region Table for the list of Regions where AWS Firewall Manager is currently available. To learn more about AWS Firewall Manager, its features, and its pricing, visit the AWS Firewall Manager website.
Amazon Web Services (AWS) announces the availability of Amazon EC2 I7ie instances in the Asia Pacific (Sydney), Asia Pacific (Malaysia), and AWS GovCloud (US-East) Regions. Designed for large, storage-I/O-intensive workloads, these new instances are powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances.
I7ie instances offer up to 120 TB of local NVMe storage density, the highest available in the cloud for storage-optimized instances, and deliver up to twice as many vCPUs and as much memory as prior-generation instances. Powered by 3rd generation AWS Nitro SSDs, these instances achieve up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to existing I3en instances. Additionally, the 16KB torn write prevention feature enables customers to eliminate performance bottlenecks for database workloads.
I7ie instances are high-density storage-optimized instances for workloads that demand fast local storage with high random read/write performance and consistently low latency when accessing large data sets. These instances are offered in eleven different sizes, including two metal sizes, providing flexibility for customers’ computational needs. They deliver up to 100 Gbps of network bandwidth and 60 Gbps of dedicated bandwidth for Amazon Elastic Block Store (EBS), ensuring fast and efficient data transfer for applications.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C7i instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in the Middle East (UAE) Region. C7i instances are supported by custom Intel processors, available only on AWS, and offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.
C7i instances deliver up to 15% better price-performance versus C6i instances and are a great choice for all compute-intensive workloads, such as batch processing, distributed analytics, ad-serving, and video encoding. C7i instances offer larger instance sizes, up to 48xlarge, and two bare metal sizes (metal-24xl, metal-48xl). These bare-metal sizes support built-in Intel accelerators: Data Streaming Accelerator, In-Memory Analytics Accelerator, and QuickAssist Technology that are used to facilitate efficient offload and acceleration of data operations and optimize performance for workloads.
C7i instances support new Intel Advanced Matrix Extensions (AMX) that accelerate matrix multiplication operations for applications such as CPU-based ML. Customers can attach up to 128 EBS volumes to a C7i instance, versus up to 28 EBS volumes on a C6i instance. This allows customers to process larger amounts of data, scale their workloads, and improve performance over C6i instances.
Starting today, you can enable the Amazon CloudWatch metric ResolverEndpointCapacityStatus to monitor the query capacity status of the Elastic Network Interfaces (ENIs) associated with your Route 53 Resolver endpoints in Amazon Virtual Private Cloud (VPC). The new metric enables you to quickly see whether a Resolver endpoint is at risk of reaching the service limit for query capacity, and to take remediation steps such as instantiating additional ENIs to meet capacity needs.
Before today, you could enable CloudWatch to monitor the number of DNS queries forwarded by Route 53 Resolver endpoints over a default five-minute interval, and then estimate when your endpoints would reach the query limits. With this launch, you can enable the new metric to get direct alerts on the current status of your Resolver endpoint capacity, without having to calculate the capacity of each endpoint yourself. The status is reported for each Resolver endpoint, indicating whether the endpoint is operating within the normal capacity limit (0 – OK), has at least one ENI exceeding 50% capacity utilization (1 – Warning), or has at least one ENI exceeding 75% capacity utilization (2 – Critical). The new metric simplifies capacity management for Route 53 Resolver endpoints by providing clear, actionable signals for scaling decisions, without requiring additional analysis of query volume.
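The three status values lend themselves to simple alerting logic. The sketch below (an illustrative example, not an official AWS sample; the function and label names are ours) maps the reported metric value to the severity and remediation described above:

```python
# Interpreting ResolverEndpointCapacityStatus values:
# 0 = OK, 1 = Warning (an ENI exceeds 50% utilization),
# 2 = Critical (an ENI exceeds 75% utilization).
STATUS_LABELS = {
    0: "OK",        # all ENIs within the normal capacity limit
    1: "Warning",   # at least one ENI above 50% capacity utilization
    2: "Critical",  # at least one ENI above 75% capacity utilization
}

def capacity_action(status_value: int) -> str:
    """Suggest a remediation step for a reported metric value."""
    label = STATUS_LABELS.get(int(status_value))
    if label is None:
        raise ValueError(f"unexpected status value: {status_value}")
    if label == "Critical":
        return "add ENIs to the Resolver endpoint now"
    if label == "Warning":
        return "plan additional ENI capacity"
    return "no action needed"
```

In practice you would wire these thresholds into a CloudWatch alarm on the metric rather than poll it yourself; the mapping above simply makes the 0/1/2 semantics explicit.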
To learn more about the launch, read the documentation or visit the Route 53 Resolver page. There is no charge for the metric, although you will incur charges for usage of Resolver endpoints.
Today, AWS HealthOmics introduces automatic interpolation of input parameters for Nextflow private workflows, eliminating the need for manual parameter template creation. This enhancement intelligently identifies and extracts both required and optional input parameters, along with their descriptions, directly from workflow definitions. AWS HealthOmics is a HIPAA-eligible service that helps healthcare and life sciences customers accelerate scientific breakthroughs with fully managed biological data stores and workflows.
With this new feature, customers can launch bioinformatics workflows more quickly since they no longer need to manually identify, define, and validate each workflow parameter. This also helps reduce configuration errors that can occur when parameters are incorrectly specified or omitted. For specialized requirements, customers can still provide custom parameter templates to override the automatically generated configurations.
Input parameter interpolation for Nextflow workflows is now supported in all regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Asia Pacific (Singapore), and Israel (Tel Aviv). Automatic parameter interpolation is already supported for WDL and CWL workflows today.
To learn more about automatic parameter interpolation and how to build private workflows, see the AWS HealthOmics documentation.
In today’s cloud landscape, safeguarding your environment requires more than Allow policies and the principle of least privilege in your Identity and Access Management (IAM) approach. To strengthen your defenses, we offer a powerful tool: IAM Deny policies.
Relying only on IAM Allow policies leaves room for potential over-permissioning, and can make it challenging for security teams to consistently enforce permission-level restrictions at scale. This is where IAM Deny comes in.
IAM Deny provides a vital, scalable layer of security that allows you to explicitly define which actions principals cannot take, regardless of the roles they have been assigned. This proactive approach can help prevent unauthorized access and strengthens your overall security posture, giving admin teams overriding guardrail policies throughout their environment.
Understanding IAM Deny
The foundation of IAM Deny is built on IAM Allow policies. Allow policies define who can do what and where in a Google Cloud organization, binding principals (users, groups, service accounts) to roles that grant access to resources at various levels (organization, folder, project, resource).
IAM Deny, conversely, defines restrictions. While it also targets principals, the binding occurs at the organization, folder, or project level — not at the resource level.
Key differences between Allow and Deny Policies:
IAM Allow: Focuses on granting permissions through role bindings to principals.
IAM Deny: Focuses on restricting permissions by overriding role bindings given by IAM Allow, at a hierarchical level.
IAM Deny acts as a guardrail for your Google Cloud environment, helping to centralize the management of administrative privileges, reduce the need for numerous custom roles, and ultimately enhance the security of your organization.
How IAM Deny works
IAM Deny policies use several key components to build restrictions.
Denied Principals (Who): The users, groups, or service accounts you want to restrict. This can be everyone in your organization, or even any principal regardless of organization (noted by the allUsers identifier).
Denied Permissions (What): The specific actions or permissions that the denied principals cannot use. Most Google Cloud services support IAM Deny, but it’s important to verify support for new services.
Attachment Points (Where): The organization, folder, or project where the deny policy is applied. Deny policies cannot be attached directly to individual resources.
Conditions (How): While optional, these allow for more granular control over when a deny policy is enforced. Conditions are set with Resource Tags using Common Expression Language (CEL) expressions, enabling you to apply deny policies conditionally (such as only in specific environments or unless a certain tag is present).
IAM Deny core components.
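Assembled, the four components map onto the deny-policy JSON that you pass to `gcloud iam policies create` with `--kind=denypolicies`. The following is a hedged sketch: the group addresses, permission names, and tag values are placeholders, and you should verify each permission against the list of services that support deny policies.

```json
{
  "displayName": "Restrict IAM role administration on tagged projects",
  "rules": [
    {
      "denyRule": {
        "deniedPrincipals": ["principalSet://goog/group/developers@example.com"],
        "exceptionPrincipals": ["principalSet://goog/group/breakglass@example.com"],
        "deniedPermissions": [
          "iam.googleapis.com/roles.create",
          "iam.googleapis.com/roles.delete"
        ],
        "denialCondition": {
          "title": "Match IAM Deny Tag",
          "expression": "resource.matchTag('123456789012/iam_deny', 'enabled')"
        }
      }
    }
  ]
}
```

Note how each component appears: the Who in `deniedPrincipals` (with a break-glass exception), the What in `deniedPermissions`, the Where as the policy's attachment point when you create it, and the How in the CEL `denialCondition`.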
Start with IAM Deny
A crucial aspect of IAM Deny is its evaluation order. Deny policies are evaluated first, before any Allow policies. If a Deny policy applies to a principal’s action, the request is explicitly denied, regardless of any roles the principal might have. Only if no Deny policy applies does the system then evaluate Allow policies to determine if the action is permitted.
There are built-in ways you can configure exceptions to this rule, however. Deny policies can specify principals who are exempt from certain restrictions. This can provide flexibility to allow necessary actions for specific administrative or break-glass accounts.
Deny policies always evaluate before IAM Allow policies.
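The evaluation order above can be expressed as a short algorithm. This is a conceptual sketch, not the actual IAM engine; policies are reduced to simple sets of principals and permissions for illustration:

```python
# Conceptual model of IAM evaluation order: deny rules run first,
# exempt principals skip a rule, and Allow policies are consulted
# only if no deny rule matched.
def is_allowed(principal, permission, deny_policies, allow_policies):
    for rule in deny_policies:
        if principal in rule.get("exception_principals", set()):
            continue  # exempt (e.g. break-glass) principals bypass this rule
        if (principal in rule["denied_principals"]
                and permission in rule["denied_permissions"]):
            return False  # explicit deny wins, regardless of role grants
    # No deny rule applied: Allow policies now decide.
    return any(
        principal in p["principals"] and permission in p["permissions"]
        for p in allow_policies
    )
```

The key property is visible in the structure: a matching deny rule short-circuits the decision before any Allow policy is even examined.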
When you can use IAM Deny
IAM Deny policies can be used to implement common security guardrails. These include:
Restricting high-privilege permissions: Prevent developers from creating or managing IAM roles, modifying organization policies, or accessing sensitive billing information in development environments.
Enforcing organizational standards: By limiting a set of permissions no roles can use, you can do things like prevent the misuse of overly-permissive Basic Roles, or restrict the ability to enable Google Cloud services in certain folders.
Implementing security profiles: Define sets of denied permissions for different teams (including billing, networking, and security) to enforce separation of duties.
Securing tagged resources: Apply organization-level deny policies to resources with specific tags (such as iam_deny=enabled).
Creating folder-level restrictions: Deny broad categories of permissions (including billing, networking, and security) on resources within a specific folder, unless they have any tag applied.
Complementary security layers
IAM Deny is most effective when used in conjunction with other security controls. Google Cloud provides several tools that complement IAM Deny:
Organization Policies: Allow you to centrally configure and manage organizational constraints across your Google Cloud hierarchy, such as restricting which APIs are available in your organization with Resource Usage Restriction policies. You can even define IAM Custom Constraints to limit which roles can be granted.
Policy troubleshooter: Can help you understand why a principal has access or has been denied access to a resource. It allows you to analyze both Allow and Deny policies to pinpoint the exact reason for an access outcome.
Policy Simulator: Enables you to simulate the impact of changes to your deny policies before applying them in your live environment. It can help you identify potential disruptions and refine your policies. Our Deny Simulator is now available in preview.
IAM Recommender: Uses machine learning to analyze how you’ve applied IAM permissions, and provide recommendations for reducing overly permissive role assignments. It can help you move towards true least privilege.
Privileged Access Management (PAM): Can manage temporary, just-in-time elevated access for principals who might need exceptions to deny policies. PAM solutions provide auditing and control over break-glass accounts and other privileged access scenarios.
Principal Access Boundaries: Lets you define the resources that principals in your organization can access. For example, you can use these to prevent your principals from accessing resources in other organizations, which can help prevent phishing attacks or data exfiltration.
Implementing IAM Deny with Terraform
The provided GitHub repository offers a Terraform configuration to help you get started with implementing IAM Deny and Organization Policies. This configuration includes:
An organization-level IAM Deny Policy targeting specific administrative permissions on tagged resources.
A folder-level IAM Deny Policy restricting Billing, Networking, and Security permissions on untagged resources.
A Custom Organization Policy Constraint to prevent the use of the roles/owner role.
An Organization Policy restricting the usage of specific Google Cloud services within a designated folder.
3. Prepare `terraform.tfvars`: Copy `terraform.tfvars.example` to `terraform.tfvars` and edit it to include your Organization ID, Target Folder ID, and principal group emails for exceptions.
You can name these whatever you want, but for our example you can use tag key `iam_deny` and tag value `enabled`.
5. Update `main.tf` Tag IDs: Replace placeholder tag key and value IDs with your actual tag IDs in the denial_condition section for each policy.
```
denial_condition {
  title      = "Match IAM Deny Tag"
  expression = "resource.matchTagId('tagKeys/*', 'tagValues/*')" # Tag=iam_deny, value=enabled
}
```
NOTE: This is optional; you can instead use the following expression to deny all resources when the policy is applied.
```
denial_condition {
  title      = "deny all"
  expression = "!resource.matchTag('*/\\*', '\\*')"
}
```
Remember to review the predefined denied permissions in files like `billing.json`, `networking.json`, and `securitycenter.json` (located in the `/terraform/profiles/` directory) and the `denied_perms.tf` file to align them with your organization’s security requirements.
Implementing IAM Deny policies is a crucial step in enhancing your Google Cloud security posture. By explicitly defining what principals cannot do, you add a powerful layer of defense against both accidental misconfigurations and malicious actors.
When combined with Organization Policies, Policy Troubleshooter, Policy Simulator, and IAM Recommender, IAM Deny empowers you to enforce least privilege more effectively and build a more secure cloud environment. Start exploring the provided Terraform example and discover the Power of No in your Google Cloud security strategy.
This content was created from learnings gathered from work by Google Cloud Consulting with enterprise Google Cloud Customers. If you would like to accelerate your Google Cloud journey with our best experts and innovators, contact us at Google Cloud Consulting to get started.
In today’s fast-paced digital landscape, businesses are choosing to build their networks alongside various networking and network security vendors on Google Cloud, and it’s not hard to see why. Google Cloud has not only partnered with best-of-breed service vendors, it has built an ecosystem that allows its customers to plug in and readily use these services.
Cloud WAN: Global connectivity with a best-in-class ISV ecosystem
This year, we launched Cloud WAN, a key use case of Cross-Cloud Network that provides a fully managed global WAN solution built on Google’s Premium Tier, planet-scale infrastructure spanning over 200 countries and 2 million miles of subsea and terrestrial cables, a robust foundation for global connectivity. Cloud WAN provides up to a 40% TCO savings over a customer-managed global WAN leveraging colocation facilities1, while Cross-Cloud Network provides up to 40% improved performance compared to the public internet2.
The ISV Ecosystem advantage
Beyond global connectivity, Cloud WAN also offers customers a robust and adaptable ecosystem that includes market-leading SD-WAN partners, managed SSE vendors integrated via NCC Gateway, DDI solutions from Infoblox, and network automation and intelligence solutions from Juniper Mist. These partners are integrated into the networking fabric using Cloud WAN architecture components such as Network Connectivity Center for a centralized hub architecture, and Cloud VPN and Cloud Interconnect for high-bandwidth connectivity to campus and data center networks. You can learn more about our Cloud WAN partners here.
In this post, we explore Google Cloud’s enhanced networking capabilities like multi-tenant, high-scale network address translation (NAT) and zonal affinity that allow ISVs to integrate their offerings natively with the networking fabric – giving Google Cloud customers a plug-and-play solution for cloud network deployments.
1. Cloud NAT source-based rules for multi-tenancy
As ISVs scale and expand their services to customers around the globe, infrastructure management can become challenging. When an ISV builds a service for their customers across multiple regions and languages, a single-tenant infrastructure becomes costly, prompting ISVs to build a shared infrastructure to handle multi-tenancy. But multi-tenancy on shared infrastructure brings complexities of its own, especially around network address translation (NAT) and post-service processing. Tenant traffic needs to be translated to the correct allowlisted IP based on region, tenant, and language markers. Unfortunately, most NAT solutions don’t handle multi-tenant infrastructure complexity and bandwidth load very well.
Source-based NAT rules in Google Cloud’s Cloud NAT service allow ISVs to NAT their traffic on a granular, per-tenant level, using the tenant and regional context to apply a public NAT IP to traffic after processing it. ISVs can assign IP markers to tenant traffic after they process it through their virtual appliances; Cloud NAT then uses rules to match IP markers and allocates the tenant’s allowlisted public NAT IPs for address translations before sending the traffic to its destination on the internet. This multi-tenant IP management fix provides a scalable way to handle address translation in a service-chaining environment.
Source-based NAT rules will be available for preview in Q3’25.
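Conceptually, source-based rules behave like a rule table keyed on the internal source-IP marker that the ISV's appliance stamps onto tenant traffic. The sketch below is our own simplified model of that matching step, not the Cloud NAT API; the CIDR markers and NAT IPs are placeholder values from documentation ranges:

```python
# Simplified model of source-based NAT rule matching: traffic carrying a
# tenant's internal source-IP marker is translated to that tenant's
# allowlisted public NAT IP before egress.
import ipaddress

NAT_RULES = [
    # (internal source range to match, public NAT IP to translate to)
    ("10.1.0.0/24", "203.0.113.10"),  # tenant A marker
    ("10.2.0.0/24", "203.0.113.20"),  # tenant B marker
]

def select_nat_ip(source_ip: str) -> str:
    """Return the allowlisted NAT IP for a marked tenant source address."""
    src = ipaddress.ip_address(source_ip)
    for cidr, nat_ip in NAT_RULES:
        if src in ipaddress.ip_network(cidr):
            return nat_ip
    raise LookupError(f"no NAT rule matches {source_ip}")
```

The real service evaluates these rules in the NAT layer after the ISV's virtual appliances have processed and marked the traffic, so per-tenant IP allowlisting scales without per-tenant infrastructure.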
2. Zonal affinity keeps traffic local to the zone
Another key Cloud WAN advance is zonal affinity for Google Cloud’s internal passthrough Network Load Balancer. This feature minimizes cross-zone traffic, keeping your data local, for improved performance and lower cost of operations. By configuring zonal affinity, you direct client traffic to the managed instance group (MIG) or network endpoint group (NEG) within the same zone. If the number of healthy backends in the local zone dips below your set threshold, the load balancer smartly reverts to distributing traffic across all healthy endpoints in the region. You can control whether traffic spills over to other zones and set the spillover ratio. For an ISV’s network deployment on Google Cloud, zonal affinity helps ensure their applications run smoothly and at a lower TCO, while making the most of a multi-zonal architecture.
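The spillover decision described above can be sketched in a few lines. This is a conceptual model of the behavior, not the load balancer's actual algorithm; the function name, data shapes, and zone names are ours:

```python
# Conceptual model of zonal affinity: prefer backends in the client's zone,
# and fall back to all healthy endpoints in the region when the local zone
# has fewer healthy backends than the configured threshold.
def pick_backends(client_zone, healthy_by_zone, min_healthy_threshold):
    local = healthy_by_zone.get(client_zone, [])
    if len(local) >= min_healthy_threshold:
        return local  # traffic stays in-zone: lower latency, no cross-zone cost
    # Below threshold: distribute across every healthy endpoint in the region.
    return [b for backends in healthy_by_zone.values() for b in backends]
```

The configurable spillover ratio mentioned above would refine the fallback step (sending only a fraction of traffic out of zone); the sketch keeps the all-or-nothing case for clarity.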
Learn more
With its simplicity, high performance, wide range of service options, and cost-efficiency, Cloud WAN is revolutionizing global enterprise connectivity and security. And with source-based NAT rules, and zonal affinity, ISVs and Google Cloud customers can more easily adopt multi-tenant architectures without increasing their operational burden. Visit the Cloud WAN Partners page to learn more about how to integrate your solution as part of Cloud WAN.
1. Architecture includes SD-WAN and 3rd party firewalls, and compares a customer-managed WAN using multi-site colocation facilities to a WAN managed and hosted by Google Cloud. 2. During testing, network latency was more than 40% lower when traffic to a target traveled over the Cross-Cloud Network compared to when traffic to the same target traveled across the public internet.
Google Public Sector is continually engaging with customers, partners, and policymakers to deliver technology capabilities that reflect their needs. When it comes to solutions for public safety and law enforcement, we are deeply committed to providing secure and compliance-focused environments.
We’re pleased to announce significant updates, which further strengthen our ability to enable compliance with the Criminal Justice Information Services (CJIS) 6.0 Security Policy and support the critical work of public safety agencies. These updates will help agencies achieve greater control, choice, security, and compliance in the cloud without compromising functionality.
A commitment to trust and compliance
With CJIS, compliance is about more than just controlling encryption keys. At its core, it’s about giving agencies and enterprises the flexibility their missions require. It’s about securing Criminal Justice Information (CJI) with the most advanced technologies and ensuring that access to CJI is restricted to appropriately screened personnel. For public safety, this translates to ensuring the utmost security and compliance for sensitive criminal justice information. Our strong contractual commitments to our customers are backed by robust controls and solutions that are all available today.
“Google Cloud’s Data Boundary via Assured Workloads ensures criminal justice agencies have a highly secure environment that supports their compliance needs and matches policy adherence of traditional ‘govcloud’ solutions, while delivering innovative AI services and scalable infrastructure crucial for their public safety mission,” said Mike Lesko, former chair of the CJIS Advisory Policy Board and former CJIS Systems Officer for the state of Texas.
Key updates for CJIS compliance
We are excited to share the following key CJIS readiness advancements that will benefit Google Public Sector customers. With these updates, customers in all 50 states and Washington, D.C., can confidently host or migrate CJIS applications to Google Cloud with new AI services for CJIS:
Validated by states: Google Cloud’s compliance with CJIS security controls has been validated by CJIS Systems Agencies (CSA) across the United States. To date, Google Cloud has passed 100% of all CSA reviews of CJIS compliance, including several data center audits.
CJIS 6.0 compliance with 3PAO attestation from Coalfire: Google Cloud’s compliance with the rigorous security requirements of v6.0 of the CJIS Security Policy has been independently assessed and validated by Coalfire, a Third-Party Assessment Organization (3PAO). We also launched a new CJIS Implementation Guide to simplify customer compliance with CJIS v6.0. Both artifacts are available on our CJIS compliance page.
Streamlined CJIS compliance with Data Boundary via Assured Workloads
Google Cloud’s Data Boundary via Assured Workloads provides a modern approach for agencies to achieve CJIS compliance with a software-defined community cloud. This approach allows agencies to optimize for infrastructure availability, including a range of GPUs across 9 U.S. regions, ensuring robust performance for demanding public safety applications.
Our Data Boundary for CJIS offers simple guardrails for agencies to achieve CJIS compliance, enabling them to easily set up data residency, access controls that restrict CJI access to CJIS-scoped personnel, and customer-managed encryption keys, and to configure essential policies such as log retention policies with continuous monitoring. This streamlines the path to compliance, reducing complexity for agencies while leveraging the latest technologies.
Security and compliance for agencies and enterprises
With Google Cloud, customers not only get CJIS-compliant solutions, they also gain access to our leading security capabilities. This includes our rigorous focus on secure-by-design technology and deep expertise from Google Threat Intelligence and Mandiant Cybersecurity Consulting, whose teams operate on the frontlines of cyber conflicts worldwide and maintain trusted partnerships with more than 80 governments around the world.
Contact us to learn more about how we are enabling public safety agencies to achieve CJIS compliance and leverage advanced cloud capabilities, and sign up for a 60-day free trial of Data Boundary via Assured Workloads here.
Today, Amazon Web Services (AWS) announced general availability of a new resource-level distributed denial of service (DDoS) mitigation capability for Application Load Balancers (ALB). This new AWS WAF DDoS protection is directly integrated with ALB as an on-host agent that detects and mitigates DDoS attacks from known malicious sources within seconds while maintaining service quality for legitimate traffic. The WAF resource-level DDoS protection for ALBs is built upon the existing IP reputation rule group to provide rapid protection against known attack sources through static rules. The feature efficiently rate-limits traffic based on both direct client IP addresses and proxy networks by inspecting DDoS indicators in X-Forwarded-For (XFF) headers.
Resource-level DDoS protection for ALBs can be configured to be active at all times or to be active only during high load conditions. You can enable this feature in AWS WAF for any Web ACL that is associated with ALB in all supported AWS Regions. See the AWS WAF pricing page for more details on Web ACL pricing.
To learn more about AWS WAF’s resource level DDoS protection, visit the AWS WAF documentation or the AWS WAF console. To get started, refer to our technical documentation for detailed information about enabling this feature to protect your web applications.
We are excited to announce the general availability of AWS Elastic Beanstalk in the Middle East (UAE) region.
AWS Elastic Beanstalk is a service that simplifies application deployment and management on AWS. The service automatically handles deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring, allowing developers to focus on writing code.
For a complete list of regions and service offerings, see AWS Regions.
To get started on AWS Elastic Beanstalk, see the AWS Elastic Beanstalk Developer Guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.
Amazon Cognito introduces AWS Web Application Firewall (AWS WAF) support in Cognito Managed Login. This new capability allows customers to protect their Managed Login endpoints configured in Cognito user pools from unwanted or malicious requests and web-based attacks. Managed Login, a fully-managed, hosted sign-in and sign-up experience that customers can personalize to align with their company or application branding, now offers an additional layer of protection against threat vectors through integration with AWS WAF web access control lists (web ACLs).
This integration provides customers with powerful new capabilities to safeguard their applications against malicious attacks. With AWS WAF support, you can now define rules that enforce rate limits, gain visibility into web traffic to your applications, and allow or block traffic to Cognito Managed Login based on your specific business or security requirements. Additionally, the AWS WAF integration enables you to optimize costs by controlling bot traffic to your Cognito user pools.
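Attaching a web ACL to a user pool follows the standard WAFv2 association flow. The sketch below shows the shape of that call; both ARNs are hypothetical placeholders you would replace with your own.

```python
# Hypothetical ARNs -- substitute your own Web ACL and user pool.
web_acl_arn = (
    "arn:aws:wafv2:us-east-1:123456789012:regional/webacl/"
    "cognito-protection/EXAMPLE-ID"
)
user_pool_arn = (
    "arn:aws:cognito-idp:us-east-1:123456789012:userpool/us-east-1_EXAMPLE"
)

def associate_waf_with_user_pool(web_acl_arn: str, user_pool_arn: str) -> None:
    """Attach a WAF web ACL to a Cognito user pool so that Managed Login
    requests are inspected against the ACL's rules."""
    import boto3  # imported here so the sketch stays importable offline

    client = boto3.client("wafv2")
    client.associate_web_acl(WebACLArn=web_acl_arn, ResourceArn=user_pool_arn)
```

Once associated, rate-limit and bot-control rules in the web ACL apply to sign-in and sign-up requests against the Managed Login endpoints.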
Managed Login and WAF support in Managed Login are offered as part of the Cognito Essentials and Plus tiers and are available in all AWS Regions where Amazon Cognito is available. Please note that AWS WAF charges apply for the inspection of user pool requests. For more information, see AWS WAF Pricing. To learn more, see Using AWS WAF to protect Amazon Cognito User Pools, and to get started, visit the Amazon Cognito console.
AWS Backup adds support for copying your Amazon S3 backups across AWS Regions and accounts in the AWS GovCloud (US) Regions.
With Amazon S3 backup copies in multiple AWS Regions, you can maintain separate, protected copies of your backup data to help meet compliance requirements for data protection and disaster recovery. Amazon S3 backup copies across accounts offers an additional layer of protection against inadvertent or unauthorized actions.
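In the AWS Backup API, cross-Region and cross-account copies are expressed as copy actions on a backup plan rule. The sketch below shows one such rule; the vault names, schedule, and ARN are illustrative values only.

```python
# Sketch of a backup plan rule that copies S3 backups to a vault in a
# second Region (a vault owned by another account works the same way
# for cross-account copies). All names and ARNs are hypothetical.
backup_rule = {
    "RuleName": "daily-s3-backup-with-copy",
    "TargetBackupVaultName": "primary-vault",
    "ScheduleExpression": "cron(0 5 * * ? *)",  # daily at 05:00 UTC
    "CopyActions": [
        {
            # Destination vault in a different GovCloud Region
            "DestinationBackupVaultArn": (
                "arn:aws-us-gov:backup:us-gov-east-1:123456789012:"
                "backup-vault:dr-vault"
            )
        }
    ],
}
```

A rule like this would be included in the `Rules` list passed to the `backup` client's `create_backup_plan` call.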
The capability to copy Amazon S3 backups across AWS Regions and accounts, supported in all AWS Commercial Regions, is now available in the AWS GovCloud (US) Regions. For more information on regional availability and pricing, see AWS Backup pricing page.
Amazon WorkSpaces Personal now allows you to route streaming traffic privately between your Amazon Virtual Private Cloud (VPC) and WorkSpaces virtual desktops using AWS PrivateLink, without the data ever traversing the public internet.
With this new capability, you can now stream your WorkSpaces through private IP addresses within your VPC, or from on-premises environments using AWS VPN or AWS Direct Connect. The feature helps you meet your compliance requirements by keeping streaming traffic within trusted networks.
To get started using PrivateLink with WorkSpaces, create a WorkSpaces VPC endpoint for DCV streaming protocol in the chosen Amazon VPC, then specify the VPC endpoint when creating a new WorkSpaces Personal directory or modifying an existing one. Your users will then use the VPC endpoint when they stream their DCV WorkSpaces.
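The first step above maps to creating an interface VPC endpoint. The sketch below shows the parameter shape for that call; the service name is a placeholder (discover the exact name for the DCV streaming endpoint service in your Region with `describe_vpc_endpoint_services`), and the VPC, subnet, and security group IDs are hypothetical.

```python
# Sketch of the parameters for creating the interface VPC endpoint used
# for DCV streaming. The ServiceName below is a placeholder, not the
# documented service name -- look it up in your Region first.
endpoint_params = {
    "VpcEndpointType": "Interface",
    "VpcId": "vpc-0123456789abcdef0",                # hypothetical VPC
    "ServiceName": "com.amazonaws.us-east-1.workspaces-streaming",  # placeholder
    "SubnetIds": ["subnet-0123456789abcdef0"],       # hypothetical subnet
    "SecurityGroupIds": ["sg-0123456789abcdef0"],    # hypothetical SG
    "PrivateDnsEnabled": True,
}

# A boto3 call like the following would create the endpoint:
# import boto3
# ec2 = boto3.client("ec2")
# response = ec2.create_vpc_endpoint(**endpoint_params)
```

The resulting endpoint ID is then specified on the WorkSpaces Personal directory, as described above.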
The feature is available for WorkSpaces Personal running DCV protocol in all AWS Regions where Amazon WorkSpaces is supported, except China (Ningxia) Region.
You can configure this feature through the AWS Management Console, AWS Command Line Interface (CLI), or Amazon WorkSpaces APIs. For detailed configuration instructions and requirements, please refer to the Amazon WorkSpaces documentation.
Today, AWS announced the expansion of 100 Gbps dedicated connections at the AWS Direct Connect location in the NTT Jakarta 2 data center near Jakarta, Indonesia. You can now establish private, direct network access to all public AWS Regions (except those in China), AWS GovCloud Regions, and AWS Local Zones from this location. This is the second AWS Direct Connect location in Jakarta to provide 100 Gbps connections with MACsec encryption capabilities.
The Direct Connect service enables you to establish a private, physical network connection between AWS and your data center, office, or colocation environment. These private connections can provide a more consistent network experience than those made over the public internet.
For more information on the over 142 Direct Connect locations worldwide, visit the locations section of the Direct Connect product detail pages. Or, visit our getting started page to learn more about how to purchase and deploy Direct Connect.
AWS Directory Service for Microsoft Active Directory, also known as AWS Managed Microsoft AD, and AD Connector are now available in the Asia Pacific (Taipei) Region.
Built on actual Microsoft Active Directory (AD), AWS Managed Microsoft AD enables you to migrate AD-aware applications while reducing the work of managing AD infrastructure in the AWS Cloud. You can use your Microsoft AD credentials to domain join EC2 instances, and also manage containers and Kubernetes clusters. You can keep your identities in your existing Microsoft AD or create and manage identities in your AWS managed directory.
AD Connector is a proxy that enables AWS applications to use your existing on-premises AD identities without requiring AD infrastructure in the AWS Cloud. You can also use AD Connector to join Amazon EC2 instances to your on-premises AD domain and manage these instances using your existing group policies.
AWS announces that Amazon SageMaker has contributed a custom transport ‘AmazonDataZoneTransport’ to the OpenLineage community and enhanced its automated lineage capabilities. These lineage enhancements include improved automated lineage capture from sources such as AWS Glue and Amazon Redshift, as well as from additional tools, enabling data scientists and engineers to work more efficiently with their data and models.
The new ‘custom transport’ contribution to the OpenLineage community allows builders to download the transport along with OpenLineage plugins to augment and automate lineage events captured from OpenLineage-enabled systems. With this, customers can automate lineage capture and send these lineage events to the SageMaker Unified Studio domain, enhancing data governance and traceability within their data workflows. Amazon SageMaker has also introduced enhanced automated lineage capabilities from various sources. These improvements include better support for lineage events from AWS Glue, Amazon Redshift, and automated lineage capture from tools such as vETL processes and notebooks. Additionally, SageMaker has improved its SQL lineage support, particularly for Amazon Redshift, with new features including support for stored procedures and materialized views. These enhancements enable automatic lineage capture of complex data operations, providing a more comprehensive view of data transformations and dependencies.
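Conceptually, an OpenLineage client selects a transport through its configuration. The sketch below shows what such a configuration might look like for routing events to a SageMaker Unified Studio domain; the transport identifier and field names here are illustrative assumptions, not the documented schema, so consult the OpenLineage and SageMaker documentation for the exact keys.

```python
# Sketch of an OpenLineage client configuration (the dict equivalent of
# an openlineage.yml file) that would route lineage events through the
# contributed AmazonDataZoneTransport to a SageMaker Unified Studio
# domain. The "type" value and "domainId" key are assumptions for
# illustration only.
openlineage_config = {
    "transport": {
        "type": "amazon_datazone_api",  # assumed transport identifier
        "domainId": "dzd_EXAMPLE123",   # hypothetical domain ID
    }
}

# An OpenLineage-enabled system (e.g. a Spark job with the OpenLineage
# plugin) configured this way would emit its lineage events to the
# specified domain instead of a generic HTTP endpoint.
```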
This feature is available in all AWS Regions where Amazon SageMaker is available.
To learn more about the custom transport contribution and enhanced lineage capabilities, visit the Amazon SageMaker page. For detailed information on how to get started with lineage using these new features, refer to the user documentation.
Amazon EventBridge now supports AWS CodeBuild batch builds as a target. This enhancement allows you to trigger concurrent and coordinated builds of a CodeBuild project using EventBridge, providing greater flexibility and control over your build processes.
The Amazon EventBridge Event Bus is a serverless event broker for creating scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. AWS CodeBuild is a fully managed continuous integration service that compiles source code, runs tests, and produces software packages that are ready to deploy. While the EventBridge Event Bus has long supported standard CodeBuild builds as targets, you can now also trigger batch builds. With batch builds, you can use features like build graphs, build lists, build matrices, and build fanouts in response to events from AWS services, SaaS partner applications, or your own applications. By combining EventBridge and batch builds, you can automate and orchestrate complex build workflows more effectively, leveraging concurrent and coordinated builds that automatically scale to meet your needs.
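The wiring follows the usual EventBridge rule-plus-target pattern. The sketch below shows one illustrative setup; the event pattern, project name, role, and target ID are hypothetical, and the exact target configuration that selects a batch build (rather than a standard build) is described in the EventBridge and CodeBuild documentation.

```python
# Sketch of wiring an EventBridge rule to a CodeBuild project target.
# The trigger here (an S3 object-created event) and all ARNs are
# illustrative; substitute your own event source and project.
rule_pattern = {
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],  # example trigger: new source upload
}

target = {
    "Id": "codebuild-batch-target",  # hypothetical target ID
    "Arn": "arn:aws:codebuild:us-east-1:123456789012:project/my-batch-project",
    "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeCodeBuildRole",
}

# boto3 calls that would create the rule and attach the target:
# import boto3, json
# events = boto3.client("events")
# events.put_rule(Name="trigger-batch-build",
#                 EventPattern=json.dumps(rule_pattern))
# events.put_targets(Rule="trigger-batch-build", Targets=[target])
```

The target's IAM role must allow EventBridge to start builds on the project.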
This feature is now available in all AWS Regions including the AWS GovCloud (US) Regions.
Today, AWS Security Incident Response announces integration with Amazon EventBridge. This integration enables customers to react, monitor, and orchestrate events associated with cases and memberships within AWS Security Incident Response. Amazon EventBridge is a service that can provide near real-time access to changes in data in AWS services, your own applications, and software as a service (SaaS) applications without writing code. With Amazon EventBridge acting as a central hub for changes in AWS Security Incident Response cases and memberships, customers can either route these events via Rules (for fan-out scenarios to one or more targets) or through Pipes (for point-to-point integrations with enhanced filtering, enrichment, and transformation capabilities).
With the Amazon EventBridge integration, customers can now create integrations between AWS Security Incident Response and third-party tooling or aggregate data to analyze using generative AI and other AWS tooling. For example, when AWS Security Incident Response proactively creates a case, Amazon EventBridge automation can trigger systems to notify stakeholders, which enables quicker response and minimizes barriers to engaging customer teams during potential security incidents. Customers and partners who manage multiple AWS environments can now leverage the Amazon EventBridge integration to monitor AWS Security Incident Response memberships, helping ensure their environments maintain a strong security posture for incident response.
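The stakeholder-notification example above starts with a rule that matches the service's events. The sketch below shows one plausible pattern; the event `source` and `detail-type` strings are assumptions, so check the AWS Security Incident Response event reference for the exact values the service emits.

```python
import json

# Sketch of an EventBridge rule pattern matching case updates from
# AWS Security Incident Response. The "source" and "detail-type"
# values are assumptions for illustration, not documented constants.
case_event_pattern = {
    "source": ["aws.security-ir"],   # assumed event source
    "detail-type": ["Case Update"],  # assumed detail type
}

# The pattern would back a rule whose target (e.g. an SNS topic)
# notifies stakeholders when a case changes:
# import boto3
# events = boto3.client("events")
# events.put_rule(Name="sir-case-updates",
#                 EventPattern=json.dumps(case_event_pattern))

print(json.dumps(case_event_pattern))
```

For point-to-point flows with filtering or enrichment, the same pattern can instead feed an EventBridge Pipe.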
Support for Amazon EventBridge is available in all AWS Regions where AWS Security Incident Response is available. To learn more, see the AWS Security Incident Response documentation. Get started today by visiting AWS Security Incident Response via the console, AWS Command Line Interface, or APIs. For additional information on EventBridge, visit the Amazon EventBridge page.