Today, Amazon Braket introduced a local device emulator that enables developers to test verbatim circuits with device-specific characteristics before running on quantum hardware. This feature accelerates development by providing early feedback on circuit compatibility and expected behavior under realistic noise conditions, helping customers validate their quantum programs and develop noise-aware algorithms without incurring hardware costs.
The local device emulator offers several key capabilities:
Validates circuit compatibility with target quantum devices, including qubit connectivity, native gate sets, and device topology constraints
Simulates quantum circuits with depolarizing channels applied to one-qubit and two-qubit gates based on device calibration data
Provides realistic predictions of program behavior on noisy hardware using local density matrix simulation
Supports both real-time and historical calibration data for device emulation
Braket users can instantiate device emulators directly from AWS quantum devices or from custom device properties using the Amazon Braket SDK. The emulator seamlessly integrates with existing workflows, allowing developers to catch compatibility issues early and efficiently iterate on their quantum algorithms before running them on actual quantum hardware.
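The noise model the emulator applies can be understood with a small density-matrix calculation. The sketch below is not the Braket SDK API, just the single-qubit depolarizing channel under one common convention (rho -> (1 - p) * rho + p * I / d), which is the kind of channel a local density matrix simulation applies to gates using error rates from device calibration data:

```python
import numpy as np

# Density matrix of the |+> state (H applied to |0>).
rho_plus = np.array([[0.5, 0.5],
                     [0.5, 0.5]])

def depolarize(rho: np.ndarray, p: float) -> np.ndarray:
    """One-qubit depolarizing channel: rho -> (1 - p) * rho + p * I / d."""
    d = rho.shape[0]
    return (1 - p) * rho + p * np.eye(d) / d

noisy = depolarize(rho_plus, 0.1)
print(np.trace(noisy))   # trace is preserved: 1.0
print(noisy[0, 1])       # off-diagonal coherence shrinks by (1 - p): 0.45
```

Applying such a channel after each one- and two-qubit gate, with p taken from calibration data, is why the emulator's results approximate what real hardware returns.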
Today, AWS Client VPN announces support for remote access to IPv6 workloads, allowing customers to establish secure VPN connections to their IPv6-enabled VPC resources. This new capability enables customers to meet their compliance and IPv6 network adoption goals. Now organizations can support IPv4, IPv6, and dual-stack resource connectivity through their Client VPN endpoints.
Previously, Client VPN supported remote access only to IPv4-enabled AWS workloads. With this feature, administrators can support IPv6-enabled resources using IPv6-only or dual-stack Client VPN endpoints, connecting remote users directly to IPv6 resources with end-to-end IPv6-only connectivity. For example, remote users on IPv6-enabled devices can now access IPv6-enabled resources in a VPC via Client VPN without traversing IPv4. This simplifies network architecture for organizations adopting IPv6 while preserving native protocol preferences.
This feature is available in all regions where AWS Client VPN is generally available, except the Middle East (Bahrain) Region, and comes with no additional cost. Customers can use IPv6 and dual-stack endpoints at the current endpoint per-hour price.
Today’s cybersecurity landscape requires partners with expertise and resources to handle any incident. Mandiant, a core part of Google Cloud Security, can empower organizations to navigate critical moments, prepare for future threats, build confidence, and advance their cyber defense programs.
We’re excited to announce that Google has been named a Leader in the IDC MarketScape: Worldwide Incident Response 2025 Vendor Assessment (doc #US52036825, August 2025). According to the report, “Mandiant, now a core part of Google Cloud Security, continues to be one of the most recognized and respected names in incident response. With over two decades of experience, Mandiant has built a reputation for responding to some of the world’s most complex and high-impact cyberincidents.” We believe this recognition reflects Mandiant’s frontline experience, earned over more than two decades of responding to such incidents.
We employ a tightly coordinated “team of teams” model, integrating specialized groups for forensics, threat intelligence, malware analysis, remediation, and crisis communications to assist our customers quickly and effectively.
Our expertise spans technologies and environments, from multicloud and on-premises systems to critical infrastructure. We help secure both emerging and mainstream technologies, including AI, Web3, cloud platforms, web applications, and identity systems.
“This structure allows Mandiant to deliver rapid, scalable, and highly tailored responses to incidents ranging from ransomware and nation-state attacks to supply chain compromises and destructive malware,” said the IDC MarketScape report.
SOURCE: “IDC MarketScape: Worldwide Incident Response 2025 Vendor Assessment” by Craig Robinson & Scott Tiazkun, August 2025, IDC # US52036825.
The IDC MarketScape vendor assessment model is designed to provide an overview of the competitive fitness of technology suppliers in a given market. The research methodology utilizes a rigorous scoring model based on both qualitative and quantitative criteria that results in a single graphical illustration of each supplier’s position within a given market. The Capabilities score measures supplier product, go-to-market, and business execution in the short term. The Strategy score measures alignment of supplier strategies with customer requirements in a 3-5-year timeframe. Supplier market share is represented by the size of the icons.
Differentiated, rapid, and holistic incident response
Speed is crucial in cyber-incident response, and cyber events can quickly become a reputational crisis. Helping customers address those concerns drives our Incident Response Services.
A key part of that is Mandiant’s Cyber Crisis Communication Planning and Response Services, launched in 2022. The IDC MarketScape noted, “The firm’s crisis communications practice, launched in 2022, is a unique offering in the IR space. Recognizing that cyberincidents are as much about trust and perception as they are about technology, Mandiant provides strategic communications support to help clients manage media inquiries, stakeholder messaging, and align with regulatory frameworks.”
Our approach combines robust remediation, recovery, and resilience solutions, including repeatable and cost-effective minimum viable business recovery environments. Designed to restore critical operations after ransomware attacks, our offerings can help reduce recovery timelines from weeks to days, or even hours.
The report notes, “A key differentiator is Mandiant’s integration with Google’s SecOps platform, which enables rapid deployment of investigative capabilities without the need for lengthy software installations. This allows Mandiant to begin triage and scoping within hours, leveraging existing client telemetry and augmenting it with proprietary forensic tools like FACT and Monocle. These tools allow for deep forensic interrogation of endpoints and orchestration of data collection at enterprise scale — capabilities that go beyond what traditional EDR platforms offer.”
Unparalleled access to threat intelligence
Google Threat Intelligence fuses Mandiant’s frontline expertise, the global reach of the VirusTotal community, and the visibility from Google’s services and devices — enhanced by AI. “As part of Google, Mandiant now benefits from unparalleled access to global infrastructure, engineering resources, and threat intelligence,” said the IDC MarketScape report.
By ensuring that this access is deeply embedded in Mandiant Incident Response and Consulting services, we can quickly identify threat actors, tactics, and indicators of compromise (IOCs). A dedicated threat intelligence analyst supports each incident response case, ensuring findings are contextualized and actionable.
Advancing cyber defenses with a strategic, global leader
For more than two decades, Mandiant experts have helped global enterprises respond to and recover from their worst days. We enable organizations to go beyond technology solutions, evaluate specific business threats, and strengthen their cyber defenses.
“Organizations that seek to work with a globally capable IR firm with strong threat intelligence capabilities and that utilizes a holistic approach to incident response that goes beyond the technical portions should consider Google,” said the report.
Starting today, Amazon GameLift Streams offers enhanced flexibility for managing default applications in stream groups. You can now create new stream groups without specifying a default application, and modify or remove the default application in existing stream groups.
The key improvements include:
Ability to create a stream group without a default application
Ability to select and modify the default application in an existing stream group
Ability to unlink a default application without having to delete the entire stream group
Automatic selection of a default application before streaming when none is set, ensuring the stream group always has a default when one is available
Each stream group can have one application set as default, which is pre-cached to help reduce stream startup time. You can now switch the default status between any linked applications to optimize performance without recreating stream groups.
To take advantage of these improvements, you can use the updated Amazon GameLift Streams APIs or the service console to manage your default application configurations. The UpdateStreamGroup API includes a new optional field for the default application identifier. The AssociateApplications and DisassociateApplications APIs have also been updated to handle default application changes. For more information about default applications, see Multi-application stream groups in the Developer Guide.
These enhancements provide a better experience as you build and scale your cloud gaming infrastructure.
Amazon Relational Database Service (Amazon RDS) for Oracle now supports the ECC384 Certificate Authority with two new ECDSA cipher suites for the Oracle Secure Sockets Layer (SSL) and Oracle Enterprise Manager (OEM) Agent options in Oracle Database versions 19c and 21c. The ECC384 Certificate Authority and ECDSA cipher suites provide security comparable to the RSA certificate authorities while using shorter keys, and deliver faster encryption with lower CPU usage.
The new ECDSA cipher suites supported with this option are TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 and TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384. To use these cipher suites, select ECC384 (rds-ca-ecc384-g1) as the Certificate Authority for your Amazon RDS for Oracle database instances.
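Oracle clients configure allowed suites through Oracle Net settings, but for generic OpenSSL-backed clients you can quickly confirm the suites are available to negotiate. The snippet below uses Python's ssl module and the OpenSSL spellings of the two IANA names; other TLS stacks expose similar introspection:

```python
import ssl

# OpenSSL names for the two suites RDS for Oracle now supports:
#   TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384 -> ECDHE-ECDSA-AES256-GCM-SHA384
#   TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384 -> ECDHE-ECDSA-AES256-SHA384
wanted = {"ECDHE-ECDSA-AES256-GCM-SHA384",
          "ECDHE-ECDSA-AES256-SHA384"}

ctx = ssl.create_default_context()
available = {c["name"] for c in ctx.get_ciphers()}
print(sorted(wanted & available))
```

Note that the CBC variant may be filtered out of hardened default cipher lists even when the underlying OpenSSL build supports it.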
At Cloud Next ’25, we announced the preview of Firestore with MongoDB compatibility, empowering developers to build cost-effective, scalable, and highly reliable apps on Firestore’s serverless database using a familiar MongoDB-compatible API. Today, we’re announcing that Firestore with MongoDB compatibility is now generally available.
With this launch, the 600,000 active developers within the Firestore community can now use existing MongoDB application code, drivers, and tools, as well as the open-source ecosystem of MongoDB integrations, with Firestore’s differentiated serverless service. Firestore provides benefits such as multi-region replication with strong consistency, virtually unlimited scalability, industry-leading high availability with an up to 99.999% SLA, single-digit millisecond read performance, integrated Google Cloud governance, and a cost-effective pay-as-you-go pricing model.
Firestore with MongoDB compatibility is attracting significant customer interest from diverse industries, including financial services, healthcare, retail, and manufacturing. We’re grateful for this engagement, which gave us the opportunity to prioritize features that enable key customer use cases. For instance, a prominent online retail company sought to migrate its product catalog from another document database to Firestore with MongoDB compatibility to maximize scalability and availability. To support this migration, the customer leveraged new capabilities such as unique indexes to guarantee distinct universal product identifiers. The customer is excited to migrate its production traffic to Firestore with MongoDB compatibility now that it’s generally available.
What’s new in Firestore with MongoDB compatibility
Based on direct customer feedback during the preview, we introduced new capabilities to Firestore with MongoDB compatibility, including expanded support for the Firestore with MongoDB compatibility API, enhanced enterprise readiness, and access from both Firebase and Google Cloud. Let’s take a closer look.
1. Expanded support for Firestore with MongoDB compatibility API and query language
Firestore with MongoDB compatibility API and query language now supports over 200 capabilities. Developers can now create richer applications by leveraging new stages and operators that enable joining data across collections, data analysis within buckets, and advanced querying capabilities including arrays, sets, arithmetic, type conversion, and bitwise operations. We also added support for creating indexes directly from the Firestore with MongoDB compatibility API, including the ability to create unique indexes that ensure distinct field values across documents within a collection. Furthermore, the Firestore Studio console editor now features a new JSON viewer and a data export tool. You can find a comprehensive list of Firestore with MongoDB compatibility capabilities in the documentation.
Utilize the MongoDB Query Language (MQL) to run queries like pinpointing optimal wishlist purchase conversions, using new operators and stages such as $setIntersection and $lookup.
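As a sketch of what such a query can look like, the aggregation pipeline below joins wishlists to orders with $lookup and intersects item sets with $setIntersection. The collection and field names (wishlists, orders, userId, items, productId) are hypothetical, not from a real schema; the pipeline would be passed to any MongoDB driver's aggregate call:

```python
# Hypothetical schema: a "wishlists" collection with userId and items,
# and an "orders" collection with userId and productId.
pipeline = [
    # Attach each user's orders to their wishlist document.
    {"$lookup": {
        "from": "orders",
        "localField": "userId",
        "foreignField": "userId",
        "as": "orders",
    }},
    # Wishlist items that were actually purchased = the intersection of
    # the wishlist with the productIds pulled in by the $lookup stage.
    {"$project": {
        "userId": 1,
        "converted": {"$setIntersection": ["$items", "$orders.productId"]},
    }},
]
# With pymongo this would run as: db.wishlists.aggregate(pipeline)
print(len(pipeline))
```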
2. Built for the enterprise
We have built Firestore with MongoDB compatibility to meet the needs of the enterprise, including new disaster recovery, change data capture, security, and observability features.
For disaster recovery, we’ve integrated Point-in-Time Recovery (PITR) to complement existing scheduled backups. This helps you recover from human errors, such as accidental data deletion, by enabling you to roll back to any point in time within the past seven days. We’ve also introduced database clones, allowing you to create an isolated copy of your database for staging, development, or analytics from any point-in-time recovery snapshot. Furthermore, we’ve incorporated managed export and import, enabling you to create a portable copy of your Firestore data in Cloud Storage for archival and other regulatory purposes.
Firestore offers multiple, easy-to-use, disaster recovery options including point-in-time recovery and scheduled backups.
For change data capture, trigger support has been added, enabling the configuration of server-side code to respond to document creation, updates, or deletions within your collections. This facilitates the replication of Firestore data changes to other services, such as BigQuery.
Regarding security, Private Google Access has been implemented, providing secure access from in-perimeter Google Cloud services with a private IP address, to a Firestore with MongoDB compatibility database. This connection option is available with no additional cost.
In terms of observability, Firestore with MongoDB compatibility now supports new metrics within a Firestore usage page. This simplifies the identification of MongoDB compatibility API calls that contribute to cost and traffic. This observability feature augments existing capabilities like query explain and query insights to help optimize usage.
3. Broader accessibility through Firebase in addition to Google Cloud
Finally, you can now access Firestore with MongoDB compatibility alongside all of your favorite developer services in Firebase, as well as in Google Cloud. This means you can manage Firestore with MongoDB compatibility from both the Firebase and Google Cloud consoles, and their respective command-line interfaces (CLI).
Create, manage and query your Firestore with MongoDB compatibility database using the Firebase Console.
“While containers make packaging apps easier, a powerful cluster manager and orchestration system is necessary to bring your workloads to production.”
Ten years ago, these words opened the blog post announcing Google Kubernetes Engine (GKE). The need for a managed Kubernetes platform is as important today as it was then, especially as workloads have evolved to meet increasingly complex demands.
One year before GKE’s arrival, we open-sourced large parts of our internal container management system, Borg, as Kubernetes. This marked the beginning of offering our own platforms to customers, and as we continue to use Kubernetes and GKE to power leading services like Vertex AI, we distill our learnings and best practices into GKE. Innovating and evolving GKE to meet the demands of our global platforms and services means that we deliver a best-in-class platform for our customers.
Enhanced flexibility with updated pricing
After ten years of evolution comes another shift: we are updating GKE to make sure every customer can balance efficiency, performance, and cost as effectively as possible. In September 2025, we’re moving to a single paid tier of GKE that comes with more features, and that lets you add features as needed. Now, every customer can take advantage of multi-cluster management features like Fleets, Teams, Config Management, and Policy Controller — all available with GKE Standard at no additional cost.
Flexibility and versatility are always in demand. The new GKE pricing structure provides à la carte access to additional features to meet the specific needs of all of your clusters. We want you to direct your resources toward the most impactful work at all times, and are confident that a single GKE pricing tier will help you manage your workloads — and your budgets — more effectively.
Optimized compute with Autopilot for every cluster
When we launched GKE Autopilot four years ago, we made Kubernetes accessible to every organization — no Kubernetes expertise required. More recently, we rolled out a new container-optimized compute platform, which delivers unique efficiency and performance benefits, ensuring you get the most out of your workload’s allocated resources, so you can serve more traffic with the same capacity, or existing traffic with fewer resources.
Now, we’re making Autopilot available for every cluster, including existing GKE Standard clusters, on an ad hoc, per-workload basis. Soon, you’ll be able to turn Autopilot on (and off) in any existing Standard cluster to take advantage of fully managed Kubernetes with better performance, at the best price.
Innovations for customer success
The technological landscape shifts quickly. A few years ago, organizations were thinking about stateless microservices; today, the focus is on running complex AI workloads. Throughout, we’re always adapting GKE to meet the needs of innovative companies for new scenarios.
We’re proud of the many incredible products and services built on GKE by customers ranging from startups, to enterprises, to AI companies. AI-powered advertising provider Moloco trains and runs its models with TPUs running on GKE. For nearly a decade, Signify has leveraged GKE as the foundation for its Philips Hue smart lighting platform worldwide. And AI unicorn Anthropic depends on GKE to deliver the scale they need to train and serve their models.
“GKE’s new support for larger clusters provides the scale we need to accelerate our pace of AI innovation.” – James Bradbury, Head of Compute, Anthropic
Foundations for consistent evolution
The AI era has just begun, and we’ve been pushing GKE to meet tomorrow’s workload demands today. With customer insights and Google’s best practices baked in, GKE is the ideal platform for developing and deploying AI at scale.
Thank you to our community, customers, and partners who have been on the GKE journey with us. Celebrate a decade of GKE with us by joining the GKE Turns 10 Hackathon and reading the 10 years of GKE ebook. The future is revealing itself every day, and GKE is ready for wherever AI takes us next.
You can now downgrade to minor Apache Airflow versions on Amazon Managed Workflows for Apache Airflow (MWAA).
Amazon MWAA is a managed orchestration service for Apache Airflow that makes it easier to set up and operate end-to-end data pipelines in the cloud. This in-place option allows you to downgrade your environment’s Apache Airflow version to any other supported minor version.
Apache, Apache Airflow, and Airflow are either registered trademarks or trademarks of the Apache Software Foundation in the United States and/or other countries.
We are excited to announce the general availability of AWS Elastic Beanstalk in Asia Pacific (Thailand), Asia Pacific (Malaysia), and Europe (Spain).
AWS Elastic Beanstalk is a service that simplifies application deployment and management on AWS. The service automatically handles deployment, capacity provisioning, load balancing, auto-scaling, and application health monitoring, allowing developers to focus on writing code.
For a complete list of regions and service offerings, see AWS Regions.
To get started on AWS Elastic Beanstalk, see the AWS Elastic Beanstalk Developer Guide. To learn more about Elastic Beanstalk, visit the Elastic Beanstalk product page.
Amazon RDS for Oracle now supports Redo Transport Compression, a feature that compresses redo data before it is transmitted to a standby database. By reducing the amount of data sent over the network, it improves redo transport performance. Faster redo log transport helps customers achieve a lower Recovery Point Objective (RPO), which is the amount of data loss that may occur when promoting a replica in situations where the primary instance becomes unavailable.
Compression and decompression of redo data consumes CPU resources on both the primary and standby databases. Customers should ensure that sufficient CPU capacity is available to handle this increased workload, and use Redo Transport Compression in situations where reduced network traffic and improved RPO outweigh the CPU overhead of compression. To enable the feature, set the redo_compression parameter in the Parameter Group for your database instance. Redo Transport Compression is available for both mounted and read replicas, and requires Oracle Enterprise Edition with Oracle Advanced Compression licensing.
Redo Transport Compression is available in all AWS Regions where Amazon RDS for Oracle is available. To learn more about using Redo Transport Compression, visit the Amazon RDS User Guide.
Today, we announced native image generation and editing in Gemini 2.5 Flash to deliver higher-quality images and more powerful creative control. Gemini 2.5 Flash Image is state of the art (SOTA) for both image generation and editing. For creative use cases, this means you can create richer, more dynamic visuals and edit images until they’re just right. Here are some ways you can use state-of-the-art native image generation in Gemini 2.5 Flash.
Multi-image fusion: Combine different images into one seamless new visual. You can use multiple reference images to create a single, unified image for use cases such as marketing, training, or advertising.
Character & style consistency: Maintain the same subject or visual style across multiple generations. Easily place the same character or product in different scenes without losing their identity, saving you from time-consuming fine-tuning.
Conversational editing: Edit images with simple, natural language instructions. From removing a person from a group photo to fixing a small detail like a stain, you can make changes through a simple conversation.
Developers and enterprises can access Gemini 2.5 Flash Image in preview today on Vertex AI.
Here’s how customers are leveraging Vertex AI to build next-gen visuals with Gemini 2.5 Flash Image
“With today’s addition of Google’s Gemini 2.5 Flash Image in Adobe Firefly and Adobe Express, people have even greater flexibility to explore their ideas with industry-leading generative AI models and create stunning content with ease. And with seamless integration across Creative Cloud apps, only Adobe delivers a complete creative workflow that takes ideas from inspiration to impact – empowering everyone with the freedom to experiment, the confidence to perfect every detail, and the control to make their work stand out.” – Hannah Elsakr, Vice President, New GenAI Business Ventures, Adobe
“In our evaluation, Gemini 2.5 Flash Image showed notable strengths in maintaining cross‑edit coherence — preserving both fine‑grained visual details and higher‑level scene semantics across multiple revision cycles. Combined with its low response times, this enables more natural, conversational editing loops and supports deployment in real‑time image‑based applications on Poe and through our API.” – Nick Huber, AI Ecosystem Lead, Poe (by Quora)
“Gemini 2.5 Flash Image is an incredible addition to Google’s gen media suite of models. We have tested it across multiple WPP clients and products and have been impressed with the quality of output. We see powerful use cases across multiple sectors, particularly retail, with its ability to combine multiple products into single frames, and CPG, where it maintains a high level of object consistency across frames. We are looking forward to integrating Gemini 2.5 Flash Image into WPP Open, our AI-enabled marketing services platform, and developing new production workflows.” – Daniel Barak, Global Creative and Innovation Lead, WPP
“For anyone working with visual content, Gemini 2.5 Flash Image is a serious upgrade. Placing products, keeping styles aligned, and ensuring character consistency can all be done in a single step. The model handles complex edits easily, producing results that look polished and professional instantly. Freepik has integrated it into the powerful AI suite powering image generation and editing to help creatives express the power of their ideas.” – Joaquin Cuenca, CEO, Freepik
“Editing requires the highest level of control in any creative process. Gemini 2.5 Flash Image meets that need head-on, delivering precise, iterative changes. It also exhibits extreme flexibility – allowing for significant adjustments to images while retaining character and object consistency. From our early testing at Leonardo.Ai, this model will enable entirely new workflows and creative possibilities, representing a true step-change in capability for the creative industry.” – JJ Fiasson, CEO, Leonardo.ai
Figma’s AI image tools now include Google’s Gemini 2.5 models, enabling designers to generate and refine images using text prompts—creating realistic content that helps communicate design vision.
Written by: Austin Larsen, Matt Lin, Tyler McLellan, Omar ElAhdan
Introduction
Google Threat Intelligence Group (GTIG) is issuing an advisory to alert organizations about a widespread data theft campaign carried out by the actor tracked as UNC6395. Beginning as early as Aug. 8, 2025, and continuing through at least Aug. 18, 2025, the actor targeted Salesforce customer instances through compromised OAuth tokens associated with the Salesloft Drift third-party application.
The actor systematically exported large volumes of data from numerous corporate Salesforce instances. GTIG assesses that the threat actor’s primary intent is to harvest credentials. After exfiltrating the data, the actor searched it for secrets that could potentially be used to compromise victim environments. GTIG observed UNC6395 targeting sensitive credentials such as Amazon Web Services (AWS) access keys (AKIA), passwords, and Snowflake-related access tokens. UNC6395 demonstrated operational security awareness by deleting query jobs; however, logs were not impacted, and organizations should still review relevant logs for evidence of data exposure.
Salesloft indicated that customers that do not integrate with Salesforce are not impacted by this campaign. There is no evidence indicating direct impact to Google Cloud customers; however, any customers that use Salesloft Drift should also review their Salesforce objects for any Google Cloud Platform service account keys.
On Aug. 20, 2025, Salesloft, in collaboration with Salesforce, revoked all active access and refresh tokens for the Drift application. In addition, Salesforce removed the Drift application from the Salesforce AppExchange until further notice, pending further investigation. This issue does not stem from a vulnerability within the core Salesforce platform.
GTIG, Salesforce, and Salesloft have notified impacted organizations.
Threat Detail
The threat actor executed queries to retrieve information associated with Salesforce objects such as Cases, Accounts, Users, and Opportunities. For example, the threat actor ran the following sequence of queries to get a unique count from each of the associated Salesforce objects.
SELECT COUNT() FROM Account;
SELECT COUNT() FROM Opportunity;
SELECT COUNT() FROM User;
SELECT COUNT() FROM Case;
Query to Retrieve User Data
SELECT Id, Username, Email, FirstName, LastName, Name, Title, CompanyName,
Department, Division, Phone, MobilePhone, IsActive, LastLoginDate,
CreatedDate, LastModifiedDate, TimeZoneSidKey, LocaleSidKey,
LanguageLocaleKey, EmailEncodingKey
FROM User
WHERE IsActive = true
ORDER BY LastLoginDate DESC NULLS LAST
LIMIT 20
Query to Retrieve Case Data
SELECT Id, IsDeleted, MasterRecordId, CaseNumber <snip>
FROM Case
LIMIT 10000
Recommendations
Given GTIG’s observations of data exfiltration associated with the campaign, organizations using Drift integrated with Salesforce should consider their Salesforce data compromised and are urged to take immediate remediation steps.
Impacted organizations should search for sensitive information and secrets contained within Salesforce objects and take appropriate action, such as revoking API keys, rotating credentials, and performing further investigation to determine if the secrets were abused by the threat actor.
Investigate for Compromise and Scan for Exposed Secrets
Search for the IP addresses and User-Agent strings provided in the IOCs section below. While this list includes IPs from the Tor network that have been observed to date, Mandiant recommends a broader search for any activity originating from Tor exit nodes.
Review Salesforce Event Monitoring logs for unusual activity associated with the Drift connection user.
Review authentication activity from the Drift Connected App.
Review UniqueQuery events that log executed SOQL queries.
Open a Salesforce support case to obtain specific queries used by the threat actor.
Search Salesforce objects for potential secrets, such as:
AKIA for long-term AWS access key identifiers
Snowflake or snowflakecomputing.com for Snowflake credentials
password, secret, or key to find potential references to credential material
Strings related to organization-specific login URLs, such as VPN or SSO login pages
Run tools like Trufflehog to find secrets and hardcoded credentials.
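The object search above can be scripted. The sketch below scans exported records with regular expressions for the indicators listed; the patterns and the sample record are illustrative heuristics, and a dedicated scanner such as Trufflehog will catch far more:

```python
import re

# Illustrative patterns: AKIA is the documented prefix for long-term AWS
# access key IDs; the other two are broad keyword heuristics.
PATTERNS = {
    "aws_access_key_id":  re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "snowflake_host":     re.compile(r"\b[\w-]+\.snowflakecomputing\.com\b"),
    "credential_keyword": re.compile(r"(?i)\b(password|secret|api[_-]?key)\b"),
}

def scan_record(record: dict) -> list:
    """Return (field, pattern_name) pairs for suspicious string values."""
    hits = []
    for field, value in record.items():
        for name, pattern in PATTERNS.items():
            if isinstance(value, str) and pattern.search(value):
                hits.append((field, name))
    return hits

# Hypothetical exported Case record.
case = {"Subject": "VPN help",
        "Description": "temp key AKIAIOSFODNN7EXAMPLE, password is hunter2"}
print(scan_record(case))
```

Any hit should trigger the credential rotation steps below, plus investigation of whether the secret was abused.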
Rotate Credentials
Immediately revoke and rotate any discovered keys or secrets.
Reset passwords for associated user accounts.
Configure session timeout values in Session Settings to limit the lifespan of a compromised session.
Today, we are excited to announce the general availability of seven highly expressive Amazon Polly Generative voices in English, French, and Polish. Amazon Polly is a fully managed service that turns text into lifelike speech, enabling developers and builders to add speech to their applications for conversational AI or for speech content creation.
We are excited to share that Amazon Polly today launches one new male-sounding generative voice, Canadian French – Liam, together with six new female-sounding generative voices: US English – Salli, Belgian French – Isabelle, French – Celine, Canadian French – Gabrielle, Polish – Ola, and Polish – Ewa. This launch expands the number of voices available on Polly’s Generative TTS engine to twenty-seven diverse voices.
With this release, Polly now offers six male-sounding voices (Canadian French – Liam, French – Rémi, German – Daniel, US Spanish – Pedro, Spain Spanish – Sergio, and Mexico Spanish – Andrés) that speak multiple languages while maintaining the same vocal identity as the US English voice Matthew. Having the same voice identity while speaking multiple languages natively enables customers to switch from one language to another while preserving brand identity across regions/locales. This is made possible by Amazon Polly’s GenAI-based polyglot capability, where a single voice is able to synthesize speech in multiple languages.
All generative voices are accessible in the US East (N. Virginia), Europe (Frankfurt), and US West (Oregon) Regions and complement the other types of voices already available in the same Regions.
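As a minimal sketch of how one of these voices would be used, the helper below assembles the parameters for a Polly `SynthesizeSpeech` call with the generative engine; the actual API call (commented out) assumes boto3 and configured AWS credentials:

```python
# Sketch only: parameter names follow the Polly SynthesizeSpeech API.

def build_synthesis_request(text, voice_id, language_code=None):
    """Assemble keyword arguments for polly.synthesize_speech()."""
    params = {
        "Engine": "generative",   # select the Generative TTS engine
        "OutputFormat": "mp3",
        "Text": text,
        "VoiceId": voice_id,
    }
    # For polyglot voices, an explicit language code selects the language
    # while keeping the same vocal identity.
    if language_code:
        params["LanguageCode"] = language_code
    return params

# Usage (requires boto3 and AWS credentials):
# import boto3
# polly = boto3.client("polly")
# audio = polly.synthesize_speech(**build_synthesis_request("Bonjour!", "Liam", "fr-CA"))
```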
AWS Transform for .NET now supports Azure DevOps repos alongside GitHub, GitLab, and BitBucket. You can connect your Azure DevOps repositories directly to AWS Transform to discover, assess, transform hundreds of repositories in parallel, and run unit tests. AWS Transform automatically resolves dependencies from Azure Artifacts NuGet packages during transformation, helping you modernize .NET Framework applications from Windows to Linux-ready cross-platform .NET.
Now, you can modernize your .NET applications while continuing to work within your familiar Azure DevOps workflows.
The Azure DevOps connector is now available in all Regions where AWS Transform is available. To get started, visit the product page and documentation.
Starting today, AWS Deadline Cloud supports running Maxon Cinema 4D and Redshift render jobs on Linux service-managed fleets. AWS Deadline Cloud is a fully managed service that simplifies render management for teams creating computer-generated graphics and visual effects, for films, television and broadcasting, web content, and design.
Previously available only on Windows fleets, Cinema 4D and Redshift jobs can now run on Linux service-managed fleets, reducing compute costs for workers. AWS Deadline Cloud automatically handles the provisioning and elastic scaling of the compute resources required to render your Cinema 4D and Redshift projects. Service-managed fleets can be configured in minutes so you can begin rendering immediately.
Cinema 4D and Redshift are available on Linux service-managed fleets in all AWS Regions where AWS Deadline Cloud is currently offered. To learn more about AWS Deadline Cloud, visit the AWS Deadline Cloud documentation.
Amazon Aurora DSQL now supports application resiliency testing with AWS Fault Injection Service (FIS), a fully managed service for running controlled fault injection experiments to improve application performance, observability, and resilience. With this launch, customers can simulate real-world scenarios that disrupt connections to Aurora DSQL clusters, such as during regional failures, enabling them to observe how their applications respond to these disruptions and validate their resilience mechanisms.
Aurora DSQL is the fastest serverless, distributed SQL database with active-active high availability and multi-Region strong consistency. The new FIS action creates scenarios where applications need to handle connection disruptions or complete inaccessibility to an Aurora DSQL cluster in an AWS Region, enabling customers to test application resilience and recovery capabilities. This lets customers test and build confidence that their applications respond as intended when experiencing connection issues, whether they operate within a single Region or across multiple Regions. Customers can create experiment templates in FIS to integrate experiments with continuous integration and release testing. Customers can also generate detailed reports of their FIS experiments and store them in Amazon S3, enabling them to audit and demonstrate compliance with both organizational and regulatory resilience testing requirements.
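An FIS experiment of this kind starts from an experiment template. The sketch below builds a template body for the real `create_experiment_template` API; the action identifier, resource type, and target key names are illustrative assumptions — consult the Aurora DSQL FIS actions documentation for the exact values:

```python
# Sketch of an FIS experiment template that disrupts connectivity to an
# Aurora DSQL cluster. "aws:dsql:disrupt-connectivity" and "aws:dsql:cluster"
# are placeholders for illustration, not confirmed identifiers.

def build_experiment_template(cluster_arn, role_arn, duration="PT5M"):
    return {
        "description": "Disrupt connections to an Aurora DSQL cluster",
        "targets": {
            "dsql-cluster": {
                "resourceType": "aws:dsql:cluster",        # illustrative
                "resourceArns": [cluster_arn],
                "selectionMode": "ALL",
            }
        },
        "actions": {
            "disrupt": {
                "actionId": "aws:dsql:disrupt-connectivity",  # illustrative
                "parameters": {"duration": duration},          # ISO 8601 duration
                "targets": {"Clusters": "dsql-cluster"},
            }
        },
        "stopConditions": [{"source": "none"}],
        "roleArn": role_arn,
    }

# Usage (requires boto3 and FIS permissions):
# import boto3
# fis = boto3.client("fis")
# fis.create_experiment_template(**build_experiment_template(cluster_arn, role_arn))
```

Templates like this can then be invoked from CI pipelines to make connection-disruption testing part of release validation.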
Aurora DSQL support for FIS is available in the following AWS Regions: US East (N. Virginia), US East (Ohio), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Europe (Ireland), Europe (London), and Europe (Paris). To get started, visit the Aurora DSQL FIS actions documentation.
AWS B2B Data Interchange now supports custom validation rules for X12 EDI documents, enabling you to expand and alter the validation logic of the X12 ANSI standard to align with custom agreements with your trading partners.
AWS B2B Data Interchange automates validation, transformation, and generation of Electronic Data Interchange (EDI) documents such as ANSI X12 documents to and from JSON and XML data formats. With this launch, you can expand and alter the validation logic of the X12 ANSI standard. You can choose whether certain elements must be present, and which element lengths and values are allowed, for documents to pass validation. AWS B2B Data Interchange will automatically validate X12 EDI documents against a combination of the X12 standard and your custom rules. Validation status will be communicated in a generated functional acknowledgment X12 EDI document (997/999) and in an emitted EventBridge event. In case of validation failure, AWS B2B Data Interchange will also generate a human-readable plain-language explanation of the validation errors and store it alongside your output files. You can use these events and data to trigger and streamline your validation remediation workflow, reducing the time and costs to process your X12 documents.
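To act on validation failures, you would typically attach an EventBridge rule to the emitted events. The sketch below builds such a rule pattern; the event source `aws.b2bi` and the detail field name are assumptions for illustration — inspect the events emitted in your account for the exact shape:

```python
import json

# Sketch of an EventBridge rule pattern matching B2B Data Interchange events
# that report a failed X12 validation. Field names here are illustrative.

def failed_validation_pattern():
    pattern = {
        "source": ["aws.b2bi"],                        # assumed event source
        "detail": {"validation-status": ["FAILED"]},   # assumed detail field
    }
    return json.dumps(pattern)

# Usage (requires boto3):
# import boto3
# events = boto3.client("events")
# events.put_rule(Name="x12-validation-failures",
#                 EventPattern=failed_validation_pattern())
```

The rule's target (a Lambda function or queue) can then kick off your remediation workflow.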
Support for custom validation rules for X12 EDI documents is available in all AWS Regions where the AWS B2B Data Interchange service is available. To get started with building event-driven EDI workloads on AWS B2B Data Interchange, take the self-paced workshop or refer to the AWS B2B Data Interchange user guide.
Amazon Connect Contact Lens now supports external voice in the Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London) AWS Regions. Amazon Connect integrates with other voice systems for real-time and post-call analytics to help improve customer experience and agent performance with your existing voice system.
Amazon Connect Contact Lens provides call recordings, conversational analytics (including contact transcripts, generative AI post-contact summaries, sensitive data redaction, contact categorization, theme detection, sentiment analysis, and real-time alerts), and generative AI for automating evaluations of up to 100% of customer interactions (including evaluation forms, automated evaluations, and supervisor review), with a rich user experience to display, search, and filter customer interactions, and programmatic access to data streams and the data lake. If you are an existing Amazon Connect customer, you can expand your use of Contact Lens to other voice systems for consistent analytics in a single data warehouse. If you want to migrate your contact center to Amazon Connect, you can start with Contact Lens analytics and performance insights before migrating your agents.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) G6 instances powered by NVIDIA L4 GPUs are available in the Middle East (UAE) Region. G6 instances can be used for a wide range of graphics-intensive and machine learning use cases.
Customers can use G6 instances for deploying ML models for natural language processing, language translation, video and image analysis, speech recognition, and personalization as well as graphics workloads, such as creating and rendering real-time, cinematic-quality graphics and game streaming. G6 instances feature up to 8 NVIDIA L4 Tensor Core GPUs with 24 GB of memory per GPU and third generation AMD EPYC processors. They also support up to 192 vCPUs, up to 100 Gbps of network bandwidth, and up to 7.52 TB of local NVMe SSD storage.
Amazon EC2 G6 instances are already available today in the AWS US East (N. Virginia and Ohio), US West (Oregon), Europe (Frankfurt, London, Paris, Spain, Stockholm, and Zurich), Asia Pacific (Mumbai, Tokyo, Malaysia, Seoul, and Sydney), South America (Sao Paulo), and Canada (Central) Regions. Customers can purchase G6 instances as On-Demand Instances, Reserved Instances, Spot Instances, or as part of Savings Plans.
Today, we are announcing support for Bring Your Own Knowledge Graph (BYOKG) for Retrieval-Augmented Generation (RAG) using the open-source GraphRAG Toolkit. This new capability allows customers to connect their existing knowledge graphs to large language models (LLMs), enabling generative AI applications that deliver more accurate, context-rich, and explainable responses grounded in trusted, structured data.
Previously, customers who wanted to use their own curated graphs for RAG had to build custom pipelines and retrieval logic to integrate graph queries into generative AI workflows. With BYOKG support, developers can now directly leverage their domain-specific graphs, such as those stored in Amazon Neptune Database or Neptune Analytics, through the GraphRAG Toolkit. This makes it easier to operationalize graph-aware RAG, reducing hallucinations and improving reasoning over multi-hop and temporal relationships. For example, a fraud investigation assistant can query a financial services company’s knowledge graph to surface suspicious transaction patterns and provide analysts with contextual explanations. Similarly, a telecom operations chatbot can detect that a series of linked cell towers are consistently failing, trace the dependency paths to affected network switches, and then guide technicians using SOP documents on how to resolve the issue. Developers simply configure the GraphRAG Toolkit with their existing graph data source, and it will orchestrate retrieval strategies that use graph queries alongside vector search to enhance generative AI outputs.
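Conceptually, a graph-aware retriever combines hits from graph queries with vector-search results before prompting the LLM. The toolkit-agnostic sketch below illustrates that merging step; the two retriever callables are placeholders, not GraphRAG Toolkit APIs:

```python
# Toolkit-agnostic sketch: merge passages from a graph query (e.g. multi-hop
# neighbors of entities mentioned in the question) with vector-search hits,
# preferring graph hits and dropping duplicates.

def graph_aware_retrieve(question, graph_search, vector_search, top_k=5):
    """Merge graph and vector hits, graph hits first, deduplicated."""
    merged, seen = [], set()
    for passage in graph_search(question) + vector_search(question):
        if passage not in seen:
            seen.add(passage)
            merged.append(passage)
    return merged[:top_k]
```

In the toolkit, this orchestration is handled for you once the graph data source (such as Amazon Neptune) is configured; the sketch only shows why graph context can surface multi-hop relationships that vector search alone would miss.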