Amazon Q Developer in chat applications now supports AWS Systems Manager just-in-time node access approvals from Microsoft Teams and Slack. AWS customers can now monitor node access requests and approvals from chat channels to enhance security posture and meet compliance requirements.
Just-in-time node access provides customers with policy-based, time-bound access to nodes and helps them comply with a zero-standing-privileges operations model. This launch provides seamless integration for managing just-in-time access request approvals in chat applications.
When configuring just-in-time approval policies, customers can designate Amazon SNS topics associated with Amazon Q Developer in chat applications configurations for managing node access approval requests. As operators make new node access requests, approvers are notified about the requests in the chat channels. They can then approve or reject access requests directly from the chat channel.
Systems Manager node access approval management in chat applications is available at no additional cost in AWS Regions where Amazon Q Developer and Systems Manager just-in-time node access are offered. Visit the user guide and Systems Manager pricing to get started.
Today, AWS announced managed support for Energy Data Insights (EDI) on AWS, delivered through AWS Managed Services (AMS), which enables energy customers to easily deploy, manage, and operate their subsurface data management platform on AWS in compliance with the OSDU® standard. Now, you can automatically deploy EDI on AWS and accelerate your data ingestion from weeks to hours, and intelligently process and organize your subsurface data with minimal manual effort. AWS extends your team with operational capabilities, allowing you to focus on innovation and accelerating time to value with your subsurface data.
With AWS-provided managed support, EDI on AWS removes the undifferentiated heavy-lifting and the complexities of deploying, operating, and maintaining an OSDU Data Platform on AWS, optimizing your EDI operations and security while ensuring round-the-clock availability and protection of the service. AWS handles critical operations on your behalf such as incident management, and backup and restore, significantly improving the resilience of your OSDU Data Platform on AWS. You also receive timely support for application upgrades and patches, allowing you to stay current with the latest features and improvements.
EDI on AWS is available with pay-as-you-go pricing in the following AWS Regions: US East (N. Virginia), US West (Oregon), Asia Pacific (Singapore), Asia Pacific (Sydney), Europe (Ireland), Europe (Paris), and South America (São Paulo).
Starting today, you can use Amazon Route 53 Resolver DNS Firewall and DNS Firewall Advanced in the Asia Pacific (Thailand) and Mexico (Central) Regions, to govern and filter outbound DNS traffic for your Amazon Virtual Private Cloud (VPC).
Route 53 Resolver DNS Firewall is a managed service that enables you to block DNS queries made for domains identified as low-reputation or suspected to be malicious, and to allow queries for trusted domains. In addition, Route 53 Resolver DNS Firewall Advanced is a capability of DNS Firewall that allows you to detect and block DNS traffic associated with Domain Generation Algorithms (DGA) and DNS Tunneling threats. DNS Firewall can be enabled only for Route 53 Resolver, which is a recursive DNS server that is available by default in all Amazon Virtual Private Clouds (VPCs). The Route 53 Resolver responds to DNS queries from AWS resources within a VPC for public DNS records, VPC-specific domain names, and Route 53 private hosted zones.
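As a rough illustration of the workflow the paragraph above describes, the sketch below creates a domain list, a rule group with a BLOCK rule, and associates the rule group with a VPC using boto3. Names, priorities, and the example domains are illustrative; the API calls are from the Route 53 Resolver API.

```python
import uuid


def block_rule_params(rule_group_id: str, domain_list_id: str) -> dict:
    """Pure helper: assemble parameters for a BLOCK rule (easy to unit-test)."""
    return {
        "CreatorRequestId": str(uuid.uuid4()),
        "FirewallRuleGroupId": rule_group_id,
        "FirewallDomainListId": domain_list_id,
        "Priority": 100,
        "Action": "BLOCK",
        "BlockResponse": "NXDOMAIN",  # answer NXDOMAIN instead of resolving the domain
        "Name": "block-low-reputation-domains",
    }


def block_domains_for_vpc(vpc_id: str, domains: list[str]) -> None:
    import boto3  # imported lazily so the sketch can be read without the SDK installed

    r53r = boto3.client("route53resolver")
    domain_list_id = r53r.create_firewall_domain_list(
        CreatorRequestId=str(uuid.uuid4()), Name="suspicious-domains"
    )["FirewallDomainList"]["Id"]
    r53r.update_firewall_domains(
        FirewallDomainListId=domain_list_id, Operation="ADD", Domains=domains
    )
    rule_group_id = r53r.create_firewall_rule_group(
        CreatorRequestId=str(uuid.uuid4()), Name="vpc-egress-dns-rules"
    )["FirewallRuleGroup"]["Id"]
    r53r.create_firewall_rule(**block_rule_params(rule_group_id, domain_list_id))
    # The rules only take effect once the rule group is associated with a VPC.
    r53r.associate_firewall_rule_group(
        CreatorRequestId=str(uuid.uuid4()),
        FirewallRuleGroupId=rule_group_id,
        VpcId=vpc_id,
        Priority=101,
        Name="vpc-egress-dns-firewall",
    )
```

An `ALERT` action can be used instead of `BLOCK` to monitor matches before enforcing them.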
See here for the list of AWS Regions where Route 53 Resolver DNS Firewall is available. Visit our product page and documentation to learn more about Amazon Route 53 Resolver DNS Firewall and its pricing.
Amazon Neptune Database now supports Graviton3-based R7g and Graviton4-based R8g database instances for Amazon Neptune engine versions 1.4.5 and above, priced up to 16% lower than R6g instances.
Graviton3-based R7g instances are the first AWS database instances to feature the latest DDR5 memory, which provides 50% more memory bandwidth compared to DDR4, enabling high-speed access to data in memory. R7g database instances offer up to 30 Gbps enhanced networking bandwidth and up to 20 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS). Graviton4-based R8g instances offer larger instance sizes, up to 48xlarge, and feature an 8:1 ratio of memory to vCPU along with the latest DDR5 memory.
R7g instances for Neptune are now available in: US East (N. Virginia), US East (Ohio), US West (N. California), US West (Oregon), Europe (Ireland), Europe (London), Asia Pacific (Hong Kong), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Mumbai), Asia Pacific (Hyderabad), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Malaysia), Canada (Central), Europe (Frankfurt), Europe (Stockholm), Europe (Spain), and South America (São Paulo). R8g instances for Neptune are now available in: US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Asia Pacific (Tokyo), Asia Pacific (Mumbai), Asia Pacific (Sydney), Europe (Frankfurt), Europe (Stockholm), and Europe (Spain). You can launch R7g and R8g instances for Neptune using the AWS Management Console or the AWS CLI. Upgrading a Neptune cluster to R7g or R8g instances requires a simple instance type modification for Neptune engine versions 1.4.5 or higher. For more information on pricing and regional availability, refer to the Amazon Neptune pricing page.
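The instance type modification mentioned above can be sketched with boto3 as below; the instance identifier and target class are illustrative.

```python
def modify_params(instance_id: str, instance_class: str) -> dict:
    """Pure helper: parameters for the instance-class modification."""
    return {
        "DBInstanceIdentifier": instance_id,
        "DBInstanceClass": instance_class,
        "ApplyImmediately": True,  # apply now rather than in the next maintenance window
    }


def upgrade_to_graviton(instance_id: str, instance_class: str = "db.r8g.xlarge"):
    import boto3  # lazy import: sketch is readable without the SDK installed

    neptune = boto3.client("neptune")
    return neptune.modify_db_instance(**modify_params(instance_id, instance_class))
```

Setting `ApplyImmediately=False` defers the change to the maintenance window, which may be preferable for production clusters.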
It’s a core part of our mission at Google Cloud to help you meet your evolving policy, compliance, and business objectives. To help further strengthen the security of your cloud environment, we continue regular delivery of new security controls and capabilities on our cloud platform.
We announced at Google Cloud Next multiple new capabilities in our IAM, Access Risk, and Cloud Governance portfolio. Our announcements covered a wide range of new product capabilities and security enhancements in Google Cloud, including:
Identity and Access Management (IAM)
Access Risk products including VPC Service Controls, Context-Aware Access and Identity Threat Detection and Response
Cloud Governance with Organization Policy Service
Resource Management
We also announced new AI capabilities to help cloud developers and operators at every step of the application lifecycle. These new capabilities take an application-centered approach and embed AI assistance throughout the application development lifecycle, driven by new features in Gemini Code Assist and Gemini Cloud Assist.
What’s new in Identity and Access Management
Workforce Identity Federation
Workforce Identity Federation extends Google Cloud’s identity capabilities to support syncless, attribute-based single sign-on. Over 95% of Google Cloud products now support Workforce Identity Federation. We also released support for FedRAMP High government requirements to help manage and satisfy compliance mandates.
Enhanced security for non-human identities
With the rise of microservices and the popularity of multicloud deployments, non-human and workload identities are growing rapidly, much faster than human identities. Many large enterprises now have between 10 and 45 times more non-human identities than human (user) identities, often with expansive permissions and privileges.
Securing non-human identities is a key goal for Google Cloud, and we are announcing two new capabilities to enhance authorization and access protection:
Keyless access to Google Cloud APIs using X.509 certificates, to further strengthen workload authentication.
Cloud Infrastructure Entitlement Management (CIEM) for multicloud
Across the security landscape, we are contending with the problem of excessive and often unnecessary widely-granted permissions. At Google Cloud, we work to proactively address the permission problem with tools that can help you control permission proliferation, while also providing comprehensive defense across all layers.
Cloud Infrastructure Entitlement Management (CIEM), our key tool for addressing permission issues, is now available for Azure (in preview) and generally available for Google Cloud and AWS.
IAM Admin Center
We also announced IAM Admin Center, a single-pane-of-glass experience customized to your role that showcases recommendations, notifications, and active tasks. You can also launch into other services directly from the console.
IAM Admin Center will provide organization administrators and project administrators a unified view to discover, learn, test, and use IAM capabilities. It will provide contextual discovery of features, enable focus on day-to-day tasks, and offer curated getting-started guides and resources for continuous learning.
Additionally, other IAM features grew in coverage and in feature depth.
Previously, we announced IAM Deny and Principal access boundary (PAB) policies, powerful mechanisms to set policy-based guardrails on access to resources. As these important controls continue to grow in service coverage and adoption, now there is a need for tooling to simplify planning and visualize impact.
What’s new with Access Risk
Comprehensive security demands continuous monitoring and control even with authenticated users and workloads equipped with the right permissions and engaged in active sessions. Google Cloud’s access risk portfolio brings dynamic capabilities that layer additional security controls around users, workloads, and data.
Enhanced access and session security
Today, you can use Context-Aware Access (CAA) to secure access to Google Cloud based on attributes including user identity, network, location, and corporate-managed devices.
Coming soon, CAA will be further enhanced with Identity Threat Detection and Response (ITDR) capabilities, using numerous activity signals, such as activity from a suspicious source or a new geo location, to automatically identify risky behavior, and trigger further security validations using mechanisms such as multi-factor authentication (MFA), re-authentication, or denials.
We also announced automatic re-authentication, which triggers a re-authentication request when users perform highly sensitive actions such as updating billing accounts. This will be enabled by default; while you can opt out, we strongly recommend keeping it turned on.
Expanded coverage for VPC Service Controls
VPC Service Controls lets you create perimeters that protect your resources and data for services that you explicitly specify. To speed up diagnosis and troubleshooting when using VPC Service Controls, we launched Violation Analyzer and Violation Dashboard to help you diagnose access denial events.
What’s new in Cloud Governance with Organization Policy Service
Expanded coverage for Custom Organization Policy
Google Cloud’s Organization Policy Service gives you centralized, programmatic control over your organization’s resources. Organization Policy already provides predefined constraints, but for greater control you can create custom organization policies. Custom organization policy has now expanded service coverage, with 62 services supported.
Google Cloud Security Baseline
Google Cloud strives to make good security outcomes easier for customers to achieve. As part of this continued effort, we are releasing an updated and stronger set of security defaults, our Google Cloud Security Baseline. These were rolled out to all new customers last year — enabled by default — and based on positive feedback, we are now recommending them to all existing customers.
Starting this year, existing customers are seeing recommendations in their console to adopt the Google Cloud Security Baseline. You also have access to a simulator that tests how these constraints will impact your current environment.
What’s new with resource management
App-enablement with Resource Manager
We also extended our application-centric approach to Google Cloud’s Resource Manager. App-enabled folders, now in preview, streamline application management by organizing services and workloads into a single manageable unit, providing centralized monitoring and management, simplifying administration, and providing an application-centric view.
You can now enable application management on folders in a single step.
Learn more
To learn more, you can view the Next ‘25 session recording with an overview of these announcements.
We’re excited to announce the general availability of Amazon Nova Premier, our most capable multimodal foundation model for complex tasks such as processing long documents, videos, large codebases, and executing multistep agentic workflows. It is also our most capable teacher model and can be used with Amazon Bedrock Model Distillation to create custom distilled models for specific needs.
Nova Premier extends the capabilities available from Amazon Nova understanding models with several key improvements, including:
Superior intelligence: The model scores 87.4% on the Massive Multitask Language Understanding (MMLU) benchmark for undergraduate-level knowledge, 82.0% on Math500 for mathematical problems, and 84.6% on the CharXiv benchmark for chart understanding.
Improved agentic capabilities: Nova Premier can perform end-to-end actions on behalf of the user, enabling more complex workflows such as Retrieval-Augmented Generation (RAG), function calling, and agentic coding. The model scores 86.3% on SimpleQA with RAG, 63.7% on the Berkeley Function Calling Leaderboard (BFCL), and 42.4% on SWE-bench Verified for software engineering tasks.
Longer context: The model offers a context window of one million tokens. This enables analysis of bigger data sets like large codebases, multiple documents and images, documents longer than 400 pages, or 90-minute-long videos.
Nova Premier is also the fastest and most cost-effective proprietary model in its intelligence tier in Amazon Bedrock. With Nova Premier and Amazon Bedrock Model Distillation, you can now create highly capable, cost-effective, and low-latency versions of Nova Pro, Lite, and Micro for your specific needs. For example, we used Nova Premier to distill Nova Pro for complex tool selection and API calling. The distilled Nova Pro had a 20% higher accuracy for API invocations compared to the base model and consistently matched the performance of the teacher, with the speed and cost benefits of Nova Pro.
Nova Premier is available in Amazon Bedrock in the following AWS Regions: US East (N. Virginia), US East (Ohio), and US West (Oregon) through cross-Region inference.
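A minimal sketch of invoking the model through the Bedrock Converse API is below. The cross-Region inference profile ID `us.amazon.nova-premier-v1:0` is an assumption; check the Bedrock model catalog for the exact identifier.

```python
MODEL_ID = "us.amazon.nova-premier-v1:0"  # assumed inference profile ID; verify in the console


def converse_request(prompt: str) -> dict:
    """Pure helper: a minimal Converse API request body."""
    return {
        "modelId": MODEL_ID,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.3},
    }


def ask_nova_premier(prompt: str) -> str:
    import boto3  # lazy import so the sketch is readable without the SDK installed

    brt = boto3.client("bedrock-runtime", region_name="us-east-1")
    resp = brt.converse(**converse_request(prompt))
    return resp["output"]["message"]["content"][0]["text"]
```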
Today, AWS Resource Explorer has expanded the availability of resource search and discovery to 3 additional AWS Regions: Asia Pacific (Malaysia), Asia Pacific (Thailand), and Mexico (Central).
With AWS Resource Explorer you can search for and discover your AWS resources across AWS Regions and accounts in your organization, either using the AWS Resource Explorer console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the unified search bar from wherever you are in the AWS Management Console.
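A brief boto3 sketch of the search capability described above; the query string syntax (`resourcetype:` and `region:` filters) follows the Resource Explorer query reference, and the example region is illustrative.

```python
def search_params(query: str, max_results: int = 50) -> dict:
    """Pure helper: parameters for a Resource Explorer search."""
    return {"QueryString": query, "MaxResults": max_results}


def find_resources(query: str = "resourcetype:ec2:instance region:ap-southeast-5"):
    import boto3  # lazy import: sketch stays readable without the SDK installed

    rex = boto3.client("resource-explorer-2")
    resp = rex.search(**search_params(query))
    # Return the ARNs of matching resources across the indexed Regions.
    return [r["Arn"] for r in resp["Resources"]]
```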
For more information about the AWS Regions where AWS Resource Explorer is available, see the AWS Region table.
Amazon SageMaker now offers a unified scheduling experience for visual ETL flows and queries. The next generation of Amazon SageMaker is the center for all your data, analytics, and AI, and includes SageMaker Unified Studio, a single data and AI development environment. Visual ETL in Amazon SageMaker provides a drag-and-drop interface for building ETL flows and authoring flows with Amazon Q. The query editor tool provides a place to write and run queries, view results, and share your work with your team. This new scheduling experience simplifies the scheduling process for Visual ETL and Query editor users.
With unified scheduling, you can now schedule your workloads with Amazon EventBridge Scheduler from the same visual interface you use to author your query or visual ETL flow. Previously, you needed to create a code-based workflow to run a single flow or query on a schedule. You can also view, modify, pause, or resume these schedules and monitor the runs they invoke.
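The visual interface creates EventBridge Scheduler schedules for you; for comparison, a roughly equivalent schedule created directly through the Scheduler API might look like the sketch below. The schedule name, target ARN, and role are hypothetical placeholders.

```python
def schedule_params(name: str, cron: str, target_arn: str, role_arn: str) -> dict:
    """Pure helper: parameters for an EventBridge Scheduler schedule."""
    return {
        "Name": name,
        "ScheduleExpression": cron,             # e.g. "cron(0 2 * * ? *)" = 02:00 UTC daily
        "FlexibleTimeWindow": {"Mode": "OFF"},  # fire at the exact time
        "Target": {"Arn": target_arn, "RoleArn": role_arn},
    }


def schedule_nightly(target_arn: str, role_arn: str):
    import boto3  # lazy import: sketch is readable without the SDK installed

    scheduler = boto3.client("scheduler")
    return scheduler.create_schedule(
        **schedule_params("nightly-etl-flow", "cron(0 2 * * ? *)", target_arn, role_arn)
    )
```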
This new feature is now available in all AWS regions where Amazon SageMaker is available. Access the supported region list for the most up-to-date availability information.
To learn more, visit our Amazon SageMaker Unified Studio documentation, blog post and Amazon EventBridge Scheduler pricing page.
Cross-Region Automated Backup replication for Amazon RDS is now available in five additional AWS Regions. This launch allows you to set up automated backup replication between Australia (Melbourne) and Australia (Sydney); between Asia Pacific (Hong Kong) and Asia Pacific (Singapore) or Asia Pacific (Tokyo); between Asia Pacific (Malaysia) and Asia Pacific (Singapore); between Canada (Central) and Canada West (Calgary); and between Europe (Zurich) and Europe (Frankfurt) or Europe (Ireland) Regions.
Automated Backups enable recovery capability for mission-critical databases by providing you the ability to restore your database to a specific point in time within your backup retention period. With Cross-Region Automated Backup replication, RDS will replicate snapshots and transaction logs to the chosen destination AWS Region. In the event that your primary AWS Region becomes unavailable, you can restore the automated backup to a point in time in the secondary AWS Region and quickly resume operations. As transaction logs are uploaded to the target AWS Region frequently, you can achieve a Recovery Point Objective (RPO) of within the last few minutes.
You can set up Cross-Region Automated Backup replication with just a few clicks in the Amazon RDS Management Console or by using the AWS SDK or CLI. Cross-Region Automated Backup replication is available on Amazon RDS for PostgreSQL, Amazon RDS for MariaDB, Amazon RDS for MySQL, Amazon RDS for Oracle, and Amazon RDS for Microsoft SQL Server. For more information, including instructions on getting started, read the Amazon RDS documentation.
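A boto3 sketch of enabling replication follows; the ARNs and Regions are placeholders. Note the call is issued against the destination Region, and a KMS key in that Region is needed when the source instance is encrypted.

```python
def replication_params(source_arn: str, kms_key_id: str) -> dict:
    """Pure helper: parameters for starting automated backup replication."""
    return {
        "SourceDBInstanceArn": source_arn,
        "KmsKeyId": kms_key_id,      # key in the destination Region (for encrypted backups)
        "BackupRetentionPeriod": 7,  # days to retain replicated backups
    }


def enable_backup_replication(source_arn: str, kms_key_id: str):
    import boto3  # lazy import: sketch is readable without the SDK installed

    # The replication call is made against the *destination* Region.
    rds = boto3.client("rds", region_name="ap-southeast-1")
    return rds.start_db_instance_automated_backups_replication(
        **replication_params(source_arn, kms_key_id)
    )
```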
AWS Elastic Beanstalk now gives customers the option to use default security groups or their own custom security groups when deploying applications. This new feature provides greater control over network access and security configurations.
With this update, customers can use custom security groups instead of default security groups for both new and existing Elastic Beanstalk environments. This applies to the EC2 instances within the environment and, for load-balanced environments, to the load balancer as well. Previously, Elastic Beanstalk would automatically add a default security group. This enhancement enables customized security policies and simplifies security management.
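As one illustration, custom security groups for the environment's instances can be supplied through the long-standing `aws:autoscaling:launchconfiguration` option namespace, as sketched below; consult the Elastic Beanstalk documentation for the specific setting that suppresses the default group. Group IDs and the environment name are placeholders.

```python
def sg_option_settings(security_group_ids: list[str]) -> list[dict]:
    """Pure helper: option settings attaching your own security groups to instances."""
    return [
        {
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "SecurityGroups",
            "Value": ",".join(security_group_ids),  # comma-separated group IDs
        }
    ]


def apply_custom_security_groups(env_name: str, security_group_ids: list[str]):
    import boto3  # lazy import: sketch is readable without the SDK installed

    eb = boto3.client("elasticbeanstalk")
    return eb.update_environment(
        EnvironmentName=env_name,
        OptionSettings=sg_option_settings(security_group_ids),
    )
```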
This feature is available in all of the AWS Commercial Regions and AWS GovCloud (US) Regions that Elastic Beanstalk supports. For a complete list of regions and service offerings, see AWS Regions.
Amazon Connect now lets you grant administrator access to agent schedules, making it easier to address key operational needs with minimal configuration. With this launch, you can now give certain users access to all published agent schedules without being added as a supervisor to every staff group. For example, users such as centralized schedulers or auditors who require a broad view of agent schedules across the organization can now be granted this access in a few clicks, thus reducing time spent on access management and improving overall operational efficiency.
This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.
You can now run OpenSearch version 2.19 in Amazon OpenSearch Service which introduces several improvements in the areas of Vector Search, Observability and OpenSearch Dashboards.
We have introduced four key capabilities for vector search applications. The Faiss engine now supports AVX512 SIMD instructions to accelerate vector similarity computations. The ML inference search response processor can now rank search hits and update scores based on model predictions, enabling sophisticated, context-aware document ranking and result augmentation. Lucene binary vectors now complement the existing Faiss engine binary vector support, offering greater flexibility for vector search applications. Hybrid search now includes pagination support, reciprocal rank fusion to improve result ranking, and a debugging tool for the score and rank normalization process.
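To make the hybrid search improvements concrete, the sketch below builds a hybrid query body combining a lexical match with a k-NN clause. The index field names are hypothetical, and the `pagination_depth` parameter name is an assumption based on the 2.19 hybrid search pagination feature; verify against the OpenSearch documentation.

```python
def hybrid_query(text: str, vector: list[float], pagination_depth: int = 50) -> dict:
    """Pure helper: a hybrid search request body mixing lexical and vector clauses."""
    return {
        "from": 0,
        "size": 10,
        "query": {
            "hybrid": {
                "pagination_depth": pagination_depth,  # assumed 2.19 pagination parameter
                "queries": [
                    # Lexical relevance on a text field (field name is illustrative).
                    {"match": {"title": {"query": text}}},
                    # Vector similarity on a k-NN field (field name is illustrative).
                    {"knn": {"title_embedding": {"vector": vector, "k": 10}}},
                ],
            }
        },
    }
```

The body would be sent to `_search?search_pipeline=<pipeline>` where the pipeline's normalization processor fuses the two score lists.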
The launch also introduces query insights dashboards that let users monitor and analyze the top queries collected by the Query Insights plugin. Anomaly detection now offers two key improvements. First, enhanced anomaly definition capabilities allow users to specify multiple criteria to identify both spikes and dips in data patterns. Second, a new dedicated index for flattened results improves query performance and the dashboard visualization experience. Finally, you can now use template queries to create search queries that contain placeholder variables, allowing for more flexible, efficient, and secure search operations.
For information on upgrading to OpenSearch 2.19, please see the documentation. OpenSearch 2.19 is now available in all AWS Regions where Amazon OpenSearch Service is available.
Amazon Simple Email Service (SES) announces that its Mail Manager email modernization and infrastructure features now have a rule action which enables messages to be published using an Amazon Simple Notification Service (SNS) notification. The notification includes the complete email content, and has options for SNS Topic and Encoding.
Amazon SNS is a fully managed service that provides message delivery from publishers (producers) to subscribers (consumers). Publishers communicate asynchronously with subscribers by sending messages to a topic, which is a logical access point and communication channel. Subscribers can choose to receive these notifications through a variety of endpoints, including email, SMS, and Lambda. By centralizing notification preferences within SNS, customers enhance messaging between applications and users while gaining advantages of high availability, durability, and flexibility. Using the Publish to SNS rule action within Mail Manager increases the number and type of delivery destinations available to customers as part of their larger ruleset configuration.
Mail Manager’s Publish to SNS rule action is available in all 17 AWS Regions where Mail Manager is launched. There is no additional fee from SES to make use of this feature, though charges from AWS for SNS and destination channel activity may apply. Customers can learn more about SES Mail Manager by clicking here.
Today, AWS Clean Rooms announces support for multiple collaboration members to receive analysis results from queries using Spark SQL. This streamlined capability enhances usability and transparency by eliminating the need for additional audit mechanisms outside of the collaboration. With this feature, multiple members can receive and validate analysis results from queries across collective datasets directly from the collaboration.
You can designate multiple collaborators as result receivers when executing a Spark SQL query. Results are automatically delivered to all selected collaborators who are configured in both the collaboration settings and table controls. For example, in a collaboration between a media publisher and an advertiser, the publisher can run a query across their collective datasets; the query results are sent to both parties’ chosen Amazon S3 location for validation.
AWS Clean Rooms helps companies and their partners to easily analyze and collaborate on their collective datasets without revealing or copying one another’s underlying data. For more information about the AWS Regions where AWS Clean Rooms is available, see the AWS Regions table. To learn more about collaborating with AWS Clean Rooms, visit AWS Clean Rooms.
Amazon Connect now provides bulk removal of agent schedules, making day-to-day management of agent schedules more efficient. With this launch, you can now remove schedules for up to 400 agents for a single day, or up to 30 days for a single agent. For example, remove all schedules for next Monday as the contact center is going to be closed, or remove future shifts for an agent who is no longer with the organization. With bulk remove, managers no longer have to remove agent shifts one agent and one day at a time, thus improving manager productivity by reducing time spent on managing agent schedules.
This feature is available in all AWS Regions where Amazon Connect agent scheduling is available. To learn more about Amazon Connect agent scheduling, click here.
Starting today, we’re enhancing the AWS Migration Acceleration Program (MAP) with two key capabilities to help you accelerate your modernization efforts and drive customers’ adoption of AI:
New “Move to AI” Modernization Pathway, featuring Amazon Bedrock and Amazon SageMaker. This pathway enables you to help customers transform their existing applications and business processes with proven AI patterns that deliver measurable business value.
Amazon Connect is now a qualifying service in the MAP Modernization Strategic Partner Incentive (SPI). This enables you to help customers transform their contact centers with AI-powered features that increase agent productivity and enhance customer experiences.
These enhancements strengthen your ability to lead customers’ AI transformation and drive contact center modernization.
Today, Amazon Kinesis Data Streams introduces support for tagging and Attribute-Based Access Control (ABAC) for enhanced fan-out consumers. You can register enhanced fan-out consumers to have dedicated low latency read throughput per shard, up to 2MB/s. ABAC is an authorization strategy that defines access permissions based on tags that can be attached to IAM users, roles, and AWS resources for fine-grained access control. This new feature enables you to apply tags for allocating costs and simplifying permission management for your enhanced fan-out consumers.
With this launch, you can now tag your enhanced fan-out consumers used by different business units to track and allocate costs in AWS Cost Explorer without manually tracking costs per consumer. You can apply tags to enhanced fan-out consumers using the Kinesis Data Streams API or AWS Command Line Interface (CLI). Additionally, ABAC support for enhanced fan-out consumers allows you to use IAM policies to allow or deny specific Kinesis Data Streams API actions when the IAM principal’s tags match the tags on a registered consumer.
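To illustrate the ABAC pattern described above, the sketch below builds an IAM policy that allows consumer reads only when the consumer's tag matches the calling principal's tag. The `team` tag key and the specific actions shown are illustrative; `aws:ResourceTag` and `aws:PrincipalTag` are standard IAM condition keys.

```python
def abac_policy(tag_key: str = "team") -> dict:
    """Pure helper: an ABAC policy matching the caller's tag to the consumer's tag."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # Read actions on enhanced fan-out consumers (illustrative selection).
                "Action": ["kinesis:SubscribeToShard", "kinesis:DescribeStreamConsumer"],
                "Resource": "*",
                "Condition": {
                    # Allow only when the resource tag equals the principal's tag.
                    "StringEquals": {
                        f"aws:ResourceTag/{tag_key}": f"${{aws:PrincipalTag/{tag_key}}}"
                    }
                },
            }
        ],
    }
```

With consumers tagged `team=analytics`, only principals carrying the same `team` tag can subscribe to them, without per-consumer policy edits.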
Tagging and Attribute-Based Access Control for enhanced fan-out consumers are available in all AWS Regions, including the AWS China and AWS GovCloud (US) Regions. To learn more about tagging and ABAC support for consumers, see Tag your resources and Attribute-Based Access Control (ABAC) for AWS.
EC2 Image Builder now integrates with AWS Systems Manager Parameter Store, offering customers a streamlined approach for referencing SSM parameters in their image recipes, components, and distribution configurations. This capability allows customers to dynamically select base images within their image recipes, easily use configuration data and sensitive information in components, and update their SSM parameters with the latest output images.
Prior to today, customers had to specify AMI IDs in their image recipes to use custom base images, leading to a constant maintenance cycle when these base images had to be updated. Furthermore, customers were required to create custom scripts to update SSM parameters with output images and to utilize SSM parameter values in components, resulting in substantial operational overhead. Now, customers can leverage SSM Parameters as inputs for their image recipes, enabling them to dynamically retrieve the latest base image. This integration extends to components, where SSM Parameters can be easily referenced to save, retrieve and use sensitive information in components, and to the distribution process, where SSM parameters can be updated with latest output images. These enhancements streamline the image building workflow, reduce manual intervention and improve overall efficiency.
This capability is available to all customers at no additional cost, and is enabled in all AWS commercial Regions, AWS GovCloud (US), the AWS China (Beijing) Region (operated by Sinnet), and the AWS China (Ningxia) Region (operated by NWCD).
Customers can get started from the EC2 Image Builder Console, CLI, API, CloudFormation, or CDK, and learn more in the EC2 Image Builder documentation.
Amazon VPC Lattice introduces dual stack support for management API, enabling you to connect using Internet Protocol Version 6 (IPv6), Internet Protocol Version 4 (IPv4), or dual stack clients. Dual stack support is also available when the Amazon VPC Lattice management API endpoint is privately accessed from your Amazon Virtual Private Cloud (VPC) using AWS PrivateLink. Dual stack endpoints are made available on a new AWS DNS domain name. The existing Amazon VPC Lattice management API endpoints are maintained for backwards compatibility reasons.
Amazon VPC Lattice is an application networking service that simplifies connecting, securing, and monitoring service-to-service communication. You can use Amazon VPC Lattice to facilitate cross-account and cross-VPC connectivity, as well as application layer load balancing for your workloads. Whether the underlying compute types are instances, containers, or serverless, with Amazon VPC Lattice developers can work with native integration on the compute platform of their choice. With simultaneous support for both IPv4 and IPv6 clients on VPC Lattice endpoints, you are able to gradually transition from IPv4 to IPv6 based systems and applications, without needing to switch everything over at once. This enables you to meet IPv6 compliance requirements and removes the need for expensive networking equipment to handle the address translation between IPv4 and IPv6.
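In the SDKs, dual-stack endpoints are usually selected via botocore's `use_dualstack_endpoint` configuration rather than a hard-coded hostname. The sketch below shows both; the `vpc-lattice.<region>.api.aws` hostname follows the common dual-stack naming convention but is an assumption here, so verify it in the AWS endpoints reference.

```python
def dual_stack_endpoint(region: str) -> str:
    """Assumed dual-stack endpoint name; verify against the AWS endpoints reference."""
    return f"https://vpc-lattice.{region}.api.aws"


def lattice_client(region: str = "us-west-2"):
    import boto3  # lazy imports: sketch is readable without the SDK installed
    from botocore.config import Config

    # Preferred: let the SDK resolve the dual-stack endpoint for you.
    return boto3.client(
        "vpc-lattice",
        region_name=region,
        config=Config(use_dualstack_endpoint=True),
    )
```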
Today, we are excited to announce the general availability of anonymous user access for Amazon Q Business. This feature allows customers to create Q Business applications for anonymous users using publicly accessible content. Q Business applications created in this anonymous mode are billed on an API-consumption basis.
Customers can now create anonymous Q Business applications to power use cases such as public website Q&A, documentation portals, and customer self-service experiences, where user authentication is not required and content is publicly available. For example, AnyCompany wants to improve their website’s visitor support experience by providing a genAI assistant over their publicly available help and product pages. The customer would create an anonymous Q Business application and index all the public product help and documentation to power their Q Business genAI assistant. To deploy the anonymous application, customers can implement the anonymous Chat/ChatSync APIs for higher UX control, or embed the built-in anonymous web experience via an iFrame. Anonymous applications are billed on an API-consumption basis, offering a scalable way to deploy Q Business generative AI experiences to large anonymous audiences.
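A minimal sketch of the ChatSync path with boto3 follows; the application ID is a placeholder, and no user identity is passed, consistent with the anonymous mode described above.

```python
def chat_request(app_id: str, question: str) -> dict:
    """Pure helper: a minimal anonymous ChatSync request (no user identity fields)."""
    return {"applicationId": app_id, "userMessage": question}


def ask_anonymous_assistant(app_id: str, question: str) -> str:
    import boto3  # lazy import: sketch is readable without the SDK installed

    qb = boto3.client("qbusiness")
    resp = qb.chat_sync(**chat_request(app_id, question))
    # The assistant's reply text is returned in systemMessage.
    return resp["systemMessage"]
```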
The anonymous chat APIs and web experience are available in the US East (N. Virginia), US West (Oregon), Europe (Ireland), and Asia Pacific (Sydney) AWS Regions. For more information, please consult our documentation.