Amazon Connect Contact Lens real-time queue and agent performance dashboards and flows performance dashboards are now available in AWS GovCloud (US-West), a secure cloud environment designed for government and public sector customers. The new dashboards let you monitor real-time agent activity and take immediate action, such as listening in on a contact, barging (taking over) a contact, or changing an agent’s state, in a few clicks from a single interface. The dashboards also allow you to define widget-level filters and groupings, re-order and re-size columns, and add or delete metrics. With these dashboards, you can view and compare real-time and historical aggregated performance, trends, and insights using custom-defined time periods (e.g., week over week), summary charts, and time-series charts. For example, you can automatically highlight an agent in red when they are in an error state, giving a quick visual indicator of where agents might need additional help to change their status back to available.
Amazon Connect Contact Lens dashboards are available in all AWS commercial and AWS GovCloud (US) Regions where Amazon Connect is offered. To learn more about dashboards, see the Amazon Connect Administrator Guide. To learn more about Amazon Connect, the AWS cloud-based contact center, visit the Amazon Connect website.
Amazon Connect is expanding WhatsApp Business messaging to five new AWS Regions: Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Sydney), Canada (Central), and Africa (Cape Town). Additionally, Amazon Connect SMS is now available in Africa (Cape Town). These expansions enable you to engage with customers through their preferred messaging channels while leveraging Amazon Connect’s unified contact center capabilities to deliver seamless omnichannel experiences.
With this launch, Amazon Connect for WhatsApp Business messaging and Amazon Connect SMS are now available in US East (N. Virginia), US West (Oregon), Asia Pacific (Tokyo), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Canada (Central), Europe (Frankfurt), Europe (London), and Africa (Cape Town).
To learn more, visit the Amazon Connect documentation, the pricing page, or the Amazon Connect website for detailed information about getting started with WhatsApp Business messaging and SMS in these regions.
Today, Amazon Simple Email Service (SES) launched support for connecting to SES outbound sending endpoints over IPv6. Customers can now specify their preference for IPv4 or IPv6 endpoints when using the AWS SDK or CLI. This makes it easy to switch from IPv4 to IPv6 addresses when communicating with the SES service for outbound sending.
Previously, customers could use the AWS SDK or CLI to connect with SES endpoints for outbound sending. These connections always used IPv4 addresses when creating TCP/IP connections for communication with the SES service. Now customers can specify their preference for dual-stack using an environment variable or command line argument. The AWS SDK and CLI will use this information to specify the address type when connecting to the SES service API endpoint.
SES supports IPv6 addresses when connecting to SES endpoints for outbound sending in all AWS Regions where SES is available.
For more information, see the documentation on using dual stack endpoints with AWS services.
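As a rough illustration, the following sketch opts into SES dual-stack endpoints from Python. The `use_dualstack_endpoint` setting is the generic AWS SDK preference described above; the region and email addresses are placeholders.

```python
import boto3
from botocore.config import Config

# Ask the SDK to resolve the dual-stack (IPv4 + IPv6) SES endpoint.
# The same preference can be set with the AWS_USE_DUALSTACK_ENDPOINT
# environment variable or the use_dualstack_endpoint setting in ~/.aws/config.
ses = boto3.client(
    "sesv2",
    region_name="us-east-1",  # placeholder region
    config=Config(use_dualstack_endpoint=True),
)

ses.send_email(
    FromEmailAddress="sender@example.com",  # placeholder addresses
    Destination={"ToAddresses": ["recipient@example.com"]},
    Content={
        "Simple": {
            "Subject": {"Data": "Hello over IPv6"},
            "Body": {"Text": {"Data": "Sent via the SES dual-stack endpoint."}},
        }
    },
)
```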
You can now use AWS PrivateLink to privately access Amazon ElastiCache from your Amazon Virtual Private Cloud (Amazon VPC) in the AWS Europe (Spain) Region. AWS PrivateLink provides private connectivity between VPCs, AWS services, and on-premises networks without exposing traffic to the public internet, keeping your network traffic secure.
To use AWS PrivateLink with Amazon ElastiCache, you create an interface VPC endpoint for Amazon ElastiCache in your VPC using the Amazon VPC console, AWS SDK, or AWS CLI. With an interface VPC endpoint, you can privately access the Amazon ElastiCache APIs from applications inside your Amazon VPC. You can also access the VPC endpoint from other VPCs using VPC Peering or your on-premises environments using AWS VPN or AWS Direct Connect. To learn more, read the documentation, or get started in the Amazon VPC Console.
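As a minimal sketch, the interface endpoint can also be created programmatically; the VPC, subnet, and security group IDs below are placeholders, and the service name is assumed to follow the usual com.amazonaws.&lt;region&gt;.elasticache pattern.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-south-2")  # Europe (Spain)

# Create an interface VPC endpoint for the Amazon ElastiCache APIs.
response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                       # placeholder
    ServiceName="com.amazonaws.eu-south-2.elasticache",  # assumed service name
    SubnetIds=["subnet-0123456789abcdef0"],              # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],           # placeholder
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])
```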
AWS WAF is expanding the availability of its enhanced rate-based rules feature to customers in the following AWS Regions: Asia Pacific (Hyderabad), Asia Pacific (Melbourne), Israel (Tel Aviv), and Asia Pacific (Malaysia). This feature supports additional request parameters for rate-based rules, including cookies and other HTTP headers. Additionally, customers can now create composite keys based on up to 5 request parameters, providing more granular options for managing and securing web application traffic.
Customers could already use WAF rate-based rules to automatically block requests from IP addresses that make large numbers of requests within a short period of time, until the rate of requests falls below a customer-defined threshold. Now, WAF customers can aggregate requests by combining IP addresses with other request parameters (“keys”). Supported keys include cookies and other request headers, query strings, query arguments, label namespaces, and HTTP methods. By combining multiple request parameters into a single composite key, customers can detect and mitigate potential threats with higher accuracy.
There is no additional cost for using this feature, however standard AWS WAF charges still apply. For more information about pricing, visit the AWS WAF Pricing page. This feature is now available in all AWS regions where WAF is supported, except the China (Beijing) and China (Ningxia) Regions. To learn more, see the AWS WAF developer guide. For more information about the service, visit the AWS WAF page.
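For illustration, a rate-based rule statement that aggregates on a composite key might look like the sketch below; the request limit and header name are placeholders, and the statement would be embedded in a WAF rule via the WAFv2 API or console.

```python
# Aggregate requests on a composite key of client IP plus a header value.
rate_based_statement = {
    "RateBasedStatement": {
        "Limit": 1000,                      # placeholder: requests per evaluation window
        "AggregateKeyType": "CUSTOM_KEYS",
        "CustomKeys": [
            {"IP": {}},                     # the client IP address
            {
                "Header": {                 # plus a header of your choice
                    "Name": "x-api-key",    # placeholder header name
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                }
            },
        ],
    }
}
```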
Amazon Redshift Serverless, which allows you to run and scale analytics without having to provision and manage data warehouse clusters, is now generally available in the AWS Asia Pacific (Hyderabad) region. With Amazon Redshift Serverless, all users, including data analysts, developers, and data scientists, can use Amazon Redshift to get insights from data in seconds. Amazon Redshift Serverless automatically provisions and intelligently scales data warehouse capacity to deliver high performance for all your analytics. You only pay for the compute used for the duration of the workloads on a per-second basis. You can benefit from this simplicity without making any changes to your existing analytics and business intelligence applications.
With a few clicks in the AWS Management Console, you can get started with querying data using the Query Editor V2 or your tool of choice with Amazon Redshift Serverless. There is no need to choose node types, node count, workload management, scaling, and other manual configurations. You can create databases, schemas, and tables, and load your own data from Amazon S3, access data using Amazon Redshift data shares, or restore an existing Amazon Redshift provisioned cluster snapshot. With Amazon Redshift Serverless, you can directly query data in open formats, such as Apache Parquet, in Amazon S3 data lakes. Amazon Redshift Serverless provides unified billing for queries on any of these data sources, helping you efficiently monitor and manage costs.
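For example, once a workgroup exists you could run a query through the Amazon Redshift Data API; the workgroup, database, table, and region values below are placeholders, and a production client would poll describe_statement in a loop.

```python
import boto3

rsd = boto3.client("redshift-data", region_name="ap-south-2")  # Asia Pacific (Hyderabad)

# Submit a query against a Redshift Serverless workgroup.
resp = rsd.execute_statement(
    WorkgroupName="my-serverless-workgroup",  # placeholder
    Database="dev",                           # placeholder
    Sql="SELECT event_date, COUNT(*) FROM clickstream GROUP BY 1 ORDER BY 1;",
)

# Check the status and, once finished, fetch the result set.
desc = rsd.describe_statement(Id=resp["Id"])
if desc["Status"] == "FINISHED":
    rows = rsd.get_statement_result(Id=resp["Id"])["Records"]
```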
Amazon ECS customers can configure automated failure detection and remediation for their ECS service rolling updates using the deployment circuit breaker and CloudWatch alarms. The deployment circuit breaker automatically detects task launch failures, while CloudWatch alarms allow you to detect issues that degrade infrastructure metrics (e.g., CPU utilization) or performance metrics (e.g., response latency). Previously, in scenarios where a failing deployment was not detected by either of these mechanisms, customers had to manually trigger a new deployment to roll back to a previous safe state. With today’s release, customers can simply use the new stopDeployment API action and ECS automatically rolls back the service to the last service revision that reached steady state.
You can use the new stop-deployment API to roll back deployments for your ECS services using the AWS Management Console, API, SDK, and CLI in all AWS Regions. To learn more, visit our documentation.
Amazon Bedrock Data Automation (BDA) now supports extraction of custom GenAI-powered insights from audio by specifying the desired output configuration through blueprints. BDA is a GenAI-powered capability of Bedrock that streamlines the development of generative AI applications and automates workflows involving documents, images, audio, and videos. Developers can now extract custom insights from audio using blueprints, which describe the desired output: a list of field names, the data format in which each field’s response should be extracted, and natural language instructions for each field. Developers can get started with blueprints by either using a catalog blueprint or creating a blueprint tailored to their needs.
With this launch, developers can extract custom insights such as summaries, key topics, intents, and sentiment from a variety of voice conversations such as customer calls, clinical discussions, and meetings. Insights from BDA can be used to improve employee productivity, reduce compliance costs, and enhance customer experience, among others. For example, customers can improve productivity of sales agents by extracting insights such as summaries, key action items, and next steps from conversations between sales agents and customers.
Amazon Bedrock Data Automation is available in US West (Oregon) and US East (N. Virginia) AWS Regions.
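Conceptually, an audio blueprint pairs each desired output field with a data format and a natural-language instruction. The sketch below is illustrative only: the field and key names are placeholders, and the exact blueprint schema is defined in the BDA documentation.

```python
# Illustrative blueprint content for extracting insights from customer calls.
call_insights_blueprint = {
    "description": "Custom insights from recorded customer calls",
    "fields": {
        "summary": {
            "format": "string",
            "instruction": "Summarize the call in three sentences or fewer.",
        },
        "key_action_items": {
            "format": "list of strings",
            "instruction": "List the follow-up actions the agent committed to.",
        },
        "customer_sentiment": {
            "format": "string",
            "instruction": "Classify overall customer sentiment as positive, neutral, or negative.",
        },
    },
}
```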
AWS Billing and Cost Management Console’s Payments page now features a Payments Account Summary that helps you view your AWS account’s financial status more efficiently. Critical account balance information is now summarized in a single, easy-to-access location on your Payments page.
Payments Account Summary shows your total outstanding balance, including current and past due amounts, alongside your total unapplied funds from credit memos, unapplied cash, and Advance Pay balance. You can use these unapplied funds to pay outstanding invoices by sending remittance instructions via the email address on your invoice, or by contacting AWS Customer Service. Customers with Advance Pay will have their balances automatically applied to eligible future invoices.
To start reviewing your Payments Account Summary, visit the Payments page in the AWS Billing and Cost Management Console.
Amazon Connect now supports Outbound Campaign calling to Poland in the Europe (Frankfurt) and Europe (London) regions, making it easier to proactively communicate across voice, SMS, and email for use cases such as delivery notifications, marketing promotions, appointment reminders, and debt collection. Outbound Campaigns offers real-time audience segmentation using unified customer data from Customer Profiles, along with an intuitive UI for campaign management, targeting, and analytics. It eliminates the need for complex integrations or direct AWS Console access. Outbound Campaigns can be enabled within the Amazon Connect console.
With Outbound Campaigns, Amazon Connect becomes the only CCaaS platform offering native, seamless support for both inbound and outbound engagement across voice and digital channels in a single, business-friendly application. To learn more, visit our webpage.
AWS Marketplace now supports software as a service (SaaS) products deployed on AWS, on other cloud infrastructures, and on-premises. This will allow independent software vendors to list more SaaS products in AWS Marketplace, offering customers a broader selection of products.
By listing SaaS products in AWS Marketplace, sellers can streamline their sales processes and scale operations more efficiently. Customers can now identify products, including SaaS products, that are 100% deployed on AWS infrastructure with a new “Deployed on AWS” badge in AWS Marketplace. The badge is visible on product detail pages, and customers can also see whether products are “Deployed on AWS” on procurement pages. Products with the “Deployed on AWS” badge leverage the strong security posture and operational excellence of AWS infrastructure, can be deployed quickly, and may qualify for additional AWS customer benefits.
This feature is available in all AWS Regions where AWS Marketplace is available.
To learn more about the expansion of the SaaS product catalog and “Deployed on AWS” badge, read this blog. If you are a seller and want to learn more about the SaaS product listing guidelines, visit the AWS Marketplace Seller Guide.
Across industries, enterprises need efficient and proactive solutions. Imagine frontline professionals using voice commands and visual input to diagnose issues, access vital information, and initiate processes in real-time. The Gemini 2.0 Flash Live API empowers developers to create next-generation, agentic industry applications.
This API extends these capabilities to complex industrial operations. Unlike solutions relying on single data types, it leverages multimodal data – audio, visual, and text – in a continuous livestream. This enables intelligent assistants that truly understand and respond to the diverse needs of industry professionals across sectors like manufacturing, healthcare, energy, and logistics.
In this post, we’ll walk you through a use case focused on industrial condition monitoring, specifically motor maintenance, powered by the Gemini 2.0 Flash Live API. The Live API enables low-latency, bidirectional voice and video interactions with Gemini. With this API, we can give end users natural, human-like voice conversations and the ability to interrupt the model’s responses using voice commands. The model can process text, audio, and video input, and it can provide text and audio output. Our use case highlights the API’s advantages over conventional AI and its potential for strategic collaborations.
Demonstrating multimodal intelligence: A condition monitoring use case
The demonstration features a live, bi-directional multimodal streaming backend driven by Gemini 2.0 Flash Live API, capable of real-time audio and visual processing, enabling advanced reasoning and life-like conversations. Utilizing the API’s agentic and function calling capabilities alongside Google Cloud services allows for building powerful live multimodal systems with a clean, mobile-optimized user interface for factory floor operators. The demonstration uses a motor with a visible defect as a real-world anchor.
Real-time visual identification: Pointing the camera at a motor, Gemini identifies the model and instantly summarizes relevant information from its manual, providing quick access to crucial equipment details.
Real-time visual defect identification: With a voice command like “Inspect this motor for visual defects,” Gemini analyzes the live video, identifies and localizes the defect, and explains its reasoning.
Streamlined repair initiation: Upon identifying defects, the system automatically prepares and sends an email with the highlighted defect image and part information, directly initiating the repair process.
Real-time audio defect identification: Analyzing pre-recorded audio of healthy and defective motors, Gemini accurately distinguishes the faulty one based on its sound profile and explains its analysis.
Multimodal QA on operations: Operators can ask complex questions about the motor while pointing the camera at specific components. Gemini intelligently combines visual context with information from the motor manual to provide accurate voice-based answers.
Under the hood: The technical architecture
The demonstration leverages the Gemini 2.0 Flash Live API on Google Cloud Vertex AI. The Live API manages the core workflow and agentic function calling, while the regular Gemini API handles visual and audio feature extraction.
The workflow involves:
Agentic function calling: The API interprets user voice and visual input to determine the desired action.
Audio defect detection: Upon user intent, the system records motor sounds, stores them in GCS, and triggers a function that uses a prompt with examples of healthy and defective sounds, analyzed by the Gemini Flash 2.0 API to diagnose the motor’s health.
Visual inspection: The API recognizes the intent to detect visual defects, captures images, and calls a function that uses zero-shot detection with a text prompt, leveraging the spatial understanding of the Gemini Flash 2.0 API to identify and highlight defects.
Multimodal QA: When users ask questions, the API identifies the intent for information retrieval, performs RAG on the motor manual, combines it with multimodal context, and uses the Gemini API to provide accurate answers.
Sending repair orders: Recognizing the intent to initiate a repair, the API extracts the part number and defect image, using a pre-defined template to automatically send a repair order via email.
Such a demo can be easily built with minimal custom integration by referring to the guide here and incorporating the features mentioned in the diagram above. The majority of the effort would be in adding custom function calls for various use cases.
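To make the function-calling wiring more concrete, here is a minimal sketch using the google-genai SDK. The tool names and handlers are hypothetical stand-ins for the demo’s own functions, the project, location, and model ID are placeholders, and method and field names may differ slightly between SDK versions; the sketch also omits streaming the microphone and camera input.

```python
import asyncio
from google import genai
from google.genai import types

# Hypothetical handlers standing in for the demo's real functions, which would
# capture images/audio, analyze them with Gemini, query the manual, or email a
# repair order.
TOOL_HANDLERS = {
    "inspect_motor_visually": lambda args: {"defect": "cracked housing", "location": "upper left"},
    "diagnose_motor_audio": lambda args: {"status": "faulty bearing suspected"},
    "send_repair_order": lambda args: {"sent": True},
}

tools = [types.Tool(function_declarations=[
    types.FunctionDeclaration(name="inspect_motor_visually",
                              description="Run zero-shot visual defect detection on the latest camera frame."),
    types.FunctionDeclaration(name="diagnose_motor_audio",
                              description="Classify a recorded motor sound as healthy or defective."),
    types.FunctionDeclaration(name="send_repair_order",
                              description="Email a repair order with the defect image and part number."),
])]

async def run():
    client = genai.Client(vertexai=True, project="my-project", location="us-central1")
    config = types.LiveConnectConfig(response_modalities=["AUDIO"], tools=tools)
    # Model ID is a placeholder; use the Live-capable model available to your project.
    async with client.aio.live.connect(model="gemini-2.0-flash-live-preview-04-09",
                                       config=config) as session:
        async for message in session.receive():
            if message.tool_call:  # the model decided to call one of our functions
                responses = [
                    types.FunctionResponse(id=fc.id, name=fc.name,
                                           response=TOOL_HANDLERS[fc.name](fc.args))
                    for fc in message.tool_call.function_calls
                ]
                await session.send_tool_response(function_responses=responses)

asyncio.run(run())
```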
Key capabilities and industrial benefits with cross-industry use cases
Real-time multimodal processing: The API’s ability to simultaneously process live audio and visual streams provides immediate insights in dynamic environments, crucial for preventing downtime and ensuring operational continuity.
Use case: In healthcare, a remote medical assistant could use live video and audio to guide a field paramedic, receiving real-time vital signs and visual information to provide expert support during emergencies.
Advanced audio & visual reasoning: Gemini’s sophisticated reasoning interprets complex visual scenes and subtle auditory cues for accurate diagnostics.
Use Case: In manufacturing, AI can analyze the sounds and visuals of machinery to predict failures before they occur, minimizing production disruptions.
Agentic function calling for automated workflows: The API’s agentic nature enables intelligent assistants to proactively trigger actions, like generating reports or initiating processes, streamlining workflows.
Use case: In logistics, a voice command and visual confirmation of a damaged package could automatically trigger a claim process and notify relevant parties.
Seamless integration and scalability: Built on Vertex AI, the API integrates with other Google Cloud services, ensuring scalability and reliability for large-scale deployments.
Use case: In agriculture, drones equipped with cameras and microphones could stream live data to the API for real-time analysis of crop health and pest detection across vast farmlands.
Mobile-optimized user experience: The mobile-first design ensures accessibility for frontline workers, allowing interaction with the AI assistant at the point of need using familiar devices.
Use case: In retail, store associates could use voice and image recognition to quickly check inventory, locate products, or access product information for customers directly on the store floor.
Proactive maintenance and efficiency gains: By enabling real-time condition monitoring, industries can shift from reactive to predictive maintenance, reducing downtime, optimizing asset utilization, and improving overall efficiency across sectors.
Use case: In the energy sector, field technicians can use the API to diagnose issues with remote equipment like wind turbines through live audio and visual streams, reducing the need for costly and time-consuming site visits.
Get started
Explore the cutting edge of AI interaction with the Gemini Live API, as showcased by this solution. Developers can leverage its codebase – featuring low-latency voice, webcam/screen integration, interruptible streaming audio, and a modular tool system via Cloud Functions – as a robust starting point. Clone the project, adapt the components, and begin creating transformative, multimodal AI solutions that feel truly conversational and aware. The future of the intelligent industry is live, multimodal, and within reach for all sectors.
For AI developers building cutting-edge applications with large model sizes, a reliable foundation is non-negotiable. You need your AI to perform consistently, delivering results without hiccups, even under pressure. This means having dedicated resources that won’t get bogged down by other users’ activity. While existing Vertex AI Prediction Endpoints – managed pools of resources to deploy AI models for online inference – provide a capable serving solution, developers need better ways to reach consistent performance and resource isolation in case of shared resource contention.
Today, we are pleased to announce Vertex AI Prediction Dedicated Endpoints, a new family of Vertex AI Prediction endpoints designed to address the needs of modern AI applications, including those built around large-scale generative AI models.
Dedicated endpoint architected for generative AI and large models
Serving generative AI and other large-scale models introduces unique challenges related to payload size, inference time, interactivity, and performance demands. The new Vertex AI Prediction Dedicated Endpoints have been specifically engineered to help you build more reliably with the following new integrated features:
Native support for streaming inference: Essential for interactive applications like chatbots or real-time content generation, Vertex AI Endpoints now provide native support for streaming, simplifying development and architecture, via the following APIs:
streamRawPredict: Utilize this dedicated API method for bidirectional streaming to send prompts and receive sequences of responses (e.g., tokens) as they become available.
OpenAI Chat Completion: To facilitate interoperability and ease migration, endpoints serving compatible models can optionally expose an interface conforming to the widely used OpenAI Chat Completion streaming API standard.
gRPC protocol support: For latency-sensitive applications or high-throughput scenarios often encountered with large models, endpoints now natively support gRPC. Leveraging HTTP/2 and Protocol Buffers, gRPC can offer performance advantages over standard REST/HTTP.
Customizable request timeouts: Large models can have significantly longer inference times. We now provide the flexibility, via API, to configure custom timeouts for prediction requests, accommodating a wider range of model processing durations beyond the default settings.
Optimized resource handling: The underlying infrastructure is designed to better handle the resource demands (CPU/GPU, memory, network bandwidth) of large models, contributing to the overall stability and performance, especially when paired with Private Endpoints.
The newly integrated capabilities of Vertex AI Prediction Dedicated Endpoints offer a unified and robust serving solution tailored for demanding modern AI workloads. From today, Vertex AI Model Garden will use Vertex AI Prediction Dedicated Endpoints as the standard serving method for self-deployed models.
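As a rough sketch, here is one way to call an endpoint’s streamRawPredict method over REST. The dedicated endpoint host, project and endpoint IDs, and request payload are placeholders; the payload shape in particular depends on the model server you deploy.

```python
import json
import requests
import google.auth
from google.auth.transport.requests import Request

# Obtain an access token with Application Default Credentials.
credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"])
credentials.refresh(Request())

# Dedicated endpoints expose their own DNS name; copy the real host from the
# endpoint's details page. The value below is a placeholder.
ENDPOINT_HOST = "1234567890.us-central1-111111111111.prediction.vertexai.goog"
URL = (f"https://{ENDPOINT_HOST}/v1/projects/{project}/locations/us-central1/"
       f"endpoints/1234567890:streamRawPredict")

payload = {"prompt": "Write a haiku about dedicated endpoints.", "stream": True}  # illustrative

with requests.post(
        URL,
        headers={"Authorization": f"Bearer {credentials.token}",
                 "Content-Type": "application/json"},
        data=json.dumps(payload),
        stream=True,
        timeout=600,  # dedicated endpoints let you configure longer request timeouts
) as resp:
    for chunk in resp.iter_lines():
        if chunk:
            print(chunk.decode())
```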
Optimized networking via Private Service Connect (PSC)
While Dedicated Endpoints Public remain available for models accessible over the public internet, we are enhancing networking options on Dedicated Endpoints utilizing Google Cloud Private Service Connect (PSC). The new Dedicated Endpoints Private (via PSC) provide a secure and performance-optimized path for prediction requests. By leveraging PSC, traffic routes entirely within Google Cloud’s network, offering significant benefits:
Enhanced security: Requests originate from within your Virtual Private Cloud (VPC) network, eliminating public internet exposure for the endpoint.
Improved performance consistency: Bypassing the public internet reduces latency variability.
Reduced performance interference: PSC facilitates better network traffic isolation, mitigating potential “noisy neighbor” effects and leading to more predictable performance, especially for demanding workloads.
For production workloads with strict security requirements and predictable latency, Private Endpoints using Private Service Connect are the recommended configuration.
How Sojern is using the new Vertex AI Prediction Dedicated Endpoints to serve models at scale
Sojern is a marketing company focusing on the hospitality industry, matching potential customers to travel businesses around the globe. As part of their growth plans, Sojern turned to Vertex AI. Leaving their self-managed ML stack behind, Sojern can focus more on innovation, while scaling out far beyond their historical footprint.
Given the nature of Sojern’s business, their ML deployments follow a unique deployment model, requiring several high throughput endpoints to be available and agile at all times, allowing for constant model evolution. Using Public Endpoints would cause rate limiting and ultimately degrade user experience; moving to a Shared VPC model would have required a major design change for existing consumers of the models.
With Private Service Connect (PSC) and Dedicated Endpoint, Sojern avoided hitting the quotas / limits enforced on Public Endpoints, while also avoiding a network redesign to accommodate Shared VPC.
The ability to quickly promote tested models, take advantage of Dedicated Endpoint’s enhanced feature set, and improve latency for their customers strongly aligned with Sojern’s goals. The Sojern team continues to onboard new models, always improving accuracy and customer satisfaction, powered by Private Service Connect and Dedicated Endpoint.
Get started
Are you struggling to scale your prediction workloads on Vertex AI? Check out the resources below to start using the new Vertex AI Prediction Dedicated Endpoints:
Your experience and feedback are important as we continue to evolve Vertex AI. We encourage you to explore these new endpoint capabilities and share your insights through the Google Cloud community forum.
Today, AWS announces the preview of the Amazon Q Developer integration in GitHub. With this launch, developers can use the power of Amazon Q Developer agents for feature development, code review, and Java transformation within GitHub.com and GitHub Enterprise Cloud projects to streamline their developer experience.
After installing the Amazon Q Developer application from GitHub, developers can use labels to assign issues to Amazon Q Developer. Then, Amazon Q Developer agents automatically implement new features, generate bug fixes, run code reviews on new pull requests, and modernize legacy Java applications, all within the GitHub projects. While generating new code, the agents will automatically use any pull request workflows, refining the solution and ensuring all checks are passing. Developers can also collaborate with the agents by directly commenting on the pull request, and Amazon Q Developer will respond with improvements, allowing all teammates to stay in the loop. By bringing Amazon Q Developer into GitHub, development teams can confidently deliver high-quality software faster while maintaining their organization’s security and compliance standards.
The Amazon Q Developer integration is available on GitHub, and you can get started today for free—no AWS account needed. To learn more, check out the Amazon Q Developer Integrations page or read the blog.
When’s the last time you watched a race for the braking?
It’s the heart-pounding acceleration and death-defying maneuvers that keep most motorsport fans on the edge of their seats. Especially when it comes to Formula E — and really all EVs — the explosive, near-instantaneous acceleration of an electric motor is part of the appeal.
A less considered, yet no less important feature, is how EVs can regeneratively brake, turning friction into fuel. Part of Formula E’s mission is to make EVs a compelling automotive choice for consumers, not just world-class racers; highlighting this powerful aspect of the vehicles has become a priority. The question remained: How do you get others to feel the same exhilaration from deceleration?
The answer came from the mountains above Monaco, as well as some prompts in Gemini 2.5.
In the lead up to the Monaco E-Prix, Formula E and Google undertook a project dubbed Mountain Recharge. The challenge: Whether a Formula E GENBETA race car, starting with only 1% battery, could regenerate enough energy from braking during a descent through France’s coastal Alps to then complete a full lap of the iconic Monaco circuit.
More than just a stunt, this experiment is testing the boundaries of technology — and not just in EVs, but on the cloud, too. Without the live analytics and plenty of AI-powered planning, the Mountain Recharge might not have come to pass. In fact, AI even helped determine which mountain pass would be best suited for this effort. (Read on to find out which one, and see if we made it to the bottom.)
Mountain Recharge is exciting not only for the thrills on the course but also for the potential it shows for AI across industries. In addition to its role in helping to execute tasks, AI proved valuable to the brainstorming, experimentation, and rapid-fire simulations that helped get Mountain Recharge to the finish line.
Planning the charge up the mountain
Before even setting foot or wheel to the course, the team at Formula E and Google Cloud turned to Gemini to try and figure out if such an endeavor was possible.
To answer the fundamental question of feasibility, the team entered a straightforward prompt into Google’s AI Studio: “Starting with just 1% battery, could the GENBETA car potentially generate enough recharge by descending a high mountain pass to do a lap of the Circuit of Monaco?”
The AI Studio validator, running Gemini 2.5 Pro with its deep reasoning functionality, analyzed first-party data that had been uploaded by Formula E on the GENBETA’s capabilities; we then grounded the model with Google Search to further improve accuracy and reliability by connecting to the universe of information available online.
AI Studio shared its “thinking” in a detailed eight-step process, which included identifying the key information needed; consulting the provided documents; gathering external information through a simulated search; performing calculations and analysis; and finally synthesizing the answer based on the core question.
The final output: “theoretically feasible.” In other words, the perfect challenge.
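A similar Search-grounded question can be asked programmatically with the google-genai SDK. The sketch below is a rough equivalent that omits the uploaded first-party GENBETA documents, and the model ID is a placeholder.

```python
from google import genai
from google.genai import types

client = genai.Client()  # uses an AI Studio API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-pro",  # placeholder model ID
    contents=(
        "Starting with just 1% battery, could the GENBETA car potentially "
        "generate enough recharge by descending a high mountain pass to do "
        "a lap of the Circuit of Monaco?"
    ),
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],  # ground with Google Search
    ),
)
print(response.text)
```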
Navigating the steep turns above Monaco helped generate plenty of power for Mountain Recharge.
Still working in AI Studio, we then used a new feature, the ability to build custom apps such as the Maps Explorer, to determine the best route, which turned out to be the Col de Braus. AI Studio then mapped out a route for the challenge. This rigorous, data-backed validation, facilitated by AI Studio and Gemini’s ability to incorporate technical specifications and estimations, transformed the project from a speculative what-if into something Formula E felt confident attempting.
AI played an important role away from the course, as well. To aid in coordination and planning, teams at Formula E and Google Cloud used NotebookLM to digest the technical regulations and battery specifications and locate relevant information within them, which, given the complexity of the challenge and the number of parties involved, helped ensure cross-functional teams were kept up to date and grounded with sourced data to help make informed decisions.
Smart cars, smart drivers, and a smartphone
During the mountain descent, real-time monitoring of the car’s progress and energy regeneration would be crucial. Firebase and BigQuery were instrumental in visualizing this real-time telemetry. Data from both multiple sensors and Google Maps was streamed to BigQuery, Google Cloud’s data warehouse, from a high-performance mobile phone connected to the car (a Pixel 9 was well suited to the task).
This data stream proved to be yet another challenge to overcome, because of the patchy mobile signal in the mountainous terrain of the Maritime Alps. When data couldn’t be sent, it was cached locally on the phone until the signal was available again.
BigQuery’s capacity for real-time data ingestion and in-platform AI model creation enabled speedy analysis and the calculation of essential metrics. A web-based dashboard was developed using Firebase that connected to BigQuery to display both data and insights. AI Studio greatly facilitated the development of the application by translating a picture of a dashboard mockup into fully functional code.
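As a small sketch of the telemetry path, rows like the one below could be streamed into BigQuery from the phone; the dataset, table, and field names are illustrative.

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder table; the real rows came from the car's sensors and Google Maps.
table_id = "my-project.mountain_recharge.telemetry"

rows = [{
    "timestamp": "2025-05-03T09:15:02Z",
    "battery_pct": 12.4,
    "speed_kmh": 87.0,
    "regen_kw": 145.2,
    "lat": 43.79,
    "lon": 7.41,
}]

errors = client.insert_rows_json(table_id, rows)  # streaming insert
if errors:
    # On the mountain, rows that failed to send were cached locally and retried
    # once the signal returned.
    print("Retry later:", errors)
```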
“From figuring out if our crazy Mountain Recharge idea was even possible, to giving us live insights during the descent, AI was our guide,” said Alex Aidan, Formula E’s VP of Marketing. “It’s what turned an ambitious ‘what if’ into a reality we could track moment by moment.”
After completing its descent, the car stored up enough energy that it is expected to complete its lap of the Monaco circuit on Saturday, as part of the E-Prix’s pre-race festivities.
A different kind of push start.
Benefits beyond the finish line
Both the success and the development of the Mountain Recharge campaign offer valuable lessons to others pursuing ambitious projects. It shows that AI doesn’t have to be central to a project — it can be just as powerful at facilitating and optimizing something we’ve been doing for years, like racing cars. Our results in the Mountain Recharge only underscore the potential benefits of AI for a wide range of industries:
Enhanced planning and exploration: Just as Gemini helped Formula E explore unconventional ideas and identify the optimal route, businesses can leverage large language models for innovative problem-solving, market analysis, and strategic planning, uncovering unexpected angles and accelerating the journey from “what if” to “we can do that”.
Streamlined project management: NotebookLM’s ability to centralize and organize vast amounts of information demonstrates how AI can significantly improve efficiency in complex projects, from logistics and resource allocation to research and compliance. This reduces the risk of errors and ensures smoother coordination across teams.
Data-driven decision making: The real-time data analysis capabilities showcased in the Mountain Recharge underscore the power of cloud-based data platforms like BigQuery. Organizations can leverage these tools to gain immediate insights from their data, enabling them to make agile adjustments and optimize performance on the fly. This is invaluable in dynamic environments where rapid responses are critical.
Deeper understanding of complex systems: By applying AI to analyze intricate data streams, teams can gain a more profound understanding of the factors influencing performance.
Such capabilities certainly impressed James Rossiter, a former Formula E Team Principal, current test driver, and broadcaster for the series. “I was really surprised at the detail of the advice and things to consider,” Rossiter said. “We always talk about these things as a team, but as this is so different to racing, I had to totally rethink the drive.”
The Formula E Mountain Recharge campaign is more than just an exciting piece of content; it’s a testament to the power of human ingenuity amplified by intelligent technology. It’s also the latest collaboration between Formula E and Google Cloud and our shared commitment to use AI to push the boundaries of what’s possible in the sport and in the world.
We’ve already developed an AI-powered digital driving coach to help level the field for EV racing. Now, with the Mountain Recharge, we can inspire everyday drivers well beyond the track with the capabilities of electric vehicles.
It’s thinking big, even if it all starts with a simple prompt on a screen. You just have to ask the right questions, starting with the most important ones: Is this possible, and how can we make it so?
Today, AWS Organizations is making resource control policies (RCPs) available in both AWS GovCloud (US-West) and AWS GovCloud (US-East) Regions. RCPs help you centrally establish a data perimeter across your AWS environment. With RCPs, you can centrally restrict external access to your AWS resources at scale.
RCPs are a type of authorization policy in AWS Organizations that you can use to centrally enforce the maximum available permissions for resources in your organization. For example, an RCP can help enforce the requirement that “no principal outside my organization can access Amazon S3 buckets in my organization,” regardless of the permissions granted through individual S3 bucket policies.
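As an illustration of that example, the sketch below creates and attaches such an RCP with the AWS SDK for Python; the organization ID and attachment target are placeholders, and the policy content is a simplified version of the pattern described above.

```python
import json
import boto3

org = boto3.client("organizations")

# Deny S3 access to principals outside the organization, while leaving
# AWS service principals unaffected.
rcp_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": "*",
        "Condition": {
            "StringNotEqualsIfExists": {"aws:PrincipalOrgID": "o-exampleorgid"},  # placeholder org ID
            "BoolIfExists": {"aws:PrincipalIsAWSService": "false"},
        },
    }],
}

policy = org.create_policy(
    Name="DenyS3AccessOutsideOrg",
    Description="Restrict S3 resource access to principals in this organization",
    Type="RESOURCE_CONTROL_POLICY",
    Content=json.dumps(rcp_document),
)
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-exampleroot",  # placeholder root/OU/account ID
)
```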
AWS Graviton4-based R8g database instances are now generally available for Amazon Aurora with PostgreSQL compatibility and Amazon Aurora with MySQL compatibility in the AWS GovCloud (US-West) Region. R8g instances offer larger instance sizes, up to 48xlarge, and feature an 8:1 ratio of memory to vCPU and the latest DDR5 memory. Graviton4-based instances provide up to a 40% performance improvement and up to 29% price/performance improvement for on-demand pricing over Graviton3-based instances of equivalent sizes on Amazon Aurora databases, depending on database engine, version, and workload.
AWS Graviton4 processors are the latest generation of custom-designed AWS Graviton processors built on the AWS Nitro System. R8g DB instances are available with new 24xlarge and 48xlarge sizes. With these new sizes, R8g DB instances offer up to 192 vCPU, up to 50Gbps enhanced networking bandwidth, and up to 40Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).
Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
Amazon Elastic Container Registry (ECR) announces IPv6 support for API and Docker/OCI endpoints for both ECR and ECR Public. This makes it easier to standardize on IPv6 and remove IP address scalability limitations for your container build, deployment, and orchestration infrastructure.
With today’s launch, you can pull your private or public ECR images via the AWS SDK or Docker/OCI CLI using ECR’s new dual-stack endpoints which support both IPv4 and IPv6. When you make a request to an ECR dual-stack endpoint, the endpoint resolves to an IPv4 or an IPv6 address, depending on the protocol used by your network and client. This helps you meet IPv6 compliance requirements, and modernize your applications without expensive network address translation between IPv4 and IPv6 addresses.
ECR’s new dual-stack endpoints are generally available in all AWS commercial and AWS GovCloud (US) regions at no additional cost. Currently, ECR’s dual-stack endpoints do not serve AWS PrivateLink traffic originating from your Amazon Virtual Private Cloud (VPC). To get started with ECR IPv6, visit ECR documentation or ECR Public documentation.
AWS Graviton3-based R7g database instances are now generally available for Amazon Aurora with PostgreSQL compatibility and Amazon Aurora with MySQL compatibility in Middle East (Bahrain) and AWS GovCloud (US-West) Regions. Graviton3 instances provide up to 30% performance improvement over Graviton2 instances for Aurora depending on database engine, version, and workload.
Graviton3 processors offer several improvements over Graviton2 processors. Graviton3-based R7g are the first AWS database instances to feature the latest DDR5 memory, which provides 50% more memory bandwidth compared to DDR4, enabling high-speed access to data in memory. R7g database instances offer up to 30Gbps enhanced networking bandwidth and up to 20 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).
You can launch Graviton3 R7g database instances in the Amazon RDS Management Console or using the AWS CLI. Graviton3 is supported by Aurora MySQL version 3.03.1 and higher, and by Aurora PostgreSQL versions 13.10 and higher, 14.7 and higher, and 15.2 and higher. Upgrading a database instance to Graviton3 requires a simple instance type modification. For more details, refer to the Aurora documentation.
Amazon Aurora is designed for unparalleled high performance and availability at global scale with full MySQL and PostgreSQL compatibility. It provides built-in security, continuous backups, serverless compute, up to 15 read replicas, automated multi-Region replication, and integrations with other AWS services. To get started with Amazon Aurora, take a look at our getting started page.
Amazon Relational Database Service (Amazon RDS) for PostgreSQL, MySQL, and MariaDB now supports AWS Graviton2-based T4g database instances in Asia Pacific (Malaysia) region. T4g database instances provide a baseline level of CPU performance, with the ability to burst CPU usage at any time for as long as required. For complete information on pricing and regional availability, please refer to the Amazon RDS pricing page.
T4g database instances are available on Amazon RDS for all PostgreSQL 17, 16, 15, 14, and 13 versions, as well as PostgreSQL 12 versions 12.7 and higher. T4g database instances are available on Amazon RDS for MySQL versions 8.4 and 8.0, and Amazon RDS for MariaDB versions 11.4, 10.11, 10.6, 10.5, and 10.4. You can upgrade to T4g by modifying the database instance type to T4g using the AWS Management Console or AWS CLI. For more details, refer to the Amazon RDS User Guide.