Amazon S3 Tables now support server-side encryption using AWS Key Management Service (SSE-KMS) with customer-managed keys. You can use your own KMS keys to encrypt the tables stored in table buckets to meet regulatory and governance requirements.
By default, S3 Tables encrypt all objects with server-side encryption using S3-managed keys (SSE-S3). With support for customer-managed keys, you have the option to set a default customer-managed key for all new tables in the table bucket, set a dedicated key per table, or implement a combination of both approaches. With SSE-KMS support, S3 Tables use S3 Bucket Keys by default for cost optimization, and provide AWS CloudTrail logging for auditing the usage of customer-managed keys.
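For example, a table bucket's default encryption can be set at creation time. The following is a minimal boto3 sketch; the bucket name and key ARN are placeholders, and the exact shape of the encryption configuration parameter should be confirmed against the current SDK reference.

```python
import boto3

# Sketch only: the encryptionConfiguration shape is an assumption based on
# the SSE-KMS launch; verify field names in the latest SDK documentation.
s3tables = boto3.client("s3tables")

# Set a customer-managed KMS key as the default for all new tables
# created in this table bucket.
s3tables.create_table_bucket(
    name="analytics-table-bucket",  # hypothetical bucket name
    encryptionConfiguration={
        "sseAlgorithm": "aws:kms",
        "kmsKeyArn": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",
    },
)
```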
Today, we are excited to announce throughput improvements to dynamic run storage for AWS HealthOmics. AWS HealthOmics is a HIPAA-eligible service that helps healthcare and life sciences customers accelerate scientific breakthroughs with fully managed biological data stores and workflows.
Dynamic run storage automatically scales storage capacity based on workflow needs. With this release, dynamic run storage now also scales throughput using Elastic Throughput mode on Amazon Elastic File System. This feature is recommended for runs requiring faster start times, workflows with unpredictable storage requirements, and iterative development cycles, helping research teams reduce time-to-insight for time-sensitive genomic analyses.
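To use dynamic run storage, specify it as the storage type when starting a run. Below is a minimal boto3 sketch; the workflow ID, role ARN, and output location are placeholders.

```python
import boto3

omics = boto3.client("omics")

# Start a workflow run with dynamic run storage; capacity and (with this
# release) throughput scale automatically, so no storageCapacity is set.
response = omics.start_run(
    workflowId="1234567",
    roleArn="arn:aws:iam::111122223333:role/OmicsWorkflowRole",
    name="variant-calling-dynamic-storage",
    storageType="DYNAMIC",   # instead of "STATIC" with a fixed storageCapacity
    outputUri="s3://my-omics-results/runs/",
)
print(response["id"], response["status"])
```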
Dynamic run storage with elastic throughput is now available in all regions where AWS HealthOmics is available: US East (N. Virginia), US West (Oregon), Europe (Frankfurt, Ireland, London), Asia Pacific (Singapore) and Israel (Tel Aviv). To get started with dynamic run storage, see the documentation.
Amazon CloudWatch agent now supports Security-Enhanced Linux (SELinux) environments through a pre-configured security policy that allows monitoring on systems where security enforcement is required. This feature benefits customers in regulated industries and government sectors who maintain strict security controls across their Linux infrastructure. These security policies, when applied before CloudWatch agent installation, help customers maintain their security posture while collecting essential monitoring data.
This launch enables organizations to deploy the CloudWatch agent in SELinux-enabled environments while maintaining their security posture, addressing a critical need in environments where enforced access controls are essential. The pre-configured SELinux policies let customers benefit from AWS monitoring and observability features while helping them adhere to their compliance requirements, simplifying the deployment process and reducing the risk of security misconfigurations during agent installation.
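As a rough sketch of the install-time ordering only: the policy module is loaded before the agent package is installed. The module and package filenames below are hypothetical; use the policy file referenced in the CloudWatch agent documentation.

```python
import subprocess

# Load the SELinux policy package first (filename is a placeholder).
subprocess.run(
    ["semodule", "-i", "amazon_cloudwatch_agent.pp"],
    check=True,
)
# Then install the agent so it starts under the policy already in place.
subprocess.run(
    ["rpm", "-U", "./amazon-cloudwatch-agent.rpm"],
    check=True,
)
```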
To get started with Amazon CloudWatch agent in Security-Enhanced Linux (SELinux) environments, see Installing the CloudWatch agent in the Amazon CloudWatch User Guide.
Customers in the AWS Mexico (Central) Region can now use AWS Transfer Family for file transfers over SSH File Transfer Protocol (SFTP), File Transfer Protocol (FTP), FTP over SSL (FTPS), and Applicability Statement 2 (AS2).
AWS Transfer Family provides fully managed file transfers for Amazon Simple Storage Service (Amazon S3) and Amazon Elastic File System (Amazon EFS) over SFTP, FTP, FTPS and AS2 protocols. In addition to file transfers, Transfer Family enables common file processing and event-driven automation for managed file transfer (MFT) workflows, helping customers to modernize and migrate their business-to-business file transfers to AWS.
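As an illustration, the following boto3 sketch creates a service-managed SFTP endpoint backed by Amazon S3 in the new Region; the logging role ARN is a placeholder.

```python
import boto3

transfer = boto3.client("transfer", region_name="mx-central-1")

# Create a public, service-managed SFTP endpoint backed by Amazon S3.
server = transfer.create_server(
    Domain="S3",
    Protocols=["SFTP"],
    IdentityProviderType="SERVICE_MANAGED",
    EndpointType="PUBLIC",
    LoggingRole="arn:aws:iam::111122223333:role/TransferLoggingRole",
)
print(server["ServerId"])
```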
To learn more about AWS Transfer Family, visit our product page and user guide. See the AWS Region Table for complete regional availability information.
We are excited to announce that Amazon Athena is now available in Mexico (Central) and Asia Pacific (Thailand).
Athena is a serverless, interactive query service that makes it simple to analyze petabytes of data using SQL, without requiring infrastructure setup or management. Athena is built on open-source Trino and Presto query engines, providing powerful and flexible interactive query capabilities, and supports popular data formats such as Apache Parquet and Apache Iceberg.
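As a quick illustration, the following boto3 sketch submits a SQL query, waits for it to finish, and prints the results; the database, table, and results bucket are placeholders.

```python
import time
import boto3

athena = boto3.client("athena")

# Run a SQL query against a table in the default database.
query = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM web_logs GROUP BY status",
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)

# Poll until the query finishes, then fetch the results.
query_id = query["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)[
        "QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows:
        print([col.get("VarCharValue") for col in row["Data"]])
```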
For more information about the AWS Regions where Athena is available, see the AWS Region table. To learn more, see Amazon Athena.
Amazon CloudFront announces Anycast Static IPs support for apex domains, enabling customers to easily use their root domain (e.g., example.com) with CloudFront. This new feature simplifies DNS management by providing just 3 static IP addresses instead of the previous 21, making it easier to configure and manage apex domains with CloudFront distributions.
Previously, customers had to create CNAME records to point their domains to CloudFront. However, due to DNS rules, root domains (apex domains) cannot use CNAME records and must use A records or Route 53's ALIAS records. With the new Anycast Static IPs support, customers can now easily configure A records for their apex domains. Organizations can maintain their existing DNS infrastructure while using CloudFront's global content delivery network to serve apex domains with low latency and high data transfer speeds. Anycast routing automatically directs traffic to the optimal edge location, ensuring high-performance content delivery for end users worldwide.
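For example, with Amazon Route 53 the apex record can be created as a plain A record pointing at the allocated anycast addresses. The hosted zone ID and IP addresses in this sketch are placeholders; use the static IPs allocated to your CloudFront anycast IP list.

```python
import boto3

route53 = boto3.client("route53")

# Point the apex domain at the three CloudFront anycast static IPs.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",
    ChangeBatch={
        "Comment": "Apex A record for CloudFront anycast static IPs",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "example.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [
                    {"Value": "192.0.2.10"},
                    {"Value": "192.0.2.11"},
                    {"Value": "192.0.2.12"},
                ],
            },
        }],
    },
)
```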
CloudFront supports Anycast Static IPs from all CloudFront edge locations. This excludes Amazon Web Services China (Beijing) region, operated by Sinnet, and the Amazon Web Services China (Ningxia) region, operated by NWCD. Standard CloudFront pricing applies, with additional charges for Anycast Static IP addresses. To learn more, visit the CloudFront Developer Guide for detailed documentation and implementation guidance.
Today, we are announcing the general availability of AWS Wavelength in partnership with Sonatel, an affiliate of Orange, in Dakar, Senegal. With this first Wavelength Zone in Sub-Saharan Africa, Independent Software Vendors (ISVs), enterprises, and developers can now use AWS infrastructure and services to support applications with data residency, low latency, and resiliency requirements.
AWS Wavelength, in partnership with Sonatel, delivers on-demand AWS compute and storage services to customers in West Africa, enabling them to build and deploy applications that meet their data residency, low-latency, and resiliency requirements. AWS Wavelength offers the operational consistency, industry-leading cloud security practices, and familiar automation tools of an AWS Region. With AWS Wavelength in partnership with Sonatel, developers can now build the applications needed for use cases such as AI/ML inference at the edge, gaming, and fraud detection.
Spring is a great reminder to spring clean – an annual tradition that should extend not only to your household, but also to your virtual cloud infrastructure. Why not start with Google Cloud’s FinOps Hub?
As Google Cloud customers have adopted FinOps Hub to guide their optimization initiatives, we started getting additional feedback from our business community. For example, while DevOps users have access to tools and utilization metrics to identify waste, business teams often lack clear insights into resource consumption, leading to a significant blind spot. The most recent State of FinOps 2025 Report reinforces this need, ranking workload optimization and waste reduction as the #1 FinOps concern. It's extremely difficult to optimize workloads or applications if customers cannot fully understand how much is even being used. Why purchase a committed use discount for compute cores that you might not even be fully using?
Sometimes the easiest optimization our customers can make is simply to use the resources they are already paying for more efficiently. That's why, in 2025, we are focused on the deep clean of your optimization opportunities and have upgraded FinOps Hub to help you find, highlight, and eliminate wasted spend.
1. Find waste: FinOps Hub 2.0 now comes with new utilization insights to zero in on optimization opportunities.
At Google Cloud Next 2025, we introduced FinOps Hub 2.0, focused exclusively on bringing utilization insights on your resources to the forefront so you can see what potential waste may exist and take action immediately. Waste can come in many forms: from a VM that is barely getting used at 5% (overprovisioned), to a GKE cluster that is running hot at 110% utilization and might fail (underprovisioned), to managed resources like Cloud Run instances that may not be optimally configured (suboptimal configuration), or, worse yet, a VM that might never have been used at all (idle). FinOps users can now quickly view the most expensive waste category in one easy-to-understand heatmap, by service or AppHub application. But FinOps Hub doesn't just show you where there may be waste; it also includes more cost optimizations for Google Kubernetes Engine (GKE), Compute Engine (GCE), Cloud Run, and Cloud SQL to remedy the waste.
Waste map showing identified resources with their corresponding utilization metrics
2. Highlight waste: Gemini Cloud Assist supercharges FinOps Hub to summarize optimization insights and send opportunities to engineering.
But perhaps what really makes this a 2.0 release is that we supercharged the most time-consuming tasks on FinOps Hub with Gemini Cloud Assist. Our first launch of Gemini Cloud Assist, which helps create personalized cost reports and synthesize insights, saved our customers more than 100,000 FinOps hours annually (from January 2024 to January 2025). The power of Gemini Cloud Assist to supercharge and automate workflows is a huge benefit, so we applied it to FinOps Hub in two ways. First, FinOps users can now see embedded optimization insights on the hub itself, similar to cost reports, so you don't need to solve the "needle in the haystack" problem of optimization. Second, you can now use Gemini Cloud Assist to summarize and send top waste insights to your engineering teams to take action and remediate fast.
Gemini summary and draft emails with top optimization opportunities
3. Eliminate waste: introducing a NEW IAM role permission for your tech solution owners to see and directly take action on these optimization opportunities.
Finally, perhaps our most exciting feature – and long overdue for FinOps – is that we are unlocking access to the Billing console for tech solution owners, so that these owners can get FinOps Hub and Gemini Cloud Assist insights across all their projects in a single pane. For example, if you want to give an entire department access to FinOps Hub or cost reports for just the subset of projects that runs its infrastructure – without providing broader billing data access, but still allowing them to see all of their data in a single view – now you can, with multi-project views in the billing console. Multi-project views are enabled using the new Project Billing Costs Manager IAM role (or related granular permissions). These new permissions are currently in private preview, so sign up to get access. Now you can truly extend the power of FinOps tools across your organization with these new access controls.
So take this Spring to try FinOps Hub 2.0 with Gemini Cloud Assist, and do some spring cleaning on your cloud infrastructure, because as the saying goes, “With clouds overgrown, like winter’s old grime, Spring clean your servers, save dollars and time.” – well at least that’s what they say according to Gemini.
On April 15, 2025, Amazon announced quarterly security and critical updates for Amazon Corretto Long-Term Supported (LTS) and Feature Release (FR) versions of OpenJDK. Corretto 24.0.1, 21.0.7, 17.0.15, 11.0.27, and 8u452 are now available for download. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK.
Visit the Corretto home page to download Corretto 8, Corretto 11, Corretto 17, Corretto 21, or Corretto 24. You can also get the updates on your Linux system by configuring a Corretto Apt or Yum repo.
The Amazon EventBridge connector for Apache Kafka Connect is now generally available. This open-source connector streamlines event integration between Kafka environments and dozens of AWS services and partner destinations, without writing custom integration code or running multiple connectors for each target. The connector includes built-in support for Kafka schema registries, offloading large event payloads to S3, and IAM role-based authentication, and is available under the Apache 2.0 license in the AWS GitHub organization.
Amazon EventBridge is a serverless event router that enables you to create highly scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. With the EventBridge Connector for Apache Kafka Connect, customers can leverage advanced features such as dynamic event filtering, transformation, and scalable routing through a unified connector in Kafka environments. The connector simplifies event routing from Kafka to AWS targets, custom applications and third-party SaaS services. Organizations can deploy the connector on any Apache Kafka Connect installation, including Amazon Managed Streaming for Kafka (MSK) Connect.
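As an illustration, a sink connector can be registered through a Kafka Connect worker's REST API. The connector class and property names in this sketch are assumptions based on the connector's GitHub documentation; check the repository for the exact configuration keys.

```python
import json
import requests

# Sketch of registering the EventBridge sink connector with a local
# Kafka Connect worker; ARNs and topic names are placeholders.
connector_config = {
    "name": "eventbridge-sink",
    "config": {
        "connector.class": "software.amazon.event.kafkaconnector.EventBridgeSinkConnector",
        "topics": "orders",
        "aws.eventbridge.connector.id": "orders-connector",       # assumed key
        "aws.eventbridge.region": "us-east-1",                    # assumed key
        "aws.eventbridge.eventbus.arn": "arn:aws:events:us-east-1:111122223333:event-bus/orders-bus",
    },
}

resp = requests.post(
    "http://localhost:8083/connectors",  # Kafka Connect REST endpoint
    headers={"Content-Type": "application/json"},
    data=json.dumps(connector_config),
)
resp.raise_for_status()
```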
This feature is available in all AWS Regions, including AWS GovCloud (US). To get started, download the latest release from GitHub, configure it in your Kafka Connect environment, and refer to our developer documentation for detailed implementation guidance. Amazon MSK users can find specific instructions in the MSK Connect developer guide.
AWS Batch now supports Amazon Elastic Container Service (ECS) Exec and the AWS FireLens log router for AWS Batch on Amazon ECS and AWS Fargate. With ECS Exec, you can track the progress of your application and troubleshoot issues by running interactive commands against the containers in your AWS Batch job. AWS FireLens allows you to stream logs of your AWS Batch jobs to your chosen destinations, including Amazon CloudWatch, Amazon S3, Amazon OpenSearch Service, Amazon Redshift, and partner services such as Splunk.
You can configure ECS Exec and AWS FireLens while registering a new AWS Batch job definition or making a revision to an existing job definition, as sketched below. For more information, see the RegisterJobDefinition page in the AWS Batch API Reference, and the Amazon ECS Developer Guide for ECS Exec and AWS FireLens.
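The following boto3 sketch registers a job definition with both features turned on. The field names for the ECS Exec flag and the FireLens log driver options are assumptions about the job definition shape; confirm the exact fields in the RegisterJobDefinition reference.

```python
import boto3

batch = boto3.client("batch")

# Sketch only: enableExecuteCommand and the awsfirelens options below are
# assumptions; image, command, and sizes are placeholders.
batch.register_job_definition(
    jobDefinitionName="analysis-with-exec-and-firelens",
    type="container",
    containerProperties={
        "image": "111122223333.dkr.ecr.us-east-1.amazonaws.com/analysis:latest",
        "command": ["python", "run_analysis.py"],
        "resourceRequirements": [
            {"type": "VCPU", "value": "2"},
            {"type": "MEMORY", "value": "4096"},
        ],
        "enableExecuteCommand": True,          # assumed flag for ECS Exec
        "logConfiguration": {
            "logDriver": "awsfirelens",        # route logs through FireLens
            "options": {"Name": "cloudwatch"}, # assumed FireLens output option
        },
    },
)
```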
AWS Batch supports developers, scientists, and engineers in running efficient batch processing for ML model training, simulations, and analysis at any scale. ECS Exec and AWS FireLens are supported in any AWS Region where AWS Batch is available.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M8g instances are available in AWS Asia Pacific (Tokyo, Sydney) regions. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 M8g instances are built for general-purpose workloads, such as application servers, microservices, gaming servers, midsize data stores, and caching fleets. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.
AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon EC2 M7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. M8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).
Driven by generative AI innovations, the Business Intelligence (BI) landscape is undergoing significant transformation, as businesses look to bring data insights to their organization in new and intuitive ways, lowering traditional barriers that have often kept discoveries out of the hands of the broader organization.
We're spearheading this trend with Gemini in Looker, which builds on Looker's history as a cloud-first BI tool underpinned by a semantic layer that aligns data, and changes how users interact with it: intelligent BI powered by Google's latest AI models. The convergence of AI and BI stands to democratize data insights across organizations, moving beyond traditional methods to make data exploration more intuitive and accessible.
Gemini in Looker lowers technical barriers to accessing information, enhancing collaboration, and accelerating the process of turning raw data into actionable insights. As we announced at Google Cloud Next 25, we are expanding access to Gemini in Looker, making it now available to all Looker platform users. In this post, we discuss its key features, underlying architecture, and its transformative potential for both data analysts and business users.
Using AI to enhance productivity and efficiency
We designed Gemini in Looker with a clear objective: to improve productivity for analysts and business users with AI. Gemini in Looker makes it easier to prepare data and semantic models for BI, and simplifies building dashboard visualizations and reports. It also makes business users more efficient by improving their data literacy and fluency, helping them tell data stories in their presentations, and letting them use natural language to go beyond the dashboard and get answers to their questions. The result: analysts can do their jobs faster, and business users can tell data stories and get answers on their own.
Gemini in Looker does this through a suite of gen-AI-powered capabilities that make analytics tasks and workflows easier:
Looker Conversational Analytics allows users to ask questions about their data in natural language, gaining instant, highly visual answers powered by AI and grounded in Looker’s semantic model. Data exploration is now as simple as chatting with your team’s data expert.
Talk to your data the same way you talk to your data analyst, only faster.
Automatic Slide Generation exports Looker reports to Google Slides, as well as AI-generated summaries of charts and their key insights, to automate creating presentations. With Automatic Slide Generation, presentations stay current and relevant, as the slides are directly connected to the underlying reports, so that the data they present is always up-to-date.
Rapidly transform your reports into live presentations you can share.
Formula Assistant simplifies the creation of calculated fields for ad-hoc analysis by allowing analysts to describe the desired calculation in natural language. The formula is automatically generated using AI, saving time and effort for analysts and report builders.
LookML Assistant simplifies LookML code creation by letting users describe what they are looking to build in natural language and automatically creating the corresponding LookML measures and dimensions. This helps streamline the process of creating and maintaining governed data.
Advanced Visualization Assistant creates customized data visualizations that users describe with natural language, while Gemini in Looker generates the necessary JSON configurations.
The semantic layer: The foundation of AI accuracy
A critical component of Looker's AI architecture is the LookML semantic modeling layer. Working in conjunction with LLMs like Gemini, it provides the necessary context for the LLM to comprehend the data, and it helps ensure centralized metric definitions, preventing inconsistencies that can derail AI models. Without a semantic layer, AI answers may be inaccurate, leading to unreliable results, lack of adoption, and wasted effort. Looker's semantic model enables data governance integration, maintaining compliance and trust with existing controls, and evolves with your business, iteratively updating data sets and measures so that AI answers stay accurate. According to our own internal tests, Looker's semantic layer reduces data errors in gen AI natural language queries by as much as two-thirds.
How Google protects your data and privacy
You can use Gemini in Looker knowing that your data is protected. Gemini prioritizes data privacy, and does not store customer prompts and outputs without permission. Critically, customer data, including prompts and generated output, is never used to train Google’s generative AI models.
Looker’s agentic AI architecture powers intelligent BI
Announced at Next 25, the Looker Conversational Analytics API serves as the agentic backend for Looker AI. It answers analytical questions using a reasoning agent equipped with multiple tools, and it uses conversation history to answer multi-turn questions and enable more efficient Looker queries, including the ability to open them in the Explore UI.
Looker’s AI architecture is designed for accuracy and quality, taking a multi-pronged approach to gen AI quality:
Agentic reasoning
A semantic layer foundation
A dynamic knowledge graph that provides context for Retrieval Augmented Generation (RAG)
Fine-tuned models for SQL and Python generation
This robust architecture enables Looker to move beyond simply answering “What?” questions to addressing more complex queries like “How does this compare?” “Why?” “What will happen?” and ultimately, “What should we do?”
Looker’s AI and BI roadmap
With Looker, we’re committed to converging AI and BI, and are working on a number of new offerings including:
Code Interpreter for Conversational Analytics makes advanced analytics easy, enabling business users to perform complex tasks like forecasting and anomaly detection using natural language, without needing in-depth Python expertise. You can learn more about this new capability and sign up here for the Preview.
Centralize and share your Looker agents with Agentspace, which offers centralized access, faster deployment, enhanced team collaboration, and secure governance.
Automated semantic model generation with Gemini helps democratize LookML creation, boost developer productivity, and unlock data insights with multi-modal inputs. Gemini leverages diverse input types like natural language descriptions, SQL queries, and database schemas.
Embracing BI’s AI-powered future
Gemini in Looker is a significant milestone in the AI/BI revolution. By integrating the power of Google’s Gemini models with Looker’s robust data modeling and analytics capabilities, organizations can empower their analysts, enhance the productivity of their business users, and unlock deeper, more actionable insights from their data. Gemini in Looker is transforming how we understand and leverage data to make smarter, more informed decisions. The journey from asking “What?” to confidently determining “What next?” is now within reach, powered by Gemini in Looker. Learn more at https://cloud.google.com/looker, or click here to learn more about Gemini in Looker and how to enable it for your Looker deployment. You can also choose to enable Trusted Tester features to gain access to early features in development.
Amazon Web Services (AWS) announces the availability of Amazon EC2 I7ie instances in the AWS Europe (Ireland) region. Designed for large storage I/O intensive workloads, these new instances are powered by 5th generation Intel Xeon Scalable processors with an all-core turbo frequency of 3.2 GHz, offering up to 40% better compute performance and 20% better price performance over existing I3en instances.
I7ie instances offer up to 120TB local NVMe storage density—the highest available in the cloud for storage optimized instances—and deliver up to twice as many vCPUs and memory compared to prior generation instances. Powered by 3rd generation AWS Nitro SSDs, these instances achieve up to 65% better real-time storage performance, up to 50% lower storage I/O latency, and 65% lower storage I/O latency variability compared to I3en instances. Additionally, the 16KB torn write prevention feature enables customers to eliminate performance bottlenecks for database workloads.
I7ie instances are high-density storage-optimized instances designed for workloads that demand rapid local storage with high random read/write performance and consistently low latency when accessing large data sets. These versatile instances are offered in eleven different sizes, including two metal sizes, providing flexibility to match customers' computational needs. They deliver up to 100 Gbps of network bandwidth and 60 Gbps of dedicated bandwidth to Amazon Elastic Block Store (EBS), ensuring fast and efficient data transfer for the most demanding applications.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7i-flex instances powered by custom 4th Gen Intel Xeon Scalable processors (code-named Sapphire Rapids) are available in AWS Asia Pacific (Melbourne) Region. These custom processors, available only on AWS, offer up to 15% better performance over comparable x86-based Intel processors utilized by other cloud providers.
M7i-flex instances are the easiest way for you to get price-performance benefits for a majority of general-purpose workloads. They deliver up to 19% better price-performance compared to M6i instances. M7i-flex instances offer the most common sizes, from large to 16xlarge, and are a great first choice for applications that don't fully utilize all compute resources, such as web and application servers, virtual desktops, batch processing, and microservices. In addition, these instances support the new Intel Advanced Matrix Extensions (AMX), which accelerate matrix multiplication operations for applications such as CPU-based ML. For workloads that need larger instance sizes (up to 192 vCPUs and 768 GiB memory) or continuous high CPU usage, you can leverage M7i instances.
To learn more, visit Amazon EC2 M7i-flex instance page.
We're at an inflection point right now, where every industry and entire societies are witnessing sweeping change, with AI as the driving force. This isn't just about incremental improvements; it's about total transformation. The public sector is already experiencing sweeping change with the introduction of AI, and that pace will only intensify. This is the promise of AI, and it's here and now. At our recent Google Cloud Next '25, we showcased our latest innovations and reinforced our commitment to bringing the latest and best technologies to help public sector agencies meet their missions.
Key public sector announcements at Next
It was an exciting week at Next ‘25 with hundreds of product and customer announcements from Google Cloud. Here are key AI, security, and productivity announcements that can help the public sector deliver improved services, enhance decision-making and operate with greater efficiency.
Advancements in Google Distributed Cloud that let customers bring Gemini models on premises. This complements our GDC air-gapped product, which is now authorized for U.S. Government Secret and Top Secret levels and on which Gemini is available, providing the highest levels of security and compliance. Public sector agencies now have greater flexibility in how and where they access the latest Google AI innovations.
Support for a full suite of generative media models and Gemini 2.5 – Our most intelligent model yet, Gemini 2.5 is designed for the agentic era and now available on the Vertex AI platform. This builds on our recent announcement that Vertex AI Search and Generative AI (with Gemini) achieved FedRAMP High authorization, providing agencies with a secure platform and the latest AI innovations and capabilities.
Simplifying security with the launch of Google Unified Security – We are offering customers a security solution powered by AI that brings together our best-in-class security products for threat intelligence, security operations, cloud security, and secure enterprise browsing, along with Mandiant expertise, to provide a unified view and improved threat detection across complex infrastructures.
Transforming agency productivity and unlocking significant savings – We are offering Google Workspace, our FedRAMP High authorized communication and collaboration platform, at a significant discount of 71% off for U.S. federal government agencies. This offering, in combination with Gemini in Workspace being authorized at the FedRAMP High level, gives U.S. government workers unprecedented access to cutting-edge AI services.
Helping customers meet their mission
All of this incredible technology – and more – came to life on stage and across the show floor at our Google Public Sector Hub, where we showcased our solutions for security, defense, transportation, productivity and automation, education, citizen services, health and human services, and Google Distributed Cloud (GDC). In case you missed our live demos on Medicaid redetermination, unemployment insurance claims, transportation coordination, and research grant sourcing, contact us to schedule a virtual demo or discuss a pilot. To get hands-on with the technology, register for an upcoming Google Cloud Days training for the public sector here.
We are proud to work with customers across the public sector as they apply the latest Google innovations and technologies to achieve real mission-value impact. Ai2 announced a partnership with Google Cloud to make its portfolio of open AI models available in Vertex AI Model Garden. The collaboration will help set a new standard for openness, pairing Google Cloud's infrastructure resources and AI development platform with Ai2's open models to advance AI research and offer enterprise-quality deployment for the public sector. This builds on our announcement that Ai2 and Google Cloud will commit $20M to advance AI-powered research for the Cancer AI Alliance. You can catch the highlights from my conversation at Next with Ali Farhadi, CEO of Ai2, here.
CEO perspectives: A new era of AI-powered research and innovation
All of this incredible innovation with our customers is further enabled by our ecosystem of partners who help us scale our impact across the public sector. At Google Cloud Next, Accenture Federal Services and Google Public Sector announced the launch of a joint Managed Extended Detection and Response (MxDR) solution. The new MxDR for government solution integrates Google Security Operations (SecOps) platform with Accenture Federal’s deep federal cybersecurity expertise. This solution uses security-specific generative artificial intelligence (Gen AI) to significantly enhance threat detection and response, and the overall security posture for federal agencies.
Lastly, Lockheed Martin and Google Public Sector also announced a collaboration to advance generative AI for national security. Integrating Google’s advanced generative artificial intelligence into Lockheed Martin’s AI Factory ecosystem will enhance Lockheed Martin’s ability to train, deploy, and sustain high-performance AI models and accelerate AI-driven capabilities in critical national security, aerospace, and scientific applications.
A new era of innovation and growth
AI presents a unique opportunity to enter a new era of innovation and economic growth, enabling the public sector to get more out of limited resources to improve public services and infrastructure, make public systems more secure, and better meet the needs of their constituents. Harnessing the power of AI can help governments become agile and more secure, and serve citizens better. At Google Public Sector, we’re passionate about applying the latest cloud, AI and security innovations to help you meet your mission.
Subscribe to our Google Public Sector Newsletter to stay informed and stay ahead with the latest updates, announcements, events and more.
Amazon Q Developer in the AWS Management Console and Amazon Q Developer in the IDE are now generally available in the Europe (Frankfurt) Region.
Pro tier customers can now use and configure Amazon Q Developer in the AWS Management Console and Amazon Q Developer in the IDE to store data in the Europe (Frankfurt) Region and perform inference in European Union (EU) Regions, giving them more choice over where their data resides and transits. Amazon Q Developer administrators can configure their user settings so that data is stored in the Europe (Frankfurt) Region and inference is performed in EU geographies using cross-region inference (CRIS) to reduce latency and optimize availability. If you contact AWS Support, your data will be processed in the US East (N. Virginia) Region.
Amazon Q Developer is generally available, and you can use it in the following AWS Regions: US East (N. Virginia) and Europe (Frankfurt).
Today, Amazon Simple Email Service (SES) launched support for logging email sending events through AWS CloudTrail. Customers can maintain a record of email send actions performed using the SES APIs, including actions taken by a user, role, or an AWS service in SES.
Previously, customers could use SES event destinations to route sending event notifications to custom data stores they created and managed themselves. This required custom solutions for data storage and indexing, with associated development and operational oversight costs. Now, customers can configure event logging to AWS CloudTrail without any custom solution development, and can search for events, view them, and download lists of events for processing in their private workflows. This gives customers a turnkey solution for event history management.
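As a rough sketch, SES data events can be enabled on an existing trail with an advanced event selector. The resources.type value below is a placeholder; look up the exact string in the SES or CloudTrail documentation.

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Sketch: enable SES sending data events on an existing trail.
cloudtrail.put_event_selectors(
    TrailName="my-org-trail",
    AdvancedEventSelectors=[{
        "Name": "Log SES email sending events",
        "FieldSelectors": [
            {"Field": "eventCategory", "Equals": ["Data"]},
            # Placeholder resource type; confirm the exact value in the docs.
            {"Field": "resources.type", "Equals": ["AWS::SES::EmailIdentity"]},
        ],
    }],
)
```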
SES supports AWS CloudTrail data events for sending events in all AWS Regions where SES is available.
Today, Amazon Q Business is launching a feature to reduce hallucinations in chat responses. Hallucinations are confident responses made by generative AI applications that are not justified by their underlying data. The new feature enables customers to mitigate hallucinations in real time during chat conversations.
The Large Language Models (LLMs) underlying generative AI applications have reduced the extent of hallucination in their responses, but these models can still hallucinate. Hallucination mitigation is therefore needed to generate reliable and trustworthy responses. The Q Business hallucination mitigation feature helps ensure more accurate retrieval augmented generation (RAG) responses from data connected to the application. This data could come either from connected data sources or from files uploaded during chat. During chat, Q Business evaluates a response for hallucinations. If a hallucination is detected with high confidence, it corrects the inconsistencies in its response in real time and generates a new, edited message.
The feature for Amazon Q Business is available in all regions where Q Business is available. Customers can opt into using this feature by enabling it through API or through the Amazon Q console. For more details, refer to the documentation. For more information about Amazon Q Business and its features, please visit the Amazon Q product page.
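For illustration, opting in through the API might look like the following boto3 sketch; the hallucination-reduction field names are assumptions based on this launch, so confirm them in the UpdateChatControlsConfiguration reference, and the application ID is a placeholder.

```python
import boto3

qbusiness = boto3.client("qbusiness")

# Sketch: opt an application into hallucination mitigation.
qbusiness.update_chat_controls_configuration(
    applicationId="a1b2c3d4-5678-90ab-cdef-EXAMPLE11111",
    hallucinationReductionConfiguration={      # assumed field name
        "hallucinationReductionControl": "ENABLED",
    },
)
```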
AWS Lambda@Edge now supports AWS Lambda’s advanced logging controls to improve how function logs are captured, processed, and consumed at the edge. This enhancement provides you with more control over your logging data, making it easier to monitor application behavior and quickly resolve issues.
The new advanced logging controls for Lambda@Edge give you three flexible ways to manage and analyze your logs. New JSON structured logs make it easier to search, filter, and analyze large volumes of log entries without using custom logging libraries. Log level granularity controls can switch log levels instantly, allowing you to filter for specific types of logs like errors or debug information when investigating issues. Custom CloudWatch log group selection lets you choose which Amazon CloudWatch log group Lambda@Edge sends logs to, making it easier to aggregate and manage logs at scale.
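For example, all three controls can be set on a function's LoggingConfig with a single boto3 call; the function name and log group below are placeholders.

```python
import boto3

# Lambda@Edge functions are managed in US East (N. Virginia).
lambda_client = boto3.client("lambda", region_name="us-east-1")

# Emit JSON-structured logs, keep application logs at INFO and system logs
# at WARN, and send everything to a custom log group.
lambda_client.update_function_configuration(
    FunctionName="edge-request-handler",
    LoggingConfig={
        "LogFormat": "JSON",
        "ApplicationLogLevel": "INFO",
        "SystemLogLevel": "WARN",
        "LogGroup": "/aws/edge/request-handler",
    },
)
```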
To get started, you can specify advanced logging controls for your Lambda functions using Lambda APIs, Lambda console, AWS CLI, AWS Serverless Application Model (SAM), and AWS CloudFormation. To learn more, visit the Lambda Developer Guide, and the CloudFront Developer Guide.