Amazon Keyspaces (for Apache Cassandra) is a scalable, serverless, highly available, and fully managed Apache Cassandra-compatible database service that offers 99.999% availability.
Today, Amazon Keyspaces added support for frozen collections in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. With support for frozen collections, the primary keys in your tables can contain collections, allowing you to index your tables on more complex and richer data types. Additionally, frozen collections can be nested inside other collections, letting you build data models that are more efficient and that more closely mirror real-world data hierarchies. The AWS console extends the native Cassandra experience by giving you the ability to intuitively create and view nested collections that are several levels deep.
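As a minimal sketch of what this enables, the following Python snippet uses the open-source cassandra-driver to create a table whose partition key is a frozen set and whose values contain a nested collection. The keyspace, table, and column names are illustrative, and a real Amazon Keyspaces connection would use TLS on port 9142 with the SigV4 authentication plugin rather than the plain contact point shown here:

```python
from cassandra.cluster import Cluster

# Plain local Cassandra contact point, to keep the sketch short; swap in
# your Keyspaces endpoint, TLS context, and SigV4 auth for real use.
session = Cluster(['127.0.0.1']).connect()

# A frozen collection is serialized as a single value, which is why it can
# appear in a primary key and be nested inside another collection.
session.execute("""
    CREATE TABLE IF NOT EXISTS demo_ks.devices (
        site_tags frozen<set<text>>,           -- collection in the partition key
        device_id text,
        readings map<text, frozen<list<int>>>, -- nested collection
        PRIMARY KEY (site_tags, device_id)
    )
""")

session.execute(
    "INSERT INTO demo_ks.devices (site_tags, device_id, readings) VALUES (%s, %s, %s)",
    ({'us-east', 'lab'}, 'sensor-42', {'temp': [21, 22, 23]}),
)
```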
Support for frozen collections is available in all commercial AWS Regions and the AWS GovCloud (US) Regions where AWS offers Amazon Keyspaces. If you’re new to Amazon Keyspaces, the getting started guide shows you how to provision a keyspace and explore the query and scaling capabilities of Amazon Keyspaces.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) R8g instances are available in the AWS Asia Pacific (Tokyo) Region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 R8g instances are ideal for memory-intensive workloads such as databases, in-memory caches, and real-time big data analytics. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.
AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. AWS Graviton4-based R8g instances offer larger instance sizes with up to 3x more vCPU (up to 48xlarge) and memory (up to 1.5TB) than Graviton3-based R7g instances. These instances are up to 30% faster for web applications, 40% faster for databases, and 45% faster for large Java applications compared to AWS Graviton3-based R7g instances. R8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to Amazon Elastic Block Store (Amazon EBS).
Amazon Keyspaces (for Apache Cassandra) is a scalable, serverless, highly available, and fully managed Apache Cassandra-compatible database service that offers 99.999% availability.
Today, Amazon Keyspaces added support for Cassandra’s User Defined Types (UDTs) in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions. With support for UDTs, you can bring the custom data types defined in your existing Cassandra workloads to Keyspaces without schema modifications. With this launch, you can use UDTs in the primary key of your tables, allowing you to index your data on more complex and richer data types. Additionally, UDTs enable you to create data models that are more efficient and closer to the data hierarchies that exist in real-world data. The AWS console enhances the native Cassandra experience by allowing you to easily create and visualize nested UDTs at multiple levels.
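For a flavor of the feature, here is a short Python sketch (again via the open-source cassandra-driver, with the same connection caveats as above; type and table names are made up for the example) that defines a UDT, nests it inside a second UDT, and uses the result in a table’s primary key:

```python
from cassandra.cluster import Cluster

session = Cluster(['127.0.0.1']).connect()  # see the connection caveats above

# Define a UDT, nest it inside another UDT, then use it in a primary key.
session.execute(
    "CREATE TYPE IF NOT EXISTS demo_ks.address (street text, city text, zip text)"
)
session.execute(
    "CREATE TYPE IF NOT EXISTS demo_ks.customer (name text, home frozen<address>)"
)
session.execute("""
    CREATE TABLE IF NOT EXISTS demo_ks.orders (
        ship_to frozen<customer>,  -- UDT used in the primary key
        order_id text,
        PRIMARY KEY (ship_to, order_id)
    )
""")
```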
Support for UDTs is available in all commercial AWS Regions and the AWS GovCloud (US) Regions where AWS offers Amazon Keyspaces. If you’re new to Amazon Keyspaces, the getting started guide shows you how to provision a keyspace and explore the query and scaling capabilities of Amazon Keyspaces.
Starting today, AWS Network Firewall is available in the AWS Asia Pacific (Malaysia) Region, enabling customers to deploy essential network protections for all their Amazon Virtual Private Clouds (VPCs).
AWS Network Firewall is a managed firewall service that is easy to deploy. The service automatically scales with network traffic volume to provide high-availability protections without the need to set up and maintain the underlying infrastructure. It is integrated with AWS Firewall Manager to provide you with central visibility and control over your firewall policies across multiple AWS accounts.
To see which Regions AWS Network Firewall is available in, visit the AWS Region Table. For more information, please see the AWS Network Firewall product page and the service documentation.
’Tis the season for learning new skills! Get ready for 12 Days of Learning, a festive digital advent calendar packed with courses, hands-on labs, videos, and community opportunities—all designed to boost your generative AI expertise. Discover a new learning resource on Google Cloud’s social channels every day for twelve days this December.
Before you start: Get no-cost access to generative AI courses and labs
Join the Innovators community to activate 35 monthly learning credits in Google Cloud Skills Boost at no cost. Use these credits to access courses and labs throughout the month of December—and beyond!
Ready to get started? Review all of the resources below.
Get festive with generative AI foundations
Learn how to use gen AI in your day-to-day work. These resources are designed for developers looking to gain foundational knowledge in gen AI.
A Developer’s Guide to LLMs: In this 10-minute video, explore the exciting world of large language models (LLMs). Discover different AI model options, analyze pricing structures, and delve into essential features.
Responsible AI: Fairness & Bias: This course introduces the concepts of responsible AI and shares practical methods to help you implement best practices using Google Cloud products and open source tools.
Gemini for end-to-end SDLC: This course explores how Google Cloud’s Gemini AI can assist in all stages of the software development lifecycle, from building and debugging web applications to testing and data querying. The course ends with a hands-on lab where you can build practical experience with Gemini.
Responsible AI for Developers: Interpretability & Transparency: This course introduces AI interpretability and transparency concepts. Learn how to train a classification model on image data and deploy it to Vertex AI to serve predictions with explanations.
Introduction to Security in the World of AI: This course equips security and data protection leaders with strategies to securely manage AI within their organizations. Bring these concepts to life with real-world scenarios from four different industries.
Cozy up with gen AI use cases
Launch these courses and labs to get more in-depth, hands-on experience with generative AI, from working with Gemini models to building agents and applications.
Build Generative AI Agents with Vertex AI and Flutter: Learn how to develop an app using Flutter and then integrate the app with Gemini. Then use Vertex AI Agent Builder to build and manage AI agents and applications.
Boost productivity with Gemini in BigQuery: This course provides an overview of features to assist in the data-to-AI workflow, including data exploration and preparation, code generation, workflow discovery, and visualization.
Work with Gemini models in BigQuery: Through a practical use case involving customer relationship management, learn how to solve a real-world business problem with Gemini models. Plus, receive step-by-step guidance through coding solutions using SQL queries and Python notebooks.
Get a jump-start on your New Year’s resolutions with AI Skills Quest
Get an early start on your learning goals by signing up for AI Skills Quest, a monthly learning challenge that puts gen AI on your resume with verifiable skill badges. When you sign up, choose your path based on your level of knowledge:
Beginner/Intermediate cohort: Learn fundamental AI concepts, prompt design, and Gemini app development techniques in Vertex AI, plus other Gemini integrations in various technical workflows.
Advanced cohort: Already know the basics, but want to add breadth and depth to your AI skills? Sign up for the Advanced path to learn advanced AI concepts like RAG and MLOps.
Ready to ring in the new year with new skills? Find more generative AI learning content on Google Cloud Skills Boost.
Amazon MQ now supports AWS PrivateLink (interface VPC endpoints), so you can connect directly to the Amazon MQ API from your virtual private cloud (VPC) instead of connecting over the internet.
When you use AWS PrivateLink, communication between your VPC and the Amazon MQ API is conducted entirely within the AWS network, providing an optimized secure pathway for your data. An AWS PrivateLink endpoint connects your VPC directly to the Amazon MQ API. The instances in your VPC don’t need public IP addresses to communicate with the Amazon MQ API. To use Amazon MQ through your VPC, you can connect from an instance that is inside your VPC, or connect your private network to your VPC by using an AWS VPN option or AWS Direct Connect. You can create an AWS PrivateLink endpoint to connect to Amazon MQ using the AWS Management Console or AWS Command Line Interface (AWS CLI) commands.
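For example, a minimal boto3 sketch along these lines creates the interface endpoint; the service name follows the usual com.amazonaws.&lt;region&gt;.&lt;service&gt; pattern, and all resource IDs are placeholders:

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Create an interface VPC endpoint for the Amazon MQ API.
response = ec2.create_vpc_endpoint(
    VpcEndpointType='Interface',
    VpcId='vpc-0123456789abcdef0',             # placeholder VPC
    ServiceName='com.amazonaws.us-east-1.mq',  # Amazon MQ API endpoint service
    SubnetIds=['subnet-0123456789abcdef0'],    # placeholder subnet
    SecurityGroupIds=['sg-0123456789abcdef0'], # placeholder security group
    PrivateDnsEnabled=True,  # resolve the MQ API hostname to private IPs
)
print(response['VpcEndpoint']['VpcEndpointId'])
```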
Amazon Simple Email Service (SES) announces the availability of Deterministic Easy DKIM (DEED), a new form of global identity that simplifies DomainKeys Identified Mail (DKIM) management for SES first-party sender customers and independent software vendors (ISVs). DEED expands the existing Easy DKIM solution from SES to work across all commercial AWS Regions, instead of being limited to just one. While Easy DKIM required domain name system (DNS) lookups to resolve in the Region where the identity was verified, DEED lets customers use the same identity across multiple Regions without making any DNS setup changes, reducing the risk of manual DNS management errors.
At launch, the main use case for DEED is for large customers with multinational operations and a need to have shared access to core domain identities for SES sending from multiple worldwide subsidiaries. ISVs also need to be able to operate smoothly on behalf of their customers, including when moving activity into new Regions. DEED allows them to make those changes without requiring the primary domain owner to make DNS changes themselves.
SES DEED is available across all commercial AWS Regions where SES sending is already available. A new blog post is available here to discuss the feature in greater detail. Click here for more information about Deterministic Easy DKIM and to begin the simple, guided onboarding process for initial setup.
Today, AWS announces three new updates to AWS IoT Core for LoRaWAN: IPv6 support, enhanced Firmware Update Over-The-Air (FUOTA) with advanced logging capabilities, and console-based gateway firmware updates, improving fleet management, scalability, reliability, and user experience for Internet of Things (IoT) applications.
AWS IoT Core for LoRaWAN is a fully managed cloud service that makes it easy to connect, manage, and monitor wireless devices that use low-power, long-range wide area network (LoRaWAN) technology. With the new feature updates, developers can now assign IPv6 addresses to their LoRaWAN devices and gateways, which can coexist with IPv4 devices on the same network, simplifying network configuration and management while improving the security posture of their solutions. The FUOTA enhancements enable selective multicast transmission to specific gateways, reducing airtime competition and file-corruption risks, while advanced logging capabilities monitor FUOTA progress, file retrieval, transition status, and errors. The AWS IoT Core console now provides a streamlined interface for managing firmware updates for LoRaWAN gateways using the Configuration and Update Server (CUPS) protocol, simplifying firmware uploads, update scheduling, and progress tracking.
These updates are available in all AWS Regions where AWS IoT Core for LoRaWAN is supported. For detailed guidance and implementation instructions, visit the AWS IoT Core for LoRaWAN Developer Guide.
Amazon Bedrock Guardrails enable you to implement safeguards for your generative AI applications based on your use cases and responsible AI policies. Starting today, we are excited to announce that Amazon Bedrock Guardrails are even more cost-effective, with prices reduced by up to 85%.
Amazon Bedrock Guardrails help you build safe, generative AI applications by filtering undesirable content, redacting personally identifiable information (PII), and enhancing content safety and privacy. You can configure policies for content filters, denied topics, word filters, PII redaction, contextual grounding checks, and Automated Reasoning checks (preview), to tailor safeguards to your specific use cases and responsible AI policies.
We are reducing the prices for content filters by 80% to $0.15 per 1,000 text units and for denied topics by 85% to $0.15 per 1,000 text units. With this price reduction, Bedrock Guardrails will help you accelerate the use of responsible AI across all your generative AI applications.
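To put the new rates in perspective, here is a quick back-of-the-envelope calculation. The pre-reduction prices are inferred from the stated percentage cuts, and the Bedrock pricing page remains the authoritative source for how a text unit is defined:

```python
NEW_RATE = 0.15  # USD per 1,000 text units for both policies

# Old prices implied by the stated cuts: 80% off -> $0.75, 85% off -> $1.00.
old_rates = {'content filters': 0.75, 'denied topics': 1.00}

text_units = 250_000  # example monthly volume
for policy, old in old_rates.items():
    before = text_units / 1000 * old
    after = text_units / 1000 * NEW_RATE
    print(f'{policy}: ${before:,.2f} -> ${after:,.2f} per month')
# content filters: $187.50 -> $37.50; denied topics: $250.00 -> $37.50
```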
These pricing changes took effect on December 1, 2024, in all AWS Regions where Amazon Bedrock Guardrails is supported. To learn more about Amazon Bedrock Guardrails, see the product page and the technical documentation.
2024 has been a landmark year for Google Earth Engine, marked by significant advancements in platform management, cloud integration, and core functionality. With increased interoperability between Google Cloud tools and services, and Earth Engine, we’ve unlocked powerful new workflows and use cases for our users. Here’s a roundup of this year’s top Earth Engine launches, many of which were highlighted in our Geo for Good 2024 summit.
Earlier this year, we launched the new Earth Engine Overview page in the Cloud Console, serving as a centralized hub for Earth Engine resources, allowing you to manage Earth Engine from the same console used to manage and monitor other Cloud services.
In this console, we also introduced a new Tasks page, allowing you to view and monitor Earth Engine export and import tasks alongside usage management and billing. The Tasks page provides a useful set of fields for each task, including state, runtime, and priority. Task cancellation is also easier than ever with single or bulk task cancellation in this new interface.
As we deepen Earth Engine’s interoperability across Google Cloud, we’ll be adding more information and controls to the Cloud Console so that you can further centralize the management of Earth Engine alongside other services.
Integrations: deepening cloud interoperability
Earth Engine users can integrate with a number of cloud services and tools to enable advanced solutions requiring custom machine learning and robust data analytics. This year, we launched a set of features that improved existing interoperability, making it easier to both enable and deploy these solutions.
Vertex AI integration
Using Earth Engine with Vertex AI enables use cases that require deep learning, such as crop classification. You can host a model in Vertex AI and get predictions from within the Earth Engine Code Editor. This year, we announced a major performance improvement to our Vertex Preview connector, which will give you better reliability and higher throughput than the current Vertex connector.
Earth Engine access
To ensure all Earth Engine users can take advantage of these new integration improvements and management features, we’ve also transitioned all Earth Engine users to Cloud projects. With this change, all Earth Engine users can now leverage the power and flexibility of Google Cloud’s infrastructure, security, and growing ecosystem of tools to drive forward the science, research, and operational decision making required to make the world a better place.
Security: enhancing control
This year we launched Earth Engine support for VPC Service Controls – a key security feature that allows organizations to define a security perimeter around their Google Cloud resources. This new integration, available to customers with professional and premium plans, provides enhanced control over data, and helps prevent unauthorized access and data exfiltration. With VPC-SC, customers can now set granular access policies, restrict data access to authorized networks and users, and monitor and audit data flows, ensuring compliance with internal security policies and external regulations.
Platform: improving performance
Zonal statistics
Computing statistics about regions of an image is a core Earth Engine capability. We recently launched a significant performance improvement to batch zonal statistics exports in Earth Engine. We’ve optimized the way we parallelize zonal statistics exports, such as exports that generate statistics for all regions in a large collection. This means that you will get substantially more concurrent compute power per batch task when you use ReduceRegions().
With this launch, large-scale zonal statistics exports are running several times faster than this time last year, meaning you get your results faster, and that Earth Engine can complete even larger analyses than ever. For example, you can now calculate the average tree canopy coverage of every census tract in the continental United States at 1 meter scale in 7 hours. Learn more about how we sped up large-scale zonal statistics computations in our technical blog post.
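As a sketch of what such an export looks like with the Earth Engine Python client (the asset IDs below are hypothetical):

```python
import ee

ee.Initialize()

# Hypothetical assets: a 1 m tree-canopy image and census-tract polygons.
canopy = ee.Image('projects/my-project/assets/tree_canopy_1m')
tracts = ee.FeatureCollection('projects/my-project/assets/census_tracts')

# One batch task computes the mean canopy value for every tract; the
# optimization described above parallelizes this work across more workers.
stats = canopy.reduceRegions(
    collection=tracts,
    reducer=ee.Reducer.mean(),
    scale=1,  # 1 meter scale, as in the example above
)

task = ee.batch.Export.table.toDrive(
    collection=stats,
    description='canopy_by_tract',
    fileFormat='CSV',
)
task.start()
```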
Python v1
Over the last year, we’ve focused on ease-of-use, reliability, and transparency for Earth Engine Python. The client library has moved into an open-source repository at Google, which means we can sync changes to GitHub immediately, keeping you up-to-date on changes between releases. We are also sharing pre-releases, so you can see and work with candidate releases of the Python library before they come out. The client library is now statically loaded, which makes it easier to build on, with better testing and error messaging. We’ve also continued making progress on improving geemap and integrations like xee.
With all of these changes, we’re excited to announce that the Python Client library is now ‘v1’, representing the maturity of Earth Engine Python. Check out this blog post to read more about these improvements and see how you can take full advantage of Python and integrate it into Google’s Cloud tooling.
COG-backed asset improvements
If you have data stored in Google Cloud Storage (GCS) in Cloud-Optimized GeoTIFF (COG) format, you can easily use it in Earth Engine via Cloud GeoTIFF-backed Earth Engine assets. This improves on the previous experience, which required a single GeoTIFF file in which all bands share the same projection and type.
Now you can create an Earth Engine asset backed by multiple GeoTIFFs, which may have different projections, different resolutions, and different band types, and Earth Engine will take care of these complexities for you. There are also major performance improvements: Cloud GeoTIFF-backed assets now perform similarly to native Earth Engine assets. In addition, if you want to use your GCS COGs elsewhere, such as in open-source pipelines or other tools, the data is stored once and can be used seamlessly across products.
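Creating such an asset is a small ee.data call. This sketch assumes the manifest form documented for COG-backed assets, with placeholder bucket and asset paths; check the Earth Engine docs for the exact fields:

```python
import ee

ee.Initialize()

# Register GeoTIFFs already sitting in GCS as a single Earth Engine asset.
# The files may differ in projection, resolution, and band type.
request = {
    'type': 'IMAGE',
    'gcs_location': {
        'uris': [
            'gs://my-bucket/scene_rgb.tif',
            'gs://my-bucket/scene_thermal.tif',
        ]
    },
}
ee.data.createAsset(request, 'projects/my-project/assets/cog_backed_scene')
```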
Looking forward to 2025
We’re excited to see Earth Engine users leverage more advanced tools, stronger security, and seamless integrations to improve sustainability and climate resilience. In the coming year, we’re looking forward to further deepening cloud interoperability, making it easier to develop actionable insights and inform sustainability decision-making through geospatial data.
Amazon RDS for SQL Server now offers enhanced control over backup and restore operations with new custom parameters. This update allows database administrators to fine-tune their processes, potentially improving efficiency and reducing operation times. The new parameters are available for the rds_backup_database, rds_restore_database, and rds_restore_log stored procedures.
You can now specify the BLOCKSIZE, MAXTRANSFERSIZE, and BUFFERCOUNT parameters for backup and restore operations. These granular controls can help optimize performance based on your specific database characteristics and workload patterns. These customizable parameters are particularly useful when customer backups are incompatible with the default settings used by Amazon RDS for SQL Server. By allowing users to fine-tune these performance-related factors, the feature provides greater flexibility to accommodate unique database requirements and operating environments.
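As an illustrative sketch, a tuned native backup might look like the following, run from Python via pyodbc. The first two arguments are the long-standing ones; the spelling of the new tuning parameters follows the pattern of the existing arguments and should be checked against the RDS documentation:

```python
import pyodbc

# Placeholder endpoint and credentials for an RDS for SQL Server instance.
conn = pyodbc.connect(
    'DRIVER={ODBC Driver 18 for SQL Server};'
    'SERVER=mydb.example.us-east-1.rds.amazonaws.com,1433;'
    'UID=admin;PWD=secret;TrustServerCertificate=yes',
    autocommit=True,
)

# Native backup to S3 with explicit tuning values (illustrative numbers).
conn.execute("""
    exec msdb.dbo.rds_backup_database
        @source_db_name = 'mydb',
        @s3_arn_to_backup_to = 'arn:aws:s3:::my-bucket/mydb.bak',
        @blocksize = 65536,         -- physical block size in bytes
        @maxtransfersize = 4194304, -- bytes per transfer unit
        @buffercount = 50           -- number of I/O buffers
""")
```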
Customers can specify these new parameters in all AWS commercial Regions where Amazon RDS for SQL Server is available, as well as in the AWS GovCloud (US) Regions.
Amazon RDS makes it simple to set up, operate, and scale SQL Server deployments in the cloud. See Amazon RDS for SQL Server Pricing for up-to-date pricing of instances, storage, data transfer and regional availability.
Today, AWS Resource Groups is adding support for an additional 405 resource types in tag-based Resource Groups. Customers can now use Resource Groups to group and manage resources from services such as Amazon Bedrock, Amazon Chime, and Amazon QuickSight.
AWS Resource Groups enables you to model, manage, and automate tasks on large numbers of AWS resources by using tags to logically group your resources. You can create logical collections of resources such as applications, projects, and cost centers, and manage them on dimensions such as cost, performance, and compliance in AWS services such as myApplications, AWS Systems Manager, and Amazon CloudWatch.
Resource Groups expanded resource type coverage is available in all AWS Regions, including the AWS GovCloud (US) Regions. You can access AWS Resource Groups through the AWS Management Console, the AWS SDK APIs, and the AWS CLI.
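For instance, a tag-based group can be created with a few lines of boto3 (the group name and tag values are illustrative):

```python
import json

import boto3

rg = boto3.client('resource-groups')

# Group every supported resource type carrying the tag Project=ml-pipeline.
rg.create_group(
    Name='ml-pipeline-resources',
    ResourceQuery={
        'Type': 'TAG_FILTERS_1_0',
        'Query': json.dumps({
            'ResourceTypeFilters': ['AWS::AllSupported'],
            'TagFilters': [{'Key': 'Project', 'Values': ['ml-pipeline']}],
        }),
    },
)
```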
AWS announces the general availability of Amazon Elastic Compute Cloud (Amazon EC2) C6in and M6in instances in the Dallas Local Zone. These instances are powered by 3rd Generation Intel Xeon Scalable processors with an all-core turbo frequency of up to 3.5 GHz. They are x86-based general purpose and compute-optimized instances offering up to 200 Gbps of network bandwidth. The instances are built on the AWS Nitro System, which is a dedicated and lightweight hypervisor that delivers the compute and memory resources of the host hardware to your instances for better overall performance and security. You can take advantage of the higher network bandwidth to scale the performance for a broad range of workloads running on Amazon EC2.
AWS Local Zones are a type of AWS infrastructure deployment that places compute, storage, database, and other select services closer to large population, industry, and IT centers where no AWS Region exists. You can use Local Zones to run applications that require single-digit millisecond latency for use cases such as real-time gaming, hybrid migrations, media and entertainment content creation, live video streaming, engineering simulations, and AR/VR at the edge.
Welcome to the first Cloud CISO Perspectives for December 2024. Today, Nick Godfrey, senior director, Office of the CISO, shares our Forecast report for the coming year, with additional insights from our Office of the CISO colleagues.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
–Phil Venables, VP, TI Security & CISO, Google Cloud
Forecasting 2025: AI threats and AI for defenders, turned up to 11
By Nick Godfrey, senior director, Office of the CISO
While security threats and incidents may seem to pop up out of nowhere, the reality is that very little in cybersecurity happens in a vacuum. Far more common are incidents that build on incidents, threats shifting to find new attack vectors, and defenders simultaneously working to close up those points of ingress while also mitigating evolving risks.
Security and business leaders know that readiness plays a crucial role, and our Cybersecurity Forecast report for 2025 extrapolates from today’s trends the scenarios that we expect to arise in the coming year.
Expect attackers to increasingly use AI for sophisticated phishing, vishing, and social engineering attacks
AI has been driving a rapid evolution of tactics and technology for attackers and defenders. This year saw threat actors rapidly adopt AI-based tools to support all stages of their attacks, and we expect that trend to continue in 2025. Phishing, vishing, SMS-based attacks, and other forms of social engineering will rely even more on AI and large language models (LLMs) to appear convincing.
Cyber-espionage and cybercrime actors will use deepfakes for identity theft, fraud, and bypassing know-your-customer (KYC) security requirements. We also expect to observe more evidence of threat actors experimenting with AI for their information operations, vulnerability research, code development, and reconnaissance.
Generative AI will allow us to bring more practitioners into the profession and focus them on learning both fundamental software development principles and secure software development — at the same time.
AI will continue to bolster defenders, as well. We expect 2025 will usher in an intermediate stage of semi-autonomous security operations, with human awareness and ingenuity supported by AI tools.
Taylor Lehmann, health care and life sciences director, Office of the CISO
As Google CEO Sundar Pichai said recently, “more than a quarter of new code at Google is generated by AI.” Many will probably interpret this to mean that, broadly speaking, companies will be able to save money by hiring fewer software developers because gen AI will do their work for them.
I believe we’re at the beginning of a software renaissance. Gen AI will help create more developers, because the barrier to becoming one has been lowered. We will need even more great developers to review work, coach teams, and improve software quality (because we’ll have more code to review).
Crucially, finding and fixing insecure software will get easier. This added attention to software quality should help us create better, safer, and more secure and resilient products. Accordingly, any person or business who uses those products will benefit. Now, we should all go write our “hello worlds” — and start building.
Anton Chuvakin, security advisor, Office of the CISO
While “AI, secure my environment!” magic will remain elusive, generative AI will find more practical applications. Imagine gen AI that sifts through reports and alerts, summarizing incidents, and recommending response actions to humans. AI can be used to identify subtle patterns and anomalies that humans often miss, and can proactively uncover hidden threats during threat hunting.
Marina Kaganovich, executive trust lead, Office of the CISO
We predicted last year that organizations should get ahead of shadow AI. Today, we’re still seeing news stories about how enterprises are struggling to navigate unauthorized AI use. We believe that establishing robust organizational governance is vital. Proactively asking and answering key questions can also help you experiment with AI securely.
The global stage: threat actors
Geopolitical conflicts in 2025 will continue to fuel cyber-threat activity and create a more complex cybersecurity environment.
The Big Four — China, Iran, North Korea, and Russia — will continue to pursue their geopolitical goals through cyber espionage, disruption, and influence operations. Globally, organizations will face ongoing threats from ransomware, multifaceted extortion, and infostealer malware. There are regional trends across Europe, the Middle East, Japan, Asia, and the Pacific that we expect to drive threat actor behavior, as well.
Toby Scales, advisor, Office of the CISO
Against the backdrop of ongoing AI innovation, including the coming “Agentic AI” transformation, we expect to see threat activity from nation-states increase in breadth — the number of attacks — and depth — the sophistication and variety of attacks.
While we don’t necessarily expect a big attack on infrastructure to land next year, it’s not hard to imagine an explicit retaliation by one of the Big Four against a U.S.-owned media enterprise for coverage, content, or coercion. Expect the weakest links of the media supply chain to be exploited for maximum profit.
Bob Mechler, director, Office of the CISO
Financially motivated cybercrime and increasing geopolitical tensions will continue to fuel a challenging and complicated threat landscape for telecom providers. We believe that the increase in state-sponsored attacks, sabotage, and supply chain vulnerabilities observed during 2024 is likely to continue and possibly increase during 2025.
These attacks will, in turn, drive a strong focus on security fundamentals, resilience, and a critical need for threat intelligence that can help understand, preempt, and defeat a wide range of emerging threats.
Vinod D’Souza, head of manufacturing and industry, Office of the CISO
Ongoing geopolitical tensions and potential state-sponsored attacks will further complicate the threat landscape, requiring manufacturers to be prepared for targeted attacks aimed at disrupting critical infrastructure and stealing intellectual property. The convergence of IT and OT systems for manufacturing, along with increased reliance on interconnected technologies and data-driven processes, will create new vulnerabilities for attackers to exploit.
Ransomware attacks will likely become more targeted and disruptive, potentially focusing on critical production lines and supply chains for maximum impact. Additionally, the rise of AI-powered attacks will pose a significant challenge, as attackers use machine learning to automate attacks, evade detection, and develop more sophisticated malware.
Supply chain attacks will continue to be a major challenge in 2025, too. Attackers will increasingly target smaller suppliers and third-party vendors with weaker security postures to gain indirect access to larger manufacturing networks.
A collaborative approach to cybersecurity is needed, with manufacturers working closely with partners to assess and mitigate risks throughout the supply chain. Cloud technologies can become a solution as secure collaborative cloud platforms and applications could be used by the supplier ecosystem for better security.
Thiébaut Meyer, director, Office of the CISO
Digital sovereignty will gain traction in risk analysis and in the discussions we have with our customers and prospects in Europe, the Middle East, and Asia. This trend is fueled by growing concerns about potential diplomatic tensions with the United States, as “black swan” events are seen as increasingly plausible. As a result, entities in these regions are prioritizing strategies that account for the evolving geopolitical landscape and the potential for disruptions to data access, control, and survivability.
This concern will grow stronger as public entities move to the cloud. For now, in Europe, these entities are still behind in their migrations, mostly due to a lack of maturity. Their migration will be initiated only with the assurance of sovereign safeguards. Therefore, we really need to embed these controls in the core of all our products and offer “sovereign by design” services.
The global stage: empowered defenders
To stay ahead of these threats, and be better prepared to respond to them when they occur, organizations should prioritize a proactive, comprehensive approach to cybersecurity in 2025. Cloud-first solutions, robust identity and access management controls, and continuous threat monitoring and threat intelligence are key tools for defenders. We should also begin to prepare for the post-quantum cryptography era, and ensure ongoing compliance with evolving regulatory requirements.
MK Palmore, public sector director, Office of the CISO
I believe 2025 may bring an avalanche of opportunities for public sector organizations globally to transform how their enterprises make use of AI. They will continue to explore how AI can help them streamline time-intensive processes and shorten their delivery cycles.
We should see public sector organizations begin to expand their comfort levels using cloud platforms built for the challenges of the future. They will likely begin to move away from platforms built using outdated protection models, and platforms where additional services are required to achieve security fundamentals. Security should be inherent in the design of cloud platforms, and Google Cloud’s long-standing commitment to secure by design will ring true through increased and ongoing exposure to the platform and its capabilities.
Alicja Cade, financial services director, Office of the CISO
Effective oversight from boards of directors requires open and joint communication with security, technology, and business leaders, critical evaluation of existing practices, and a focus on measurable progress. By understanding cybersecurity initiatives, boards can ensure their organizations remain resilient and adaptable in the face of ever-evolving cyber threats.
Boards can help prioritize cybersecurity by supporting strategies that:
Modernize technology by using cloud computing, automation, and other advancements to bolster defenses;
Implement robust security controls to establish a strong security foundation, with measures that include multi-factor authentication, Zero Trust segmentation, and threat intelligence; and
Manage AI risks by proactively addressing the unique challenges of AI, including data privacy, algorithmic bias, and potential misuse.
Odun Fadahunsi, executive trust lead, Office of the CISO
The global landscape is witnessing a surge in operational resilience regulations, especially in the financial services sector. Operational resilience with a strong emphasis on cyber-resilience is poised to become a top priority for both boards of directors and regulators in 2025. CISOs, and risk and control leaders, should proactively prepare for this evolving regulatory environment.
Bill Reid, solutions consultant, Office of the CISO
The drive to improve medical device security and quality will continue into 2025 with the announcement of the ARPA-H UPGRADE program awardees and the commencement of this three-year project. This program is expected to push beyond FDA software-as-a-medical-device security requirements toward more automated approaches for assessing and patching whole classes of devices in a healthcare environment.
In general, the healthcare industry will keep building on the emergent theme of cyber-physical resilience, described in the PCAST report. With the continued threat of economically and clinically disruptive ransomware attacks, we expect healthcare to adopt more resilient systems that allow them to better operate core services safely, even when under attack. This will be most acute in the underserved and rural healthcare sector, where staffing is minimal and resources are limited. New cross-industry and public-private collaboration can help strengthen these efforts.
We believe there’ll be a shift away from blaming CISOs and their security organizations for breaches, and a rebuttal of the shame-based culture that has plagued cybersecurity. Cybersecurity events will be recognized as criminal acts and, in healthcare and other critical industries, as attacks on our national security. New ways to address security professional liability will emerge as organizations have challenges attracting and retaining top talent.
Widya Junus, head of Google Cloud Cybersecurity Alliance business operations, Office of the CISO
Based on feedback from our security field teams in 2024, we anticipate strong demand for practical, actionable guidance on cybersecurity and cloud security, including best practices for securing multicloud environments.
Cloud customers will continue to request support to navigate the complexities of managing security across multiple cloud providers, ensuring consistent policies and controls. The demand also includes real-world use cases, common threats and mitigations, and industry-specific security knowledge exchange.
Key security conversations and topics will cover streamlined IAM configuration, security best practices, and seamless implementation of cloud security controls. There will be a strong push for cloud providers to prioritize sharing practical examples and industry-specific security guidance, especially for AI.
For more leadership guidance from Google Cloud experts, please see our CISO Insights hub.
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
Oops! 5 serious gen AI security mistakes to avoid: Pitfalls are inevitable as gen AI becomes more widespread. In highlighting the most common of these mistakes, we hope to help you avoid them. Read more.
How Roche is pioneering the future of healthcare with secure patient data: Desiring increased user-access visibility and control, Roche secured its data by implementing a Zero Trust security model with BeyondCorp Enterprise and Chrome. Read more.
Securing AI: Advancing the national security mission: Artificial intelligence is not just a technological advancement; it’s a national security priority. For AI leaders across agencies in the AI era, we’ve published a new guide with agency roadmaps on how AI can be used to innovate in the public sector. Read more.
Perspectives on Security for the Board, sixth edition: Our final board report for 2024 reflects on our recent conversations with board members, highlighting the critical intersection of cybersecurity and business value in three key areas: resilience against supply chain attacks, how information sharing can bolster security, and understanding value at risk from a cybersecurity perspective. Read more.
Announcing the launch of Vanir: Open-source security patch validation: We are announcing the availability of Vanir, a new open-source security patch validation tool. It gives Android platform developers the power to quickly and efficiently scan their custom platform code for missing security patches and identify applicable available patches. Read more.
Please visit the Google Cloud blog for more security stories published this month.
Threat Intelligence news
Elevating red team assessments with AppSec testing: Incorporating application security expertise enables organizations to better simulate the tactics and techniques of modern adversaries, whether through a comprehensive red team engagement or a targeted external assessment. Read more.
(QR) coding my way out of here: C2 in browser isolation environments: Mandiant researchers demonstrate a novel technique where QR codes are used to achieve command and control in browser isolation environments, and provide recommendations to defend against it. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Google Cloud Security and Mandiant podcasts
Every CTO should be a CSTO: Chris Hoff, chief secure technology officer, Last Pass, discusses with host Anton Chuvakin and guest co-host Seth Rosenblatt the value of the CSTO, what it was like helping LastPass rebuild its technology stack, and how that helped overhaul the company’s corporate culture. Listen here.
How Google does workload security: Michael Czapinski, Google security and reliability enthusiast, talks with Anton about workload security essentials: zero-touch production, security rings, foundational services, and more. Listen here.
Defender’s Advantage: The art of remediation in incident response: Mandiant Consulting lead Jibran Ilyas joins host Luke McNamara to discuss the role of remediation as part of incident response. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in two weeks with more security-related updates from Google Cloud.
Conventional fraud detection methods have a hard time keeping up with increasingly sophisticated criminal tactics. Existing systems often rely on the limited data of individual institutions, and this hinders the detection of intricate schemes that span multiple banks and jurisdictions.
To better combat fraud in cross-border payments, Swift, the global provider of secure financial messaging services, is working with Google Cloud to develop anti-fraud technologies that use advanced AI and federated learning.
In the first half of 2025, Swift plans to roll out a sandbox with synthetic data to prototype learning from historic fraud, working with 12 global financial institutions, with Google Cloud as a strategic partner. This initiative builds on Swift’s existing Payment Controls Service (PCS), and follows a successful pilot with financial institutions across Europe, North America, Asia and the Middle East.
The partnership: Google Cloud and Swift
Google Cloud is collaborating with Swift — along with technology partners including Rhino Health and Capgemini — to develop a secure, privacy-preserving solution for financial institutions to combat fraud. This innovative approach uses federated learning techniques, combined with privacy-enhancing technologies (PETs), to enable collaborative intelligence without compromising proprietary data.
Rhino Health will develop and deliver the core federated learning platform, and Capgemini will manage the implementation and integration of the solution.
“Swift is in a unique position in the financial industry – a trusted and cooperative network that is integral to the functioning of the global economy. As such, we are ideally placed to lead collaborative, industry-wide efforts to fight fraud. This exploration will help the community validate whether federated learning technology can help financial institutions stay one step ahead of bad actors through sharing of fraud labels, and in turn enabling them to provide an enhanced cross-border payments experience to their customers,” said Rachel Levi, head of artificial intelligence, Swift.
“At Google Cloud, we are committed to empowering financial institutions with cutting-edge technology to combat the evolving threat of fraud. Our collaboration with Swift exemplifies the transformative potential of federated learning and confidential computing. By enabling secure collaboration and knowledge sharing without compromising data privacy, we are fostering a safer and more resilient financial ecosystem for everyone,” said Andrea Gallego, Managing Director, global GTM incubation, Google Cloud.
The challenge: Traditional fraud detection is falling behind
The lack of visibility across the payment lifecycle creates vulnerabilities that can be exploited by criminals. A collaborative approach to fraud modeling offers significant advantages over traditional methods in combating financial crimes. To be effective, this approach requires data sharing across institutions, which is often restricted because of privacy concerns, regulatory requirements, and intellectual property considerations.
The solution: Federated learning
Federated learning offers a powerful solution for collaborative AI model training without compromising privacy and confidentiality. Instead of requiring financial institutions to pool their sensitive data, the model training occurs within financial institutions on decentralized data.
Here’s how it works for Swift:
A copy of Swift’s anomaly detection model is sent to each participating bank.
Each financial institution trains this model locally on their own data.
Only the learnings from this training — not the data itself — are transmitted back to a central server for aggregation, managed by Swift.
The central server aggregates these learnings to enhance Swift’s global model.
This approach significantly minimizes data movement and ensures that sensitive information remains within each financial institution’s secure environment.
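To make the aggregation step concrete, here is a toy sketch of federated averaging with NumPy. It is only an illustration of the idea; the production design described below exchanges encrypted updates inside trusted execution environments via a secure aggregation protocol:

```python
import numpy as np

def federated_average(client_updates, client_sizes):
    """Weight each institution's model update by its local dataset size."""
    total = sum(client_sizes)
    return sum(u * (n / total) for u, n in zip(client_updates, client_sizes))

# Three banks train the shared model locally and report only new weights.
updates = [np.array([0.2, 1.1]), np.array([0.4, 0.9]), np.array([0.3, 1.0])]
sizes = [5_000, 20_000, 10_000]  # local training examples per bank

global_weights = federated_average(updates, sizes)
print(global_weights)  # the improved global model; no raw transactions move
```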
Core benefits of the federated learning solution
By using federated learning solutions, financial institutions can achieve substantial benefits, including:
Shared intelligence: Financial institutions work together by sharing information on fraudulent activities, patterns, and trends, which creates a much larger and richer decentralized data pool than any single institution could gather alone.
Enhanced detection: The collaborative global model can identify complex fraud schemes that might go unnoticed by individual institutions, leading to improved detection and prevention.
Reduced false positives: Sharing information helps refine fraud models, leading to more accurate identification of genuine threats and fewer false alarms that disrupt both legitimate activity and the customer experience.
Faster adaptation: The collaborative approach allows for faster adaptation to new fraud trends and criminal tactics. As new threats emerge, the shared knowledge pool helps all participants quickly adjust their models and their fraud prevention tools.
Network effects: The more institutions participate, the more comprehensive the data pool becomes, creating a powerful network effect that strengthens fraud prevention for everyone involved.
For widespread adoption, federated learning must seamlessly integrate with existing financial systems and infrastructure. This allows financial institutions to easily participate and benefit from the collective intelligence without disrupting their operations.
Architecting the global fraud AI solution
The initial scope remains a synthetic data sandbox centered on prototyping learning from historic payments fraud. The platform allows multiple financial institutions to train a robust fraud detection model while preserving the confidentiality of their sensitive transaction data. It uses federated learning and confidential computing techniques, such as Trusted Execution Environments (TEEs), to enable secure, multi-party machine learning without training data movement.
There are several key components to this solution:
Federated server in a TEE: A secure, isolated trusted execution environment (TEE) in which a federated learning (FL) server orchestrates the collaboration of multiple clients by first sending an initial model to the FL clients. The clients perform training on their local datasets, then send the model updates back to the FL server for aggregation into a global model.
Federated client: Executes tasks, performs local computation and learning with local dataset (such as data from an individual financial institution), then submits results back to FL server for secure aggregation.
Bank-specific encrypted data: Each bank holds its own private, encrypted transaction data that includes historic fraud labels. This data remains encrypted throughout the entire process, including computation, ensuring end-to-end data privacy.
Global fraud-based model: A pre-trained anomaly detection model from Swift that serves as the starting point for federated learning.
Secure aggregation: A secure aggregation protocol computes the weighted averages so that the server learns only the combined update derived from participating financial institutions’ historic fraud labels, not the individual contribution of any one institution, thereby preserving the privacy of each participant in the federated learning process.
Global anomaly detection trained model and aggregated weights: The improved anomaly detection model, along with its learned weights, is securely exchanged back to the participating financial institutions. They can then deploy this enhanced model locally for fraud detection monitoring on their own transactions.
We’re seeing more enterprises adopt federated learning to combat global fraud, including global consulting firm Capgemini.
“Payment fraud stands as one of the greatest threats that undermines the integrity and stability of the financial ecosystem, with its impact acutely felt upon some of the most vulnerable segments of our society,” said Sudhir Pai, chief technology and innovation officer, Financial Services, Capgemini.
“This is a global epidemic that demands a collaborative effort to achieve meaningful change. Our application of federated learning is grounded with privacy-by-design principles, leveraging AI to pioneer secure aggregation and anonymization of data which is of primary concern to large financial institutions. The potential to apply our learnings within a singular global trained model across other industries will ensure we break down any siloes and combat fraud at scale,” he said.
“We are proud to support Swift’s program in partnership with Google Cloud and Capgemini,” said Chris Laws, chief operating officer, Rhino. “Fighting financial crime is an excellent example of the value created from the complex multi-party data collaborations enabled by federated computing, as all parties can have confidence in the security and confidentiality of their data.”
Building a safer financial ecosystem, together
This effort to fight fraud collaboratively will help build a safer and more secure financial ecosystem. By harnessing the power of federated learning and adhering to strong principles of data privacy, security, platform interoperability, confidentiality, and scalability, this solution has the potential to redefine how we combat fraud in an age of fragmented, globalized finance. It also demonstrates a commitment to building a more resilient and trustworthy financial world.
A few months back, we kicked off Network Performance Decoded, a series of whitepapers sharing best practices for network performance and benchmarking. Today, we’re dropping the second installment – and this one’s about some of those pesky performance limiters you’re bound to run into. In this blog, we’re giving you the inside scoop on how to tackle these issues head-on.
First up: A Brief Look at Network Performance Limiters
Mbit/s isn’t everything: It’s not just about raw speed. How you package your data (packet size) seriously impacts throughput and CPU usage.
Bigger packets, better results: Larger packets mean less overhead per packet, which translates to better throughput and less strain on your CPU.
Offload for a TCP boost: TCP Segmentation Offload (TSO) and Generic Receive Offload (GRO) let your network interface card do some of the heavy lifting, freeing up your CPU and giving your TCP throughput a nice bump — even with smaller packets.
Watch out for packet-per-second limits: Smaller packets can sometimes hit a bottleneck because of how many packets your system can handle per second.
At a constant bitrate, bigger packets are more efficient: Even with a steady bitrate, larger packets mean fewer packets overall, which leads to less CPU overhead and a more efficient network.
Get a handle on these concepts and you’ll be able to fine-tune your network for better performance and efficiency, no matter the advertised speed.
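For a feel for the numbers behind those bullets, consider how many packets per second a link must sustain at a fixed bitrate (a rough calculation that ignores header overhead):

```python
BITRATE_BPS = 10_000_000_000  # a 10 Gbit/s link

for packet_bytes in (256, 1500, 8896):  # small, Ethernet MTU, jumbo frame
    pps = BITRATE_BPS / (packet_bytes * 8)
    print(f'{packet_bytes:>5}-byte packets -> {pps:,.0f} packets/s')
# 256-byte packets need ~4.9M packets/s; jumbo frames need only ~140K,
# which is why small packets hit per-packet limits long before big ones.
```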
Next: A Brief Look at Round Trip Time
This whitepaper dives into TCP Round Trip Time (RTT) — a key network performance metric. You’ll learn how it’s measured, what can throw it off, and how to use that info to troubleshoot network issues like a pro. We’ll show you how the receiving application’s behavior can mess with RTT measurements, and call out some important nuances to consider. For example, TCP RTT measurements do not include the time TCP may spend resending lost segments, which your applications see as latency. Lastly, we’ll show how you can use tools like netperf (also included in our PerfKit Benchmarker toolkit) to get an end-to-end picture.
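One nuance worth internalizing: what an application observes as a round trip is not the same number as the kernel’s TCP RTT estimate. Here’s a rough sketch of measuring the application-level view (any reachable HTTP host works as the target; example.com is a placeholder):

```python
import socket
import time

HOST = 'example.com'  # placeholder target

with socket.create_connection((HOST, 80)) as s:
    request = (f'HEAD / HTTP/1.1\r\nHost: {HOST}\r\n'
               'Connection: close\r\n\r\n').encode()
    start = time.perf_counter()
    s.sendall(request)
    s.recv(4096)  # wait for the first bytes of the response
    elapsed_ms = (time.perf_counter() - start) * 1000

# This includes server think time and any retransmission delays, which
# kernel TCP RTT samples (e.g., from `ss -i`) deliberately exclude.
print(f'application-observed round trip: {elapsed_ms:.1f} ms')
```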
Finally: A Brief Look at Path MTU Discovery
Last but not least, this whitepaper breaks down Path MTU discovery, a process that helps prevent IP fragmentation. Understanding how networks handle packet sizes can help you optimize your network setup, avoid those frustrating fragmentation issues, and troubleshoot effectively. We’ll even walk you through common problems — like those pesky ICMP blocks leading to large packets being dropped without any notification to the sender — and how to fix them. Plus, you’ll learn the difference between Maximum Transmission Unit (MTU) and Maximum Segment Size (MSS) — knowledge that’ll come in handy when you’re configuring your network and troubleshooting packet size problems.
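The MTU/MSS relationship is simple arithmetic, and keeping it straight saves a lot of head-scratching. A small sketch, with header sizes assuming IPv4 and TCP without options:

```python
def mss_for_mtu(mtu, ip_header=20, tcp_header=20):
    """Largest TCP payload that fits in one unfragmented IPv4 packet."""
    return mtu - ip_header - tcp_header

for mtu in (1500, 1460, 8896):  # standard Ethernet, a smaller VPC MTU, jumbo
    print(f'MTU {mtu} -> MSS {mss_for_mtu(mtu)}')

# If a 1500-byte packet with the Don't Fragment bit set hits a 1400-byte
# path, it is dropped; path MTU discovery only works if the resulting ICMP
# "fragmentation needed" message makes it back to the sender.
```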
Stay tuned!
These resources are part of our mission to create an open, collaborative space for network performance benchmarking and troubleshooting. The examples might be from Google Cloud, but the ideas apply everywhere – regardless of where your workloads may be running. You can find all our whitepapers (past, present, and future) on our webpage. Keep an eye out for more!
It’s been more than two and a half years since we introduced AlloyDB for PostgreSQL, our 100% PostgreSQL-compatible database that offers superior performance, availability, and scale. AlloyDB reimagines PostgreSQL with Google’s cutting-edge technology. It includes a scale-out architecture, built-in analytics, and AI/ML-powered management for a truly modern data experience, and is fully managed so you can focus on your application.
PostgreSQL has long been a favorite among developers for its flexibility, reliability, and robust feature set. AlloyDB brings PostgreSQL to the next level with faster performance, stronger functionality, better migration options, and smarter AI capabilities.
As 2024 comes to a close, it felt like a great time to celebrate with a snazzy AlloyDB overview video and summarize AlloyDB’s key benefits in an infographic. Whether you’re new to the product or have tried it already, take a look to make sure you’re taking advantage of every benefit.
Want to learn more about how AlloyDB can revolutionize your PostgreSQL experience? Download our in-depth AlloyDB e-book and discover the transformative ways AlloyDB is redefining what’s possible. You’ll uncover:
How AlloyDB delivers superior transactional performance at half the cost
Why AlloyDB is the best database service for building gen AI apps
The flexibility of running AlloyDB anywhere, on any platform
How AI-driven development and operations can simplify your database journey
The power of real-time business insights with AlloyDB’s columnar engine
The future of PostgreSQL is here, and it’s built for you. Start building your next great app with a 30-day AlloyDB free trial.
Think about your favorite apps – the ones that deliver instant results from massive amounts of data. They’re likely powered by vector search, the same technology that fuels generative AI.
Vector search is crucial for developers who need to build applications that are lightning-fast, handle massive datasets, and remain cost-effective, even with huge spikes in traffic. But building and deploying this technology can be a real challenge, especially for gen AI applications that demand incredible flexibility, scale, and speed. In a previous blog post, we showed you how to create production-ready AI applications with features like easy filtering, automatic scaling, and seamless updates.
Today, we’ll share how Vertex AI’s vector search is tackling these challenges head-on. We’ll explore real-world performance benchmarks demonstrating incredible speed and scalability – all in a cost-effective way.
aside_block
<ListValue: [StructValue([(‘title’, ‘Start building today and enjoy a $1,000 credit’), (‘body’, <wagtail.rich_text.RichText object at 0x3e80a7f0f4f0>), (‘btn_text’, ”), (‘href’, ”), (‘image’, None)])]>
How does Vertex AI vector search work?
Imagine you own a popular online store: to keep shoppers happy, your search engine needs to instantly sift through millions of products and deliver relevant results, even during peak shopping seasons. Vector search is a technique for finding similar items within massive datasets. It works by converting data, like text or images, into numerical representations called embeddings. These embeddings capture the semantic meaning of the data, allowing for more accurate and relevant search results.
For example, imagine your customers are searching for a “navy blue dress shirt.” A keyword search might miss products labeled “midnight blue button-down,” even though they’re essentially the same. Vector search does a better job of surfacing the right products because it uses embeddings to understand the relationships between words and concepts.
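Under the hood, that "understanding" reduces to distances between embedding vectors. Here's a toy illustration with made-up four-dimensional vectors; a real embedding model would produce hundreds of dimensions:

```python
import numpy as np

# Toy 4-dimensional "embeddings". A real embedding model would produce
# high-dimensional vectors that capture semantics; these are hand-picked
# purely to illustrate the idea.
query = np.array([0.8, 0.1, 0.6, 0.2])  # "navy blue dress shirt"
items = {
    "midnight blue button-down": np.array([0.7, 0.2, 0.6, 0.3]),
    "red cotton t-shirt":        np.array([0.1, 0.9, 0.2, 0.1]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

for name, vec in items.items():
    print(f"{name}: {cosine_similarity(query, vec):.3f}")

# The "midnight blue button-down" scores far higher (~0.99 vs ~0.32) even
# though it shares no keywords with the query; that gap is exactly what
# vector search exploits.
```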
A smooth, crisp, and responsive semantic search experience is a must-have for e-commerce, media, and other consumer-facing web services, and it's only possible with highly performant vector search. See this blog post for the details of the Infinite Nature demo that offers a glimpse into the future of how we'll interact with information.
You can use it for a wide range of applications: semantic search like the e-commerce example above, retrieval-augmented generation (RAG) for generative AI agents, where it grounds responses in your data, or recommendation systems that deliver personalized suggestions based on user preferences.
As Xun Wang, Chief Technology Officer of Bloomreach, recently said, “Bloomreach has made the strategic decision to replace OpenAI with Google Vertex AI Embeddings and Vertex AI vector search. Google’s platform delivers clear advantages in performance, scalability, reliability and cost optimization. We’re confident this move will drive significant benefits and we’re thrilled to embark on this new partnership.”
Real-world impact of Vertex AI’s vector search
Our customers are achieving remarkable results with vector search. Here are four standout ways this technology is helping them build high-performance gen AI apps.
#1: The fastest vector search for highly responsive applications
To meet customer expectations, fast response times are critical across search, recommendation systems, and gen AI applications. Studies have consistently found that faster response times directly contribute to an increase in revenue, conversion, and retention.
Vector search is engineered for incredibly low latency at high quality, while maintaining cost-effectiveness. In our testing, vector search was able to maintain ultra-low latency (9.6ms at P95) and high recall (0.99) while scaling up to 5K queries per second on a dataset of one billion vectors. By achieving such low latencies, Vertex AI vector search ensures that users receive fast, relevant responses, no matter how large the dataset or how many parallel requests hit the system.
As Yuri M. Brovman from eBay wrote in a recent blog post, “[eBay’s vector search] hit a real-time read latency of less than 4ms at 95%, as measured server-side on the Google Cloud dashboard for vector search”.
#2: Massively scalable for all application sizes
Another important consideration for production-ready applications is supporting growing data sizes and user bases.
Vertex AI vector search can scale up to support billions of embeddings and hundreds of thousands of queries per second while maintaining ultra-low latency, and it easily accommodates sudden spurts in demand, making it massively scalable for applications of any size.
#3: Up to 4X more cost effective
Vertex AI vector search not only maintains performance at scale, but is also up to 4x more cost-effective than competing solutions, especially for high-performance applications. With Vertex AI vector search's ANN index, you need significantly less compute to get fast, relevant results at scale.
| Dataset | QPS | Recall | Latency (P95) |
| --- | --- | --- | --- |
| Glove 100 / 100 dim | 44,876 | 0.96 | 3 ms |
| OpenAI 5M / 1536 dim | 2,981 | 0.96 | 9 ms |
| Cohere 10M / 768 dim | 3,144 | 0.96 | 7 ms |
| LAION 100M / 768 dim | 2,997 | 0.96 | 9 ms |
| BigANN 10M / 128 dim | 33,921 | 0.97 | 3.5 ms |
| BigANN 100M / 128 dim | 9,871 | 0.97 | 7.2 ms |
| BigANN 1B / 128 dim | 4,967 | 0.99 | 9.6 ms |

Vertex AI vector search's real-world benchmarks on public datasets, using 2 replicas of n2d machines. Latency was measured at the provided QPS; vector search can scale beyond this throughput by adding additional replicas.
#4: It’s highly configurable for all application types
In some scenarios, developers may want to trade off latency for higher recall (or vice versa). For example, an e-commerce website might prioritize speed for quick product suggestions, while a research database might prioritize comprehensive results even if they take slightly longer. Vector search lets you tune these parameters toward higher recall or lower latency to match your business needs.
Additionally, vector search supports auto-scaling: when load on the deployment increases, it scales to maintain performance. In our measurements, vector search maintained consistent latency and high recall as QPS increased from 1K to 5K.
Developers can also increase the number of replicas in order to handle higher throughput, as well as pick different machine types to balance cost and performance. This flexibility makes vector search suitable for a wide range of applications beyond semantic search, including recommendation systems, chatbots, multimodal search, anomaly detection, and image similarity matching.
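To make these knobs concrete, here's a hedged sketch using the google-cloud-aiplatform SDK. The parameter names reflect the SDK at the time of writing, and the project, bucket, and IDs are hypothetical placeholders, so check the current docs before relying on them:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # hypothetical project

# The recall/latency trade-off is set at index build time: searching a larger
# share of leaf nodes raises recall at the cost of latency.
index = aiplatform.MatchingEngineIndex.create_tree_ah_index(
    display_name="products-index",
    contents_delta_uri="gs://my-bucket/embeddings/",  # hypothetical bucket
    dimensions=768,
    approximate_neighbors_count=150,
    leaf_nodes_to_search_percent=10,  # raise for recall, lower for latency
)

# Throughput and cost are set at deploy time via machine type and replica
# counts; autoscaling between min and max replicas absorbs traffic spikes.
endpoint = aiplatform.MatchingEngineIndexEndpoint.create(
    display_name="products-endpoint",
    public_endpoint_enabled=True,
)
endpoint.deploy_index(
    index=index,
    deployed_index_id="products_v1",
    machine_type="n2d-standard-16",
    min_replica_count=2,
    max_replica_count=10,
)
```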
Going further with hybrid search
Dense embedding-based semantic search, while excellent at understanding meaning and context, has a weak point: it cannot find items that the embedding model can't make sense of. Items like product numbers, internal company codenames, or newly coined terms aren't found by semantic search because the embedding model doesn't understand their meanings.
With Vertex AI vector search’s hybrid search, building this type of sophisticated search engine is no longer a daunting task. Developers can easily create a single index that incorporates both dense and sparse embeddings, representing semantic meaning and keyword relevance respectively. This streamlined approach allows for rapid development and deployment of high-performance search applications, fully customized to meet specific business needs.
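Conceptually, hybrid search blends a dense (semantic) score with a sparse (keyword) score. The sketch below illustrates that idea only; it is not the Vertex AI API itself, and the alpha weight is a hypothetical knob (Vertex AI hybrid search performs this fusion inside a single index):

```python
# Conceptual illustration of hybrid scoring, not the Vertex AI API.
def hybrid_score(dense_sim: float, sparse_sim: float, alpha: float = 0.7) -> float:
    """alpha=1.0 is purely semantic; alpha=0.0 is purely keyword-based."""
    return alpha * dense_sim + (1 - alpha) * sparse_sim

# A product number like "SKU-48213" may have near-zero semantic similarity
# but an exact keyword (sparse) match; hybrid scoring still ranks it highly.
print(hybrid_score(dense_sim=0.05, sparse_sim=1.0, alpha=0.5))  # 0.525
```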
As Nicolas Presta, Sr. Engineering Manager at Mercado Libre, wrote, "Most of our successful sales start with a search, so it is important that we give precise results that best match a user's query. These complex searches are getting better with the addition of the items retrieved from vector search, which will ultimately increase our conversion rate. Hybrid search will unlock more opportunities to uplevel our search engine so that we can create the best customer experience while improving our bottom line."
Starting today, you can record thumbnail images in Amazon Interactive Video Service (Amazon IVS) Real-Time Streaming. When thumbnail recording is enabled, Amazon IVS automatically generates images at the interval you configure and stores them in the Amazon S3 bucket you select. Thumbnails can be used for preview images in content discovery or as part of content moderation workflows. There is no additional cost for enabling thumbnail recording, but standard Amazon S3 storage and request costs apply.
Amazon IVS is a managed live streaming solution that is designed to make low-latency or real-time video available to viewers around the world. Video ingest and delivery are available over a managed network of infrastructure optimized for live video. Visit the AWS region table for a full list of AWS Regions where the Amazon IVS console and APIs for control and creation of video streams are available.
Model Evaluation on Amazon Bedrock allows you to evaluate, compare, and select the best foundation models for your use case. Amazon Bedrock offers a choice of using an LLM-as-a-judge, programmatic evaluation, and human evaluation. You can use an LLM-as-a-judge for metrics such as correctness, completeness, and coherence, as well as responsible AI metrics such as answer refusal and harmfulness. Programmatic evaluation offers algorithms for metrics such as accuracy, robustness, and toxicity. Additionally, for those metrics, or for subjective and custom metrics such as friendliness or style, you can set up a human evaluation workflow with a few clicks. Human evaluation leverages your own employees or an AWS-managed team as reviewers. Model evaluation provides built-in curated datasets, or you can bring your own datasets. Now, customers can evaluate models in the Europe (Zurich) Region.
Model Evaluation on Amazon Bedrock is now available in these regions, and evaluation type availability varies by region.
To learn more about Model Evaluation on Amazon Bedrock, see the Amazon Bedrock Evaluations page. To get started, sign in to Amazon Bedrock on the AWS Management Console or use the Amazon Bedrock APIs.
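For instance, here's a hedged boto3 sketch of starting an automated (programmatic) evaluation job. The nested config shape is a best-effort reading of the create_evaluation_job API, and the role ARN, dataset name, model ID choice, and bucket are hypothetical, so verify the field names against the current boto3 documentation:

```python
import boto3

# Hedged sketch: start an automated evaluation job via the bedrock client.
# Treat the exact config shape as an assumption; check current boto3 docs.
bedrock = boto3.client("bedrock", region_name="eu-central-2")  # Europe (Zurich)

response = bedrock.create_evaluation_job(
    jobName="my-eval-job",
    roleArn="arn:aws:iam::123456789012:role/BedrockEvalRole",  # hypothetical role
    evaluationConfig={
        "automated": {
            "datasetMetricConfigs": [
                {
                    "taskType": "QuestionAndAnswer",
                    "dataset": {"name": "Builtin.BoolQ"},  # assumed built-in dataset name
                    "metricNames": ["Builtin.Accuracy", "Builtin.Robustness"],
                }
            ]
        }
    },
    inferenceConfig={
        "models": [
            {"bedrockModel": {"modelIdentifier": "anthropic.claude-3-haiku-20240307-v1:0"}}
        ]
    },
    outputDataConfig={"s3Uri": "s3://my-eval-results/"},  # hypothetical bucket
)
print(response["jobArn"])
```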