At Google, we believe in empowering people and founders to use AI to tackle humanity’s biggest challenges. That’s why we’re supporting the next generation of AI leaders through our Google for Startups Accelerator: AI First programs. We announced the program in January, and today we’re proud to welcome into our accelerator community 16 UK-based startups that are using AI to drive real-world impact.
Out of hundreds of applicants, we’ve carefully selected these 16 high-potential startups to receive 1:1 guidance and support from Google, each demonstrating a unique vision for leveraging AI to address critical challenges and opportunities. This diverse cohort showcases how AI is being applied across sectors — from early cancer detection and climate resilience, to smarter supply chains and creative content generation. By joining the Google for Startups Accelerator: AI First UK program, these startups gain access to technical expertise, mentorship, and a global network to help them scale responsibly and sustainably.
“Google for Startups Accelerator: AI First provides an exceptional opportunity for us to enhance our AI expertise, accelerate the development of our data-driven products, and engage meaningfully with potential investors.” – Denise Williams, Managing Director, Dysplasia Diagnostics
Read more about the selected startups and the founders shaping the future of AI:
Bindbridge (London) is a generative AI platform that discovers and designs molecular glues for targeted protein degradation in plants.
Building Atlas (Edinburgh) uses data and AI to support the decarbonisation of non-domestic buildings by modelling the best retrofit plans for any portfolio size.
Comply Stream (London) helps to streamline financial crime compliance operations for businesses and consumers.
Datawhisper (London) provides safe and compliant AI Agentic solutions tailored for the fintech and payments industry.
Deducta (London) is a data intelligence platform that supports global procurement teams with supply chain insights and efficiencies.
Dysplasia Diagnostics (London) develops AI-based, non-invasive, and affordable solutions for early cancer detection and treatment monitoring.
Flow.bio (London) is an end-to-end cloud platform for running large sequencing pipelines and auto-structuring bio-data for machine learning workflows.
Humble (London) enables non-technical users to build and share AI-powered apps and workflows, allowing them to automate without writing code.
Immersive Fox (London) is an AI studio for creating presenter-led marketing and communication videos directly from text.
Kestrix (London) uses thermal drones and advanced software to map and quantify heat loss from buildings and generate retrofit plans.
Measmerize (Birmingham) provides sizing advice for fashion e-commerce retailers, enabling brands to increase sales and decrease return rates.
PSi (London) uses AI to host large-scale online deliberations, enabling local governments to harness collective intelligence for effective policymaking.
Shareback (London) is an AI platform that allows employees to securely interact with GPT-based assistants trained on company, department, or project-specific data.
Sikoia (London) streamlines customer verification for financial services by consolidating data, automating tasks, and delivering actionable insights.
SmallSpark (Cardiff) enables low power AI at the edge, simplifying the deployment, management, and optimization of ML models on embedded devices.
Source.dev (London) simplifies the software development lifecycle for smart devices, to help accelerate innovation and streamline software updates.
“Through the program, we aim to leverage Google’s expertise and cutting-edge AI infrastructure to supercharge our growth on all fronts.” – Lauren Ladd, Founder, Shareback
These 16 startups reflect the diversity and depth of AI innovation happening across the UK. Each company will receive technical mentorship, strategic guidance, and access to connections from Google, and will continue to receive hands-on support via our alumni network after the program wraps in July.
Congratulations to this latest cohort! To learn more about applying for an upcoming Google for Startups program, visit the program page.
In 2023, the Waze platform engineering team transitioned to Infrastructure as Code (IaC) using Google Cloud’s Config Connector (KCC) — and we haven’t looked back since. We embraced Config Connector, an open-source Kubernetes add-on, to manage Google Cloud resources through Kubernetes. To streamline management, we also leverage Config Controller, a hosted version of Config Connector on Google Kubernetes Engine (GKE), incorporating Policy Controller and Config Sync. This shift has significantly improved our infrastructure management and is shaping our future infrastructure.
The shift to Config Connector
Previously, Waze relied on Terraform to manage resources, particularly during our dual-cloud, VM-based phase. However, maintaining state and ensuring reconciliation proved challenging, leading to inconsistent configurations and increased management overhead.
In 2023, we adopted Config Connector, transforming our Google Cloud infrastructure into Kubernetes Resource Modules (KRMs) within a GKE cluster. This approach addresses the reconciliation issues encountered with Terraform. Config Sync, paired with Config Connector, automates KRM synchronization from source repositories to our live GKE cluster. This managed solution eliminates the need for us to build and maintain custom reconciliation systems.
The shift helped us meet the needs of three key roles within Waze’s infrastructure team:
Infrastructure consumers: Application developers who want to easily deploy infrastructure without worrying about the maintenance and complexity of underlying resources.
Infrastructure owners: Experts in specific resource types (e.g., Spanner, Google Cloud Storage, Load Balancers, etc.), who want to define and standardize best practices in how resources are created across Waze on Google Cloud.
Platform engineers: Engineers who build the system that enables infrastructure owners to codify and define best practices, while also providing a seamless API for infrastructure consumers.
First stop: Config Connector
It may seem circular to define all of our Google Cloud infrastructure as KRMs within a Google Cloud service; however, KRM is actually a better representation for our infrastructure than existing IaC tooling.
Terraform’s reconciliation issues – state drift, version management, out-of-band changes – are a significant pain. Config Connector, through Config Sync, offers out-of-the-box reconciliation, a managed solution we prefer. Both KRM and Terraform offer templating, but KCC’s managed nature aligns with our shift to Google Cloud-native solutions and reduces our maintenance burden.
Infrastructure complexity requires generalization regardless of the tool. We can see this when we look at the Spanner requirements at Waze:
Consistent backups for all Spanner databases
Each Spanner database utilizes a dedicated Cloud Storage bucket and Service Account to automate the execution of DDL jobs.
All IAM policies for Spanner instances, databases, and Cloud Storage buckets are defined in code to ensure consistent and auditable access control.
To define these resources, we evaluated various templating and rendering tools and selected Helm, a robust CNCF package manager for Kubernetes. Its strong open-source community, rich templating capabilities, and native rendering features made it a natural fit. We can now refer to our bundled infrastructure configurations as ‘Charts.’ While KRO, which serves a similar purpose, has since emerged, our selection process predated its availability.
Under the hood
Let’s open the hood and dive into how the system works and how it drives value for Waze.
Waze infrastructure owners generically define Waze-flavored infrastructure in Helm Charts.
Infrastructure consumers use these Charts with simplified inputs to generate infrastructure (demo).
Infrastructure code is stored in repositories, enabling validation and presubmit checks.
Code is uploaded to Artifact Registry, where Config Sync and Config Connector align Google Cloud infrastructure with the code definitions.
This diagram represents a single “data domain”: a collection of bounded services, databases, networks, and data. Many tech organizations today operate several such domains, such as Prod, QA, Staging, and Development.
Approaching our destination
So why does all of this matter? Adopting this approach allowed us to move from Infrastructure as Code to Infrastructure as Software. By treating each Chart as a software component, our infrastructure management goes beyond simple code declaration. Now, versioned Charts and configurations enable us to leverage a rich ecosystem of software practices, including sophisticated release management, automated rollbacks, and granular change tracking.
Here’s where we apply this in practice: our configuration inheritance model minimizes redundancy. Resource Charts inherit settings from Projects, which inherit from Bootstraps. All three are defined as Charts. Consequently, Bootstrap configurations apply to all Projects, and Project configurations apply to all Resources.
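To make the layering concrete, here is a toy sketch (all keys and values are hypothetical) of how the merge semantics behave: each level inherits everything above it and adds its own settings.

```python
# Toy illustration of the inheritance chain described above.
# All keys and values are hypothetical.
bootstrap = {"region": "us-central1", "labels": {"org": "waze"}}

# A Project Chart inherits Bootstrap settings and adds its own.
project = {**bootstrap, "network": "prod-vpc"}

# A Resource Chart inherits Project settings and adds resource-specific ones.
spanner_db = {**project, "backup_schedule": "daily"}

print(spanner_db)
# {'region': 'us-central1', 'labels': {'org': 'waze'}, 'network': 'prod-vpc', 'backup_schedule': 'daily'}
```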
Every change to our infrastructure – from changes on existing infrastructure to rolling out new resource types – can be treated like a software rollout.
Now that all of our infrastructure is treated like software, we can see what this does for us system-wide.
Reaching our destination
In summary, Config Connector and Config Controller have enabled Waze to achieve true Infrastructure as Software, providing a robust and scalable platform for our infrastructure needs, along with many other benefits including:
Infrastructure consumers receive the latest best practices through versioned updates.
Infrastructure owners can iterate and improve infrastructure safely.
Platform engineers and security teams are confident our resources are auditable and compliant.
For data scientists and ML engineers, building analysis and models in Python is almost second nature, and Python’s popularity in the data science community has only skyrocketed with the recent generative AI boom. We believe that the future of data science is no longer just about neatly organized rows and columns. For decades, many valuable insights have been locked in images, audio, text, and other unstructured formats. And now, with the advances in gen AI, data science workloads must evolve to handle multi-modality and use new gen AI and agentic techniques.
To prepare you for the data science of tomorrow, we announced BigQuery DataFrames 2.0 last week at Google Cloud Next 25, bringing multimodal data processing and AI directly into your BigQuery Python workflows.
Extending Pandas DataFrames for BigQuery Multimodal Data
In BigQuery, data scientists frequently turn to Python to process large datasets for analysis and machine learning. However, this almost always involves learning a different Python framework and rewriting the code that worked on smaller datasets. You can hardly take Pandas code that worked on 10 GB of data and get it working for a terabyte of data without expending significant time and effort.
Version 2.0 also strengthens the core foundation for larger-scale Python data science, and then builds on this foundation with groundbreaking new capabilities that unlock the full potential of your data, both structured and unstructured.
BigQuery DataFrames adoption
We launched BigQuery DataFrames last year as an open-source Python library that scales Python data processing without having to add any new infrastructure or APIs, transpiling common Python data science APIs from Pandas and scikit-learn to various BigQuery SQL operators. Since its launch, there’s been over 30X growth in how much data it processes and, today, thousands of customers use it to process more than 100 PB every month.
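As a minimal sketch of what that looks like in practice (querying a BigQuery public dataset), familiar pandas-style code runs as SQL inside BigQuery rather than on your machine:

```python
import bigframes.pandas as bpd

# Pandas-style code; BigQuery DataFrames transpiles this to BigQuery SQL.
df = bpd.read_gbq("bigquery-public-data.ml_datasets.penguins")

# The aggregation executes inside BigQuery, not on the client.
df.groupby("species")["body_mass_g"].mean().peek()
```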
Over the last year, we evolved the library significantly across 50+ releases and worked closely with thousands of users. Here’s how a couple of early BigQuery DataFrames customers use the library in production.
Deutsche Telekom has standardized on BigQuery DataFrames for its ML platform.
“With BigQuery DataFrames, we can offer a scalable and managed ML platform to our data scientists with minimal upskilling.” – Ashutosh Mishra, Vice President – Data Architecture & Governance, Deutsche Telekom
Trivago, meanwhile, migrated its PySpark transformations to BigQuery DataFrames.
“With BigQuery DataFrames, data science teams focus on business logic and not on tuning infrastructure.” – Andrés Sopeña Pérez, Head of Data Infrastructure, Trivago
What’s new in BigQuery DataFrames 2.0?
This release is packed with features designed to streamline your AI and machine learning pipelines:
Working with multimodal data and generative AI techniques
Multimodal DataFrames (Preview): BigQuery DataFrames 2.0 introduces a unified dataframe that can handle text, images, audio, and more, alongside traditional structured data, breaking down the barriers between structured and unstructured data. This is powered by BigQuery’s multimodal capabilities enabled by ObjectRef, helping to ensure scalability and governance for even the largest datasets.
When working with multimodal data, BigQuery DataFrames also abstracts away many of the details of working with multimodal tables and processing multimodal data, leveraging BigQuery features behind the scenes such as embedding generation, vector search, Python UDFs, and others.
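For illustration, here is a minimal sketch (the bucket path is hypothetical) of how unstructured files and structured columns can coexist in a single DataFrame:

```python
import bigframes.pandas as bpd

# Reference unstructured files in Cloud Storage as a DataFrame column.
# The bucket path below is hypothetical.
df = bpd.from_glob_path("gs://bucket/product_images/*", name="image")

# Ordinary structured columns live alongside the unstructured one.
df["source"] = "catalog"
df.peek()
```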
Pythonic operators for BigQuery AI Query Engine (experimental): BigQuery AI Query Engine makes it trivial to generate insights from multimodal data: you can analyze unstructured data simply by including natural language instructions in your SQL queries. Imagine writing SQL queries where you can rank call transcripts in a table by ‘quality of support’ or generate a list of products with ‘high satisfaction’ based on reviews in a column. BigQuery AI Query Engine makes that possible with simple, stackable SQL.
BigQuery DataFrames offers a DataFrame interface to work with AI Query Engine. Here’s a sample:
```python
import bigframes.pandas as bpd
from bigframes.ml import llm

gemini_model = llm.GeminiTextGenerator(model_name="gemini-1.5-flash-002")

# Get top K products with higher satisfaction
df = bpd.read_gbq("project.dataset.transcripts_table")
result = df.ai.top_k("The reviews in {review_transcription_col} indicate higher satisfaction", model=gemini_model)

# Works with multimodal data as well.
df = bpd.from_glob_path("gs://bucket/images/*", name="image_col")
result = df.ai.filter("The main object in the {image_col} can be seen in city streets", model=gemini_model)
```
Gemini Code Assist for DataFrames (Preview): To keep up with the evolving user expectations around code generation, we’re also making it easier to develop BigQuery DataFrames code, using natural language prompts directly within BigQuery Studio. Together, Gemini’s contextual understanding and DataFrames-specific training help ensure smart, efficient code generation. This feature is released as part of Gemini in BigQuery.
Strengthening the core
To make the core Python data science workflow richer and faster to use, we added the following features.
Partial ordering (GA): By default, BigQuery DataFrames maintains strict ordering (as does Pandas). With 2.0, we’re introducing a relaxed ordering mode that significantly improves performance, especially for large-scale feature engineering. This “spin” on traditional Pandas ordering is tailored for the massive datasets common in BigQuery. Read more about partial ordering here.
Here’s some example code that uses partial ordering:
```python
import bigframes.pandas as bpd
import datetime

# Enable the partial ordering mode
bpd.options.bigquery.ordering_mode = "partial"

pypi = bpd.read_gbq("bigquery-public-data.pypi.file_downloads")

# Show a preview of the previous day's downloads.
# The partial ordering mode is 4,000,000+ more efficient in terms of billed bytes.
last_1_days = datetime.datetime.now(datetime.timezone.utc) - datetime.timedelta(days=1)
bigframes_downloads = pypi[(pypi["timestamp"] > last_1_days) & (pypi["project"] == "bigframes")]
bigframes_downloads[["timestamp", "project", "file"]].peek()
```
Work with Python UDFs (Preview): BigQuery Python user-defined functions are now available in preview (see the documentation).
Within BigQuery DataFrames, you can now auto-scale Python function execution to millions of rows with serverless, scale-out execution. All you need to do is put a “@udf” decorator on top of a function that needs to be pushed to the server side.
Here is example code that uses a Python UDF to tokenize comments from Stack Overflow data stored in a BigQuery public table with ~90 million rows:
```python
import bigframes.pandas as bpd

# Auto-create the server side Python UDF
@bpd.udf(packages=["tokenizer"])
def get_sentences(text: str) -> list[str]:
    from tokenizer import split_into_sentences
    return list(split_into_sentences(text))

df = bpd.read_gbq(
    "bigquery-public-data.stackoverflow.comments"
)
# Invoke the Python UDF
result = df["text"].apply(get_sentences)
result.peek()
```
dbt Integration (Preview): For all the dbt users out there, you can now integrate BigQuery DataFrames Python into your existing dbt workflows. The new dbt Python model allows you to run BigQuery DataFrames code alongside your BigQuery SQL, unifying billing and simplifying infrastructure management. No new APIs or infrastructure to learn — just the power of Python and BigQuery DataFrames within your familiar dbt environment.
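As a hedged sketch of what this can look like (the model name, column name, and configuration value below are illustrative; the exact settings depend on your dbt-bigquery version), a dbt Python model returns a BigQuery DataFrames DataFrame:

```python
# Illustrative dbt Python model; names and config values are assumptions.
def model(dbt, session):
    # Check the dbt-bigquery documentation for the exact configuration
    # that enables BigQuery DataFrames execution.
    dbt.config(submission_method="bigframes")

    # dbt.ref() returns a DataFrame for an upstream model (name is hypothetical).
    reviews = dbt.ref("stg_product_reviews")
    return reviews[reviews["rating"] >= 4]
```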
For years, unstructured data has largely resided in silos, separate from the structured data in data warehouses. This separation restricted the ability to perform comprehensive analysis and build truly powerful AI models. BigQuery’s multimodal capabilities and BigQuery DataFrames 2.0 eliminate this divide, bringing the capabilities traditionally associated with data lakes directly into the data warehouse, enabling:
Unified data analysis: Analyze all your data – structured and unstructured – in one place, using a single, consistent Pandas-like API.
LLM-powered insights: Unlock deeper insights by combining the power of LLMs with the rich context of your structured data.
Simplified workflows: Streamline your data pipelines and reduce the need for complex data movement and transformation.
Scalability and governance: Leverage BigQuery’s serverless architecture and robust governance features for all your data, regardless of format.
See BigQuery DataFrames 2.0 in action
You can see all of these features in action in this video from Google Cloud Next ’25.
Get started today!
BigQuery DataFrames 2.0 is a game-changer for anyone working with data and AI. It’s time to unlock the full potential of your data, regardless of its structure. Start experimenting with the new features today!
The daily grind of sifting through endless alerts and repetitive tasks is burdening security teams. Too often, defenders struggle to keep up with evolving threats, but the rapid pace of AI advancement means it doesn’t have to be that way.
Agentic AI promises a fundamental, tectonic shift for security teams, where intelligent agents work alongside human analysts to autonomously take on routine tasks, augment human decision-making, automate workflows, and empower analysts to focus on what matters most: the complex investigations and strategic challenges that truly demand human expertise.
The agentic AI future
While assistive AI primarily aids human analyst actions, agentic AI goes further and can independently identify, reason through, and dynamically execute tasks to accomplish goals — all while keeping human analysts in the loop.
Our vision for this agentic future of security builds on the tangible benefits our customers experience today with Gemini in Security Operations:
“No longer do we have our analysts having to write regular expressions that could take anywhere from 30 minutes to an hour. Gemini can do it within a matter of seconds,” said Hector Peña, senior information security director, Apex Fintech Solutions.
We believe that agentic AI will transform security operations. The agentic security operations center (SOC), powered by multiple connected and use-case driven agents, can execute semi-autonomous and autonomous security operations workflows on behalf of defenders.
The agentic SOC
We are rapidly building the tools for the agentic SOC with Gemini in Security. Earlier this month at Google Cloud Next, we introduced two new Gemini in Security agents:
The alert triage agent in Google Security Operations autonomously performs dynamic investigations and provides a verdict.
In Google Security Operations, an alert triage agent performs dynamic investigations on behalf of users. Expected to preview for select customers in Q2 2025, this agent analyzes the context of each alert, gathers relevant information, and renders a verdict on the alert.
It also provides a fully transparent audit log of the agent’s evidence, reasoning and decision making. This always-on investigation agent will vastly reduce the manual workload of Tier 1 and Tier 2 analysts who otherwise are triaging and investigating hundreds of alerts per day.
The malware analysis agent in Google Threat Intelligence performs reverse engineering.
In Google Threat Intelligence, a malware analysis agent performs reverse engineering tasks to determine if a file is malicious. Expected to preview for select customers in Q2 2025, this agent analyzes potentially malicious code, including the ability to create and execute scripts for deobfuscation. The agent will summarize its work and provide a final verdict.
Building on these investments, the agentic SOC is a connected, multi-agent system that works collaboratively with the human analyst to achieve exponential gains in efficiency. These intelligent agents are designed to fundamentally change security and threat management, working alongside analysts to automate common tasks and workflows, improve decision-making, and ultimately enable a greater focus on complex threats.
The agentic SOC will be a connected, multi-agent system that works collaboratively with human analysts.
To illustrate this vision in action, consider the following examples of how agentic collaboration could transform everyday security tasks. At Google Cloud, we believe many critical SOC functions can be automated and orchestrated:
Data management: Ensures data quality and optimizes data pipelines.
Alert triage: Prioritizes and escalates alerts.
Investigation: Gathers evidence and provides verdicts on alerts, documents each analysis step, and determines the response mechanism.
Response: Remediates issues using hundreds of integrations, such as endpoint isolation.
Threat research: Bridges silos by analyzing and disseminating intelligence to other agents, such as the threat hunt agent.
Threat hunt: Proactively hunts for unknown threats in your environment with data from Google Threat Intelligence.
Malware analyst: Analyzes files at scale for potentially malicious attributes.
Exposure management: Proactively monitors internal and external sources for credential leaks, initial access brokers, and exploited vulnerabilities.
Detection engineering: Continuously analyzes threat profiles and can create, test, and fine-tune detection rules.
How the Google advantage helps agentic AI
Developing dependable and impactful agents for real-world security applications requires three key ingredients, all of which Google excels in:
We harness our deep reservoir of security data and expertise to provide guiding principles for the agents.
We integrate our cutting-edge AI research, and use mature agent development tools and frameworks to enable the creation of a reusable and scalable agentic system architecture.
Our ownership of the complete AI technology stack, from highly scalable and secure infrastructure to state-of-the-art models, provides a robust foundation for agentic AI development.
These advantages allow us to establish a well-defined framework for security agents, empowering AI to emulate human-level planning and reasoning, leading to superior performance in security tasks compared to general-purpose large language models.
This approach ensures high-quality and consistent results across security tasks and also facilitates the development of new agents through the modular composition of existing security capabilities – building a diverse garden of reusable, task-focused security agents.
Furthermore, agent interoperability, regardless of developer, boosts autonomy and productivity, and reduces long-term costs. Our open Agent2Agent (A2A) protocol, announced at Google Cloud Next, facilitates this, complementing the Model Context Protocol (MCP) for standardized AI interaction with security applications and platforms.
To further advance interoperability, we are pleased to announce the open-sourcing of MCP servers for Google Unified Security, allowing users to build custom security workflows that use both Google Cloud and ecosystem tools. We are committed to an open ecosystem, envisioning a future where agents can collaborate dynamically across different products and vendors.
“We see an immediate opportunity to use MCP with Gemini to connect with our array of custom and commercial tools. It can help us make ad-hoc execution of data gathering, data enrichment, and communication easier for our analysts as they use the Google Security Operations platform,” said Grant Steiner, principal cyber-intelligence analyst, Enablement Operations, Emerson.
Introducing SecOps Labs for AI
To help defenders as our AI work rapidly advances, and to give the community an opportunity to offer direct feedback, we’re excited to introduce SecOps Labs. This initiative offers customers early access to cutting-edge AI pilots in Google Security Operations, and is designed to foster collaboration with defenders through firsthand experience, valuable feedback, and direct influence on future Google Security Operations technologies.
Initial pilots showcase AI’s potential to address key security challenges, such as:
Detection engineering: This pilot autonomously converts threat reports into detection rules and generates synthetic data for testing their effectiveness.
Response playbooks: This pilot recommends and generates automation playbooks for new alerts based on analysis of past incidents.
Data parsing: This pilot is a first step toward AI-generated parsers, starting by allowing users to update their parsers using natural language.
SecOps Labs is a collaborative space to refine AI capabilities, to ensure they address real-world security challenges and deliver tangible value, while enabling teams to experiment with the latest pre-production capabilities. Stay tuned for more in Q2 2025 to participate in shaping the future of agentic security operations with Google Cloud Security.
Meet us at RSAC to learn more
Excited about agentic AI and the impact it will have on security? Connect with our experts and see Google Cloud Security tech in action. Find us on the show floor at booth #N-6062 in Moscone Center’s North Hall, or at the Marriott Marquis, to meet with our security experts and learn how you can make Google part of your security team.
Not able to join us in person? Stream RSA Conference or catch up on-demand here, and connect with Google Cloud Security experts and fellow professionals in the Google Cloud Security Community to share knowledge, access resources, discover local events and elevate your security experience.
Cybersecurity is facing a unique moment, where AI-enhanced threat intelligence, products, and services are poised to give defenders an advantage over the threats they face — one that has proven elusive until now.
To empower security teams and business leaders in the AI era, and to help organizations proactively combat evolving threats, today at RSA Conference we’re sharing Mandiant’s latest M-Trends report findings, and announcing enhancements across Google Unified Security, our product portfolio, and our AI capabilities.
M-Trends 2025
The 16th edition of M-Trends is now available. The report provides data, analysis, and learnings drawn from Mandiant’s threat intelligence findings and over 450,000 hours of incident investigations conducted in 2024. Providing actionable insights into current cyber threats and attacker tactics, this year’s report continues our efforts to help organizations understand the evolving threat landscape and improve their defenses based on real-world data.
We see that attackers are relentlessly seizing opportunities to further their objectives, from using infostealer malware, to targeting unsecured data repositories, to exploiting cloud migration risks. While exploits are still the most common way that attackers are breaching organizations, they’re using stolen credentials more than ever before. The financial sector remains the top target for threat actors.
From M-Trends 2025: the most common initial infection vector was exploits (33%), followed by stolen credentials (16%) and email phishing (14%).
M-Trends 2025 dives deep into adversarial activity, loaded with highly relevant threat data analysis, including insider risks from North Korean IT workers, blockchain-fueled cryptocurrency threats, and looming Iranian threat actor activity. Our unique frontline insight helps us illustrate how threat actors are conducting their operations, how they are achieving their goals, and what organizations need to be doing to prevent, detect, and respond to these threats.
Google Unified Security
Throughout 2024, Google Cloud Security customers directly benefited from the threat intelligence and insights now publicly released in the M-Trends 2025 report. The proactive application of our ongoing findings included expert-crafted threat intelligence, enhanced detections in our security operations and cloud security solutions, and Mandiant security assessments, ensuring customers quickly received the latest insights and detections as threats were uncovered on the frontlines.
Now, with the launch of Google Unified Security, customers benefit from even greater visibility into threats and their environment’s attack surface, while Mandiant frontline intelligence is actioned directly through curated detections and playbooks in the converged solution.
By integrating Google’s leading threat intelligence, security operations, cloud security, secure enterprise browsing, and Mandiant expertise, Google Unified Security creates a single, scalable security data fabric across the entire attack surface. Gemini AI enhances threat detection with real-time insights; streamlines security operations; and fuels our new malware analysis and triage AI agents, empowering organizations to shift from reactive to preemptive security.
In today’s threat landscape, one of the most critical choices you need to make is who will be your strategic security partner, and Google Unified Security is the best, easiest, and fastest way to make Google part of your security team. Today, we’re excited to share several enhancements across the product portfolio.
Google Unified Security is powered by Mandiant frontline intelligence gathered from global incident response engagements.
What’s new in Google Security Operations
Google Security Operations customers now benefit from Curated Detections and Applied Threat Intelligence Rule Packs released for specific M-Trends 2025 observations, which can help detect malicious activity, including infostealer malware, cloud compromise, and data theft.
For example, the indicators of compromise (IOCs) and tactics, techniques, and procedures (TTPs) from cloud compromise observations have been added to the Cloud Threats curated detections rule pack.
We’re also excited to announce several AI and product updates designed to simplify workflows, dramatically reduce toil, and empower analysts.
We’ve already seen the transformative power of AI in security operations through the tangible benefits our customers experience today with Gemini in Google Security Operations. Our vision for the future is even more ambitious: an agentic security operations center (SOC), where security operations are fundamentally enhanced by a collaborative multi-agent system.
As we bring this vision to life, we’re developing intelligent, use-case driven agents that are designed to work in concert with human analysts as they automate routine tasks and improve decision-making. Ultimately, the agentic SOC will enable a greater focus on complex threats, helping to deliver autonomous security operations workflows and exponential gains in efficiency.
To further accelerate the adoption and refinement of AI-powered security capabilities, we are launching SecOps Labs, a new space for customers to get early access to our latest AI pilots and provide feedback. Initial features include a Natural Language Parser Extension, a Detection Engineering Agent for automated rule creation and testing, and a Response Agent for generating automation playbooks. SecOps Labs will foster collaboration in shaping the future of AI-powered security operations.
Composite Detections, in preview, can connect the dots between seemingly isolated events to help defenders uncover a more complete attack story. Your SOC can use it to create sophisticated multi-stage detections and attacker activity correlation, simplify detection engineering, and minimize false positives and false negatives.
Composite Detections can help teams build reusable detection logic to reveal hidden connections, stop advanced attackers that evade simple detection, and overcome the assumed precision and recall tradeoff inherent to most detection engineering.
Connect detections, catch more threats.
The Content Hub, in preview, is your go-to for the resources you need to streamline security operations and maximize the platform’s potential. Security operations teams can access content packs for top product integrations and use cases, making data ingestion configuration and data onboarding more efficient.
There’s also a library of certified integrations, pre-built dashboards, and ready-to-install search queries. Plus, you can gain deeper insights into your security posture with access to curated detections and insights into their underlying logic. Now you can discover, onboard, and manage all your security operations content in one place.
Activate your platform with ready-to-use content packs.
With Gemini in Google Security Operations, we’re also introducing a new way to get your product questions answered instantly, accessible from anywhere in the platform (in preview). You can now search documentation with Gemini, which will provide fast and high-quality answers for your security operations related questions, complete with reference links.
Get instant answers to your Google Security Operations product questions.
What’s new in Security Command Center
Rapidly building on AI Protection, which was announced in March, we are adding new multi-modal capabilities for detecting sensitive data in images used for training and inference.
To help security teams gain more visibility into AI environments, discover a wider range of sensitive data, and configure image-redaction rules if needed, AI Protection will add object-based detection (such as barcodes) in June.
Multi-modal detection: Sensitive data redacted from scanned loan application.
In addition to detecting sensitive data in images, we’ve added new AI threat detectors to AI Protection to identify specific cloud-based threats against your AI workloads. Aligned with MITRE ATLAS tactics, AI Protection detects threats like Suspicious/Initial Access, Persistence, and Access Modifications for your Vertex workloads and associated resources, empowering your organization with the visibility and context needed to rapidly investigate and respond to threats against your AI environment.
AI Protection is currently in preview (sign up here), and provides full AI lifecycle security that discovers AI assets and prioritizes top risks, secures AI with guardrails and safety controls, and helps detect, investigate, and respond to AI threats.
We’re also excited to share our latest research on the intersection of security and AI, Secure AI Framework (SAIF) in the Real World. We provide key considerations for applying SAIF principles across the data, infrastructure, application, and model dimensions of your AI projects.
What’s new in Mandiant Cybersecurity Consulting
Google Unified Security integrates Mandiant’s expertise in two ways: through the Mandiant Retainer, which offers on-demand access to experts, rapid incident response, and flexible pre-paid funds for consulting services; and through Mandiant Threat Defense, which provides AI-assisted threat detection, hunting, and response, extending customer security teams through expert collaboration and SOAR playbooks.
Mandiant’s new Essential Intelligence Access (EIA) subscription, available now, offers organizations direct and flexible access to our world-class threat intelligence experts. These experts serve as an extension of your security team, providing personalized research and analysis, delivering tailored insights to inform critical decisions, focus defenses, and strengthen cybersecurity strategies.
EIA also helps customers maximize the value and efficiency of their Cyber Threat Intelligence (CTI) investments. Going beyond raw threat feeds, EIA analyzes data in the context of your specific environment to illuminate unique threats. Crucially, this includes personalized guidance from human experts deeply experienced in operationalizing threat intelligence, upskilling teams, prioritizing threats, and delivering continuous support to improve security posture and reduce organizational risk.
Evolve your security strategy with Google Cloud
The M-Trends 2025 report is a call to action. It highlights the urgency of adapting your defenses to meet increasingly sophisticated attacks.
At RSA Conference, we’ll be sharing how these latest Google Cloud Security advancements and more can transform threat intelligence into proactive, AI-powered security. You can find us at booth #N-6062 in Moscone Center’s North Hall, and connect with security experts at our Customer Lounge in the Marriott Marquis.
You can also stream the conference or catch up on-demand here, and join the Google Cloud Security Community to share knowledge, access resources, discover local events, and elevate your security experience.
Feel more secure about your security, by making Google part of your security team today.
Amazon Bedrock Data Automation (BDA) now supports modality enablement, modality routing by file type, extraction of embedded hyperlinks when processing documents in Standard Output, and an increased overall document page limit of 3,000 pages. These new features give you more control over how your multimodal content is processed and improve BDA’s overall document extraction capabilities.
With Modality Enablement and Routing, you can configure which modalities (Document, Image, Audio, Video) should be enabled for a given project and manually specify the modality routing for specific file types. JPEG/JPG and PNG files can be processed as either Images or Documents based on your specific use case requirements. Similarly, MP4/M4V and MOV files can be processed as either video files or audio files, allowing you to choose the optimal processing path for your content.
Embedded Hyperlink Support enables BDA to detect and return embedded hyperlinks found in PDFs as part of the BDA standard output. This feature enhances the information extraction capabilities from documents, preserving valuable link references for applications such as knowledge bases, research tools, and content indexing systems.
Lastly, BDA now supports processing documents up to 3,000 pages per document, doubling the previous limit of 1,500 pages. This increased limit allows you to process larger documents without splitting them, simplifying workflows for enterprises dealing with long documents or document packets.
Amazon Bedrock Data Automation is generally available in the US West (Oregon) and US East (N. Virginia) AWS Regions.
Starting today, in the AWS GovCloud (US-East) and AWS GovCloud (US-West) Regions, you can now deliver events from an Amazon EventBridge Event Bus directly to AWS services in another account. Using multiple accounts can improve security and streamline business processes while reducing the overall cost and complexity of your architecture.
Amazon EventBridge Event Bus is a serverless event broker that enables you to create scalable event-driven applications by routing events between your own applications, third-party SaaS applications, and other AWS services. This launch allows you to directly target services in another account, without the need for additional infrastructure such as an intermediary EventBridge Event Bus or Lambda function, simplifying your architecture and reducing cost. For example, you can now route events from your EventBridge Event Bus directly to a different team’s SQS queue in a different account. The team receiving events does not need to learn about or maintain EventBridge resources and simply needs to grant IAM permissions to provide access to the queue. Events can be delivered cross-account to EventBridge targets that support resource-based IAM policies such as Amazon SQS, AWS Lambda, Amazon Kinesis Data Streams, Amazon SNS, and Amazon API Gateway.
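As a sketch (the rule name, account ID, and ARN below are placeholders), adding a cross-account SQS target with boto3 could look like the following; the receiving account still needs to allow the event bus in the queue's resource policy:

```python
import boto3

events = boto3.client("events")

# Add an SQS queue owned by another account as a direct rule target.
# The rule name, account ID, and ARN are placeholders.
events.put_targets(
    Rule="orders-events",
    Targets=[
        {
            "Id": "other-team-queue",
            "Arn": "arn:aws:sqs:us-gov-east-1:222222222222:orders-queue",
        }
    ],
)
```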
In addition to the AWS GovCloud (US) Regions, direct delivery to cross-account targets is available in all commercial AWS Regions. To learn more, please read our blog post or visit our documentation. Pricing information is available on the EventBridge pricing page.
Today, AWS Resource Groups is adding support for an additional 160 resource types for tag-based Resource Groups. Customers can now use Resource Groups to group and manage resources from services such as Amazon CodeCatalyst and AWS Chatbot.
AWS Resource Groups enables you to model, manage, and automate tasks on large numbers of AWS resources by using tags to logically group your resources. You can create logical collections of resources such as applications, projects, and cost centers, and manage them on dimensions such as cost, performance, and compliance in AWS services such as myApplications, AWS Systems Manager, and Amazon CloudWatch.
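For example, a tag-based group can be created with a single boto3 call (the group name, tag key, and tag values here are hypothetical):

```python
import boto3

rg = boto3.client("resource-groups")

# Group every supported resource type that carries a matching tag.
rg.create_group(
    Name="analytics-cost-center",
    ResourceQuery={
        "Type": "TAG_FILTERS_1_0",
        "Query": (
            '{"ResourceTypeFilters": ["AWS::AllSupported"], '
            '"TagFilters": [{"Key": "CostCenter", "Values": ["analytics"]}]}'
        ),
    },
)
```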
Resource Groups expanded resource type coverage is available in all AWS Regions, including the AWS GovCloud (US) Regions. You can access AWS Resource Groups through the AWS Management Console, the AWS SDK APIs, and the AWS CLI.
Starting today, Amazon Q Developer operational investigations is available in preview in 11 additional regions. With this launch, Amazon Q Developer operational investigations is now available in US East (N. Virginia), US East (Ohio), US West (Oregon), Europe (Ireland), Europe (Frankfurt), Europe (Stockholm), Europe (Spain), Asia Pacific (Tokyo), Asia Pacific (Hong Kong), Asia Pacific (Sydney), Asia Pacific (Singapore), and Asia Pacific (Mumbai).
Amazon Q Developer helps you accelerate operational investigations across your AWS environment in just a fraction of the time. With a deep understanding of your AWS cloud environment and resources, Amazon Q Developer looks for anomalies in your environment, surfaces related signals for you to explore, identifies potential root-cause hypotheses, and suggests next steps to help you remediate issues faster.
The new operational investigation capability within Amazon Q Developer is available at no additional cost during preview. To learn more, see the getting started and best practices documentation.
AWS Resource Explorer now supports AWS PrivateLink in all commercial AWS Regions, allowing you to search for and discover your AWS resources within your Amazon Virtual Private Cloud (VPC) without traversing the public internet.
With AWS Resource Explorer you can search for and discover your AWS resources across AWS Regions and accounts in your organization, either using the AWS Resource Explorer console, the AWS Command Line Interface (AWS CLI), the AWS SDKs, or the unified search bar from wherever you are in the AWS Management Console.
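To use PrivateLink, you create an interface VPC endpoint for the service. Here is a sketch with boto3 (the VPC and subnet IDs are placeholders, and you should confirm the exact service name for your Region in the documentation):

```python
import boto3

ec2 = boto3.client("ec2")

# Create an interface endpoint for Resource Explorer inside your VPC.
# IDs are placeholders; verify the service name for your Region.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",
    ServiceName="com.amazonaws.us-east-1.resource-explorer-2",
    SubnetIds=["subnet-0123456789abcdef0"],
    PrivateDnsEnabled=True,
)
```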
For more information about the AWS Regions where AWS Resource Explorer is available, see the AWS Region table.
The Amazon Connect agent workspace now supports additional capabilities for third-party applications, including the ability to make outbound calls; accept, transfer, and clear contacts; and update agent status. These enhancements allow you to integrate applications that give agents more intuitive workflows. For example, agents can now initiate one-click outbound calls from a custom-built call history interface that presents their most recent customer interactions.
Third-party applications are available in the following AWS Regions: US East (N. Virginia), US-West (Oregon), Africa (Cape Town), Asia Pacific (Seoul), Asia Pacific (Singapore), Asia Pacific (Sydney), Asia Pacific (Tokyo), Canada (Central), Europe (Frankfurt), and Europe (London).
Starting today, AWS AppSync Events, a fully managed service for serverless WebSocket APIs with full connection management, now supports data source integrations for channel namespaces. This new feature enables developers to associate AWS Lambda functions, Amazon DynamoDB tables, Amazon Aurora databases, and other data sources with channel namespace handlers to process published events and subscription requests. Developers can now connect directly to Lambda functions without writing code and leverage both request/response and event modes for synchronous and asynchronous operations.
With these new capabilities, developers can create sophisticated event processing workflows by transforming and filtering published events using Lambda functions, or save batches of events to DynamoDB using the new AppSyncJS batch utilities for DynamoDB. This integration enables complex interactive flows, making it easier for developers to build rich, real-time applications with features like data validation, event transformation, and persistent storage of events. By simplifying the architecture of real-time applications, this enhancement significantly reduces development time and operational overhead for front-end web and mobile development.
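For illustration, a Lambda function attached as a direct data source might transform a batch of published events with a handler like the one below (the payload shape shown is an assumption; consult the AppSync Events documentation for the exact contract):

```python
# Illustrative Lambda handler for an AppSync Events namespace integration.
# The event payload shape used here is an assumption, not the documented contract.
def handler(event, context):
    processed = []
    for item in event.get("events", []):
        payload = item.get("payload", {})
        # Example transformation: stamp each event before it is broadcast.
        payload["validated"] = True
        processed.append({"id": item.get("id"), "payload": payload})
    return {"events": processed}
```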
This feature is now available in all AWS Regions where AWS AppSync is offered, providing developers worldwide with access to these powerful new integration capabilities. Powertools for AWS Lambda also now includes an AppSync Events integration to make it easier to write your Lambda functions.
To learn more about AWS AppSync Events and channel namespace integrations, visit the launch blog post, the AWS AppSync documentation, and the Powertools for Lambda documentation (TypeScript, Python, .NET). You can get started with these new features through the AWS AppSync console.
In today’s data-driven world, the ability to extract meaningful insights quickly is paramount. Yet, for many, the journey from raw data to actionable intelligence is fraught with challenges. Complex SQL queries, time-consuming iterative analyses, and the gap between technical and non-technical users often hinder progress. BigQuery data canvas is a visual workspace designed to democratize data analysis and empower everyone to unlock the power of their BigQuery data. At Google Cloud Next 25 earlier this month, we introduced a built-in AI-assistive chat experience in data canvas powered by Gemini that encapsulates a variety of workflow analysis processes, ranging from data exploration to visualization, all with a single prompt.
Data canvas isn’t just another feature; it’s a fundamental change in how data practitioners interact with data. By seamlessly integrating visual workflows with BigQuery and Gemini, we’re bridging the gap between raw data and impactful insights.
The data canvas assistant at work
Core features: A deep dive
Let’s take a look at what you can do with the data canvas assistant.
Gemini powers your AI data agent
We integrated Gemini, our powerful AI model, into data canvas to enhance your data exploration experience. With it, you can use natural language to generate and refine queries, ask questions about your data, and receive intelligent suggestions and insights. For example, if you type “Show me the top 10 customers by revenue,” data canvas, powered by Gemini, generates the corresponding query and offers insights about the dataset. Gemini also assists in data discovery, suggesting datasets that may be relevant to your questions.
The Gemini-powered AI chat experience encapsulates workflow analysis processes, from data exploration to visualization — all with a single prompt. Don’t know where to start? Use the suggested prompts to start exploring your data. Based on your selected or most used tables, BigQuery data canvas uses Gemini to generate natural language questions about your data, along with the corresponding SQL queries to answer them. You can add multiple data sources to the chat context from which Gemini can answer your questions. You can also further ground the chat with system instructions that provide domain knowledge about your data, increasing the accuracy of the resulting answers. For example, perhaps your organization’s fiscal year does not run from January to December — you can inform Gemini of this using system instructions. You can also use system instructions to shape how answers are formatted and returned, e.g., “always present findings with charts, use green color for positive and red color for negative.”
And coming soon, for complex problems like forecasting and anomaly detection, the chat experience will support advanced analysis using Python. Toggle this feature on in your chat’s settings bar, and based on the complexity of your prompt, Gemini chat assist will use a Python code interpreter to answer your question.
“Data Canvas is a game-changer in BigQuery, allowing data professionals to interactively discover, query, transform, and visualize data using a seamless blend of natural language processing and graphical workflows, all powered by Gemini AI.” – Sameer Zubair, Principal Platform Tech Lead, Virgin Media O2
Visual query building: Explore multiple paths in one place
When sitting down to do data analysis, imagine a unified hub where you can filter, join, aggregate, or visualize data across multiple tables, each in its own container, all on the same page. Instead of forcing you down a linear path, data canvas uses a DAG (Directed Acyclic Graph) approach, allowing you to branch off at any point to explore alternative angles, circle back to earlier steps, or compare multiple outcomes simultaneously. Adding data is simple: just search for the tables you need, and add them to the canvas. You can start by asking questions of your data using natural language, and data canvas automatically generates the underlying SQL, which you can review or tweak whenever you like. This node-based method lowers the barrier to analysis for experienced SQL pros and newer analysts alike, allowing them to follow insights wherever they lead, without wrestling with complex query syntax.
Interactive visualizations: Uncover insights in real time
Data canvas offers a variety of interactive visualizations, from charts and graphs to tables. It’s easy to customize your visualizations, explore data interactively, and identify trends and anomalies. Want to see the distribution of sales across different regions? Add the “Region” and “Sales” fields onto the canvas, and let data canvas generate a chart for you automatically. Simply select the best visualization for the data, or select your own visualization, and watch as your data comes to life. Furthermore, you can export these visualizations as a PNG or to Looker Studio for further manipulation and sharing.
Putting data canvas to work in the real world
There’s no end of ways you can use new AI assistive capabilities in BigQuery data canvas. Here are a few industry-specific ideas to get your creative juices flowing.
Telecom support and diagnostics: Speeding up service restoration
Imagine a telecom support team that’s troubleshooting customer issues. Support tickets get ingested into BigQuery every hour, and can be queried in data canvas to extract who (customer phone), where (postcode), what (the affected service), when (timestamp), and which (closest cell tower). Each of these data points is handled in its own node, all within a single canvas, so analysts don’t need to toggle across multiple query tabs to perform this analysis. This visual workflow lets them spot localized outages, route technicians to the right towers, and resolve service disruptions faster than ever.
E-commerce analytics: Boosting sales and customer engagement
Picture a marketing team analyzing customer purchase data to optimize campaigns. Using data canvas, they can visually join customer and product tables, filter by purchase history, and visualize sales trends across different demographics. They can quickly identify top-selling products, high-value customer segments, and the effectiveness of their marketing campaigns, to make data-driven decisions.
Supply chain optimization: Streamlining logistics
A logistics manager could use data canvas to track inventory levels, analyze delivery routes, and identify potential bottlenecks. By visualizing this supply chain data, they can optimize delivery schedules, reduce costs, and improve efficiency. They can also create interactive dashboards to monitor key performance indicators and make real-time adjustments.
The future of data exploration is visual and AI-powered
BigQuery data canvas is a significant leap forward in making data accessible and actionable for everyone. By combining visual workflows, the power of BigQuery, and the intelligence of Gemini, we’re empowering you to unlock the full potential of your data. Start your journey today and experience the future of data exploration.
Get started with BigQuery data canvas today with this course. It’s completely free to use.
AWS AppConfig now supports dual-stack endpoints, facilitating connectivity through Internet Protocol version 6 (IPv6). The existing AWS AppConfig endpoints supporting IPv4 will remain available for backwards compatibility.
The continuous growth of the internet has created an urgent need for IPv6 adoption, as IPv4 address space reaches its limits. Through AWS AppConfig’s implementation of dual-stack endpoints, organizations can execute a strategic transition to IPv6 architecture on their own timeline. This approach enables companies to satisfy IPv6 regulatory standards while preserving IPv4 connectivity for systems that have not yet moved to IPv6 capabilities.
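For SDK users, botocore already exposes a generic dual-stack switch, so opting in can be as simple as the following sketch:

```python
import boto3
from botocore.config import Config

# Ask the SDK to resolve dual-stack (IPv4 + IPv6) endpoints.
dual_stack = Config(use_dualstack_endpoint=True)

appconfig = boto3.client("appconfig", config=dual_stack)
print(appconfig.meta.endpoint_url)
```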
Amazon SageMaker Lakehouse now supports attribute-based access control (ABAC), using AWS Identity and Access Management (IAM) principal and session tags to simplify data access, grant creation, and maintenance. With ABAC, you can manage permissions using dynamic business attributes associated with user identities.
Previously, SageMaker Lakehouse granted access to lakehouse databases and tables by directly assigning permissions to specific principals such as IAM users and IAM roles, a process that could quickly become unwieldy as the number of users grew. ABAC now allows administrators to grant permissions on a resource with conditions that specify user attribute keys and values. Any IAM principal or IAM role with matching principal or session tag keys and values automatically has access to the resource, making the experience more efficient. You can use ABAC through the AWS Lake Formation console to provide access to IAM users and IAM roles in both in-account and cross-account scenarios. For instance, rather than creating individual policies for each developer, administrators can simply assign developers an IAM tag with a key such as “team” and value “developers” and provide access to all of them with a single permission grant. As new developers join with the matching tag key and value, no additional policy modifications are required.
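To make the tag mechanics concrete, here is a hedged sketch of the session-tag side of ABAC: an analyst assumes a role with a `team` session tag, and any grant whose condition matches `team=developers` then applies automatically. The role ARN and tag values are hypothetical.

```python
# A hedged sketch of ABAC via IAM session tags; ARN and tags are examples.
import boto3

sts = boto3.client("sts")

creds = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/LakehouseAnalyst",
    RoleSessionName="dev-analyst-session",
    Tags=[{"Key": "team", "Value": "developers"}],  # session tag used by ABAC
)["Credentials"]

# Calls made with these credentials carry the session tag, so matching
# ABAC grants apply without per-user policy changes.
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```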
This feature is available in all AWS Regions where SageMaker Lakehouse is available. To get started, read the launch blog and the ABAC documentation.
With this launch, VPC Reachability Analyzer and VPC Network Access Analyzer are now available in Europe (Spain) Region.
VPC Reachability Analyzer allows you to diagnose network reachability between a source resource and a destination resource in your virtual private clouds (VPCs) by analyzing your network configurations. For example, Reachability Analyzer can help you identify the missing route table entry that is blocking connectivity between an EC2 instance in Account A and an EC2 instance in Account B in your AWS Organization.
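As a minimal sketch of that diagnosis, you define a path between the two resources and run an analysis on it; the instance IDs below are placeholders, and cross-account analysis requires the appropriate sharing setup in your Organization.

```python
# A minimal sketch of a Reachability Analyzer check between two instances.
import boto3

ec2 = boto3.client("ec2", region_name="eu-south-2")  # Europe (Spain)

# Define the source/destination pair to analyze (placeholder IDs).
path = ec2.create_network_insights_path(
    Source="i-0123456789abcdef0",       # EC2 instance in Account A
    Destination="i-0fedcba9876543210",  # EC2 instance in Account B
    Protocol="tcp",
)["NetworkInsightsPath"]

# Run the analysis; its findings explain what blocks the path,
# e.g. a missing route table entry.
analysis = ec2.start_network_insights_analysis(
    NetworkInsightsPathId=path["NetworkInsightsPathId"]
)["NetworkInsightsAnalysis"]
print(analysis["NetworkInsightsAnalysisId"])
```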
VPC Network Access Analyzer allows you to identify unintended network access to your resources on AWS. Using Network Access Analyzer, you can verify whether network access for your VPC resources meets your security and compliance guidelines. For example, you can create a scope to verify that the VPCs used by your Finance team are separate, distinct, and unreachable from the VPCs used by your Development team.
Yesterday’s databases aren’t sufficient for tomorrow’s applications, which need to deliver dynamic, AI-driven experiences at unpredictable scale and with zero downtime. To help, at Google Cloud Next 25, we announced new functionality, improved performance, and migration tooling to simplify modernizing database workloads from MySQL to Spanner, Google Cloud’s horizontally scalable, always-on operational database.
MySQL simply wasn’t designed for today’s most pressing scaling and availability needs. Common fixes, like sharding or manual replication, are complex and risky, and they tend to arrive exactly when the business can tolerate risk least. Planning and executing scaling on self-managed databases typically requires expensive after-market solutions, which can take months to architect and test, diverting development teams from more pressing user-facing features. And because of the overhead of scaling, organizations often provision for peak usage, even if that capacity remains unused most of the time.
Tomorrow’s applications also need to do more than just transaction processing. New experiences like semantic discovery, collaborative recommendations, real-time fraud detection, and dynamic pricing require different ways of storing and querying data.
Simpler live migrations from MySQL to Spanner
To help organizations struggling to grow and modernize their apps, Spanner provides a well-defined migration path to safely and easily move production workloads from MySQL with virtually no downtime. Once there, they can take advantage of Spanner’s hands-free reliability and rich graph, full-text search, and integrated AI capabilities.
A key part of this is the Spanner migration tool, which automates schema and data migration to support live cutovers, including consolidating petabyte-sized sharded MySQL databases in days, not months. Improved data movement templates provide increased throughput at significantly lower cost, plus new flexibility to transform data as it’s migrated. Updated built-in reverse replication synchronizes data back from Spanner to sharded MySQL instances to allow for near real-time failover in a disaster scenario. Finally, new Terraform configurations and CLI integration provide flexibility to customize implementations.
Spanner migration tool architecture
Improved latency with fewer code and query changes
To further reduce the cost and complexity of migrating application code and queries, we introduced a rich new set of relational capabilities in Spanner that map closely to MySQL.
Repeatable read is the default isolation level in MySQL, balancing performance and consistency. We’re excited to bring this flexibility to Spanner as well. New repeatable read isolation, now in preview, complements Spanner’s existing serializable isolation. It will be familiar to MySQL developers and gives them additional tools to significantly improve performance. In fact, most common workloads can see up to a 5x latency improvement compared to what was possible in Spanner previously. In addition, new auto_increment keys, SELECT…FOR UPDATE, and close to 80 new MySQL functions dramatically reduce the changes required to migrate an application to Spanner.
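To show what this familiarity looks like in practice, here is a hedged sketch of the MySQL-style `SELECT ... FOR UPDATE` locking pattern using the Spanner Python client; the instance, database, and schema names are hypothetical.

```python
# A hedged sketch of MySQL-style row locking in Spanner.
# Instance, database, table, and column names are hypothetical.
from google.cloud import spanner

client = spanner.Client()
database = client.instance("orders-instance").database("orders-db")

def reserve_stock(transaction):
    # SELECT ... FOR UPDATE locks the row for the rest of the transaction,
    # mirroring the pattern MySQL developers already use.
    rows = list(transaction.execute_sql(
        "SELECT quantity FROM inventory WHERE item_id = @item FOR UPDATE",
        params={"item": 42},
        param_types={"item": spanner.param_types.INT64},
    ))
    if rows and rows[0][0] > 0:
        transaction.execute_update(
            "UPDATE inventory SET quantity = quantity - 1 WHERE item_id = @item",
            params={"item": 42},
            param_types={"item": spanner.param_types.INT64},
        )

database.run_in_transaction(reserve_stock)
```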
“As our calendar sharing service gained popularity, demand grew steadily. At 55 million users, we hit Aurora MySQL’s scalability limits for both data volume and active connections. But scalability wasn’t the only issue. Our app teams spent too much time managing the database, leaving less for feature development. Fully managed Spanner solved this, significantly cutting costs and enabling future growth. Migrations are challenging, but with Google Cloud support and the Spanner migration tool, we completed it successfully with minimal downtime.” – Eiki Kanai, SRE Manager, TimeTree
A recent Total Economic Impact study from Forrester Consulting also found that Spanner provided a 132% ROI and $7.74M of total benefits over three years for a composite organization representative of interviewed customers. This comes largely from retiring self-managed databases and taking advantage of Spanner’s elastic scalability and built-in, hands-free, high availability operations. Forrester found that decreased disruptions from unplanned downtime and system maintenance with Spanner reduced time to onboard new apps and allowed development teams to address new opportunities without complex re-architecture projects or new capital expenditures.
Get started today
To learn more about how Spanner can take the stress out of your organization’s next growth milestone and set your development teams up for success, visit https://cloud.google.com/spanner. There, you’ll find reference architectures, examples of successful migrations, and a directory of qualified partners to help with a free assessment. Read up on how to run a successful migration from MySQL. Or try Spanner yourself today with a 90-day free trial and production instances starting around $65 USD/month.
How is generative AI actually impacting developers’ daily work, team dynamics, and organizational outcomes? We’ve moved beyond simply asking if organizations are using AI, and instead are focusing on how they’re using it.
That’s why we’re excited to share DORA’s Impact of Generative AI in Software Development report. Based on extensive data and developer interviews, the report moves beyond the hype to offer perspective on AI’s impact on individuals, teams, and organizations.
Let’s take a look at some of the highlights – research-backed ways organizations are already benefiting from AI in their software development, plus five actionable ways to maximize AI’s benefits while mitigating potential risks.
Understanding the real-world impact
Our research shows real productivity gains, organizational benefits, and grassroots adoption of AI. Here are just a few of the key highlights:
The AI imperative is real: A staggering 89% of organizations are prioritizing the integration of AI into their applications, and 76% of technologists are already using AI in some part of their daily work. This signals both top-down and grassroots adoption, underscoring that this isn’t a future trend; it’s happening now.
Productivity gains confirmed: Developers using gen AI report significant increases in flow, productivity, and job satisfaction. For instance, a 25% increase in AI adoption is associated with a 2.1% increase in individual productivity.
Organizational benefits are tangible: Beyond individual gains, we found strong correlations between AI adoption and improvements in crucial organizational metrics. A 25% increase in AI adoption is associated with increases in document quality, code quality, code review speed and approval speed.
How to maximize AI adoption and impact
So how do you make the most of AI in your software development? The report explores five practical approaches for both leaders and practitioners:
Have transparent communications: Our research suggests that organizations that apply this strategy can gain an estimated 11.4% increase in team adoption of AI.
Empower developers with learning and experimentation: Our research shows that giving developers dedicated time during work hours to explore AI leads to a 131% increase in team AI adoption.
Establish clear policies: Our data suggest that organizations with clear AI acceptable-use policies see a 451% increase in AI adoption compared to those without.
Rethink performance metrics: Shift the focus from hours worked to outcomes and value delivered. Acknowledge the labor involved in effectively working with AI, including prompt engineering and refining AI-generated output.
Embrace fast feedback loops: Implement mechanisms that enable faster feedback for continuous integration, code reviews, and testing. These loops are becoming even more critical as we venture into workflows with AI agents.
The future of software development is here
Generative AI is poised to revolutionize software development. But realizing its full potential requires a strategic, thoughtful, and human-centered approach.
Consumer packaged goods brands invest significantly in advertising, driving brand affinity to boost sales now and in the future. Campaigns are often optimized as they run by monitoring media-in-progress metrics against strategies like targeting specific audience cohorts. However, because most sales happen in physical stores, accurately linking media sales lift to target audiences while ads are running can be a challenge.
Many solutions use “total ad sales” for measurement, but this metric doesn’t always correlate to incremental sales, which is Mars Wrigley’s gold standard key performance indicator (KPI) for media effectiveness.
So how do you know if your current ad spend is paying off while it’s still possible to optimize your in-flight campaigns?
Mars Wrigley is working with EPAM and using Google Cloud Cortex Framework to make significant progress on this issue, with an approach that introduces an agile way to accurately measure in-flight audience effectiveness based on incremental sales.
The Mars Wrigley approach: Connecting data for actionable audience insights
After exploring many solutions, Mars Wrigley decided to look inward and harness the power of its own data. However, this data was siloed in various media and retailer platforms.
To solve this, the company adopted Cortex Framework, using its pre-built data connectors and standardized data models to quickly integrate media data from sources like YouTube with sales information from retailers, creating a unified view of ad impact within a central, AI-ready cloud data foundation in BigQuery.
By combining data in BigQuery and using built-in data science tools like BQML, Mars Wrigley can now better understand how specific audience targeting strategies in its media investments are driving incremental sales lift across key customer groups.
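As a hedged illustration of what “built-in data science tools like BQML” can mean here, the snippet below trains a simple regression relating media exposure to incremental sales. The dataset, columns, and model choice are assumptions for illustration, not Mars Wrigley’s actual pipeline.

```python
# A hedged BigQuery ML sketch; schema and model choice are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

client.query("""
    CREATE OR REPLACE MODEL `cpg-project.media.sales_lift_model`
    OPTIONS (model_type = 'linear_reg',
             input_label_cols = ['incremental_sales']) AS
    SELECT
      impressions,
      audience_segment,
      dma_is_exposed,    -- 1 for exposed DMAs, 0 for control
      incremental_sales
    FROM `cpg-project.media.campaign_training_data`
""").result()
```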
For example, by identifying stores with similar sales patterns, the company can create geo-targeted control and exposed Designated Market Areas (DMAs) for audience testing.
By dividing its audiences into distinct segments, each with a control group, Mars Wrigley can experiment and monitor live campaign performance to optimize its investments for maximum sales lift.
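The core of that comparison is simple: average sales in exposed DMAs versus matched control DMAs. Here is a minimal sketch with made-up numbers, assuming a per-DMA sales table with a hypothetical “group” column.

```python
# A minimal exposed-vs-control lift calculation with illustrative data.
import pandas as pd

df = pd.DataFrame({
    "dma":   ["A", "B", "C", "D"],
    "group": ["exposed", "exposed", "control", "control"],
    "sales": [120_000, 135_000, 110_000, 112_000],
})

means = df.groupby("group")["sales"].mean()
lift = (means["exposed"] - means["control"]) / means["control"]
print(f"Estimated incremental sales lift: {lift:.1%}")
```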
Google Cloud Cortex Framework: Accelerating insights and decisions
The accelerated access to a consolidated AI-enabled data core represents a valuable addition to Mars Wrigley’s portfolio of media effectiveness tools. Cortex Framework provides instant insights with its predefined and customizable analytics content as well as seamless integration with major media platforms like Google Ads, YouTube, TikTok, Meta, and more.
“Before, we were struggling to get an accurate in-flight view of our audiences’ performance. With Google Cloud Cortex Framework, we realized that the answer was within our internal data. We partnered with EPAM Systems to harness the synergy of our internal data sources, enabling us to run timely experimentation based on actual sales lift. This filled an important gap within our portfolio of measurement tools and allowed us to continue making data-driven decisions when it matters.” – Lía Inoa Pimentel – Sr. Global Manager, Brand Experience & Media Measurement, Mars Wrigley.
By embracing Cortex Framework, Mars Wrigley is not only gaining a clearer understanding of media impact on sales but also paving the way for a more data-driven and agile approach to marketing in the consumer packaged goods industry.
This approach includes some of the following key benefits:
Agile hypothesis testing: Bringing insights in-house significantly accelerates the ability to test hypotheses and adapt strategies quickly.
Scalability: The architecture allows for easy expansion to encompass more media investment insights and a broader range of retail customers.
Versatility: Beyond audience testing, Mars Wrigley can also leverage Cortex Framework for other use cases, such as media formats, content variations, shopper media, and more.
To learn more about solutions that can help accelerate your marketing journey in the cloud, visit the EPAM and Google Cloud Cortex Framework websites.
Amazon Redshift now supports history mode for zero-ETL integrations with eight third-party applications including Salesforce, ServiceNow, and SAP. This addition complements existing history mode support for Amazon Aurora PostgreSQL-compatible and MySQL-compatible, DynamoDB, and RDS for MySQL databases. The expansion enables you to track historical data changes without Extract, Transform, and Load (ETL) processes, simplifying data management across AWS and third-party applications.
History mode for zero-ETL integrations with third-party applications lets customers easily run advanced analytics on historical data from their applications, build comprehensive lookback reports, and perform trend analysis and data auditing across multiple zero-ETL data sources. The feature preserves the complete history of data changes without maintaining duplicate copies across various external data sources, allowing organizations to meet data retention requirements while significantly reducing storage needs and operational costs. Available for both existing and new integrations, history mode also lets you selectively enable historical tracking for specific tables within third-party application integrations, giving businesses precise control over their data analysis and storage strategies.
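As a hedged sketch of what querying history-mode data can look like, the example below uses the Redshift Data API against a Serverless workgroup. The workgroup, database, table, and history column names are assumptions for illustration; check the history mode documentation for the exact metadata columns your integration produces.

```python
# A hedged sketch of querying history-mode rows via the Redshift Data API.
# Workgroup, database, table, and history columns are assumptions.
import boto3

rsd = boto3.client("redshift-data")

resp = rsd.execute_statement(
    WorkgroupName="analytics-wg",
    Database="dev",
    Sql="""
        SELECT account_id, status, _record_create_time
        FROM salesforce_accounts
        WHERE _record_is_active = FALSE   -- superseded versions of each row
        ORDER BY _record_create_time
    """,
)
print(resp["Id"])  # poll get_statement_result with this Id for the rows
```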
To learn more about history mode for zero-ETL integrations in Amazon Redshift and how it can benefit your data analytics workflows, visit the history mode documentation. To learn more about the supported third-party applications, visit the AWS Glue documentation. To get started with zero-ETL integrations, visit the getting started guides for Amazon Redshift.