GCP – Redefining enterprise data with agents and AI-native foundations
The world is not just changing; it’s being re-engineered in real time by data and AI. The way we interact with data is undergoing a fundamental transformation, moving beyond human-led analysis to a collaborative partnership with intelligent agents. This is the agentic shift, a new era where specialized AI agents work autonomously and cooperatively to unlock insights at a scale and speed that were previously unimaginable. At Google Cloud, we’re not just participants in this shift — we are building the core intelligence, interconnected ecosystems, and AI-native data platforms that power it.
To make this agentic reality possible, you need a different kind of data platform — not a collection of siloed tools, but a single, unified, AI-native cloud. That’s Google’s Data Cloud. At its heart are our unified analytical and operational engines, which remove the historic divide between business transaction data and strategic analysis. Google Data Cloud provides agents with a complete, real-time understanding of the business, transforming it from a collection of processes into a self-aware, self-tuning, reliable organization.
Today, we are delivering major innovations across three key areas that bring this vision to life:
- A new suite of data agents: specialized AI agents designed to act as expert partners for every data user, from data scientists and engineers to business analysts.
- An interconnected network for agent collaboration: a suite of APIs, tools, and protocols that allow developers to integrate Google agents with their own agents and AI efforts, creating a single, intelligent ecosystem.
- A unified, AI-native foundation: a platform that enables intelligent agents by unifying data, providing persistent memory, and embedding AI-driven reasoning.
Specialized data agents as expert partners
The agentic era begins with a new workforce of specialized AI agents, providing an AI-native interface to turn intent into action.
- For data engineers: We are introducing the Data Engineering Agent (Preview) in BigQuery to simplify and automate complex data pipelines. You can now use natural language prompts to streamline the entire workflow, from data ingestion from sources like Google Cloud Storage to performing transformations and maintaining data quality. Simply describe what you need — “Create a pipeline to load a CSV file, cleanse these columns, and join it with another table” — and the agent generates and orchestrates the entire workflow.
Fig. 1 – Data engineering agent for automation of complex data pipelines
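To make the prompt above concrete, here is a minimal local sketch of the kind of pipeline the agent would generate and orchestrate: load a CSV, cleanse its columns, and join it with a reference table. The data and helper names are invented for illustration; in BigQuery the inputs would live in Cloud Storage and a dataset rather than in-memory strings.

```python
import csv
import io

# Toy stand-ins for the two inputs the prompt describes: a raw CSV file
# and a reference table to join against. Both are hypothetical.
RAW_CSV = """customer_id,email,region
1,  ALICE@EXAMPLE.COM ,us-east
2,bob@example.com,
3,carol@example.com,eu-west
"""

REGION_TABLE = {"us-east": "Americas", "eu-west": "EMEA"}

def run_pipeline(raw_csv: str) -> list[dict]:
    rows = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        # Cleanse: normalize whitespace and case, drop rows missing a region.
        email = row["email"].strip().lower()
        region = row["region"].strip()
        if not region:
            continue
        # Join: enrich each row with the region's sales territory.
        rows.append({
            "customer_id": int(row["customer_id"]),
            "email": email,
            "territory": REGION_TABLE.get(region, "UNKNOWN"),
        })
    return rows
```

The point of the agent is that you describe this flow in natural language and it produces and schedules the equivalent workflow for you, rather than you hand-writing each step.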
- For data scientists: We are reimagining the Colab Enterprise notebook experience in BigQuery and Vertex AI as AI-first, featuring a new Data Science Agent (Preview). Powered by Gemini, the Data Science Agent triggers entire autonomous analytical workflows, including exploratory data analysis (EDA), data cleaning, featurization, machine learning predictions, and much more. It creates a plan, executes the code, reasons about the results, and presents its findings, all while allowing you to provide feedback and collaborate along the way.
Fig. 2 – Data science agent to transform each stage of data science tasks
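As one concrete illustration of a single step in such a workflow, a profiling pass during EDA might look like the sketch below: compute summary statistics and flag outliers, then feed the findings back into the agent's plan. The column values and the two-sigma rule are illustrative assumptions, not the agent's actual implementation.

```python
import statistics

# Hypothetical numeric column the agent is profiling during EDA.
values = [12.0, 14.5, 13.2, 15.1, 98.0, 12.8, 14.0]

def profile_column(xs: list[float]) -> dict:
    """One EDA step: summary stats plus simple z-score outlier flags."""
    mean = statistics.mean(xs)
    stdev = statistics.stdev(xs)
    # Flag anything more than two standard deviations from the mean.
    outliers = [x for x in xs if abs(x - mean) > 2 * stdev]
    return {"mean": round(mean, 2), "stdev": round(stdev, 2), "outliers": outliers}
```

The agent's value is running many such steps autonomously, reasoning about the output (here, a suspicious 98.0), and deciding what to clean or investigate next.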
- For business users and analysts: Last year, we introduced the Conversational Analytics Agent, empowering users to get answers from their data using natural language. Today, we’re taking that agent to the next level with our Code Interpreter (Preview). This enhancement supports the many critical business questions that go beyond what simple SQL can answer — for example, “Perform a customer segmentation analysis to group customers into distinct cohorts.” Powered by Gemini’s advanced reasoning capabilities, and developed in partnership with Google DeepMind, the Code Interpreter translates complex natural language questions into executable Python code. It delivers a complete analytical flow — generating code, providing clear natural language explanations, and creating interactive visualizations — all within the governed and secure environment of the Google Data Cloud.
Fig. 3 – Conversational Analytics with Code Interpreter for advanced analysis
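For intuition, the segmentation question above is exactly the kind of task that needs generated Python rather than a single SQL statement. A minimal sketch of code the interpreter might produce is a tiny one-dimensional k-means over customer spend; the customer data and cohort labels here are invented for illustration.

```python
# Hypothetical customers with annual spend, to be clustered into
# "high value" and "standard" cohorts with a tiny 1-D k-means.
spend = {"c1": 120.0, "c2": 135.0, "c3": 110.0, "c4": 980.0, "c5": 1040.0}

def kmeans_1d(values: list[float], centroids: list[float], iters: int = 10) -> list[float]:
    for _ in range(iters):
        # Assign each value to its nearest centroid.
        clusters = {c: [] for c in range(len(centroids))}
        for v in values:
            nearest = min(range(len(centroids)), key=lambda c: abs(v - centroids[c]))
            clusters[nearest].append(v)
        # Move each centroid to the mean of its cluster.
        centroids = [sum(vs) / len(vs) if vs else centroids[c]
                     for c, vs in clusters.items()]
    return centroids

# Seed with the min and max so the result is deterministic.
lo, hi = min(spend.values()), max(spend.values())
centroids = sorted(kmeans_1d(list(spend.values()), [lo, hi]))
cohorts = {cid: ("high value" if abs(s - centroids[1]) < abs(s - centroids[0])
                 else "standard")
           for cid, s in spend.items()}
```

In practice the Code Interpreter would also explain its choices in natural language and render the cohorts as an interactive chart.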
Building the interconnected agent ecosystem
The agentic ecosystem is not a closed loop; it’s an open platform for builders. The true potential of the agentic shift is realized when developers not only use existing agents, but also extend and connect them to their own intelligent systems, creating a broader network. Our first-party agents provide powerful out-of-the-box capabilities, and we also offer the foundational building blocks, including APIs, tools, and protocols, to build custom agents, integrate conversational intelligence into existing applications, and orchestrate complex, multi-agent workflows that solve unique business problems.
To enable this, we are launching Gemini Data Agents APIs, with the first being the new Conversational Analytics API (Preview). This API provides the building blocks to integrate Looker’s powerful natural language processing and Code Interpreter capabilities directly into your own applications, products, and workflows. This allows you to create unique, engaging, and accessible data experiences that meet your specific business needs.
Beyond conversational experiences, we are providing the tools to create custom agents from the ground up. Our new Data Agents API and the Agent Development Kit (ADK) allow you to build specialized agents tailored to your unique business processes. The foundation for all this secure interaction is our investment in Model Context Protocol (MCP), including the MCP Toolbox for Databases and the addition of the new Looker MCP Server (Preview).
Fig. 4 – Gemini CLI querying semantic layer from Looker MCP Server
A unified and AI-native data foundation
Intelligent agents and the networks they form cannot operate on a traditional data stack. They need a cognitive foundation that unifies data from across the enterprise, understands meaning, and provides a persistent memory to reason against.
A core requirement of this AI-native foundation is that it unifies live transactional and historical analytical data stored in OLTP and OLAP systems. We started down this path with a columnar engine for AlloyDB to supercharge analytics for PostgreSQL workloads. Today, we are extending that performance commitment to our flagship scale-out database with the new Spanner columnar engine (Preview); analytical queries on the Spanner columnar engine perform up to 200× faster than on Spanner’s row store on the SSD tier — right on your transactional data. As part of our unified Data Cloud, this innovation directly benefits BigQuery, our analytical engine, via Data Boost, which leverages the Spanner columnar engine to close the gap between transactional and analytical workloads and make it faster for BigQuery to analyze live operational data.
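The intuition behind the columnar speedup can be shown with a toy access-pattern comparison: in a row layout, an aggregate must walk every record including fields it never uses, while a columnar layout scans one contiguous column. This sketch only illustrates the layout difference; the actual engine's gains come from vectorized execution over compressed column data, not from Python-level iteration.

```python
# Row store: each record is a dict; an aggregate touches every field,
# including the wide "note" payload it does not need.
row_store = [{"id": i, "amount": float(i), "note": "x" * 50}
             for i in range(1000)]

# Columnar layout: each column is a contiguous list, so an aggregate
# scans only the single column it needs.
col_store = {
    "id": [r["id"] for r in row_store],
    "amount": [r["amount"] for r in row_store],
}

row_total = sum(r["amount"] for r in row_store)  # reads through whole rows
col_total = sum(col_store["amount"])             # reads one column only
assert row_total == col_total                    # same answer, different I/O
```

The same query gets the same answer either way; the columnar form simply touches a fraction of the data, which is what makes analytical scans over transactional tables dramatically cheaper.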
With this unified data plane in place, the next requirement is giving agents a comprehensive memory grounded in your company’s factual data. To keep agents trustworthy and prevent hallucinations, their answers must be grounded through Retrieval-Augmented Generation (RAG). The foundation of effective RAG is vector search that spans both real-time operational data and deep historical, analytical data. This is why we embed vector search and generation capabilities directly into our data foundations — to give agents access to both transactional and analytical memory.
However, optimizing vector search is complex, often forcing developers to make tough trade-offs between performance, quality, and operational overhead. In AlloyDB AI, new capabilities like adaptive filtering (Preview) solve this for transactional memory, automatically maintaining vector indexes and optimizing for fast queries on live operational data. To provide deep analytical memory, we are also bringing autonomous vector embeddings and generation to BigQuery. Now, BigQuery can automatically prepare and index multimodal data for vector search, a crucial step in building a rich, long-term semantic memory for your agents.
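At its core, the retrieval step behind RAG is a nearest-neighbor search over embeddings. The sketch below shows that step with hand-made three-dimensional vectors and cosine similarity; in AlloyDB AI or BigQuery the vectors would come from an embedding model and be served from a maintained vector index rather than a Python dict.

```python
import math

# Toy embedded "memory": document -> embedding vector.
# Vectors are invented; a real system would generate them with a model.
memory = {
    "order #123 shipped on June 2": [0.9, 0.1, 0.0],
    "refund policy allows returns within 30 days": [0.1, 0.9, 0.1],
    "warehouse inventory synced nightly": [0.0, 0.2, 0.9],
}

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec: list[float], k: int = 1) -> list[str]:
    """Rank stored facts by similarity; the top hits ground the agent's answer."""
    ranked = sorted(memory, key=lambda doc: cosine(query_vec, memory[doc]),
                    reverse=True)
    return ranked[:k]
```

A query embedding close to the first document's vector, e.g. `retrieve([0.85, 0.15, 0.05])`, surfaces the shipping fact, which the agent then cites instead of guessing. Features like adaptive filtering and autonomous embeddings automate the index maintenance this sketch omits.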
Finally, on top of this unified and accessible data, we are embedding AI reasoning directly into our query engines. With the new AI Query Engine in BigQuery (Preview), all data practitioners can perform AI-powered computations on structured and unstructured data right inside BigQuery, quickly and easily getting answers to subjective questions like “Which of these customer reviews sound the most frustrated?”
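To show the shape of that "most frustrated review" question, here is a local simulation in which a keyword score stands in for the per-row LLM judgment the AI Query Engine would apply inside SQL. The reviews and cue words are invented; a real query would delegate the subjective call to the model, not to a keyword list.

```python
# Hypothetical review rows.
reviews = [
    "Love the new dashboard, setup was painless.",
    "Third week waiting on support, this is beyond frustrating!!",
    "Works fine. Docs could be clearer.",
]

# Stand-in for the LLM: a crude frustration score from textual cues.
FRUSTRATION_CUES = ("frustrat", "waiting", "beyond", "!!")

def frustration_score(text: str) -> int:
    t = text.lower()
    return sum(t.count(cue) for cue in FRUSTRATION_CUES)

# The "query": rank rows by the subjective judgment and take the top one.
most_frustrated = max(reviews, key=frustration_score)
```

The point of the AI Query Engine is that the ranking function is an LLM invoked row by row from SQL, so analysts can ask subjective questions over structured and unstructured data without exporting it or writing custom scoring code like this.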
Fig. 5 – AI Query Engine brings the power of LLMs directly to SQL
The future is agentic
The announcements today — from specialized agents for every user to the AI-native foundation that powers them — are more than just a roadmap. They are the building blocks for the new agentic enterprise. By bringing together a new workforce of intelligent agents, enabling them to collaborate within an open and interconnected network, and grounding them in a unified data cloud that erases the line between operational and analytical worlds, we are providing a platform that lets you be an innovator, not just an integrator. This is a fundamental shift in how your organization will interact with its data, moving from complex human-led analysis to a powerful partnership between your teams and intelligent agents. The agentic era is here. We are incredibly excited to see what you will build, and we invite you to join us on this journey to redefine what’s possible with data.