GCP – Next 25 developer keynote: From prompt, to agent, to work, to fun
Attending a tech conference like Google Cloud Next can feel like drinking from a firehose — all the news, all the sessions and breakouts, all the learning and networking… But after a busy couple of days, the developer keynote makes it clear there’s a method to the madness. A coherent picture starts to emerge from everything you’ve seen, pointing the way to all the cool things you can do when you get back to your desk.
This year, the developer keynote was hosted by the inimitable duo of Richard Seroter, Google Cloud Chief Evangelist, and Stephanie Wong, Head of Developer Skills and Community, plus a whole host of experts from across Google Cloud’s product, engineering, and developer advocacy teams. The keynote itself was organized around a noble, relatable goal: use AI to help remodel AI Developer Experience Engineer Paige Bailey’s 1970s-era kitchen. But how?
It all starts with a prompt
The generative AI experience starts by prompting a model with data and your intent. Paige was joined on stage by Logan Kilpatrick, Senior Product Manager at Google DeepMind. Together, Logan and Paige prompted AI Studio to analyze Paige’s kitchen, supplying it with text descriptions, floor plans, and images. In return, it suggested cabinets, a cohesive design, a color palette, and materials, relying on Gemini’s native image generation capabilities to bring its ideas to life. Then, to answer important questions about cost, especially for Paige’s area, they used Grounding with Google Search to pull in real-world material costs, local building codes and regulations, and other relevant information.
As Logan said, “From understanding videos, to native image generation, to grounding real information with Google Search – these are things that can only be built with Gemini.”
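If you want to try the same pattern from code rather than AI Studio, here is a minimal sketch using the google-genai Python SDK with Grounding with Google Search. The model name, image file, and prompt are illustrative assumptions, not the exact setup from the demo.

```python
# Minimal sketch: a multimodal prompt plus Grounding with Google Search,
# using the google-genai SDK (pip install google-genai). The model name,
# image file, and prompt are illustrative assumptions, not the demo's setup.
from google import genai
from google.genai import types

client = genai.Client()  # reads the API key from the environment

# Attach a photo of the kitchen alongside the text prompt.
kitchen_photo = client.files.upload(file="kitchen.jpg")  # hypothetical image

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=[
        kitchen_photo,
        "Suggest a cohesive design, color palette, and materials for this "
        "1970s kitchen, and estimate material costs and permit rules for my area.",
    ],
    config=types.GenerateContentConfig(
        # Grounding with Google Search pulls in real-world prices and codes.
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)
print(response.text)
```

Grounded responses also include metadata about the search sources, so you can show citations alongside the answer.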
New things that make this possible:
- Gemini 2.5 Pro is available in preview on Vertex AI and the Gemini app.
- Gemini 2.5 Flash — our workhorse model optimized specifically for low latency and cost efficiency — is coming soon to Vertex AI, AI Studio, and the Gemini app.
From prompt to agent
We all know that a prompt is the heart of a generative AI query. “But what the heck is an agent?” asked Richard. “That’s the million-dollar question.”
“An agent is a service that talks to an AI model to perform a goal-based operation using the tools and context it has,” Stephanie explained. And how do you go from prompt to agent? One way is to use Vertex AI, our comprehensive platform for building and managing AI applications and agents, and Agent Development Kit (ADK), an open-source framework for designing agents. ADK makes it easier than ever to get started with agents powered by Gemini models and Google AI tools.
Dr. Fran Hinkelman, Developer Relations Engineering Manager at Google Cloud, took the stage to show off ADK. An agent needs three things, Fran explained: 1) instructions that define the agent’s goal, 2) tools that let it take action, and 3) a model to handle the LLM tasks.
Fran wrote the agent code using Python, and in a matter of minutes, deployed it, and got a professionally laid out PDF that outlined everything a builder might need to get started on a kitchen remodel. “What a massive time-saver,” Fran said.
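The keynote didn’t walk through Fran’s code line by line, but a minimal ADK agent along those lines might look like the sketch below. The estimate_materials tool, the instruction text, and the model choice are assumptions for illustration, not her actual implementation.

```python
# A minimal sketch of a single ADK agent (pip install google-adk).
# The tool, instruction, and model below are illustrative assumptions.
from google.adk.agents import Agent


def estimate_materials(square_feet: float, finish_level: str) -> dict:
    """Hypothetical tool: rough materials cost for a kitchen remodel."""
    cost_per_sqft = {"budget": 75, "midrange": 150, "luxury": 300}
    rate = cost_per_sqft.get(finish_level, 150)
    return {"finish_level": finish_level, "estimated_cost": square_feet * rate}


root_agent = Agent(
    name="kitchen_remodel_agent",
    model="gemini-2.5-pro",          # 3) the model that handles the LLM tasks
    instruction=(                    # 1) instructions that define the goal
        "Help plan a kitchen remodel. Use the estimate_materials tool for "
        "costs and produce a builder-ready summary."
    ),
    tools=[estimate_materials],      # 2) tools that let the agent take action
)
```

From there, the ADK command-line tools (`adk run` and `adk web`) let you chat with the agent locally before deploying it anywhere.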
New things that make this possible:
- Agent Development Kit (ADK) is our new open-source framework that simplifies the process of building agents and sophisticated multi-agent systems while maintaining precise control over agent behavior. With ADK, you can build an AI agent in under 100 lines of intuitive code.
- ADK support for Model Context Protocol (MCP), which gives an LLM a standardized way to receive the context and tools it needs to handle a request (a rough sketch of what this looks like in ADK follows this list).
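As a rough illustration of that MCP support, the sketch below wires an MCP server’s tools into an ADK agent. The MCPToolset surface has shifted across ADK releases, so treat the exact parameters, the filesystem server, and the folder path as assumptions to check against the version you install.

```python
# Sketch: giving an ADK agent tools exposed by an MCP server. The exact
# MCPToolset parameters vary by ADK release; the server command and folder
# path are placeholders for illustration.
from google.adk.agents import Agent
from google.adk.tools.mcp_tool.mcp_toolset import MCPToolset, StdioServerParameters

root_agent = Agent(
    name="file_reading_agent",
    model="gemini-2.5-pro",
    instruction="Answer questions using the files the MCP server exposes.",
    tools=[
        MCPToolset(
            # Launch a local MCP filesystem server over stdio.
            connection_params=StdioServerParameters(
                command="npx",
                args=["-y", "@modelcontextprotocol/server-filesystem",
                      "/path/to/remodel-docs"],
            ),
        )
    ],
)
```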
From one agent to many
It’s one thing to build an agent. It’s another to orchestrate a collection of agents — exactly the kind of thing you need for a complex process like remodeling a kitchen. To show you how, Dr. Abirami Sukumaran, Staff Developer Advocate at Google Cloud, used ADK to create a multi-agent ecosystem with three types of agents: 1) a construction proposal agent, 2) a permits and compliance agent, and 3) an agent for ordering and delivering materials.
And when the multi-agent system was ready, she deployed it directly from ADK to Vertex AI Agent Engine, a fully managed agent runtime that supports many agent frameworks including ADK.
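To make that concrete, here is a hedged sketch of how a multi-agent system along those lines can be composed in ADK and then pushed to Agent Engine. The agent names, instructions, project settings, and the agent_engines.create() call are illustrative assumptions rather than the code from the demo.

```python
# Sketch: composing three specialist agents under a coordinator in ADK, then
# deploying to Vertex AI Agent Engine. Names, instructions, and deployment
# arguments are illustrative assumptions, not the demo's actual code.
from google.adk.agents import Agent

proposal_agent = Agent(
    name="construction_proposal_agent",
    model="gemini-2.5-pro",
    instruction="Draft a construction proposal for the kitchen remodel.",
)
permits_agent = Agent(
    name="permits_compliance_agent",
    model="gemini-2.5-pro",
    instruction="Check local permit and compliance requirements.",
)
materials_agent = Agent(
    name="materials_ordering_agent",
    model="gemini-2.5-pro",
    instruction="Order materials and schedule deliveries.",
)

# The root agent delegates work to the three specialists.
root_agent = Agent(
    name="remodel_coordinator",
    model="gemini-2.5-pro",
    instruction="Coordinate the remodel by delegating to your sub-agents.",
    sub_agents=[proposal_agent, permits_agent, materials_agent],
)

# Deployment to Agent Engine (assumed API; check the current SDK docs).
import vertexai
from vertexai import agent_engines

vertexai.init(project="my-project", location="us-central1",
              staging_bucket="gs://my-staging-bucket")
remote_agent = agent_engines.create(
    agent_engine=root_agent,
    requirements=["google-cloud-aiplatform[adk,agent_engines]"],
)
```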
It gets better: After deploying her agent, Abirami tested it out in Google Agentspace, a hub for sharing your own agents and those from third parties.
There was a problem, though. Midway through, the agent system appeared to fail. Abirami sprang into action, launching Gemini Cloud Assist Investigations, which used Logs Explorer to return relevant observations and hypotheses about the source of the problem. It even supplied a recommended code fix for the agents. Abirami examined the code, accepted it, redeployed her agents, and saved the day.
This is really key. “It’s hard enough to build systems that orchestrate complex agents and services,” Abirami said. “Developers shouldn’t have to sit around debugging multiple dependencies — getting to the logs, going through the code, all of this can take a lot of time and resources that devs typically don’t have.”
New things that make this possible:
- Vertex AI Agent Engine is a fully managed runtime in Vertex AI that helps you deploy your custom agents to production with built-in testing, release, and reliability at a global, secure scale.
- Cloud Assist Investigations helps diagnose problems with infrastructure and even issues in the code.
- Agent2Agent (A2A) protocol: We’re proud to be the first hyperscaler to create an open protocol to help enterprises support multi-agent ecosystems, so agents can communicate with each other, regardless of the underlying technology.
Choose your own IDE and models
“Have you heard of vibe coding?” asked our next presenter, Debi Cabrera, Senior Developer Advocate at Google Cloud. In this agentic style of coding, people prompt an agent with ideas as well as code to get to an effective programming output. People are doing it more and more in Windsurf, a popular new integrated development environment (IDE), and she’s a fan.
Debi also showed Gemini at work in Cursor and in IntelliJ with Copilot, but you could also use Visual Studio Code, Tabnine, Cognition, or Aider. (She even wrote her prompts in Spanish, which Gemini handled sin problema.) At the end of the day, “we’re enabling devs to use Gemini wherever it suits you best,” Debi said.
And if you’d rather not use Gemini as your model, you can use one of the more than 200 models in Vertex AI Model Garden, including Llama, Gemma 3, and models from Anthropic and Mistral, or open-source models from Hugging Face.
“No matter what you use, we’re excited to see what you come up with!”
New things that make this possible:
- Gemini 2.5 Pro is now available in Gemini Code Assist for individuals.
- Android Studio support for Gemini Code Assist is now available in preview.
- Gemini in Firebase provides complete AI assistance in the new Firebase Studio.
In a field of dreams
Next up, presenters took a break from Paige’s kitchen remodel to tackle another high-value problem: how to throw a pitch.
With all the data that Major League Baseball processes with Google Cloud — 25 million data points per game — pitching technique is a problem that’s ripe for AI.
Jake DiBattista, winner of the recent Google Cloud x MLB Hackathon, started by analyzing a video of a great left-handed pitcher, Clayton Kershaw. He pre-processed the video using a computer vision library and stored it in Google Cloud, using selections such as pitch type and game state to pull MLB data. Finally, after sending all this information to the Gemini API, he got his answer: Kershaw threw his signature curveball with nearly no deviation from his ideal.
Impressive, but how well does it work for those of us who aren’t pros? Jake created an “amateur mode” for less experienced players and used a video of our host, Richard, throwing a pitch! After some prompt engineering to adapt the professional model used for Kershaw into an amateur model for Richard, the results were a little more prescriptive: he has potential; he just needs to tighten up his arm a little and use more leg drive to maximize his power.
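Jake’s real pipeline includes computer-vision preprocessing and MLB data, but the Gemini-facing step can be sketched roughly as below with the google-genai SDK. The file name, model, and prompt are assumptions, not his actual code.

```python
# Rough sketch of the Gemini-facing step: upload a pitch video through the
# Files API and ask for a mechanics analysis. File name, model, and prompt
# are assumptions; the real project also preprocesses video and pulls MLB data.
import time

from google import genai

client = genai.Client()

# Upload the video and wait for the Files API to finish processing it.
video = client.files.upload(file="amateur_pitch.mp4")  # hypothetical clip
while video.state.name == "PROCESSING":
    time.sleep(5)
    video = client.files.get(name=video.name)

response = client.models.generate_content(
    model="gemini-2.5-pro",
    contents=[
        video,
        "Analyze this amateur pitcher's mechanics. Compare arm action and leg "
        "drive to a typical left-handed curveball delivery and suggest fixes.",
    ],
)
print(response.text)
```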
Jake shared the inspiration for his project: As a shot putter in college, he wanted to measure the accuracy of his throwing technique. How can you improve if you don’t know what you’re doing wrong – or right? Back then, having this kind of data would have been incredibly valuable for his development.
But what’s truly amazing is that Jake built this fully customizable prompt generator for analyzing pitches in just one week. “This essentially worked out of the box,” Jake said. “I didn’t need to implement a custom model or build overly complex datasets.”
Get back to work
Meanwhile, back at his day job, our next presenter, Jeff Nelson, Developer Advocate at Google Cloud, took the stage with a clear goal: turn raw data into a data application for sales managers. He started in a BigQuery notebook, wrote some SQL to build a forecast, and loaded the results into a Python DataFrame, because Python makes it easy to use libraries to execute code over tables of any size.
But how can you actually use an agent to forecast sales? Jeff selected the Gemini Data Science Agent built into the notebook, hit “Ask Agent,” and entered a prompt asking for a sales forecast from his table. The best part: from that point onward, all code was generated and executed by the Gemini Data Science Agent.
Plus, he pointed out that the agent used Spark for feature engineering, which is only possible because of our new Serverless Spark engine in BigQuery. Switching between SQL, Spark, and Python is easy, so you can use the right tool for the job.
To build the forecast itself, Jeff used a new Google foundation model, TimesFM, that’s accessible directly from BigQuery. Unlike traditional models, this one is pre-trained on massive time-series datasets, so you get forecasts simply by supplying your data. “The forecast becomes a data app accessible to everyone,” Jeff said.
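As a rough sketch of what that looks like in practice, the snippet below calls TimesFM through BigQuery’s AI.FORECAST table function from Python. The table, column names, and horizon are placeholders, and the function’s exact signature and availability may differ in your project, so treat this as an assumption to verify against the documentation.

```python
# Sketch: a TimesFM-backed forecast via BigQuery's AI.FORECAST, run from
# Python with the BigQuery client (pip install google-cloud-bigquery).
# The table, columns, and horizon are placeholders; verify the function's
# signature and availability in your project before relying on it.
from google.cloud import bigquery

client = bigquery.Client()

query = """
SELECT *
FROM AI.FORECAST(
  TABLE `my-project.sales.daily_revenue`,  -- hypothetical source table
  data_col => 'revenue',
  timestamp_col => 'order_date',
  horizon => 30)                           -- forecast 30 periods ahead
"""

forecast_df = client.query(query).to_dataframe()
print(forecast_df.head())
```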
New things that make this possible:
- Specialized agents to support Data Engineers, Data Analysts, and Business Users – in preview.
- Data Science Agent integrated in BigQuery notebooks is coming soon. You can get started in Colab today.
Focus on the fun stuff
As a developer, how would you like to hand off boring things like creating technical design docs or product requirement docs? Scott Densmore, Senior Director of Engineering, closed out the demos by showing us an incredible way to cut through tedious work: Gemini Code Assist and its new Kanban board.
Code Assist can help you orchestrate agents across the software development lifecycle, including with what Scott calls a “backpack” that holds all your engineering context. Using a technical design doc for a Java migration as an example, Scott created a comment and assigned it to Code Assist right from the Google Doc. Instantly, the new task showed up on the Kanban board, ready to be tracked. Nor is this capability limited to Google Docs — you can also assign tasks directly from your chat rooms and bug trackers, or have Code Assist proactively find them for you.
Then, he took a tougher example: he asked Code Assist to create a prototype for a product requirement doc. He told Code Assist the changes he wanted, and hit repeat until he was happy with what he saw. Easy.
“Gemini Code Assist provides an extra pair of coding hands to help you create applications and remove repetitive and mundane tasks — so you can focus on the fun stuff.”
New things that make this possible:
- Gemini Code Assist Kanban board lets you interact with our agents, review the work plan Gemini creates to complete your tasks, and track the progress of each job or request.
Pretty amazing, right? But don’t just take our word for it: for a true sense of all the magic we demonstrated here, go ahead and rewatch the full developer keynote. We promise it will be an hour well spent.