GCP – Agent Factory Recap: Build AI Apps in Minutes with Google’s Logan Kilpatrick
In our latest episode of The Agent Factory, we were thrilled to welcome Logan Kilpatrick from Google DeepMind for a vibe coding session that showcased the tools shaping the future of AI development. Logan, who has had a front-row seat to the generative AI revolution at both OpenAI and now Google, gave us a hands-on tour of the vibe coding experience in Google AI Studio, showing just how fast you can go from an idea to a fully functional AI application.
This post guides you through the key ideas from our conversation. Use it to quickly recap topics or dive deeper into specific segments with links and timestamps.
The Build Experience in Google AI Studio – What is it?
This episode focused on the Build feature in Google AI Studio; Logan used the term vibe coding to describe the experience of using it. The feature is designed to radically accelerate how developers create AI-powered apps. The core idea is to move from a natural-language description of an app idea to a live, running application in under a minute. It handles the scaffolding, code generation, and even error correction, so you can focus on iterating and refining your idea.
The Factory Floor
The Factory Floor is our segment for getting hands-on. Here, we moved from high-level concepts to practical code with live demos.
Vibe Coding a Virtual Food Photographer
Timestamp: [01:14]
To kick things off, Logan hit the “I’m Feeling Lucky” button to generate a random app idea: a virtual food photographer for restaurant owners. The goal was to build an app that could:
- Accept a simple text-based menu.
- Generate realistic, high-end photography for each dish.
- Allow for style toggles like “rustic and dark” or “bright and modern.”
In about 90 seconds, we had a running web app. Logan fed it a quirky menu of pizza, blueberries, and popcorn, and the app generated images of each. We also saw how AI-suggested features let you iteratively adjust the generated photos (like adding butter to the popcorn) and add functionality (like changing the entire design aesthetic of the site).

Grounding with Google Maps
Timestamp: [10:25]
Next, Logan showcased one of the most exciting new features: grounding with Google Maps. This allows the Gemini models to connect directly to Google Maps to pull in rich, real-time place data without setting up a separate API. He demonstrated a starter template app that acted as a local guide, finding Italian restaurants in Chicago and describing the neighborhood.
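The episode demoed this through a starter template, but the capability is exposed through the Gemini API’s tool configuration. Here is a minimal sketch using the google-genai Python SDK; the `google_maps` tool field and the model name are assumptions to verify against the current docs.

```python
# Minimal sketch: Gemini grounded with Google Maps via the google-genai SDK.
# Assumption: Maps grounding is exposed as types.GoogleMaps(); check current docs.
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",  # assumed model name
    contents=(
        "Find a well-reviewed Italian restaurant in Chicago "
        "and describe the neighborhood around it."
    ),
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_maps=types.GoogleMaps())],
    ),
)
print(response.text)
```

The appeal Logan highlighted is exactly what the sketch shows: grounding is a one-line tool declaration rather than a separate Maps API integration.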

Exploring the AI Studio Gallery
Timestamp: [14:55]
For developers looking for inspiration, Logan walked us through the AI Studio Gallery. This is a collection of pre-built, interactive examples that show what the models are capable of. Two highlights were:
- Prompt DJ: An app that uses the Lyria model to generate novel, real-time music based on a prompt.
- Vibe Check: A fun tool for visually testing and comparing how different models respond to the same prompt, which is becoming a popular way for developers to quickly evaluate a model’s suitability for their use case (a rough script analog follows this list).
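The gallery apps are point-and-click, but the Vibe Check workflow, running one prompt across several models and eyeballing the results, is easy to reproduce as a rough script. A minimal sketch with the google-genai Python SDK follows; the model names are illustrative, so swap in whichever models you want to compare.

```python
# Rough script analog of the "Vibe Check" workflow: one prompt, several models,
# outputs printed side by side. Model names are illustrative assumptions.
from google import genai

client = genai.Client()  # reads GEMINI_API_KEY from the environment
prompt = "Explain vibe coding to a skeptical senior engineer in two sentences."

for model in ["gemini-2.5-flash", "gemini-2.5-pro"]:
    response = client.models.generate_content(model=model, contents=prompt)
    print(f"--- {model} ---\n{response.text}\n")
```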

“Yap to App”: A Conversational Pair Programmer
Timestamp: [19:51]
For the final demo, Logan used speech-to-text input to describe an app idea he called “Yap to App.” His pitch: an AI pair programmer that could generate HTML code and then vocally coach him on how to improve it. After turning his spoken request into a written prompt, AI Studio built a voice-interactive app. The AI assistant generated a simple HTML card and then, when asked, provided verbal suggestions for improvement.

The Agent Industry Pulse
Timestamp: [26:19]
In this segment, we covered some of the biggest recent launches in the agent ecosystem:
- Veo 3.1: Google’s new state-of-the-art video generation model that builds on Veo 3, adding richer native audio and the ability to define the first and last frames of a video to generate seamless transitions (a minimal API sketch follows this list). Smitha showcased a quick applet, built entirely in AI Studio, where users can upload a selfie and generate a video of their future career in AI using Veo 3.1.
- Anthropic’s Skills: A new feature that allows you to give Claude specific tools (like an Excel script) that it can decide to use on its own to complete a task. We compared this to Gemini Gems, noting the difference in approach between creating a persona (Gem) and providing a tool (Skill).
- Recent Google Launches: Logan highlighted several other key releases, including the new Gemini computer use model for building agents that can navigate browsers, updates to the Flash and Flash-Lite models, and foundational upgrades to the AI Studio experience itself.
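For the Veo 3.1 first-and-last-frame capability mentioned above, here is a minimal sketch against the google-genai Python SDK’s video API. The model name and the `last_frame` config field are assumptions based on the preview launch; video generation runs as a long-running operation, hence the polling loop.

```python
# Minimal sketch: Veo 3.1 video generation constrained by a first and last frame.
# Assumptions to verify against current docs: the "veo-3.1-generate-preview"
# model name and the `last_frame` field on GenerateVideosConfig.
import time
from google import genai
from google.genai import types

client = genai.Client()  # reads GEMINI_API_KEY from the environment

def load_image(path: str) -> types.Image:
    # Wrap raw PNG bytes in the SDK's Image type.
    return types.Image(image_bytes=open(path, "rb").read(), mime_type="image/png")

operation = client.models.generate_videos(
    model="veo-3.1-generate-preview",        # assumed preview model name
    prompt="Smooth dolly shot morphing the first frame into the last.",
    image=load_image("first_frame.png"),     # opening frame
    config=types.GenerateVideosConfig(
        last_frame=load_image("last_frame.png"),  # assumed closing-frame field
    ),
)

# Video generation is asynchronous; poll the long-running operation.
while not operation.done:
    time.sleep(10)
    operation = client.operations.get(operation)

video = operation.response.generated_videos[0].video
client.files.download(file=video)
video.save("transition.mp4")
```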
Logan Kilpatrick on the Future of AI Development
We also had the chance to discuss the bigger picture with Logan, from developer reactions to the future of models themselves.
Grounding with Google Maps
Timestamp: [31:26]
When asked which launch developers have been most excited about, Logan admitted he was surprised by the overwhelmingly positive reception for grounding with Google Maps. He noted that the Maps API is one of the most widely used developer APIs in the world, and making it incredibly simple to integrate with Gemini unlocked key use cases for countless developers and startups.
From Models to Systems: The Next Frontier
Timestamp: [32:26]
Looking ahead, Logan shared his excitement for the continued progress on code generation, which he sees as a fundamental accelerant for all other AI capabilities. He also pointed out a trend: models are evolving from simple tools into complex systems.
Historically, a model was something that took a token in and produced a token out. Now, models are starting to look more like agents out of the box. They can take actions: spinning up code sandboxes, pinging APIs, and navigating browsers. “Folks have thought about agents and models as these decoupled concepts,” Logan said, “and it feels like they’re coming closer and closer together as the model capabilities keep improving.”
Conclusion
This conversation was a powerful reminder of how quickly the barrier to entry for building sophisticated AI applications is falling. With tools like Google AI Studio, the ability to turn a creative spark into a working prototype is no longer a matter of weeks or days, but minutes. The focus is shifting from complex scaffolding to rapid, creative iteration.
Your turn to build
We hope this episode inspired you to get hands-on. Head over to Google AI Studio to try out vibe coding for yourself, and don’t forget to watch the full episode for all the details.