GCP – How the Max Planck Institute is sharing expert skills through multimodal agents
Effective monitoring and treatment of complex diseases like cancer and Alzheimer’s disease depend on understanding the underlying biological processes, in which proteins play a central role. Mass spectrometry-based proteomics is a powerful method for studying these proteins quickly and at a global scale. Yet widespread adoption of the technique remains constrained by its technical complexity, as mastering the sophisticated analytical instruments and procedures involved requires specialized training. This creates an expertise bottleneck that slows research progress.
To address this challenge, researchers at the Max Planck Institute of Biochemistry collaborated with Google Cloud to build a Proteomics Lab Agent that assists scientists with their experiments. The agent makes complex scientific procedures easier to execute by providing personalized AI guidance while automatically documenting the process.
“A lab’s critical expertise is often tacit knowledge that is rarely documented and lost to academic turnover. This agent addresses that directly, not only by capturing hands-on practice to build an institutional memory, but by systematically detecting experimental errors to enhance reproducibility. Ultimately, this is about empowering our labs to push the frontiers of science faster than ever before,” said Prof. Matthias Mann, a pioneer in mass spectrometry-based proteomics who leads the Department of Proteomics and Signal Transduction at the Max Planck Institute of Biochemistry.
The agent was built using the Agent Development Kit (ADK), Google Cloud infrastructure, and Gemini models, which offer advanced video and long-context understanding uniquely suited to the needs of advanced research.
One of the agent’s core capabilities is to detect errors and omissions by analyzing a video of a researcher performing lab work and comparing their actions against a reference protocol. This process takes just over two minutes and catches about 74% of procedural errors, although domain-specific knowledge and spatial recognition still need improvement. Our AI-assisted approach is more efficient than the current manual approach, which relies on a researcher’s intuition to either spot subtle mistakes during the procedure or, more commonly, to troubleshoot only after an experiment has failed.
By making it easier to spot mistakes and offering personalized guidance, the agent can reduce troubleshooting time and build towards a future where real-time AI guidance can help prevent errors from happening.
The potential of the Proteomics AI agent goes beyond life sciences, addressing a universal challenge in specialized fields: capturing and transferring the kind of expertise that is learned through hands-on practice, not from manuals. To enable other researchers and organizations to adapt this concept to their own domains, the agentic framework has been made available as an open-source project on GitHub.
In this post, we will detail the agentic framework of the Proteomics Lab Agent, how it uses multimodal AI to provide personalized laboratory guidance, and the results from its deployment in a real-world research environment.
Proteomics Lab Agent generates protocols and detects errors
The challenge: Preserving expert knowledge in a high-turnover environment
Imagine it’s a Friday evening in the lab. A junior researcher needs to use a sophisticated analytical instrument, a mass spectrometer, but the senior expert who is responsible for it has already left for the weekend. The researcher has to search through lengthy protocols, interpret the instrument’s performance, which depends on multiple factors reflected in diverse metrics, and proceed without guidance. A single misstep could potentially damage the expensive equipment, waste a unique and valuable sample, or compromise the entire study.
Such complexity is a regular hurdle in specialized research fields like mass spectrometry-based proteomics. Scientific progress often depends on complex techniques and instruments that require deep technical expertise. Laboratories face a significant bottleneck in training personnel, documenting procedures, and retaining knowledge, especially with the high rate of academic turnover. When an expert leaves, their accumulated knowledge often leaves with them, forcing the team to partially start over. Collectively, this creates accessibility and reproducibility challenges, which slows down new discoveries.
A solution: An AI agent for lab guidance
The Proteomics Lab Agent addresses these challenges by connecting directly to the lab’s collective knowledge – from protocols and instrument data to past troubleshooting decisions. This allows it to provide researchers with personalized AI guidance for complex procedures across the entire experimental workflow, from routine wet-lab work such as pipetting to interacting with the specialized equipment and software required to operate a mass spectrometer. The agent can also generate detailed protocols automatically from videos of experiments, detect procedural errors, and provide guidance for correcting them, reducing troubleshooting and documentation time.
An AI agent architecture for the lab
The underlying multimodal agentic AI framework uses a main agent that coordinates the work of several specialized sub-agents, as shown in Figure 1. Built with Gemini models and the Agent Development Kit, this main agent acts as an orchestrator. It receives a researcher’s query, interprets the request, and delegates the task to the appropriate sub-agent.
Figure 1: Architecture of the Proteomics Lab Agent for multimodal guidance.
The sub-agents are designed for specific functions and connect to the lab’s existing knowledge systems:
- Lab Note and Protocol Agents: These agents handle video-related tasks. When a researcher provides a video of an experiment, they upload it to Google Cloud Storage so that its visual and spoken content can be analyzed. The agents can then check for errors or generate a new protocol.
- Lab Knowledge Agent: This agent connects to the laboratory’s knowledge base (MCP Confluence) to retrieve protocols or save new lab notes, making knowledge accessible to the entire team.
- Instrument Agent: To provide guidance on using complex analytical instruments, this agent retrieves instrument performance metrics from a self-built MCP server that monitors the lab’s mass spectrometers (MCP AlphaKraken).
- Quality Control Memory Agent: This agent captures all instrument-related decisions and their outcomes in a database (e.g. MCP BigQuery). This creates a searchable history of what has worked in the past and preserves valuable troubleshooting experience.
Together, these agents provide guidance adapted to the current instrument status and the researcher’s experience level while automatically documenting the researcher’s work.
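To make this orchestration pattern concrete, here is a minimal sketch of how such a hierarchy could be wired together with the ADK’s Python API. The agent names, instructions, and model choice are illustrative placeholders rather than the actual configuration of the Proteomics Lab Agent, and the real sub-agents additionally connect to Cloud Storage and the MCP servers described above.

```python
from google.adk.agents import Agent

MODEL = "gemini-2.0-flash"  # illustrative model choice

# Sub-agents mirror the roles described above; their descriptions are what the
# orchestrator uses to decide where to route a request.
lab_note_agent = Agent(
    name="lab_note_agent",
    model=MODEL,
    description="Analyzes lab videos, generates lab notes, and flags procedural errors.",
)

lab_knowledge_agent = Agent(
    name="lab_knowledge_agent",
    model=MODEL,
    description="Retrieves protocols from and saves lab notes to the lab knowledge base.",
)

instrument_agent = Agent(
    name="instrument_agent",
    model=MODEL,
    description="Reports current performance metrics of the lab's mass spectrometers.",
)

qc_memory_agent = Agent(
    name="qc_memory_agent",
    model=MODEL,
    description="Stores and retrieves past instrument-related decisions and their outcomes.",
)

# The main agent acts as the orchestrator: it interprets the researcher's
# request and delegates the task to the appropriate sub-agent.
root_agent = Agent(
    name="proteomics_lab_agent",
    model=MODEL,
    description="Coordinator for lab guidance, documentation, and error detection.",
    instruction="Interpret the researcher's request and delegate it to the "
                "sub-agent best suited to handle it.",
    sub_agents=[lab_note_agent, lab_knowledge_agent, instrument_agent, qc_memory_agent],
)
```

In the ADK, the root agent delegates to its sub-agents based on their descriptions, which is what lets a single natural-language query reach the right specialist.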
A closer look: Catching experimental errors with video analysis
While generative AI has proven effective for digital tasks in science – from literature analysis to controlling lab robots through code – it has not addressed the critical gap between digital assistance and hands-on laboratory execution. Our work demonstrates how to bridge this divide by automatically generating lab notes and detecting experimental errors from a video.
Figure 2: Agent workflow for the video-based lab note generation and error detection.
The process, illustrated in Figure 2, unfolds in several steps:
- A researcher records their experiment and submits the video to the agent with a prompt like, “Generate a lab note from this video and check for mistakes.”
- The main agent delegates the task to the Lab Note Agent, which uploads the video to Google Cloud Storage and analyzes the actions performed in the video.
- The main agent asks the Lab Knowledge Agent to find the protocol that matches these actions. The Lab Knowledge Agent retrieves it from the lab’s knowledge base, Confluence.
- With both the video analysis and the baseline protocol in hand, the task is passed back to the Lab Note Agent, which performs a step-by-step comparison of the video against the protocol. It flags any potential mistakes, such as missed steps, incorrectly performed actions, added steps not in the protocol, or steps completed in the wrong order.
- The main agent returns the generated lab notes to the researcher with these potential errors flagged for review. The researcher can accept the notes or make corrections.
- Once finalized, the corrected notes are saved back to the Confluence knowledge base via the Lab Knowledge Agent, preserving a complete and accurate record of the experiment.
Building institutional memory
To support a lab in building a knowledge base, the Protocol Agent can generate lab instructions directly from a video. A researcher can record themselves performing a procedure while explaining the steps aloud. The agent analyzes the video and audio to produce a formatted, publication-ready protocol. We found that providing the model with a diverse set of examples, step-by-step instructions, and relevant background documents produced the best results.
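The sketch below illustrates that prompting strategy under the same assumptions as before (google-genai SDK, hypothetical project and model): example protocols and background documents are packed into the prompt alongside the narrated video before a single generation call.

```python
from google import genai
from google.genai import types

# Hypothetical project and location.
client = genai.Client(vertexai=True, project="my-gcp-project", location="us-central1")


def draft_protocol_from_video(video_gcs_uri: str,
                              example_protocols: list[str],
                              background_docs: list[str]) -> str:
    """Draft a formatted protocol from a narrated lab video, guided by examples."""
    examples = "\n\n".join(f"Example protocol:\n{p}" for p in example_protocols)
    background = "\n\n".join(background_docs)
    prompt = (
        "Watch the video and listen to the spoken explanation, then write a "
        "publication-ready protocol following the structure of the examples.\n\n"
        f"{examples}\n\nRelevant background material:\n{background}"
    )
    video = types.Part.from_uri(file_uri=video_gcs_uri, mime_type="video/mp4")
    response = client.models.generate_content(
        model="gemini-2.0-flash",  # illustrative model choice
        contents=[video, prompt],
    )
    return response.text
```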
Figure 3: Agent workflow for guiding instrument operations.
The agent can also support instrument operations (see Figure 3). A researcher may ask, “Is instrument X ready so that I can measure my samples?” The agent retrieves the latest instrument metrics via the Instrument Agent and compares them with past troubleshooting decisions from the Quality Control Memory Agent. It then provides a recommendation, such as “Yes, the instrument is ready” or “No, calibration is recommended first,” and can even supply the relevant calibration protocol from the Lab Knowledge Agent. Finally, it saves the researcher’s decision and actions via the Quality Control Memory Agent. In this way, every decision and its outcome are recorded, creating a continuously improving knowledge base for operating specialized equipment and software.
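One way to express this loop is as ADK tool functions attached to a dedicated agent, sketched below. The function names, return values, and agent configuration are hypothetical stand-ins for the MCP AlphaKraken and BigQuery connections used in the real system.

```python
from google.adk.agents import Agent


def get_instrument_metrics(instrument_id: str) -> dict:
    """Return the latest performance metrics for an instrument.

    Hypothetical stand-in for the MCP AlphaKraken connection.
    """
    return {"instrument_id": instrument_id, "id_rate": 0.92, "mass_error_ppm": 1.4}


def get_past_qc_decisions(instrument_id: str) -> list[dict]:
    """Return previous quality-control decisions and their outcomes.

    Hypothetical stand-in for the BigQuery-backed memory.
    """
    return [{"instrument_id": instrument_id, "decision": "recalibrated",
             "outcome": "resolved mass drift"}]


def save_qc_decision(instrument_id: str, decision: str, outcome: str) -> str:
    """Persist the researcher's final decision and its outcome for future guidance."""
    return f"Saved decision '{decision}' for {instrument_id}."


instrument_guidance_agent = Agent(
    name="instrument_guidance_agent",
    model="gemini-2.0-flash",  # illustrative model choice
    instruction="Use the tools to judge whether an instrument is ready for "
                "measurement, recommend calibration if needed, and record the "
                "researcher's final decision and its outcome.",
    tools=[get_instrument_metrics, get_past_qc_decisions, save_qc_decision],
)
```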
More technical details are described in our full publication.
Real-world impact: Making complex scientific procedures easier
To measure the AI agent’s value in a real-world setting, we deployed it in our department at the Max Planck Institute of Biochemistry, a group with 40 researchers. We evaluated the agent’s performance across three key laboratory functions: detecting procedural errors, generating protocols, and providing personalized guidance.
The results showed strong gains in both speed and quality. Key findings include:
- AI-assisted error detection: The agent successfully identified 74% of all procedural errors (a metric known as recall) with an overall accuracy of 77% when comparing 28 recorded lab procedures against their reference protocols. While precision (41%) is still a limitation at this early stage, the results are highly promising.
- Fast, expert-quality protocols: From lab videos, the agent generated standardized, publication-ready protocols in about 2.6 minutes. This was approximately 10 times faster than manual creation and achieved an average quality score of 4.4 out of 5 across 10 diverse protocols.
- Personalized, real-time support: The agent successfully integrated real-time instrument data with past performance decisions to provide researchers with tailored advice on equipment use.
A deeper analysis of the error-detection results revealed specific strengths and areas for improvement. As shown in Figure 4, the system is already effective at recognizing general lab equipment and reading on-screen text. The main limitations were in understanding highly specialized proteomics equipment (27% of these errors went unrecognized) and perceiving fine-grained details, such as the exact placement of pipette tips on a 96-well grid (47%) or small text on pipettes (41%); see the appendix of the corresponding paper. As multimodal models advance, we expect their ability to interpret these details to improve, strengthening this critical safeguard against experimental mistakes.
Figure 4: Strengths and current limitations of the Proteomics Lab Agent in a lab.
Our agent already automates documentation and flags errors in recorded videos, but its future potential lies in prevention, not just correction. We envision an interactive assistant that uses speech to prevent mistakes in real-time before they happen. By making this project open source, we invite the community to help build this future.
Scaling for the future
In conclusion, this framework addresses critical challenges in modern science, from the reproducibility crisis to knowledge retention in high-turnover academic environments. By systematically capturing not just procedural data but also the expert reasoning behind them, the agent builds an institutional memory.
“This approach helps us capture and share the practical knowledge that is often lost when a researcher leaves the lab,” notes Matthias Mann. “This collected experience will not only accelerate the training of new team members but also create the data foundation we need for future innovations like predictive instrument maintenance for mass spectrometers and automated protocol harmonization within individual labs and across different labs.”
The principles behind the Proteomics Lab Agent are not limited to one field. The concepts outlined in this study are a generalizable solution for any discipline that relies on complex, hands-on procedures, from life sciences to manufacturing.
Dive deeper into the methodology and results by reading our full paper. Explore the code on GitHub and adapt the Proteomics Lab Agent for your own research. Follow the work of the Mann Lab at the Max Planck Institute on LinkedIn, BlueSky, or X to see what comes next.
This project was a collaboration between the Max Planck Institute of Biochemistry and Google. The core team included Patricia Skowronek and Matthias Mann from the Department of Proteomics and Signal Transduction at the Max Planck Institute of Biochemistry and Anant Nawalgaria from Google. P.S. and M.M. want to thank the entire Mann Lab for their support.
