GCP – Cloud CISO Perspectives: 5 tips for secure AI success
Welcome to the first Cloud CISO Perspectives for March 2025. Today, Royal Hansen, vice-president, Engineering, and Nick Godfrey, Office of the CISO senior director, discuss how new AI Protection capabilities in Security Command Center fit in with our overall approach to securing AI — and offer five tips on how organizations can ensure secure AI success.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
–Phil Venables, VP, TI Security & CISO, Google Cloud
- aside_block
- <ListValue: [StructValue([(‘title’, ‘Get vital board insights with Google Cloud’), (‘body’, <wagtail.rich_text.RichText object at 0x3dff8dfd9700>), (‘btn_text’, ‘Visit the hub’), (‘href’, ‘https://cloud.google.com/solutions/security/board-of-directors?utm_source=cloud_sfdc&utm_medium=email&utm_campaign=FY24-Q2-global-PROD941-physicalevent-er-CEG_Boardroom_Summit&utm_content=-&utm_term=-‘), (‘image’, <GAEImage: GCAT-replacement-logo-A>)])]>
5 tips to set up your organization for secure AI success
By Royal Hansen, vice-president, Engineering, and Nick Godfrey, senior director, Office of the CISO
In the centuries-old English proverb “the cobbler’s children have no shoes,” we are reminded of the hard-working laborer whose attention to others prevents them from providing for themselves. It’s an aphorism that’s appropriate for security and AI – as defenders guide others on how to secure their use of AI, we must ensure that we also use AI to support our goals of stronger defensive action.
Royal Hansen, vice-president, Engineering, Google
Earlier this month, Google Cloud introduced AI Protection, new AI risk-management capabilities for defenders, available in Security Command Center. AI Protection helps you to:
- Discover AI inventory in your environment and assess it for potential vulnerabilities.
- Securing AI assets with controls, policies, and guardrails.
- Managing threats against AI systems with detection, investigation, and response capabilities.
You can read all the enhancements here, but most importantly, we want our customers to know that we are making sure that they have proper shoes: As we develop guidelines on how to build AI securely, we are also working to ensure that AI is strengthening defenses for our customers.
Nick Godfrey, senior director, Office of the CISO
We’re building our AI tools according to our secure by design and secure by default principles, as we’ve done with our products for more than a decade. Our Secure AI Framework (SAIF) is designed to benefit everyone, particularly as developers build and deploy agents.
Meanwhile, Google has been a founding member of industry-guiding efforts such as Coalition for Secure AI (CoSAI), which includes AI industry leaders and peers. CoSAI uses SAIF to focus on the key challenges of developing AI securely: AI systems’ supply chain security, preparing defenders for the changing cybersecurity landscape, and AI security governance.
We’ve made big strides in cybersecurity over the past decade, and we have no intention of forgetting to apply what we’ve learned to our own services and offerings.
In a recent report on the adversarial use of generative AI, Google Threat Intelligence Group researchers discovered that while threat actors are seeking to use AI tools to further their objectives, Gemini’s guardrails prevented the development of novel capabilities.
Focusing on secure products means extending emerging capabilities with security. Coding becomes secure coding, email becomes secure email, summarization becomes secure summarization, generative AI agents come with enhanced delegation of tasks, and deployment agents become secure deployment and patching.
Meanwhile, Google has been a founding member of industry-guiding efforts such as Coalition for Secure AI (CoSAI), which includes AI industry leaders and peers. CoSAI uses SAIF to focus on the key challenges of developing AI securely: AI systems’ supply chain security, preparing defenders for the changing cybersecurity landscape, and AI security governance.
We’ve made big strides in cybersecurity over the past decade, and we have no intention of forgetting to apply what we’ve learned to our own services and offerings.
In a recent report on the adversarial use of generative AI, Google Threat Intelligence Group researchers discovered that while threat actors are seeking to use AI tools to further their objectives, Gemini’s guardrails prevented the development of novel capabilities.
Nevertheless, the report’s authors conclude that AI-powered attacks are still a strong possibility in the future as threat actors adopt new AI technologies in their operations. While current foundation models are “unlikely” to provide threat actors with breakthrough capabilities, they note that, “the AI landscape is in constant flux, with new AI models and agentic systems emerging daily.”
Defenders are in the process of evolving, too. For example, red teaming has been a long-standing practice, a sparring partner for defenders who can probe for weaknesses and help identify vulnerabilities that may have otherwise gone unnoticed. Red teaming is also a core component of securing AI technologies, and in July 2023 we released our first AI red team report.
As we work on using AI to make defenders more resilient and stronger against evolving cyberattacks and risks, this mindset needs to be adopted more broadly. Focusing on secure products means extending emerging capabilities with security. Coding becomes secure coding, email becomes secure email, summarization becomes secure summarization, generative AI agents come with enhanced delegation of tasks, and deployment agents become secure deployment and patching.
In the near future, here are five tips we recommend to help set up your organization for secure AI success.
- Implement strong AI governance: Good governance can help ensure that experts from across the organization are empowered to review gen AI initiatives in a holistic, programmatic, and repeatable format, one that can influence and improve the entire process from start to finish.
- Use good data: Implement robust data governance practices that closely align to the organization’s existing data governance program.
- Control access: Adopt strict, role-based access controls and apply the principle of least privilege. Limiting AI access to only necessary data can reduce the risk of data breaches, unauthorized exposure, and better safeguard customers’ personal and financial well-being.
- Address inherited vulnerabilities: Take steps to mitigate the risks associated with inherited vulnerabilities, so be sure to conduct due diligence on third-party and fine-tuned AI models before incorporating them into their systems.
- Mitigate internal AI tool risks: Apply consistent security measures to both public-facing and internal AI tools. This involves implementing robust access controls, data encryption, and regular security assessments for all AI implementations, regardless of their intended audience.
The current pace of AI innovation can help us better anticipate and address emerging threats. Our focus on post-quantum cryptography can help us address a small but growing and crucial part of that.
Security professionals can also use AI to seriously investigate the next security moonshots: Can we eliminate ransomware? Develop self-healing applications that find and patch their own vulnerabilities before a penetration tester could even get started? Create a supervisor agent that can help admins avoid granting excessive privileges?
To learn more about how Google Cloud is supporting customers and using AI to advance our goals of stronger defensive action against AI-enhanced risks, please check out our CISO Insights Hub.
- aside_block
- <ListValue: [StructValue([(‘title’, ‘Join the Google Cloud CISO Community’), (‘body’, <wagtail.rich_text.RichText object at 0x3dff8dfd9220>), (‘btn_text’, ‘Learn more’), (‘href’, ‘https://rsvp.withgoogle.com/events/ciso-community-interest?utm_source=cgc-blog&utm_medium=blog&utm_campaign=2024-cloud-ciso-newsletter-events-ref&utm_content=-&utm_term=-‘), (‘image’, <GAEImage: GCAT-replacement-logo-A>)])]>
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
- Get ready for a unique, immersive security experience at Next ‘25: Here’s why Google Cloud Next is shaping up to be a must-attend event for security experts and the security-curious alike. Read more.
- The ultimate insider threat: North Korean IT workers: Employers can take concrete steps to mitigate the risk of North Korean IT workers. Here’s what business leaders need to know. Read more.
- How we do red teaming at Google scale: The best red teams are creative sparring partners for defenders, probing for weaknesses. Here’s how we do red teaming at Google scale. Read more.
- Project Shield makes it easier to sign up, set up, automate DDoS protection: It’s now easier than ever for vulnerable organizations to apply to Project Shield, set up protection, and automate their defenses. Here’s how. Read more.
- Introducing AI Protection: Security for the AI era: Google Cloud’s new AI Protection safeguards AI workloads and data across clouds and models — no matter the platform. Here’s how it can help your team. Read more.
- Google named Leader in 2025 Forrester Data Security Platforms Wave: We’re pleased to announce that Google Cloud has been recognized as a Leader in The Forrester Wave™: Data Security Platforms, Q1 2025 report. Read more.
- Use a strategic journey map to transform your cloud migration: Many large organizations embark on modernization journeys with ambitious goals, only to be derailed by security challenges. Here’s some of what we’ve learned on how to avoid those pitfalls. Read more.
- Measuring the SOC: What to count in 2025: It is one thing to want a SOC, but it is very different to have a well-running, operationally effective Security Operations Center. Learn here about the SOC metrics that matter most. Read more.
- Vulnerability Reward Program: 2024 in review: In 2024, our Vulnerability Reward Program awarded just shy of $12 million to more than 600 researchers based in countries around the globe across all of our programs. Here’s the highlights from Google’s VRP last year. Read more.
Please visit the Google Cloud blog for more security stories published this month.
- aside_block
- <ListValue: [StructValue([(‘title’, ‘Tell Google Cloud what you think’), (‘body’, <wagtail.rich_text.RichText object at 0x3dff8dfd9dc0>), (‘btn_text’, ‘Vote now’), (‘href’, ‘https://www.linkedin.com/feed/update/urn:li:activity:7307475819021369344’), (‘image’, <GAEImage: GCAT-replacement-logo-A>)])]>
Threat Intelligence news
- China-nexus espionage actor UNC3886 targets Juniper routers: Last year Mandiant discovered custom backdoors deployed on Juniper Networks’ Junos OS routers, and attributed the backdoors to China-nexus espionage actor UNC3886. We dive into the activity and malware, and provide recommendations and detections. Read more.
- A deep dive into TTD instruction emulation bugs: In this in-depth exploration of Microsoft’s Time Travel Debugging (TTD) framework, we explore how subtle inaccuracies in the emulation process can lead to significant security and reliability issues. Read more.
- Deobfuscating strings in garbled binaries: The FLARE team often encounters malware written in Go that is protected using garble. We detail garble’s string transformations and the process of automatically deobfuscating them, and present a tool to streamline the reverse engineering process. Read more.
- Rosetta 2 artifacts in macOS intrusions: We’ve observed sophisticated threat actors using x86-64 compiled macOS malware, but it turns out that Rosetta 2, Apple’s translation technology for running x86-64 binaries on Apple Silicon macOS systems, produces artifacts that can help forensic investigators. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Podcasts from Google Cloud
- How to engineer cloud systems for diverging regulations: Archana Ramamoorthy, senior director of product management, Google Cloud, explains how Google can build systems that need to comply with laws that are often mutually contradictory, with hosts Anton Chuvakin and Tim Peacock. Listen here.
- LLMs for anomaly detection and real-world cloud security: Yigael Berger, head of AI, Sweet Security, goes deep into the current state of AI and security, and discusses with Anton and Tim how well the promise of AI holds up in practice. Listen here.
- Defender’s Advantage: Conversations with the C-suite and board: Imran Ahmad, senior partner, Canadian head of technology, and Canadian co-head of cybersecurity and data privacy, Norton Rose Fulbright, joins host Luke McNamara to discuss how executives are thinking about cyber risk in a changing and evolving landscape. Listen here.
- Defender’s Advantage: What to watch for in 2025: Kelli Vanderlee, Kate Morgan, and Jamie Collier join Luke to discuss trends that are top of mind for them in tracking emergent threats this year, from nation-state intrusions to financially motivated ransomware campaigns. Listen here.
- Behind the Binary: Piano tuning and debugging, the story of x64dbg: We sit down with Duncan Ogilvie, the creator of x64dbg to discuss how one of the most popular Windows debuggers got its start. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in February with more security-related updates from Google Cloud.
Read More for the details.