GCP – Cloud CISO Perspectives: AI as a strategic imperative to manage risk
Welcome to the second Cloud CISO Perspectives for October 2025. Today, Jeanette Manfra, senior director, Global Risk and Compliance, shares her thoughts on the role of AI in risk management.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
AI as a strategic imperative: Modernizing risk management
By Jeanette Manfra, Senior Director, Global Risk and Compliance, Google Cloud
AI is more than a technological upgrade: It’s a strategic imperative for modernizing risk management, security, and compliance. It can help organizations fundamentally shift from reactive responses to proactive, data-driven strategies.
AI systems that can enable predictive risk analytics and accurately inform timely decision-making are the holy grail of risk management, although adoption has not been uniform. Great strides have been made in many disciplines, particularly in financial risk modeling. Other areas have struggled, for various reasons, to take advantage of advances in analytics.
What I am focused on is building a unified risk posture that stays agile as inputs change and meets the needs of a rapidly growing company. There are four key areas where AI can help across the risk management lifecycle:
- Risk identification: AI algorithms can analyze large volumes of structured and unstructured data from many sources to detect patterns and anomalies indicative of emerging risks. Natural language processing (NLP) in particular can extract insights from text data, helping identify risks from regulatory changes, customer complaints, and employee feedback. For financial institutions, AI can identify policies and procedures that align with regulations and pinpoint compliance gaps.
- Risk assessment: AI models can use predictive analytics to forecast potential risks based on historical data and current trends to enable proactive management. They can run simulations for various risk scenarios to assess impact, which can improve decision-making. Machine learning algorithms can be trained to continuously learn from new data, dynamically adjusting risk assessments and improving accuracy.
- Risk mitigation: AI-powered systems are being developed that can implement and enforce automated controls to reduce exposure to identified risks in near real-time. They can also suggest optimal mitigation strategies based on changing risk profiles and business objectives.
- Risk monitoring and reporting: AI-driven systems can provide continuous monitoring, generating alerts for unusual activities or deviations. They can automate data collection and analysis, generate detailed reports, and improve compliance reporting, such as automating Suspicious Activity Reports (SARs) filings.
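As a minimal illustration of the anomaly-detection idea running through the lifecycle above, the sketch below flags events that deviate sharply from a historical baseline. It uses a simple z-score on hypothetical transaction data, a deliberately small stand-in for the machine-learning models the post describes, not a production detector.

```python
from statistics import mean, stdev

def flag_anomalies(history, new_events, threshold=3.0):
    """Flag events whose value deviates more than `threshold` standard
    deviations from the historical baseline. A toy z-score stand-in for
    the ML-based anomaly detection discussed above."""
    baseline_mean = mean(history)
    baseline_stdev = stdev(history)
    flagged = []
    for event_id, value in new_events:
        z = abs(value - baseline_mean) / baseline_stdev
        if z > threshold:
            flagged.append((event_id, round(z, 1)))
    return flagged

# Hypothetical historical daily wire-transfer totals and today's events.
history = [100, 102, 98, 101, 99, 103, 97, 100, 102, 98]
today = [("txn-001", 101), ("txn-002", 340), ("txn-003", 99)]
print(flag_anomalies(history, today))  # only txn-002 is flagged
```

A real deployment would replace the z-score with a trained model and feed alerts into the monitoring and reporting pipeline described in the last bullet.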
We can also track the value of AI across key risk-management uses:
- In cybersecurity threat detection, AI-driven systems can monitor enterprise environments, network traffic, and user activity, helping detect threats earlier. They can identify anomalies and predict attack vectors, shifting security from reactive to proactive.
- In regulatory change management, AI systems can review regulatory documents and updates, then summarize the changes and other important details in plain language.
- In quality assurance and quality control, AI is being explored by compliance departments to help with tasks, such as executing secondary reviews with large population samples.
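To make the regulatory change-management use case concrete, here is a minimal sketch that surfaces what changed between two versions of a rule text. It uses a plain stdlib diff rather than an AI model, so treat it as the input-preparation step an AI summarizer would build on; the rule text is invented for illustration.

```python
import difflib

def changed_lines(old_text, new_text):
    """Surface added and removed lines between two versions of a
    regulatory document. A diff-based precursor to the AI-driven
    change summarization described above."""
    diff = difflib.unified_diff(
        old_text.splitlines(), new_text.splitlines(), lineterm="", n=0
    )
    # Keep only the content lines, dropping the file and hunk headers.
    return [line for line in diff
            if line.startswith(("+", "-"))
            and not line.startswith(("+++", "---"))]

old = "Firms must report within 30 days.\nRecords kept 5 years."
new = "Firms must report within 10 days.\nRecords kept 5 years."
print(changed_lines(old, new))
```

In practice the extracted changes would be passed to a language model for plain-language summarization, as the bullet above describes.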
Organizational and operational challenges
Implementing AI requires careful planning and testing to secure buy-in and acceptance from regulators, employees, executives, and other stakeholders. Boards of directors also can play a vital role in helping guide AI adoption. Conversely, a lack of broad organizational commitment and involvement from senior leadership can limit the beneficial impact of AI.
Organizations generally pursue one of two paths for AI adoption. AI tools can be integrated into existing workflows, or organizations can use AI as a starting point to transform workflows from scratch to make AI an integral part of the process. Both often face operational challenges when working with legacy infrastructure not designed for modern, data-intensive systems. Additionally, fragmentation of existing security tools can hamper a unified view of the threat landscape.
Organizations can face fragmented risk oversight from a lack of alignment, so effective AI risk management should be integrated into broader enterprise risk-management strategies. Business and security leaders, and boards of directors, should be prepared to implement cultural changes as required.
There is also a significant shortage of experienced specialists capable of effectively deploying, managing, and operating AI solutions. AI security solutions, for example, require specialized talent, ongoing training, and infrastructure investments.
While AI can automate many tasks, over-reliance on automated systems can diminish the critical role of human judgment and contextual understanding, leading to unfair or harmful outcomes when AI systems fail to account for nuanced or context-specific factors. Human decision-making authority should remain final in AI compliance.
Risk measurement and management with AI can also face an additional level of complexity when organizations rely on third-party suppliers for AI products and services. Differing metrics, lack of transparency, and less control over use cases can all impair the use of AI, so contingency processes for failures in third-party data and AI systems should be strongly considered.
Adopting comprehensive AI risk-management frameworks
Many organizations lack structured AI governance. To implement AI compliance and risk management properly, the legal, data governance, technical development, and cybersecurity teams should be brought together. Organizations need a structured, comprehensive approach.
At Google Cloud, part of our approach is to align AI risk management with the Secure AI Framework (SAIF), the NIST AI Risk Management Framework (AI RMF), and ISO 42001. Beyond NIST, organizations can integrate AI into existing enterprise risk-management frameworks such as ISO 31000 and the Committee of Sponsoring Organizations (COSO) framework, enhancing their effectiveness with automation, scalability, and near real-time capabilities.
Google Cloud’s approach to trustworthy AI
We also adhere to a holistic approach to AI risk management and compliance. We focus on several key areas:
- Innovating responsibly, guided by AI principles;
- Extending security best practices to AI-specific risks through SAIF (guidance here) and the Coalition for Secure AI (CoSAI);
- Employing an AI risk assessment methodology for identifying, assessing, and mitigating risks;
- Developing and using an automated, scalable, and evidence-based approach for auditing generative AI workloads;
- And emphasizing human oversight and collaboration in our risk assessments and governance councils.
Additionally, we use explainability tools to help understand and interpret AI predictions and evaluate potential bias; employ privacy-preserving technologies such as masking and tokenization while adhering to privacy laws; continuously monitor and audit for security vulnerabilities that AI might miss; invest in training programs to bridge the AI knowledge gap; and encourage interdisciplinary collaboration between data scientists, risk analysts, and domain experts.
AI is a transformative force, enabling unprecedented levels of proactive risk management, enhanced security, and streamlined compliance. The path forward requires a holistic, leadership-driven approach, spanning structured frameworks, ethical AI design, interdisciplinary collaboration, and continuous investments in talent and technology. Staying adaptable to evolving technologies and regulations is not just a competitive advantage; it’s an operational necessity.
For more guidance on using AI in risk management, please check out our CISO Insights hub.
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
- How Google Does It: Building AI agents for cybersecurity and defense: At Google, we’ve moved from talking about AI agents to actively using them for security. Here are four critical lessons that helped shape our approach. Read more.
- How Model Armor can help protect your AI apps: You can use Model Armor to protect against prompt injections and jailbreaks. Here’s how. Read more.
- Enabling a safe agentic web with reCAPTCHA: At Google Cloud, we believe preventing fraud and abuse in the agentic web should fundamentally result in a simpler customer experience. Here’s how we’re doing it. Read more.
- New from Mandiant Academy: Practical training to protect your perimeter: Protecting the Perimeter: Practical Network Enrichment teaches the skills to transform network traffic analysis into a powerful, precise security asset. Read more.
- How we’re helping customers prepare for a quantum-safe future: Google has been working on quantum-safe computing for nearly a decade. Here’s our latest on protecting data in transit, digital signatures, and public key infrastructure. Read more.
- Google is named a Leader in the 2025 Gartner® Magic Quadrant™ for SIEM: We’re excited to share that Gartner has recognized Google as a Leader in the 2025 Gartner® Magic Quadrant™ for Security Information and Event Management (SIEM). Read more.
- Cloud Armor named Strong Performer in Forrester WAVE, new features launched: New capabilities in Cloud Armor offer more comprehensive security policies and granular network configuration controls. Read more.
- A practical guide to Google Cloud’s Parameter Manager: Google Cloud Parameter Manager is designed to reduce unnecessarily sharing key cloud configurations, and it works with many types of data formats. Read more.
Please visit the Google Cloud blog for more security stories published this month.
Threat Intelligence news
- EtherHiding in the open, part 1: DPRK hides nation-state malware on blockchains: Google Threat Intelligence Group (GTIG) and Mandiant have observed the North Korean threat actor UNC5342 using “EtherHiding” to deliver malware and facilitate cryptocurrency theft, the first time we have observed a nation-state actor adopting this method. EtherHiding uses transactions on public blockchains to store and retrieve malicious payloads, and is notable for its resilience against conventional takedown and blocklisting efforts. Read more.
- EtherHiding in the open, part 2: How UNC5142 uses it to distribute malware: Since late 2023, UNC5142 has significantly evolved their tactics, techniques, and procedures (TTPs) to enhance operational security and evade detection. The group is characterized by its use of compromised WordPress websites and EtherHiding on the BNB Smart Chain to store its malicious components in smart contracts. Read more.
- New malware attributed to Russian state-sponsored COLDRIVER: COLDRIVER, a Russian state-sponsored threat group known for targeting high-profile representatives from non-governmental organizations, policy advisors, and dissidents, swiftly shifted operations after GTIG’s May 2025 public disclosure of its LOSTKEYS malware. Only five days later, the group began deploying new malware families. Read more.
- Pro-Russia information operations leverage Russian drone incursions into Polish airspace: GTIG has observed multiple instances of pro-Russia information operations (IO) actors promoting narratives related to the reported incursion of Russian drones into Polish airspace that occurred in September. The IO activity appeared consistent with previously-observed instances of pro-Russia IO targeting Poland — and more broadly the NATO Alliance and the West. Read more.
- Vietnamese actors using fake job posting campaigns to deliver malware and steal credentials: GTIG is tracking a cluster of financially-motivated threat actors operating from Vietnam that use fake job postings on legitimate platforms to target individuals in the digital advertising and marketing sectors. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Podcasts from Google Cloud
- What really makes your SOC ready for AI: What impact will AI have on security teams? Will it turn them into powered-up superheroes, or is the future more Jekyll-and-Hyde? Monzy Merza, co-founder and CEO, Crogl, discusses AI’s potential destinies with hosts Anton Chuvakin and Tim Peacock. Listen here.
- How to stop playing security theater and start practicing security reality: Jibran Ilyas, director, Incident Response, Google Cloud, talks with hosts Anton and Tim about why tabletops for incident response preparedness are effective yet rarely done well. Listen here.
- Behind the Binary: Building a robust network at Black Hat: Host Josh Stroschein is joined by Mark Overholser, a technical marketing engineer, Corelight, who also helps run the Black Hat Network Operations Center (NOC). He gives us an insider’s look at the philosophy and challenges behind building a robust network for a security conference. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in a few weeks with more security-related updates from Google Cloud.