Audit smarter: Introducing Google Cloud’s Recommended AI Controls framework
As organizations build new generative AI applications and AI agents to automate business workflows, security and risk management leaders face a new set of governance challenges. The complex, often opaque nature of AI models and agents, coupled with their reliance on vast datasets and their potential for autonomous action, creates an urgent need to apply better governance, risk, and compliance (GRC) controls.
Today’s standard compliance practices struggle to keep pace with AI, leaving critical questions unanswered. These include:

- How do we prove our AI systems operate in line with internal policies and evolving regulations?
- How can we verify that data access controls are consistently enforced across the entire AI lifecycle, from training to inference to large-scale production?
- What is the mechanism for demonstrating the integrity of our models and the sensitive data they handle?
We need more than manual checks to answer these questions, which is why Google Cloud has developed an automated approach that is scalable and evidence-based: the Recommended AI Controls framework, available now as a standalone service and as part of Security Command Center.
Developed by Google Cloud security experts and validated by our Office of the CISO, this prebuilt framework incorporates best practices for securing AI systems, using industry standards including the NIST AI Risk Management Framework and the Cyber Risk Institute (CRI) Profile as baselines. The framework provides a direct path for organizations to assess, monitor, and audit the cloud-native security and compliance posture of their generative AI workloads on Google Cloud.
The challenge of auditing a modern AI workload
A typical generative AI workload is a complex ecosystem. It integrates AI-specific platforms like Vertex AI with foundational platform services that include Cloud Storage, Identity and Access Management (IAM), Secret Manager, Cloud Logging, and VPC Networks.
Google Cloud’s AI Protection provides full lifecycle safety and security capabilities for AI workloads, from development and training to runtime and large-scale production. Beyond securing AI workloads, it is also paramount to audit their compliance, define controls for AI assets, and monitor for drift. To that end, Google Cloud has taken a holistic approach to defining best practices across platform components.
Below is an example of an AI workload:
Foundation components of AI workloads.
How the Recommended AI Controls framework can help audit AI workloads
Audit Manager helps you identify compliance issues earlier in your AI compliance and audit process by integrating it directly into your operational workflows. Here’s how you can move from manual checklists to automated assurance for your generative AI workloads:
- Establish your security controls baseline. Audit Manager provides a baseline for auditing your generative AI workloads, grounded in industry best practices and frameworks, to give you a clear, traceable directive for your audit.
- Understand control responsibilities. Aligned with Google’s shared fate approach, the framework helps you understand the responsibility for each control (what you manage versus what the cloud platform provides) so you can focus your efforts effectively.
- Run the audit with automated evidence collection. Evaluate your generative AI workloads against industry-standard technical controls in a simple, automated manner. Audit Manager can reduce manual audit preparation by automatically collecting evidence relevant to the defined controls for your Vertex AI usage and supporting services.
- Assess findings and remediate. The audit report highlights control violations and deviations from recommended best practices, helping your teams remediate before minor issues escalate into significant risks.
- Create and share reports. Generate and share comprehensive, evidence-backed reports with a single click, supporting continuous compliance monitoring with internal stakeholders and external auditors.
- Enable continuous monitoring. Move beyond point-in-time snapshots. Establish a consistent methodology for ongoing compliance by scheduling regular assessments. This allows you to continuously monitor AI model usage, permissions, and configurations against best practices, and helps maintain a strong GRC posture over time.
Inside the Recommended AI Controls framework
The framework provides controls specifically designed for generative AI workloads, mapped across critical security domains. Crucially, these high-level principles are backed by auditable, technical checks linked directly to data sources from Vertex AI and its supporting Google Cloud services.
Here are a few examples of the controls included:
- Access control:
  - Disable automatic IAM grants for default service accounts: This control prevents default service accounts from being automatically granted overly permissive IAM roles when they are created (a sketch of enforcing this constraint via the Org Policy API follows this list).
  - Disable root access on new Vertex AI Workbench user-managed notebooks and instances: This boolean constraint, when enforced, prevents newly created Vertex AI Workbench user-managed notebooks and instances from enabling root access. By default, root access is enabled.
- Data controls:
  - Customer-managed encryption keys (CMEK): Google Cloud offers organization policy constraints to help ensure CMEK usage across an organization. Using Cloud KMS CMEK gives you ownership and control of the keys that protect your data at rest in Google Cloud (a CMEK sketch follows this list).
  - Configure data access control lists: You can customize these lists based on a user’s need to know. Apply data access control lists, also known as access permissions, to local and remote file systems, databases, and applications.
- System and information integrity:
  - Vulnerability scanning: Our Artifact Analysis service scans for vulnerabilities in images and packages in Artifact Registry.
- Audit and accountability:
  - Audit and accountability policy and procedures requirements: Google Cloud services write audit log entries to track who did what, where, and when with Google Cloud resources (a sketch of querying these logs follows this list).
- Configuration management:
  - Restrict resource service usage: This constraint ensures that only customer-approved Google Cloud services are used in each environment. For example, production and highly sensitive folders can have a restricted list of Google Cloud services approved to store data, while a sandbox folder may have a more permissive list, with accompanying data security controls to prevent data exfiltration in the event of a breach.
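To make the organization policy controls above concrete, here is a minimal sketch of enforcing the two boolean constraints named in the access control examples. It assumes the google-cloud-org-policy Python client and a placeholder organization ID; verify the constraint names against the current organization policy constraint reference before relying on this.

```python
# pip install google-cloud-org-policy
from google.cloud import orgpolicy_v2

ORG = "organizations/123456789"  # placeholder organization ID

client = orgpolicy_v2.OrgPolicyClient()

# Boolean constraints referenced in the access control examples above.
CONSTRAINTS = [
    "iam.automaticIamGrantsForDefaultServiceAccounts",
    "ainotebooks.disableRootAccess",
]

for constraint in CONSTRAINTS:
    policy = orgpolicy_v2.Policy(
        name=f"{ORG}/policies/{constraint}",
        spec=orgpolicy_v2.PolicySpec(
            rules=[orgpolicy_v2.PolicySpec.PolicyRule(enforce=True)]
        ),
    )
    # create_policy raises AlreadyExists if a policy is already set for
    # this constraint; use update_policy in that case.
    client.create_policy(parent=ORG, policy=policy)
    print(f"Enforced {constraint}")
```

The same pattern applies to list constraints such as gcp.restrictResourceServiceUsage from the configuration management example, except that their rules take allowed or denied service values rather than a single enforce flag.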
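For the CMEK control, the sketch below shows one way to apply a customer-managed key as the default encryption key on a new Cloud Storage bucket, assuming the google-cloud-storage client and an existing Cloud KMS key; all resource names are placeholders.

```python
# pip install google-cloud-storage
from google.cloud import storage

# Placeholder names: an existing Cloud KMS key and a new bucket.
KMS_KEY = (
    "projects/my-project/locations/us-central1/"
    "keyRings/my-keyring/cryptoKeys/my-key"
)

client = storage.Client()
bucket = client.bucket("my-genai-training-data")

# Objects written without an explicit key are encrypted with this
# customer-managed key instead of a Google-managed one. The Cloud Storage
# service agent needs the Encrypter/Decrypter role on the key first.
bucket.default_kms_key_name = KMS_KEY
client.create_bucket(bucket, location="us-central1")
print(f"Created bucket with default CMEK: {bucket.default_kms_key_name}")
```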
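And for the audit and accountability domain, here is a short sketch of pulling recent Vertex AI Admin Activity audit log entries with the google-cloud-logging client; the project ID and the exact filter are illustrative assumptions.

```python
# pip install google-cloud-logging
from google.cloud import logging

client = logging.Client()  # uses the ambient project by default

# Admin Activity audit logs record who did what, where, and when.
# This filter narrows to Vertex AI (aiplatform.googleapis.com) calls;
# replace my-project with your own project ID.
FILTER = (
    'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"'
    ' AND protoPayload.serviceName="aiplatform.googleapis.com"'
)

for entry in client.list_entries(filter_=FILTER, max_results=20):
    payload = entry.payload  # the AuditLog proto, surfaced as a dict
    who = payload.get("authenticationInfo", {}).get("principalEmail")
    print(entry.timestamp, payload.get("methodName"), who)
```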
How to automate your AI audit in three steps
Security and compliance teams can immediately use this framework to move from manual checklists to automated, continuous assurance.
1. Select the framework: In the Google Cloud console, navigate to Audit Manager and select the Google Recommended AI Controls framework from the library.
2. Define the scope: Specify the Google Cloud projects, folders, or organization where your generative AI workloads are deployed. Audit Manager automatically identifies the relevant resources within that scope.
3. Run the assessment: Initiate an audit. Audit Manager collects evidence from the relevant services (including Vertex AI, IAM, and Cloud Storage) against the controls. The result is a detailed report showing your compliance status for each control, complete with direct links to the collected evidence.
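These steps run in the console, but the same flow can be scripted against Audit Manager’s API. The sketch below is deliberately hedged: only google-auth and requests are real dependencies, while the endpoint path, request fields, and framework identifier are hypothetical placeholders for illustration; check the Audit Manager product documentation for the actual API surface.

```python
# pip install google-auth requests
# Hypothetical sketch: the endpoint and request shape below are assumptions
# for illustration, not a documented contract.
import google.auth
from google.auth.transport.requests import AuthorizedSession

credentials, project = google.auth.default(
    scopes=["https://www.googleapis.com/auth/cloud-platform"]
)
session = AuthorizedSession(credentials)

# Assumed resource path and payload -- placeholders only.
url = (
    "https://auditmanager.googleapis.com/v1alpha/"
    f"projects/{project}/locations/global:generateAuditReport"
)
body = {
    "complianceFramework": "GOOGLE_RECOMMENDED_AI_CONTROLS",  # assumed name
    "reportFormat": "ODF",
}

resp = session.post(url, json=body)
resp.raise_for_status()
print(resp.json())
```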
Automate your AI assurance today
You can access Audit Manager directly from the Google Cloud console: navigate to the Compliance tab, and select Audit Manager. For a comprehensive guide to using Audit Manager, please refer to our detailed product documentation.
We encourage you to share your feedback on this service to help us improve Audit Manager’s user experience.