Google Cloud’s commitment to EU AI Act support
Google Cloud is committed to being a trusted partner for customers navigating AI regulations in Europe. We have long understood that this requires a proactive and collaborative approach to ensure Europeans can access secure, first-rate AI tools as they become available.
This week, Google announced its intent to sign the European Union AI Act Code of Practice (the Code). Google Cloud supports the AI Office’s stated vision: a simple, transparent, and streamlined way to demonstrate compliance with the AI Act, with enforcement focused on monitoring signatories’ adherence to the Code. We believe this approach can deliver greater predictability and a reduced administrative burden.
By participating, we believe we benefit our customers: they can compare cloud services against a common benchmark and derive compliance benefits of their own from our adherence to the Code.
Looking ahead, customers should become familiar with these three compliance documents as they seek to develop and deploy AI in the EU.
- The EU AI Act is a legal and regulatory framework that establishes obligations for certain AI systems based on their potential risks and levels of impact.
- The General-Purpose AI Code of Practice is a voluntary tool, prepared by independent experts in a multi-stakeholder process, designed to help industry comply with the AI Act’s obligations for providers of general-purpose AI models.
- The GPAI guidelines focus on the scope of the AI Act’s obligations for providers of general-purpose AI models, in light of their entry into application on August 2, 2025.
Google Cloud’s approach to helping customers with the Act
A core pillar of our approach to trust in AI, and a critical component of AI Act compliance, is data governance and privacy. Customers control how and where their data is used. We embed privacy-by-design principles throughout our product lifecycle, including in AI development, ensuring architectures include privacy safeguards like data encryption and providing meaningful transparency and control over data use.
We have delivered on the commitment we made to European customers back in 2020 to help them transform their businesses and address their strict data security and privacy requirements. To date, we’ve invested billions of euros to expand access to secure, high-performance computing capacity, with seven data centers in Europe and 13 cloud regions (in Poland, Finland, Germany, Italy, Spain, France, Belgium, Sweden, the Netherlands, and Switzerland), with more under development.
Our Sensitive Data Protection service and VPC Service Controls further assist customers in protecting sensitive data and meeting data residency requirements. We are already working to add new features to support data governance in line with AI Act compliance requirements.
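As an illustration of the kind of control VPC Service Controls provides, the sketch below creates a service perimeter that restricts Cloud Storage and the Vertex AI API to projects inside the perimeter. This is a minimal, hedged example, not a prescribed configuration: the perimeter name, title, and the `POLICY_ID` and `PROJECT_NUMBER` placeholders are hypothetical values you would replace with your own.

```shell
# Minimal sketch: define a VPC Service Controls perimeter so that the
# restricted services can only be reached from projects inside it.
# POLICY_ID and PROJECT_NUMBER are placeholders, not real values.
gcloud access-context-manager perimeters create ai_data_perimeter \
  --policy=POLICY_ID \
  --title="AI data governance perimeter" \
  --resources=projects/PROJECT_NUMBER \
  --restricted-services=storage.googleapis.com,aiplatform.googleapis.com
```

Pairing a perimeter like this with organization policies on resource locations is one common way teams approach data residency requirements; the right combination depends on your own regulatory analysis.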
Supporting your compliance
We were among the first organizations to publish AI principles in 2018, and have published an annual transparency report since 2019. We consistently review our policies, practices, and frameworks, incorporating robust safety and security practices, privacy-by-design, and risk assessments.
We are committed to providing and regularly updating documentation about our AI tools and services. The Cloud Compliance Center remains the most up-to-date resource for all customer compliance artifacts, including Google Cloud’s ISO 42001 AI Management System certification and EU AI Act related documentation. As we prepare for compliance for new models launched globally, including in the EU, we will update these artifacts promptly so customers can incorporate them into their own compliance work.
Our continuously updated Secure AI Framework (SAIF) provides a conceptual framework for securing AI systems across data, infrastructure, application, and model dimensions, emphasizing defense-in-depth and secure-by-design foundations. This ensures early inclusion of prevention and detection controls, adapted to specific product and user risks.
Of course, operationalizing any industry framework requires close collaboration with others — and above all a forum to make that happen. That’s why last year we worked with industry partners to launch the Coalition for Secure AI (CoSAI) to advance comprehensive security measures for addressing the unique risks that come with AI, for both issues that arise in real time and those over the horizon.
What customers can do to prepare
Customers should work closely with the EU AI Office to understand their legal and regulatory obligations when seeking to modify a foundation model or integrate one into a larger system. It will be important to track new guidance and developments released by the AI Office.
We will continue to meet all legal obligations under the Act and to demonstrate how we are fulfilling and supporting compliance requirements, including for forthcoming models that will be subject to the Code.
We remain committed to providing our enterprise customers with cutting-edge AI solutions that are both innovative and compliant. We have the capabilities and experience, and we will continue to partner with policymakers and customers as new regulations, frameworks, and standards are developed.