GCP – Empowering all to be safer with AI this Cybersecurity Awareness Month
Cybersecurity requires continual vigilance: using built-in protections and providing resources that keep pace with evolving security threats. In recognition of Cybersecurity Awareness Month, now in its 20th year, we recently shared our progress across a number of security efforts and announced a few new technologies that help us keep more people safe online than anyone else.
This year, artificial intelligence has also been igniting massive shifts in the world of technology. As people look to AI to help address global issues ranging from disease detection to natural disaster prediction, AI has the potential to vastly improve how we identify, address, and reduce cybersecurity risks.
When it comes to AI and cybersecurity, two topics are top of mind: how generative AI can enhance security, and how generative AI itself can be secured against attacks.
Using AI to enhance security
According to Deloitte’s Future of Cyber 2023 survey, more than 90% of organizations have experienced at least one cyber compromise. In a data-driven era, globally connected infrastructures and applications inject cyber risk at every level of an organization’s digital activity, and criminals are devising new ways to automate their tactics.
Fortunately, AI’s potential as a force multiplier in cybersecurity is becoming a reality. Generative AI and foundation models can be used in cybersecurity to overcome challenges in threat monitoring, in architecting security systems and tools, and in addressing talent shortages. A foundation model trained and fine-tuned for security can help:
- Better identify threats at scale, in real time, by finding patterns or anomalies in large amounts of data, so hard-to-find threats can be spotted and contained before they spread (see the sketch after this list).
- Automate the management of cybersecurity systems and tools by streamlining data, prioritizing alerts, and simplifying threat analysis to reduce toil.
- Make security operations accessible to the entire organization. By pairing the workforce with the intuitive capabilities of a foundation model that can access a variety of datasets and threat intelligence, in-demand security professionals are freed up to operate “at the top of their license,” focusing on the most urgent threats that pose the greatest risks to the organization.
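To make the first point concrete, here is a minimal sketch of anomaly-based threat detection. It uses a generic isolation forest rather than any Google model, and the event features are invented for illustration; it is not how Sec-PaLM or any production pipeline works.

```python
# Minimal sketch: flagging anomalous events in a large log stream.
# Illustrates the "find anomalies at scale" idea only; the features
# (bytes transferred, distinct ports, failed auths) are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Simulate 10,000 routine events plus a small burst of attack-like ones.
normal = rng.normal(loc=[500, 3, 1], scale=[100, 1, 1], size=(10_000, 3))
attack = rng.normal(loc=[50_000, 40, 30], scale=[5_000, 5, 5], size=(20, 3))
events = np.vstack([normal, attack])

# Isolation forests score how "easy to isolate" each point is, so rare,
# extreme behavior gets flagged without needing labeled attack data.
model = IsolationForest(contamination=0.005, random_state=0).fit(events)
flags = model.predict(events)  # -1 = anomaly, 1 = normal

print(f"{(flags == -1).sum()} of {len(events)} events flagged for review")
```

The point of the unsupervised approach is that it surfaces hard-to-find outliers even when no one has labeled a threat of that shape before.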
The Google Cloud Security AI Workbench, powered by Google’s security-specific Sec-PaLM 2 model, is a platform for adding gen AI functionality to security products. Built on years of Google’s foundational AI research, it is designed to address the core challenges limiting cybersecurity operations today: the scope and scale of threats, the toil of architecting security tools, and the stubborn talent gap.
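As a rough illustration of the toil-reduction idea, the sketch below batches raw alerts into a single analyst-ready prompt for a language model. This is not the Security AI Workbench API; the alert fields and the commented-out model call are hypothetical stand-ins.

```python
# Illustrative sketch: collapsing raw alerts into one triage prompt for a
# security-tuned language model. The model call itself is hypothetical.
from dataclasses import dataclass

@dataclass
class Alert:
    source: str
    severity: int  # 1 (low) .. 5 (critical)
    detail: str

def build_prompt(alerts: list[Alert]) -> str:
    """Rank alerts by severity and collapse them into a single prompt."""
    ranked = sorted(alerts, key=lambda a: a.severity, reverse=True)
    lines = [f"[sev {a.severity}] {a.source}: {a.detail}" for a in ranked]
    return (
        "Summarize these security alerts for an on-call analyst. "
        "Group related alerts and recommend next steps:\n" + "\n".join(lines)
    )

alerts = [
    Alert("edr", 5, "credential dumping detected on host fin-db-02"),
    Alert("ids", 3, "port scan from 203.0.113.7"),
    Alert("edr", 4, "suspicious scheduled task on fin-db-02"),
]
prompt = build_prompt(alerts)
# response = security_llm.generate(prompt)  # hypothetical model call
print(prompt)
```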
Securing AI from attacks
Inspired by industry best practices, Google designed its Secure AI Framework (SAIF) to help organizations assess and mitigate risks specific to AI systems.
SAIF has six core elements:
- Expand strong security foundations to the AI ecosystem
- Extend detection and response to bring AI into an organization’s threat universe
- Automate defenses to keep pace with existing and new threats
- Harmonize platform-level controls to ensure consistent security across the organization
- Adapt controls to adjust mitigations and create faster feedback loops for AI deployment
- Contextualize AI system risks in surrounding business processes (a sample checklist applying these elements follows this list)
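One hedged way to operationalize the six elements is as a review checklist run against each AI deployment. The specific checks below are invented examples for illustration, not an official SAIF control catalog.

```python
# Sketch: the six SAIF elements as a deployment review checklist.
# Each check is a hypothetical example, not official SAIF guidance.
SAIF_CHECKLIST = {
    "Expand strong security foundations": [
        "Model weights and training data stored with least-privilege access",
    ],
    "Extend detection and response": [
        "Model inputs and outputs logged and routed to the SOC pipeline",
    ],
    "Automate defenses": [
        "Known prompt-injection patterns blocked automatically",
    ],
    "Harmonize platform-level controls": [
        "Same IAM and network policy applied to AI and non-AI services",
    ],
    "Adapt controls": [
        "Red-team findings feed a recurring mitigation review",
    ],
    "Contextualize AI system risks": [
        "Each model mapped to the business process it supports",
    ],
}

def outstanding(completed: set[str]) -> list[str]:
    """Return every check that has not yet been marked complete."""
    return [
        check
        for checks in SAIF_CHECKLIST.values()
        for check in checks
        if check not in completed
    ]

print(f"{len(outstanding(set()))} checks outstanding")
```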
You can read a more detailed SAIF summary here, and review examples of how practitioners can implement SAIF here.
Google is putting SAIF into action by fostering support for it among key partners and contributors and through continued industry engagement. We are also working directly with organizations, including private enterprises and governments, to help them understand how to assess and mitigate AI security risks.
Part of this work includes conducting workshops with practitioners and continuing to publish best practices for deploying AI systems securely. In addition, we share insights from Google’s leading threat intelligence teams, including Mandiant and the Threat Analysis Group (TAG), on cyber activity involving AI systems.
AI’s impact on the security industry
Our approach to AI is rooted in the principle that we must be bold and responsible with AI so it can have a positive impact on the security ecosystem. That’s why Google Cloud pursues a shared fate model, focused on protecting people, businesses, and governments by sharing our expertise, empowering society to address ever-evolving cyber risks, and continuously working to advance the state of the art in cybersecurity to build a safer world for everyone.