Welcome to the second Cloud CISO Perspectives for August 2023. As you read this, we’ll be kicking off the third and final day of Google Cloud Next, our annual conference where we unveil our latest advancements — especially in security. In his guest column below, my colleague Sunil Potti, vice president and general manager, Google Cloud Security, explains in more detail our vision for how AI can help achieve stronger security outcomes.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
Embracing the new era: Enhancing security with AI, and securing AI itself
By Sunil Potti, VP/GM, Google Cloud Security
At Google Cloud, we continue to invest in key technologies as we progress toward our true north star of invisible security: we want to make strong security pervasive and simple for everyone.
Our announcements at Next ‘23 show how we’ve come closer to that goal, as we expand our AI capabilities with Duet AI, our AI collaborator that provides generative AI-powered assistance to cloud defenders where and when they need it, in Mandiant Threat Intelligence, Chronicle Security Operations, and Security Command Center.
While we revealed many security innovations and enhancements across our security operations and cloud platforms, including managed threat hunting by Mandiant in Chronicle, agentless vulnerability scanning in Security Command Center, and more, today I want to highlight the important challenges we face as we integrate generative AI into security — and why it can change security for the better.
Google has been working for more than a decade to build AI into our products and solutions, and generative AI represents an industry shift into high gear. Tech companies are integrating generative AI into their tech stacks to enhance how their other systems function and to expand workflow capabilities. A core generative AI capability is an intuitive, real-time interface with data: just as a consumer can now prompt a foundation model to summarize news items, security teams can query their own organization's data for insights into cyber threats.
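To make that query pattern concrete, here is a minimal, hypothetical sketch of how a natural-language question and an organization's own log events could be assembled into a single prompt for a security-tuned foundation model. The function name, event schema, and prompt wording are illustrative assumptions, not any product's actual API.

```python
def build_threat_query_prompt(question: str, events: list) -> str:
    """Combine an analyst's natural-language question with recent
    security events into one prompt for a foundation model."""
    # Flatten each event record into a single human-readable line.
    event_lines = "\n".join(
        f"- {e['timestamp']} {e['source']}: {e['detail']}" for e in events
    )
    return (
        "You are a security analyst assistant. Using only the events "
        "below, answer the question concisely.\n\n"
        f"Events:\n{event_lines}\n\n"
        f"Question: {question}"
    )

events = [
    {"timestamp": "2023-08-29T10:12Z", "source": "vpn-gw",
     "detail": "failed login for admin from 203.0.113.7"},
    {"timestamp": "2023-08-29T10:13Z", "source": "vpn-gw",
     "detail": "failed login for admin from 203.0.113.7"},
]
prompt = build_threat_query_prompt(
    "Are there signs of brute-force activity against admin accounts?",
    events,
)
# In practice, `prompt` would be sent to a security-tuned model
# (e.g. one like Sec-PaLM 2) through whatever client the deployment exposes.
```

The point of the sketch is the interface shift: the analyst supplies a plain-English question, and the model works directly over the organization's own telemetry rather than requiring a hand-written query language.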
We are focusing on how generative AI capabilities can help us solve some of security’s thornier problems, especially threat overload, toilsome tools, and the talent gap. As we’ve described since our AI and security announcements at the RSA Conference in April, there is an urgent need for technology to simplify complex issues for those with less security expertise, to provide guidance on how best to act, and to help empower IT teams to make security decisions.
Foundation models are at the heart of generative AI solutions, so earlier this year we introduced Google Cloud Security AI Workbench: an industry-first extensible platform powered by our specialized security foundation model, Sec-PaLM 2. Fine-tuned for security use cases, Security AI Workbench empowers organizations to better address challenges with threats, toil, and talent.
Generative AI solutions have the potential to reduce the toil of repetitive tasks that plague security teams, such as aggregating and enriching data from a multitude of sources to gain a more complete understanding of risks and where to focus.
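The aggregation-and-enrichment toil described above can be sketched as a small, self-contained example: grouping raw alerts from multiple tools by indicator and attaching any matching threat-intelligence context, so an analyst triages one enriched record instead of many duplicates. The field names and feed format are hypothetical assumptions for illustration only.

```python
from collections import defaultdict

def enrich_alerts(alerts, intel_feed):
    """Group alerts by indicator (e.g. a source IP) and attach any
    matching threat-intel entry; fields here are illustrative."""
    intel = {entry["indicator"]: entry for entry in intel_feed}
    grouped = defaultdict(list)
    for alert in alerts:
        grouped[alert["indicator"]].append(alert)
    enriched = []
    for indicator, group in grouped.items():
        enriched.append({
            "indicator": indicator,
            "alert_count": len(group),
            "sources": sorted({a["source"] for a in group}),
            "intel": intel.get(indicator),  # None when no intel match
        })
    return enriched

alerts = [
    {"indicator": "198.51.100.23", "source": "firewall"},
    {"indicator": "198.51.100.23", "source": "edr"},
    {"indicator": "192.0.2.9", "source": "firewall"},
]
intel_feed = [
    {"indicator": "198.51.100.23", "actor": "UNC4841", "confidence": "high"},
]
records = enrich_alerts(alerts, intel_feed)
```

In a generative AI workflow, a record like this would be the structured input a model summarizes and prioritizes, rather than something an analyst stitches together by hand.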
Adding Duet AI to three of our core solutions advances our ability to bring the potential of generative AI to our customers.
Duet AI in Mandiant Threat Intelligence can help security teams quickly understand what Google knows about the adversary, how the latest threats may be targeting their organization, and how to make threat intelligence actionable across an organization.
Duet AI in Chronicle Security Operations can automatically provide a clear summary of what’s happening in threat cases, give context and guidance on the most important threats, and offer recommendations for how to respond. Duet AI also powers Chronicle’s new natural language search.
Duet AI in Security Command Center can empower specialists and non-specialists alike to stay one step ahead of adversaries with near-instant analysis of security findings and possible attack paths.
And just before Next ‘23, we announced AI-powered security and digital sovereignty controls in Workspace to help enterprise and public sector organizations keep their users and data safe.
While building AI into security has been paramount for us this year, it is just one piece of the puzzle: we've also been working on securing AI itself. The effort to harden AI so it is more resistant to manipulation and threat actors includes developing our Secure AI Framework, publishing our first AI red team report, and helping business leaders assess AI risk.
Threat actors are not slowing down, but neither are we. The power of a generative AI-enabled platform can provide enterprises with a path to bolster their workforce, prepare for emerging threats, and better secure their organizations.
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
Security announcements from Google Cloud Next: Our announcements at this year’s Next are the result of envisioning a new, improved security state, and working hard to achieve it. We introduced Duet AI in Mandiant Threat Intelligence, Chronicle Security Operations, and Security Command Center, we announced managed threat hunting powered by Mandiant in Chronicle, and revealed vital updates to our security solutions. You can check out the scheduled security sessions, and there’s still time to virtually participate in the final day of the conference.
Introducing AI-powered updates for Google Workspace: Just before Next, we unveiled new Zero Trust, digital sovereignty, and threat defense controls powered by Google AI to help organizations keep their data safe. Read more.
What to think about when you’re thinking about securing AI: From a new Google Cloud report, explore how securing AI systems is similar to securing traditional enterprise systems — and how it’s different. Read more.
News from Mandiant
Threat actors are interested in generative AI, but so far use remains limited: Since at least 2019, Mandiant has tracked threat actor interest in, and use of, AI capabilities to facilitate malicious activity. Based on our own observations and open source accounts, adoption of AI in intrusion operations remains limited and primarily related to social engineering. Read more.
How UNC4841 appears to have adapted to Barracuda ESG zero-day remediation: Mandiant researchers detail additional tactics, techniques, and procedures employed by threat actor group UNC4841 that have since been uncovered through our incident response engagements, as well as through collaborative efforts with Barracuda Networks and our international government partners. Read more.
AI and the 5 phases of the threat intelligence lifecycle: AI can help threat intelligence teams to detect and understand novel threats at scale, reduce burnout-inducing toil, and grow their existing talent by democratizing access to subject matter expertise. At Mandiant, we take a more nuanced approach. Read more.
How new SEC regulations can help organizations prepare for a cyber incident: When a cyber incident occurs, organizations need to be ready and able to respond quickly. A new SEC rule puts a premium on having a comprehensive response plan because it can potentially change how forensics, legal, and communications teams work during an incident, and organizations should take steps now to ensure they are ready. Read more.
Defender’s Advantage: Attacks at the edge and securing AI: In the latest issue of The Defender’s Advantage Cyber Snapshot, we pair a deep dive on edge device attacks with guidance on cybersecurity crisis communications, and address how to ensure security is built into AI systems. Read more.
Now hear this: Google Cloud Security and Mandiant podcasts
Next ‘23: How Google Cloud builds AI-powered security tools: The rapid rise this year of artificial intelligence and generative AI presents a rare opportunity to rethink how we approach security. It’s also a cause for concern, from securing AI itself to learning to trust the technology. Hosts Anton Chuvakin and Tim Peacock talk with Eric Doerr, VP of Engineering, Google Cloud Security, about AI, security, and what he plans to talk about at his Next presentation. Listen here.
AI and security: The good, the bad, and the… magical: Is AI a game-changer in cybersecurity? Can cybersecurity even be affected by game-changers? And what AI and security fears keep a CISO up at night? Anton and Tim venture into the heart of the future of cybersecurity with Google Cloud CISO Phil Venables. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in two weeks with more security-related updates from Google Cloud.