Amazon EMR Serverless is a serverless option in Amazon EMR that makes it simple for data engineers and data scientists to run open-source big data analytics frameworks without configuring, managing, and scaling clusters or servers. Today, we are excited to announce support for Public Subnets, which allows you to use EMR Serverless for cost-effective outbound data transfer from the cloud for big data processing workloads.
EMR Serverless applications allow you to enable VPC connectivity for use cases that need to connect to VPC resources, or for outbound data transfer from the cloud to access resources on the internet or other cloud providers. Previously, VPC connectivity supported only private subnets, so you needed to configure a NAT (network address translation) Gateway for outbound connectivity from the cloud, which adds charges based on the amount of data transferred. Now, you can configure VPC connectivity for EMR Serverless applications on public subnets, which have a direct route to an internet gateway. This eliminates the NAT Gateway charges and lets you use EMR Serverless for cost-effective outbound data transfer from the cloud for big data processing workloads.
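As a minimal sketch of what this looks like with the boto3 EMR Serverless API, you pass public subnet IDs (placeholders below, along with the security group ID) in the application's networkConfiguration, just as you previously would for private subnets:

import boto3

emr_serverless = boto3.client("emr-serverless")

# Create an application whose workers attach to public subnets, which have a
# direct route to an internet gateway, so no NAT Gateway is required.
response = emr_serverless.create_application(
    name="public-subnet-app",
    releaseLabel="emr-7.2.0",  # any supported EMR release
    type="SPARK",
    networkConfiguration={
        "subnetIds": ["subnet-0example1", "subnet-0example2"],  # public subnets
        "securityGroupIds": ["sg-0example"],
    },
)
print(response["applicationId"])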
Amazon EMR Serverless Public Subnet support is available in all supported EMR releases and in all AWS Regions where EMR Serverless is available, including the AWS GovCloud (US) Regions. To learn more, visit Configuring VPC Access in the EMR Serverless documentation.
SES announces that Mail Manager now supports defined email address and domain lists, which the Mail Manager rules engine uses to distinguish between known and unknown addresses. This functionality adds both the mechanisms to upload and manage email address and domain lists and the rules engine controls to make routing decisions based on whether a given address in a message envelope is on such a list. Customers can therefore ensure trusted delivery for known internal recipients while implementing catch-all behaviors against directory harvesting attacks and mistyped addresses, and standard behaviors for other domains they own and manage.
SES recipient lists allow customers to upload email addresses individually or in batches via CSV files. They can then configure one or more lists with different routing preferences in the Mail Manager rules engine. This lets customers change mail routing immediately, simply by adding another address to an existing list. For example, a list of “Retired Employees” might have new names added with some frequency, but the handling rule — attached to the list name itself — remains the same throughout.
SES Mail Manager recipient lists increase flexibility and security for customers using Mail Manager to handle incoming mail, strengthening resistance to email-based reconnaissance efforts without disclosing list names or aliases externally. SES Mail Manager recipient lists are available in every region where Mail Manager is launched. Customers can learn more about SES Mail Manager here.
We are excited to announce the launch of storage scaling functions for Amazon Timestream for InfluxDB, allowing you to scale your allocated storage and change your storage tier as needed. With storage scaling, in a few simple steps you gain greater flexibility and control over your time-series data processing and analysis.
Timestream for InfluxDB is used in applications that require high-performance time-series data processing and analysis. You can quickly respond to changes in data ingestion rates, query volumes, or other workload fluctuations by moving to a faster, more performant storage tier or extending your allocated storage capacity, ensuring that your Timestream for InfluxDB instances always have the resources to handle your workload cost-effectively. This means you can focus on building and deploying your applications, rather than worrying about storage sizing and management.
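As a hedged sketch with boto3 (assuming, per this announcement, that UpdateDbInstance accepts the same allocatedStorage and dbStorageType parameters that CreateDbInstance does; values below are illustrative), scaling an existing instance might look like:

import boto3

influx = boto3.client("timestream-influxdb")

# Extend allocated storage and move to a faster storage tier.
# NOTE: storage parameters on UpdateDbInstance are assumed per this announcement.
response = influx.update_db_instance(
    identifier="my-influxdb-instance",
    allocatedStorage=800,                # new allocation, in GiB
    dbStorageType="InfluxIOIncludedT2",  # a more performant storage tier
)
print(response["status"])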
Support for Storage Scaling is available in all Regions where Timestream for InfluxDB is available. See here for a full listing of our Regions. To learn more about Amazon Timestream for InfluxDB, please refer to our user guide.
You can create an Amazon Timestream for InfluxDB instance from the Amazon Timestream console, AWS Command Line Interface (CLI), AWS SDKs, or AWS CloudFormation. To learn more about storage scaling for Amazon Timestream for InfluxDB, visit the product page, documentation, and pricing page.
CloudWatch Synthetics now allows canaries running in a VPC to make outbound requests to IPv6 endpoints allowing monitoring of IPv6-only and dual stack enabled endpoints over IPv6. You can also access CloudWatch Synthetics APIs over both IPv4 and IPv6 through new dual stack compatible regional endpoints. Additionally, PrivateLink access to Synthetics within VPCs is now available over IPv6 connections.
Using CloudWatch Synthetics, you can now monitor the availability and performance of websites or microservices accessible via IPv6 endpoints, ensuring that end users can use your applications seamlessly irrespective of their network protocol. You can create IPv6-enabled canaries in your VPC using the CLI, CDK, CloudFormation, or the AWS console, and update existing VPC canaries to support dual-stack connectivity without making any script changes. You can monitor endpoints external to your VPC by giving the canary internet access and configuring the VPC subnets appropriately. You can also manage Synthetics resources in environments with IPv6-only networking policies, or access Synthetics APIs via IPv6 without traffic traversing the internet by using PrivateLink, helping you meet security and regulatory requirements.
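For example, a minimal boto3 sketch of updating an existing VPC canary for dual-stack connectivity (the Ipv6AllowedForDualStack flag name is our assumption based on this release; subnet and security group IDs are placeholders):

import boto3

synthetics = boto3.client("synthetics")

# Turn on IPv6 (dual-stack) connectivity for an existing VPC canary;
# the canary script itself does not need to change.
synthetics.update_canary(
    Name="my-vpc-canary",
    VpcConfig={
        "SubnetIds": ["subnet-0example"],     # dual-stack enabled subnets
        "SecurityGroupIds": ["sg-0example"],
        "Ipv6AllowedForDualStack": True,      # assumed flag name
    },
)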
IPv6 support for Synthetics is available in all commercial regions where CloudWatch Synthetics is present, at no additional cost.
To learn how to configure an IPv6 canary in a VPC, see the documentation, or click here to find dual-stack API management endpoints for Synthetics. See the user guide and the One Observability Workshop to get started with CloudWatch Synthetics.
Amazon Lex has expanded Assisted Slot Resolution to additional AWS regions and enhanced its capabilities through integration with newer Amazon Bedrock foundation models. Bot developers can now select from allowlisted foundation models in their account to enhance slot resolution capabilities, while maintaining the same simplified permission model through bot Service Linked Role updates.
When enabled, this feature helps chatbots better understand user responses during slot collection, activating during slot retries and fallback scenarios. The feature supports AMAZON.City, AMAZON.Country, AMAZON.Number, AMAZON.Date, AMAZON.AlphaNumeric (without regex), and AMAZON.PhoneNumber slot types, with the ability to enable improvements for individual slots during build time.
Assisted Slot Resolution is now available in Europe (Frankfurt, Ireland, London), Asia Pacific (Sydney, Singapore, Seoul, Tokyo), and Canada (Central) regions, in addition to US East (N. Virginia) and US West (Oregon). While there are no additional Amazon Lex charges for this feature, standard Amazon Bedrock pricing applies for foundation model usage.
To learn more about implementing these enhancements, please refer to our documentation on Assisted Slot Resolution. You can enable the feature through the Amazon Lex console or APIs.
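As a sketch of enabling the feature for an individual slot through the APIs (assuming a slotResolutionSetting field on the slot's value elicitation settings; all identifiers below are placeholders):

import boto3

lex = boto3.client("lexv2-models")

# Enable assisted slot resolution for a single slot at build time.
lex.update_slot(
    botId="BOTID12345",
    botVersion="DRAFT",
    localeId="en_US",
    intentId="INTENTID12",
    slotId="SLOTID1234",
    slotName="DepartureCity",
    slotTypeId="AMAZON.City",
    valueElicitationSetting={
        "slotConstraint": "Required",
        "promptSpecification": {
            "messageGroups": [
                {"message": {"plainTextMessage": {"value": "Which city?"}}}
            ],
            "maxRetries": 2,
        },
        # Assumed field for assisted slot resolution; "EnhancedFallback"
        # applies the foundation model during slot retries and fallback.
        "slotResolutionSetting": {"slotResolutionStrategy": "EnhancedFallback"},
    },
)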
Welcome to the second Cloud CISO Perspectives for January 2025. Iain Mulholland, senior director, Security Engineering, shares insights on the state of ransomware in the cloud from our new Threat Horizons Report. The research and intelligence in the report should prove helpful to all cloud providers and security professionals. Similarly, the recommended risk mitigations will work well with Google Cloud, but are generally applicable to all clouds.
As with all Cloud CISO Perspectives, the contents of this newsletter are posted to the Google Cloud blog. If you’re reading this on the website and you’d like to receive the email version, you can subscribe here.
–Phil Venables, VP, TI Security & CISO, Google Cloud
How cloud security can adapt to ransomware threats in 2025
By Iain Mulholland, senior director, Security Engineering, Google Cloud
How should cloud providers and cloud customers respond to the threat of ransomware? Cloud security strategies in 2025 should prioritize protecting against data exfiltration and identity access abuse, we explain in our new Threat Horizons Report.
Research and intelligence in the report shows that threat actors have made stealing data and exploiting weaknesses in identity security top targets. We’ve seen recent adaptations from some threat actor groups, where they’ve started using new ransomware families to achieve their goals. We’ve also observed them rapidly adapt their tactics to evade detection and attribution, making it harder to accurately identify the source of attacks — and increasing the likelihood that victims will pay ransom demands.
As part of our shared fate approach, where we are active partners with our customers in helping them secure their cloud use by sharing our expertise, best practices, and detailed guidance, this edition of Threat Horizons provides all cloud security professionals with a deeper understanding of the threats they face, coupled with actionable risk mitigations from Google’s security and threat intelligence experts.
These mitigations will work well with Google Cloud, but are generally applicable for other clouds, too.
Evolving ransomware and data-theft risks in the cloud
Ransomware and data threats in the cloud are not new, and investigation and analysis of the threats and risks they pose have been a key part of previous Threat Horizons Reports. Notably, Google Cloud security and intelligence experts exposed ransomware-related trends in the Threat Horizons Report published in February 2024, including threat actors prioritizing data exfiltration over encryption and exploiting server-side vulnerabilities.
We observed in the second half of 2024 a concerning shift: threat actors were becoming more adept at obscuring their identities. This latest evolution in their tactics, techniques, and procedures makes it harder for defenders to counter their attacks and increases the likelihood of ransom payments — which totaled $1.1 billion in 2023. We also saw threat actors adapt by relying more on ransomware-as-a-service (RaaS) to target cloud services, which we detail in the full report.
We recommend that organizations incorporate automation and awareness strategies such as strong password policies, mandatory multi-factor authentication (MFA), regular reviews of user access and cloud storage bucket security, leaked credential monitoring on the dark web, and account lockout mechanisms. Importantly, educate employees about security best practices to help prevent credential compromise.
Government insights can help here, too. Guidance from the Cybersecurity and Infrastructure Security Agency’s Ransomware Vulnerability Warning Pilot can proactively identify and warn about vulnerabilities that could be exploited by ransomware actors.
I’ve summarized below risk mitigations that can enhance your Google Cloud security posture and better protect against threats such as account takeover, which can lead to ransomware and data extortion operations by threat actors.
To help prevent cloud account takeover, your organization can:
Enroll in MFA: Google Cloud’s phased approach to mandatory MFA can make it harder for attackers to compromise accounts even if they have stolen credentials and authentication cookies.
Implement robust Identity and Access Management (IAM) policies: Use IAM policies to grant users only the necessary permissions for their jobs. Google Cloud offers a range of tools to help implement and manage IAM policies, including Policy Analyzer.
To help mitigate ransomware and extortion risks, your organization can:
Establish a cloud-specific backup strategy: Disaster recovery testing should include configurations, templates, and full infrastructure redeployment, and backups should be immutable for maximum protection.
Enable proactive virtual machine scanning: Part of Security Command Center (SCC), Virtual Machine Threat Detection (VMTD) scans virtual machines for malicious applications to detect threats, including ransomware.
Monitor and control unexpected costs: With Google Cloud, you can identify and manage unusual spending patterns across all projects linked to a billing account, which could indicate unauthorized activity.
Organizations can use multiple Google Cloud products to enhance protection against ransomware and data theft extortion. Security Command Center can help establish a multicloud security foundation for your organization that can help detect data exfiltration and misconfigurations. Sensitive Data Protection can help protect against potential data exfiltration by identifying and classifying sensitive data in your Google Cloud environment, and also help you monitor for unauthorized access and movement of data.
Threats beyond ransomware
There’s much more to the cloud threat landscape than ransomware, and also more that organizations can do to mitigate the risks they face. As above, I’ve summarized here five more threat landscape trends that we identify in the report, and our suggested mitigations on how you can improve your organization’s security posture.
Service account risks, including over-privileged service accounts exploited with lateral movement tactics.
What you should do: Investigate and protect service accounts to help prevent exploitation of overprivileged accounts and reduce detection noise from false positives.
Identity exploitation, including compromised user identities in hybrid environments exploited with lateral movement between on-premises and cloud environments.
What you should do: Combine strong authentication with attribute-based validation, and modernize playbooks and processes for comprehensive identity incident response (including enforcing mandatory MFA).
Attacks against cloud databases, including active vulnerability exploitation and abuse of weak credentials that guard sensitive information.
Diversified attack methods, including privilege escalation that allows threat actors to run up charges against victim billing accounts in an effort to maximize their profits from compromised accounts.
What you should do: As discussed above, enroll in MFA, use automated sensitive data monitoring and alerting, and implement robust IAM policies.
Data theft and extortion attacks, including MFA bypass techniques and aggressive communication strategies with victims, use increasingly sophisticated tactics against cloud-based services to compromise accounts and maximize profits.
What you should do: Use a defense-in-depth strategy that includes strong password policies, MFA, regular reviews of user access, leaked credential monitoring, account lockout mechanisms, and employee education. Robust tools such as SCC can help monitor for data exfiltration and unauthorized access of data.
We provide more detail on each of these in the full report.
How Threat Horizons Reports can help
The Threat Horizons report series is intended to present a snapshot of the current state of threats to cloud environments, and how we can work together to mitigate those risks and improve cloud security for all. The reports provide decision-makers with strategic threat intelligence that cloud providers, customers, cloud security leaders, and practitioners can use today.
Threat Horizons reports are informed by Google Threat Intelligence Group (GTIG), Mandiant, Google Cloud’s Office of the CISO, Product Security Engineering, and Google Cloud intelligence, security, and product teams.
The Threat Horizons Report for the first half of 2025 can be read in full here. Previous Threat Horizons reports are available here.
In case you missed it
Here are the latest updates, products, services, and resources from our security teams so far this month:
Get ready for a unique, immersive security experience at Next ‘25: Here’s why Google Cloud Next is shaping up to be a must-attend event for security experts and the security-curious alike. Read more.
How Google secures its own cloud use: Take a peek under the hood at how we use and secure our own cloud environments, as part of our new “How Google Does It” series. Read more.
Privacy-preserving Confidential Computing now on even more machines and services: Confidential Computing is available on even more machine types than before. Here’s what’s new. Read more.
Use custom Org Policies to enforce CIS benchmarks for GKE: Many CIS recommendations for GKE can be enforced with custom Organization Policies. Here’s how. Read more.
Making GKE more secure with supply-chain attestation and SLSA: You can now verify the integrity of Google Kubernetes Engine components with SLSA, the Supply-chain Levels for Software Artifacts framework. Read more.
Office of the CISO 2024 year in review: Google Cloud’s Office of the CISO shared insights in 2024 on how to approach generative AI securely, featured industry experts on the Cloud Security Podcast, published research papers, and examined security lessons learned across many sectors. Read more.
Celebrating one year of AI bug bounties at Alphabet: What we learned in the first year of AI bug bounties, and how those lessons will inform our approach to vulnerability rewards going forward. Read more.
Please visit the Google Cloud blog for more security stories published this month.
Threat Intelligence news
How to stop cryptocurrency heists: Many factors are spurring a spike in cryptocurrency heists, including the lucrative nature of their rewards and the challenges associated with attribution to malicious actors. In our new Securing Cryptocurrency Organizations guide, we detail the defense measures organizations should take to stop cryptocurrency heists. Read more.
Please visit the Google Cloud blog for more threat intelligence stories published this month.
Now hear this: Google Cloud Security and Mandiant podcasts
How the modern CISO balances risk, innovation, business strategy, and cloud: John Rogers, CISO, MSCI, talks about the biggest cloud security challenges CISOs are facing today — and they’re evolving — with host Anton Chuvakin and guest co-host Marina Kaganovich from Google Cloud’s Office of the CISO. Listen here.
Slaying the ransomware dragon: Can startups succeed where others have failed, and once and for all end ransomware? Bob Blakley, co-founder and chief product officer of ransomware defense startup Mimic, tells hosts Anton Chuvakin and Tim Peacock his personal reasons for joining the fight against ransomware, and how his company can help. Listen here.
Behind the Binary: How a gamer became a renowned hacker: Stephen Eckels, from Google Mandiant’s FLARE team, discusses how video game modding helped kickstart his cybersecurity career. Listen here.
To have our Cloud CISO Perspectives post delivered twice a month to your inbox, sign up for our newsletter. We’ll be back in February with more security-related updates from Google Cloud.
In today’s complex digital world, building truly intelligent applications requires more than just raw data — you need to understand the intricate relationships within that data. Graph analysis helps reveal these hidden connections, and when combined with techniques like full-text search and vector search, enables you to deliver a new class of AI-enabled application experiences. However, traditional approaches based on niche tools result in data silos, operational overhead, and scalability challenges. That’s why we introduced Spanner Graph, and today we’re excited to announce that it’s generally available.
In a previous post, we described how Spanner Graph reimagines graph data management with a unified database that integrates graph, relational, search, and gen AI capabilities with virtually unlimited scalability. With Spanner Graph, you gain access to an intuitive ISO Standard Graph Query Language (GQL) interface that simplifies pattern matching and relationship traversal. You also benefit from full interoperability between GQL and SQL, for tight integration between graph and tabular data. Powerful vector and full-text search enable fast data retrieval using semantic meaning and keywords. And you can rely on Spanner’s scalability, availability, and consistency to provide a solid data foundation. Finally, integration with Vertex AI gives you access to powerful AI models directly within Spanner Graph.
What’s new in Spanner Graph
Since the preview, we have added exciting new capabilities and partner integrations to make it easier for you to build with Spanner Graph. Let’s take a closer look.
1) Spanner Graph Notebook: Graph visualization is key to developing with graphs. The new open-source Spanner Graph Notebook tool provides an efficient way to query Spanner Graph visually. This tool is natively installed in Google Colab, meaning you can use it directly within that environment. You can also leverage it in notebook environments like Jupyter Notebook. With this tool, you can use magic commands with GQL to visualize query results and graph schemas with multiple layout options, inspect node and edge properties, and analyze neighbor relationships.
Open-source Spanner Graph Notebook.
2) GraphRAG with LangChain integration: Spanner Graph’s integration with LangChain allows for quick prototyping of GraphRAG applications. Conventional RAG, while capable of grounding the LLM by providing relevant context from your data using vector search, cannot leverage the implicit relationships present in your data. GraphRAG overcomes this limitation by constructing a graph from your data that captures these complex relationships. At retrieval time, GraphRAG uses the combined power of graph queries with vector search to provide a richer context to the LLM, enabling it to generate more accurate and relevant answers.
3) Graph schema in Spanner Studio: The Spanner Studio Explorer panel now displays a list of defined graphs, their nodes and edges, and the associated labels and properties. You can explore and understand the structure of your graph data at a glance, making it easier to design, debug, and optimize your applications.
4) Graph query improvements: Spanner Graph now supports the path data type and functions, allowing you to retrieve and analyze the specific sequence of nodes and relationships that connect two nodes in your graph. For example, you can bind a path variable in a path pattern, use the IS_ACYCLIC function to check whether the path is free of repeating nodes, and return the entire path:
GRAPH FinGraph
MATCH p = (:Account)-[:Transfers]->{2,5}(:Account)
RETURN IS_ACYCLIC(p) AS is_acyclic_path, TO_JSON(p) AS full_path;
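The same query can be run from application code through Spanner's standard query interface. Here is a minimal sketch with the google-cloud-spanner Python client (instance and database names are placeholders):

from google.cloud import spanner

client = spanner.Client()
database = client.instance("my-instance").database("my-database")

# GQL statements are executed through the same API as SQL queries.
with database.snapshot() as snapshot:
    results = snapshot.execute_sql(
        """
        GRAPH FinGraph
        MATCH p = (:Account)-[:Transfers]->{2,5}(:Account)
        RETURN IS_ACYCLIC(p) AS is_acyclic_path, TO_JSON(p) AS full_path
        """
    )
    for is_acyclic_path, full_path in results:
        print(is_acyclic_path, full_path)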
5) Graph visualization partner integrations: Spanner Graph is now integrated with leading graph visualization partners. For example, Spanner Graph customers can use GraphXR, Kineviz’s flagship product, which combines cutting-edge visualization technology with advanced analytics to help organizations make sense of complex, connected data.
“We are thrilled to partner with Google Cloud to bring graph analytics to big data. By integrating GraphXR with Spanner Graph, we’re empowering businesses to visualize and interact with their data in ways that were previously unimaginable.” – Weidong Yang, CEO, Kineviz
“Businesses can finally handle graph data with both speed and scale. By combining Graphistry’s GPU-accelerated graph visualization and AI with Spanner Graph’s global-scale querying, teams can now easily go all the way from raw data to graph-informed action. Whether detecting fraud, analyzing journeys, hunting hackers, or surfacing risks, this partnership is enabling teams to move with confidence.” – Leo Meyerovich, Founder and CEO, Graphistry
Visual analytics capabilities in Graphistry: zooming, clustering, filtering, histograms, time-bar filtering, and node styling (colors) allow point-and-click analysis to quickly understand the data, identify clusters, and surface patterns, anomalies, and other insights.
Furthermore, you can use G.V(), a quick-to-install graph database client, with Spanner Graph to perform day-to-day graph visualization and data analytics tasks with ease. Data professionals benefit from high-performance graph visualization, no-code data exploration, and highly customizable data visualization options.
“Graphs thrive on connections, which is why I’m so excited about this new partnership between G.V() and Google Cloud Spanner Graph. Spanner Graph turns big data into graphs, and G.V() effortlessly turns graphs into interactive data visualizations. I’m keen to see what data professionals build combining both solutions.” – Arthur Bigeard, Founder, gdotv Ltd.
Visually querying and exploring Spanner Graph with G.V().
What customers are saying
On our road to GA, we have also been working with multiple customers to help them innovate with Spanner Graph:
“The Commercial Financial Network manages commercial credit data for more than 30 million U.S. businesses. Managing the hierarchy of these businesses can be complex due to the volume of these hierarchies, as well as their dynamic nature driven by mergers and acquisitions. Equifax is committed to providing lenders with the accurate, reliable, and timely information they need as they make financial decisions. Spanner Graph helps us manage these rapidly changing, dynamic business hierarchies easily at scale.” – Yuvaraj Sankaran, Chief Architect of Global Platforms, Equifax
“As we strive to enhance our fraud detection capabilities, having a robust, multi-model database like Google Spanner is crucial for our success. By integrating SQL for transactional data management with advanced graph data analysis, we can efficiently manage and analyze evaluated fraud data. Spanner’s new capabilities significantly improve our ability to maintain data integrity and uncover complex fraud patterns, ensuring our systems are secure and reliable.” – Hai Sadon, Data Platform Group Manager, Transmit Security
“Spanner Graph has provided a novel and performant way for us to query this data, allowing us to deliver features faster and with greater peace of mind. Its flexible data modeling and high-performance querying have made it far easier to leverage the vast amount of data we have in our online applications.” – Aaron Tang, Senior Principal Engineer, U-NEXT
Amazon Redshift now offers enhanced query monitoring capabilities, enabling you to efficiently identify and isolate performance bottlenecks. This feature provides comprehensive insights to track, evaluate, and diagnose query performance within data warehouses, eliminating the need to manually analyze system tables and logs.
Accessible through the AWS console, enhanced query monitoring lets you view performance history for trend analysis, detect workload changes, understand how query performance has changed over time, and diagnose performance issues with the query profiler. You can analyze a specific timeframe to find problematic queries, review performance trends, and drill down to detailed query plans. Enhanced query monitoring relies on system views such as SYS_QUERY_DETAIL and requires users to connect to the Redshift data warehouse. Regular users can view only their own queries, whereas administrators with the SYS:MONITOR role can monitor queries for the entire data warehouse.
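The same system views are queryable directly. For instance, here is a hedged sketch using the Redshift Data API to pull the slowest queries of the last day from SYS_QUERY_HISTORY, a companion view to SYS_QUERY_DETAIL (the workgroup and database names are placeholders):

import boto3

rsd = boto3.client("redshift-data")

# Find the ten slowest queries from the last day.
resp = rsd.execute_statement(
    WorkgroupName="my-workgroup",  # or ClusterIdentifier=... for provisioned
    Database="dev",
    Sql="""
        SELECT query_id, status, elapsed_time, query_text
        FROM sys_query_history
        WHERE start_time > dateadd(day, -1, getdate())
        ORDER BY elapsed_time DESC
        LIMIT 10;
    """,
)
print(resp["Id"])  # poll get_statement_result(Id=...) for the rows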
Enhanced query monitoring is now generally available for both Amazon Redshift Serverless and Amazon Redshift provisioned data warehouses in all AWS commercial and the AWS GovCloud (US) Regions where Amazon Redshift is available. To learn more, see the documentation.
AWS Marketplace sellers can now utilize a new self-service process to enable demo and private offer requests for their products through the AWS Marketplace Management Portal and AWS Marketplace Catalog API. Enabling this feature allows customers to request demos and private offers directly from sellers’ product listing pages, accelerating product evaluations and reducing procurement cycle times.
When creating or updating software as a service (SaaS) or server products in the AWS Marketplace Management Portal, sellers who are eligible to receive AWS Opportunity referrals through the APN Customer Engagements (ACE) program and have linked their AWS Marketplace and Partner Central accounts now have the option to enable ‘Request demo’ and/or ‘Request private offer’ call-to-action buttons on their product detail pages. This empowers sellers with direct self-service access to onboard these features. By enabling demo and private offer requests, AWS Marketplace sellers can be connected directly to high-intent prospects that are pre-qualified by AWS.
Are you a cloud architect or IT admin tasked with ensuring deployments are following best practices and generating configuration validation reports? The struggle of adopting best practices is real. And not just the first time: ensuring that a config doesn’t drift from org-wide best practices over time is notoriously difficult.
Workload Manager provides a rule-based validation service for evaluating your workloads running on Google Cloud. Workload Manager scans your workloads, including SAP and Microsoft SQL Server, to detect deviations from standards, rules, and best practices to improve system quality, reliability, and performance.
Introducing custom rules in Workload Manager
Today, we’re excited to extend Workload Manager with custom rules (GA), a detective control that doesn’t block deployments, but instead lets you easily detect compliance issues across different architectural intents. Now, you can flexibly and consistently validate your Google Cloud deployments across projects, folders, and organizations against best practices and custom standards to help ensure that they remain compliant.
Here’s how to get started with Workload Manager custom rules in a matter of minutes.
1) Codify best practices and validate resources: Identify best practices relevant to your deployments from the Google Cloud Architecture Framework, codify them in Rego, a declarative policy language that’s used to define rules and express policies over complex data structures, and run or schedule evaluation scans across your deployments.
You can create new Rego rules based on your preferences, or reach out to your account team to get more help crafting new rules.
2) Export findings to a BigQuery dataset and visualize them using Looker: You can configure your own BigQuery dataset to export each validation scan, then easily integrate it with your existing reporting systems, build a new Looker dashboard, or export results to Google Sheets to plan remediation steps.
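For example, a minimal sketch of summarizing exported findings with the BigQuery Python client (the project, dataset, table, and column names below are illustrative; use the schema of your configured export dataset):

from google.cloud import bigquery

client = bigquery.Client()

# Count violations per rule across exported evaluation results.
query = """
    SELECT rule_id, COUNT(*) AS violations
    FROM `my-project.wlm_exports.evaluation_results`
    WHERE result = 'VIOLATION'
    GROUP BY rule_id
    ORDER BY violations DESC
"""
for row in client.query(query).result():
    print(row.rule_id, row.violations)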
Additionally, you can configure Pub/Sub-based notifications to send email, Google Chat messages, or integrate with your third-party systems based on different evaluation success criteria.
A flexible system to do more than typical config validation
With custom rules you can build rules with complex logic and validation requirements across multiple domains. You can delegate build and management to your subject matter experts, reducing development time and accelerating the time to release new policies.
And with central BigQuery table export, you can combine violation findings from multiple evaluations and easily integrate with your reporting system to build a central compliance program.
Get started today with custom rules in Workload Manager by referring to the documentation and testing sample policies against your deployments.
Need more help? Engage with your account teams to get more help in crafting, curating and adopting best practices.
Rapid advancements in artificial intelligence (AI) are unlocking new possibilities for the way we work and accelerating innovation in science, technology, and beyond. In cybersecurity, AI is poised to transform digital defense, empowering defenders and enhancing our collective security. Large language models (LLMs) open new possibilities for defenders, from sifting through complex telemetry to secure coding, vulnerability discovery, and streamlining operations. However, some of these same AI capabilities are also available to attackers, leading to understandable anxieties about the potential for AI to be misused for malicious purposes.
Much of the current discourse around cyber threat actors’ misuse of AI is confined to theoretical research. While these studies demonstrate the potential for malicious exploitation of AI, they don’t necessarily reflect the reality of how AI is currently being used by threat actors in the wild. To bridge this gap, we are sharing a comprehensive analysis of how threat actors interacted with Google’s AI-powered assistant, Gemini. Our analysis was grounded by the expertise of Google’s Threat Intelligence Group (GTIG), which combines decades of experience tracking threat actors on the front lines and protecting Google, our users, and our customers from government-backed attackers, targeted 0-day exploits, coordinated information operations (IO), and serious cyber crime networks.
We believe the private sector, governments, educational institutions, and other stakeholders must work together to maximize AI’s benefits while also reducing the risks of abuse. At Google, we are committed to developing responsible AI guided by our principles, and we share resources and best practices to enable responsible AI development across the industry. We continuously improve our AI models to make them less susceptible to misuse, and we apply our intelligence to improve Google’s defenses and protect users from cyber threat activity. We also proactively disrupt malicious activity to protect our users and help make the internet safer. We share our findings with the security community to raise awareness and enable stronger protections for all.
Executive Summary
Google Threat Intelligence Group (GTIG) is committed to tracking and protecting against cyber threat activity. We relentlessly defend Google, our users, and our customers by building the most complete threat picture to disrupt adversaries. As part of that effort, we investigate activity associated with threat actors to protect against malicious activity, including the misuse of generative AI or LLMs.
This report shares our findings on government-backed threat actor use of the Gemini web application. The report encompasses new findings across advanced persistent threat (APT) and coordinated information operations (IO) actors tracked by GTIG. By using a mix of analyst review and LLM-assisted analysis, we investigated prompts by APT and IO threat actors who attempted to misuse Gemini.
Advanced Persistent Threat (APT) refers to government-backed hacking activity, including cyber espionage and destructive computer network attacks.
Information Operations (IO) attempt to influence online audiences in a deceptive, coordinated manner. Examples include sockpuppet accounts and comment brigading.
GTIG takes a holistic, intelligence-driven approach to detecting and disrupting threat activity, and our understanding of government-backed threat actors and their campaigns provides the needed context to identify threat enabling activity. We use a wide variety of technical signals to track government-backed threat actors and their infrastructure, and we are able to correlate those signals with activity on our platforms to protect Google and our users. By tracking this activity, we’re able to leverage our insights to counter threats across Google platforms, including disrupting the activity of threat actors who have misused Gemini. We also actively share our insights with the public to raise awareness and enable stronger protections across the wider ecosystem.
Our analysis of government-backed threat actor use of Gemini focused on understanding how threat actors are using AI in their operations and if any of this activity represents novel or unique AI-enabled attack or abuse techniques. Our findings, which are consistent with those of our industry peers, reveal that while AI can be a useful tool for threat actors, it is not yet the game-changer it is sometimes portrayed to be. While we do see threat actors using generative AI to perform common tasks like troubleshooting, research, and content generation, we do not see indications of them developing novel capabilities.
Our key findings include:
We did not observe any original or persistent attempts by threat actors to use prompt attacks or other machine learning (ML)-focused threats as outlined in the Secure AI Framework (SAIF) risk taxonomy. Rather than engineering tailored prompts, threat actors used more basic measures or publicly available jailbreak prompts in unsuccessful attempts to bypass Gemini’s safety controls.
Threat actors are experimenting with Gemini to enable their operations, finding productivity gains but not yet developing novel capabilities. At present, they primarily use AI for research, troubleshooting code, and creating and localizing content.
APT actors used Gemini to support several phases of the attack lifecycle, including researching potential infrastructure and free hosting providers, reconnaissance on target organizations, research into vulnerabilities, payload development, and assistance with malicious scripting and evasion techniques. Iranian APT actors were the heaviest users of Gemini, using it for a wide range of purposes. Of note, we observed limited use of Gemini by Russian APT actors during the period of analysis.
IO actors used Gemini for research; content generation including developing personas and messaging; translation and localization; and to find ways to increase their reach. Again, Iranian IO actors were the heaviest users of Gemini, accounting for three quarters of all use by IO actors. We also observed Chinese and Russian IO actors using Gemini primarily for general research and content creation.
Gemini’s safety and security measures restricted content that would enhance adversary capabilities as observed in this dataset. Gemini provided assistance with common tasks like creating content, summarizing, explaining complex concepts, and even simple coding tasks. Assisting with more elaborate or explicitly malicious tasks generated safety responses from Gemini.
Threat actors attempted unsuccessfully to use Gemini to enable abuse of Google products, including researching techniques for Gmail phishing, stealing data, coding a Chrome infostealer, and bypassing Google’s account verification methods.
Rather than enabling disruptive change, generative AI allows threat actors to move faster and at higher volume. For skilled actors, generative AI tools provide a helpful framework, similar to the use of Metasploit or Cobalt Strike in cyber threat activity. For less skilled actors, they also provide a learning and productivity tool, enabling them to more quickly develop tools and incorporate existing techniques. However, current LLMs on their own are unlikely to enable breakthrough capabilities for threat actors. We note that the AI landscape is in constant flux, with new AI models and agentic systems emerging daily. As this evolution unfolds, GTIG anticipates that the threat landscape will evolve in stride as threat actors adopt new AI technologies in their operations.
AI-Focused Threats
Attackers can use LLMs in two ways. One way is attempting to leverage LLMs to accelerate their campaigns (e.g., by generating code for malware or content for phishing emails). The overwhelming majority of activity we observed falls into this category. The second way attackers can use LLMs is to instruct a model or AI agent to take a malicious action (e.g., finding sensitive user data and exfiltrating it). These risks are outlined in Google’s Secure AI Framework (SAIF) risk taxonomy.
We did not observe any original or persistent attempts by threat actors to use prompt attacks or other AI-specific threats. Rather than engineering tailored prompts, threat actors used more basic measures, such as rephrasing a prompt or sending the same prompt multiple times. These attempts were unsuccessful.
Jailbreak Attempts: Basic and Based on Publicly Available Prompts
We observed a handful of cases of low-effort experimentation using publicly available jailbreak prompts in unsuccessful attempts to bypass Gemini’s safety controls. Threat actors copied and pasted publicly available prompts and appended small variations in the final instruction (e.g., basic instructions to create ransomware or malware). Gemini responded with safety fallback responses and declined to follow the threat actor’s instructions.
In one example of a failed jailbreak attempt, an APT actor copied publicly available prompts into Gemini and appended basic instructions to perform coding tasks. These tasks included encoding text from a file and writing it to an executable and writing Python code for a distributed denial-of-service (DDoS) tool. In the former case, Gemini provided Python code to convert Base64 to hex, but provided a safety filtered response when the user entered a follow-up prompt that requested the same code as a VBScript.
The same group used a different publicly available jailbreak prompt to request Python code for DDoS. Gemini provided a safety filtered response stating that it could not assist, and the threat actor abandoned the session and did not attempt further interaction.
What is an AI jailbreak?
Jailbreaks are one type of prompt injection attack, causing an AI model to behave in ways that it has been trained to avoid (e.g., outputting unsafe content or leaking sensitive information). Prompt injections generally cause the LLM to execute malicious “injected” instructions as part of data that was not meant to be executed by the LLM.
Controls against prompt injection include input/output validation and sanitization as well as adversarial training and testing. Training, tuning, and evaluation processes also help fortify models against prompt injection.
Example of a jailbreak prompt publicly available on GitHub
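As a toy illustration of the input validation layer mentioned above (not Gemini's actual safeguards), a naive pre-screening filter might match incoming prompts against known jailbreak markers before they ever reach the model:

import re

# Illustrative only: production systems rely on trained classifiers,
# adversarial testing, and policy enforcement, not keyword lists.
JAILBREAK_MARKERS = [
    r"ignore (all )?previous instructions",
    r"developer mode",
    r"do anything now",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be flagged for review."""
    lowered = prompt.lower()
    return any(re.search(marker, lowered) for marker in JAILBREAK_MARKERS)

if screen_prompt("Please ignore all previous instructions and ..."):
    print("Prompt flagged by input validation")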
Some malicious actors unsuccessfully attempted to prompt Gemini for guidance on abusing Google products, such as advanced phishing techniques for Gmail, assistance coding a Chrome infostealer, and methods to bypass Google’s account creation verification methods. These attempts were unsuccessful. Gemini did not produce malware or other content that could plausibly be used in a successful malicious campaign. Instead, the responses consisted of safety-guided content and generally helpful, neutral advice about coding and cybersecurity. In our continuous work to protect Google and our users, we have not seen threat actors either expand their capabilities or better succeed in their efforts to bypass Google’s defenses.
Government-backed attackers attempted to use Gemini for coding and scripting tasks, gathering information about potential targets, researching publicly known vulnerabilities, and enabling post-compromise activities, such as defense evasion in a target environment.
Iran: Iranian APT actors were the heaviest users of Gemini, using it for a wide range of purposes, including research on defense organizations, vulnerability research, and creating content for campaigns. APT42 focused on crafting phishing campaigns, conducting reconnaissance on defense experts and organizations, and generating content with cybersecurity themes.
China: Chinese APT actors used Gemini to conduct reconnaissance, for scripting and development, to troubleshoot code, and to research how to obtain deeper access to target networks. They focused on topics such as lateral movement, privilege escalation, data exfiltration, and detection evasion.
North Korea: North Korean APT actors used Gemini to support several phases of the attack lifecycle, including researching potential infrastructure and free hosting providers, reconnaissance on target organizations, payload development, and assistance with malicious scripting and evasion techniques. They also used Gemini to research topics of strategic interest to the North Korean government, such as the South Korean military and cryptocurrency. Of note, North Korean actors also used Gemini to draft cover letters and research jobs—activities that would likely support North Korea’s efforts to place clandestine IT workers at Western companies.
Russia: With Russian APT actors, we observed limited use of Gemini during the period of analysis. Their Gemini use focused on coding tasks, including converting publicly available malware into another coding language and adding encryption functions to existing code.
Google analyzed Gemini activity associated with known APT actors and identified APT groups from more than 20 countries that used Gemini. The highest volume of usage was from Iran and China. APT actors used Gemini to support several phases of the attack lifecycle, including researching potential infrastructure and free hosting providers, reconnaissance on target organizations, research into vulnerabilities, payload development, and assistance with malicious scripting and evasion techniques. The top use cases by APT actors focused on:
Assistance with coding tasks, including troubleshooting, tool and script development, and converting or rewriting existing code
Vulnerability research focused on publicly reported vulnerabilities and specific CVEs
General research on various technologies, translations, and technical explanations
Reconnaissance about likely targets, including details about specific organizations
Enabling post-compromise activity, such as seeking advice on techniques to evade detection, escalate privileges, or conduct internal reconnaissance in a target environment
We observed APT actors attempting to use Gemini to support all phases of the attack lifecycle.
Attack Lifecycle and Topics of Gemini Usage

Reconnaissance (attacker gathers information about the target)
Iran: recon on experts, international defense organizations, and government organizations; topics related to the Iran-Israel proxy conflict
North Korea: research on companies across multiple sectors and geographies; recon on the US military and operations in South Korea; research on free hosting providers
China: research on the US military and US-based IT service providers; understanding a public database of US intelligence personnel; research on target network ranges; determining domain names of targets

Weaponization (attacker develops or acquires tools to exploit the target)
Develop webcam recording code in C++
Convert Chrome infostealer function from Python to Node.js
Rewrite publicly available malware into another language
Add AES encryption functionality to provided code

Delivery (attacker delivers weaponized exploit or payload to the target system)
Better understand advanced phishing techniques
Generate content for targeting a US defense organization
Generate content with cybersecurity and AI themes

Exploitation (attacker exploits a vulnerability to gain access)
Reverse engineer endpoint detection and response (EDR) server components for health check and authentication
Access Microsoft Exchange using a password hash
Research vulnerabilities in the WinRM protocol
Understand publicly reported vulnerabilities, including Internet of Things (IoT) bugs

Installation (attacker installs tools or malware to maintain access)
Sign an Outlook Visual Studio Tools for Office (VSTO) plug-in and deploy it silently to all computers
Add a self-signed certificate to Active Directory
Research Mimikatz for Windows 11
Research Chrome extensions that provide parental controls and monitoring

Command and control, or C2 (attacker establishes a communication channel with the compromised system)
Generate code to remotely access Windows Event Log
Active Directory management commands
JSON Web Token (JWT) security and routing rules in Ruby on Rails
Character encoding issues in smbclient
Command to check IPs of admins on the domain controller

Actions on objectives (attacker achieves their intended goal, such as data theft or disruption)
Automate workflows with Selenium (e.g., logging into a compromised account)
Generate a PHP script to extract emails from Gmail into electronic mail (EML) files
Upload large files to OneDrive
Solutions to TLS 1.3 visibility challenges
Iranian Government-Backed Actors
Iranian government-backed actors accounted for the largest Gemini use linked to APT actors. Across Iranian government-backed actors, we observed a broad scope of research and use cases, including to enable reconnaissance on targets, for research into publicly reported vulnerabilities, to request translation and technical explanations, and to create content for possible use in future campaigns. Their use reflected strategic Iranian interests including research focused on defense organizations and experts, defense systems, foreign governments, individual dissidents, the Israel-Hamas conflict, and social issues in Iran.
At a Glance: Iranian APT Actors Using Gemini
Over 10 Iran-backed groups observed using Gemini
Google abuse-focused use cases:
Researching methods for extracting data from Android devices, including SMS messages, accounts, contacts, and social media accounts
Example use cases:
Coding and scripting
PowerShell and Linux commands
Python code for website scraping
Debugging and improving a Ghidra script
Developing PHP scripts to collect and store user IP addresses and browser information in a MySQL database
Assistance with C# programming
Modifying assembly code
Help understanding error messages
Vulnerability research
Research on specific CVEs and technologies, such as WinRM and IoT devices
Exploitation techniques and proof-of-concept code
Research on server-side request forgery (SSRF) exploitation techniques
Research on the open-source router exploitation tool RomBuster
Research about organizations
International defense organizations
Military and government organizations
Cybersecurity companies
International organizations that monitor development of advanced weapons
Research about warfare defenses
Information on the Iran-Israel proxy conflict
Unmanned aerial vehicles (UAV)
Anti-drone systems
Satellite technology
Remote sensing technology
Israel defense systems
Generating content
Generating content with cybersecurity and AI themes
Tailoring content to target a defense organization
Translating various texts into Farsi, Hebrew, and English
Crafting Phishing Campaigns
Over 30% of Iranian APT actors’ Gemini use was linked to APT42. APT42’s Gemini activity reflected the group’s focus on crafting successful phishing campaigns. We observed the group using Gemini to conduct reconnaissance into individual policy and defense experts, as well as organizations of interest for the group.
In addition to reconnaissance, APT42 used the text generation and editing capabilities of Gemini to craft material for phishing campaigns, including generating content with cybersecurity themes and tailoring the output to a US defense organization. APT42 also utilized Gemini for translation including localization, or tailoring content for a local audience. This includes content tailored to local culture and local language, such as asking for translations to be in fluent English.
Vulnerability Research
The majority of APT42’s efforts focused on research into publicly known vulnerabilities, such as a request to generate a list of critical vulnerabilities from 2023. They also focused on vulnerabilities in specific products such as Mikrotik, Apereo, and Atlassian.
Of note, APT42 appeared to be researching how to use generative AI tools for offensive purposes, asking Gemini for help preparing training content for a red team focused on how offensive teams can use AI tools in their operations.
Research Into Military and Weapons Systems
APT42 also appears to have used Gemini’s translation and explanation functions to better understand publicly available information on defense systems. Their efforts included general research into the Israel-Hamas conflict, as well as strategic trends in China’s defense industry. The threat actor also used Gemini for technical explanations about US-made aerospace systems.
Another Iranian APT group also focused on understanding warfare defenses including specific research into satellite signal jamming and anti-drone systems. Other Iranian APT actors researched specific defense systems, including researching information about specific unmanned aerial vehicle (UAV) models, jamming F-35 fighter jets, anti-drone systems, and Israel’s missile defense systems.
People’s Republic of China (PRC) Government-Backed Actors
Government-backed actors linked to the People’s Republic of China (PRC) attempted to use Gemini to enable reconnaissance on targets, for scripting and development, to request translation and explanation of technical concepts, and to enable deeper access to a network following initial compromise. PRC threat actors’ usage resembled an IT admin seeking to streamline, troubleshoot, or automate their tasks. In a malicious context, however, this activity could be used to enable lateral movement, privilege escalation, data exfiltration, and detection evasion.
At a Glance: People’s Republic of China APT Actors Using Gemini
Over 20 China-backed groups observed using Gemini
Notable use cases:
Reconnaissance
Research US military and US-based IT organizations
Gather US government network ranges
Understand publicly available information about US intelligence community personnel
Determine domain names of targets spanning eight countries, mostly government agencies
Access Microsoft Exchange using password hash
Vulnerability research
Reverse engineer Carbon Black EDR’s server components for health check and authentication
Scripting and development
Generate code to remotely access Windows Event Log
Active Directory management commands
Translation and explanation
Understand graph databases (Nebula Graph)
Solutions to TLS 1.3 visibility challenges
Understand a malicious PHP script
Web JWT security and routing rules in Ruby on Rails
Deeper system access and post-compromise actions
Sign an Outlook VSTO plug-in and deploy it silently to all computers
Add a self-signed certificate to Active Directory
Upload large files to OneDrive
Character encoding issues in smbclient
Command to check IPs of admins on the Domain Controller
Record passwords on the VMware vCenter
Impacket troubleshooting
Enabling Deeper Access in a Target Network
PRC-backed APT actors also used Gemini to work through scripting and development tasks, many of which appeared intended to enable deeper access in a target network after threat actors obtained initial access. For example, one PRC-backed group asked Gemini for assistance figuring out how to sign a plugin for Microsoft Outlook and silently deploy it to all computers. The same actor also asked Gemini to generate code to remotely access Windows Event Log; sought instructions on how to add a self-signed certificate to Active Directory; and asked Gemini for a command to identify the IP addresses of administrators on the domain controller. Other actors used Gemini for help troubleshooting Chinese character encoding issues in smbclient and how to record passwords on the VMware vCenter.
In another example, PRC-backed APT actors asked Gemini for assistance with Active Directory management commands and requested help troubleshooting impacket, a Python-based tool for working with network protocols. While impacket is commonly used for benign purposes, the context of the threat actor’s activity made it clear that the tool was being used for malicious purposes.
Explaining Tools, Concepts, and Code
PRC actors utilized Gemini to learn about specific tools and technologies and to develop solutions to technical challenges. For example, a PRC APT actor used Gemini to break down how to use the graph database Nebula Graph. In another instance, the same actor asked Gemini to propose possible solutions to TLS 1.3 visibility challenges. Another PRC-backed APT group sought to understand a malicious PHP script.
Vulnerability Research and Reverse Engineering
In one case, a PRC-backed APT actor attempted unsuccessfully to get Gemini’s help reverse engineering the endpoint detection and response (EDR) tool Carbon Black. The same threat actor copied disassembled Python bytecode into Gemini to convert the bytecode into Python code. It’s not clear what their objective was.
Unsuccessful Attempts to Elicit Internal System Information From Gemini
In one case, the PRC-backed APT actor APT41 attempted unsuccessfully to use Gemini to learn about Gemini’s underlying infrastructure and systems. The actor asked Gemini to share details such as its IP address, kernel version, and network configuration. Gemini responded but did not disclose sensitive information. In a helpful tone, the responses provided publicly available details that would be widely known about the topic, while also indicating that the requested information is kept secret to prevent unauthorized access.
North Korean Government-Backed Actors
North Korean APT actors used Gemini to support several phases of the attack lifecycle, including researching potential infrastructure and free hosting providers, reconnaissance on target organizations, payload development, and assistance with malicious scripting and evasion techniques. They also used Gemini to research topics of strategic interest to the North Korean government, such as South Korean nuclear technology and cryptocurrency. We also observed that North Korean actors were using LLMs in likely attempts to enable North Korea’s efforts to place clandestine IT workers at Western companies.
At a Glance: North Korean APT Actors Using Gemini
Nine North Korea-backed groups observed using Gemini
Google-focused use cases:
Research advanced techniques for phishing Gmail
Scripting to steal data from compromised Gmail accounts
Understanding a Chrome extension that provides parental controls (capable of taking screenshots, keylogging)
Convert Chrome infostealer function from Python to Node.js
Bypassing restrictions on Google Voice
Generate code snippets for a Chrome extension
Notable use cases:
Enabling clandestine IT worker scheme
Best Discord servers for freelancers
Exchange with overseas employees
Jobs on LinkedIn
Average salary
Drafting work proposals
Generate cover letters from job postings
Research on topics
Free hosting providers
Cryptocurrency
Operational technology (OT) and industrial networks
Nuclear technology and power plants in South Korea
Historic cyber events (e.g., major worms and DDoS attacks; Russia-Ukraine conflict) and cyber forces of foreign militaries
Research about organizations
Companies across 11 sectors and 13 countries
South Korean military
US military
German defense organizations
Malware development
Evasion techniques
Automating workflows for logging into compromised accounts
Understanding Mimikatz for Windows 11
Scripting and troubleshooting
Clandestine IT Worker Threat
North Korean APT actors used Gemini to draft cover letters and research jobs, activities that would likely support efforts by North Korean nationals to obtain freelance and full-time jobs at foreign companies while concealing their true identities and locations. One North Korea-backed group used Gemini to draft cover letters and proposals for job descriptions, research average salaries for specific jobs, and ask about jobs on LinkedIn. The group also used Gemini for information about overseas employee exchanges. Many of the topics would be common for anyone researching and applying for jobs.
While employment-related research would be typical for any job seeker, we assess the usage is likely related to North Korea’s ongoing efforts to place clandestine workers in freelance gigs or full-time jobs at Western firms. The scheme, which involves thousands of North Korean workers and has affected hundreds of US-based companies, uses IT workers with false identities to complete freelance work and send wages back to the North Korean regime.
North Korea’s AI toolkit
Outside of their use of Gemini, North Korean cyber threat actors have shown a long-standing interest in AI tools. They likely use AI applications to augment malicious operations, improve efficiency and capabilities, and produce content to support their campaigns, such as phishing lures and profile photos for fake personas. We assess with high confidence that North Korean cyber threat actors will continue to demonstrate an interest in these emerging technologies for the foreseeable future.
DPRK IT Workers
We have observed DPRK IT Workers leverage accounts on assistive writing tools, Monica (monica.im) and Ahrefs (ahrefs.com), which could potentially aid the group’s work despite a lack of language fluency. Additionally, the group has maintained accounts on Data Annotation Tech, a company hiring individuals to train AI models. Notably, a profile photo used by a suspected IT worker bore a noticeable resemblance to multiple different images on the internet, suggesting that a manipulation tool was used to generate the threat actor’s profile photo.
APT43
Google Threat Intelligence Group (GTIG) has detected evidence of APT43 actors accessing multiple publicly available LLM tools; however, the intended purpose is not clear. Based on the capabilities of these platforms and historical APT43 activities, it is possible these applications could be used in the creation of rapport-building emails, lure content, and malicious PowerShell and scripting efforts.
GTIG has detected APT43 actors reference publicly available AI chatbot tools alongside the topic “북핵 해결” (translation: “North Korean nuclear issue solution”), indicating the group is using AI applications to conduct technical research as well as open-source analysis on South Korean foreign and military affairs and nuclear issues.
GTIG has identified APT43 actors accessing multiple publicly available AI image generation tools, including tools used for image manipulation and creating realistic-looking human portraits.
Target Research and Reconnaissance
North Korean actors also posed several questions to Gemini that appeared focused on conducting initial research and reconnaissance into prospective targets. They also researched organizations and industries that are typical targets for North Korean actors, including the US and South Korean militaries and defense contractors. One North Korean APT group asked Gemini for information about companies and organizations across a variety of industry sectors and regions. Some of this Gemini usage related directly to organizations that the same group had attempted to target in phishing and malware campaigns that Google previously detected and disrupted.
In addition to research into companies, North Korean APT actors researched nuclear technology and power plants in South Korea, such as site locations, recent news articles, and the security status of the plants. Gemini responded with widely available, public information and facts that would be easily discoverable in an online search.
Help with Scripting, Payload Development, Defense Evasion
North Korean actors also tried to use Gemini to assist with development and scripting tasks. One North Korea-backed group attempted to use Gemini to help develop webcam recording code in C++. Gemini provided multiple versions of code, and the actor’s repeated attempts suggest frustration with Gemini’s answers. The same group also asked Gemini to generate a robots.txt file to block crawlers and an .htaccess file to redirect all URLs except those with CSS extensions.
One North Korean APT actor used Gemini for assistance developing code for sandbox evasion. For example, the threat actor utilized Gemini to write code in C++ to detect VM environments and Hyper-V virtual machines. Gemini provided responses with short code snippets to perform simple sandbox checks. The same group also sought help troubleshooting Java errors when implementing AES encryption, and separately asked Gemini if it is possible to acquire a system password on Windows 11 using Mimikatz.
Russian Government-Backed Actors
During the period of analysis, we observed limited use of Gemini by Russia-backed APT actors, and the majority of that usage appeared benign rather than threat-enabling. The reasons for this low engagement are unclear. It is possible Russian actors avoided Gemini out of operational security considerations, staying off Western-controlled platforms to avoid monitoring of their activities. They may be using AI tools produced by Russian firms or locally hosting LLMs, which would ensure full control of their infrastructure. Alternatively, they may have favored other Western LLMs.
One Russian government-backed group used Gemini to request help with a handful of tasks, including help rewriting publicly available malware into another language, adding encryption functionality to code, and explanations for how a specific block of publicly available malicious code functions.
At a Glance: Russian APT Actors Using Gemini
Three Russia-backed groups observed using Gemini
Notable use cases:
Scripting
Help rewriting public malware into another language
Payload crafting
Add AES encryption functionality to provided code
Translation and explanation
Understand how some public malicious code works
Financially Motivated Actors Using LLMs
Threat actors in underground marketplaces are advertising ways to bypass security guardrails so that LLMs can be used for malware development, phishing, and other malicious tasks. The offerings include jailbroken LLMs that are ready-made for malicious use.
Throughout 2023 and 2024, Google Threat Intelligence Group (GTIG) observed underground forum posts related to LLMs, indicating there is a burgeoning market for nefarious versions of LLMs. Some advertisements boast customized and jailbroken LLMs with no restrictions on malware development, or tout a lack of the security measures typically found on legitimate services, allowing the user to prompt the LLM about any topic or task without triggering security guardrails or limits on their queries. Examples include FraudGPT, which has been advertised on Telegram as having no limitations, and WormGPT, a privacy-focused, “uncensored” LLM capable of developing malware.
Financially motivated actors are using LLMs to augment business email compromise (BEC) operations. GTIG has noted evidence of financially motivated actors using manipulated video and voice content in BEC scams, and media reports indicate that such actors have used WormGPT to create more persuasive BEC messages.
Findings: Information Operations (IO) Actors Misusing Gemini
At a Glance: Information Operations Actors
IO actors attempted to use Gemini for research, content generation, translation and localization, and to find ways to increase their reach.
Iran: Iranian IO actors used Gemini for a wide range of tasks, accounting for three quarters of all IO prompts. They used Gemini for content creation and manipulation, including generating articles, rewriting text with a specific tone, and optimizing it for better reach. Their activity also focused on translation and localization, adapting content for different audiences, and for general research into news, current events, and political issues.
China: Pro-China IO actors used Gemini primarily for general research, including a variety of topics of strategic interest to the Chinese government. The most prolific IO actor we track, DRAGONBRIDGE, was responsible for the majority of this activity. They also used Gemini to research current events and politics, and in a few cases, they used Gemini to generate articles or content on specific topics.
Russia: Russian IO actors used Gemini primarily for general research, content creation, and translation. For example, their use involved assistance drafting content, rewriting article titles, and planning social media campaigns. Some activity demonstrated an interest in developing AI capabilities, asking for information on tools for creating online AI chatbots, developer tools for interacting with LLMs, and options for textual content analysis.
IO actors used Gemini for research, content generation including developing personas and messaging, translation and localization, and to find ways to increase their reach. Common use cases include general research into news and current events as well as specific research into individuals and organizations. In addition to creating content for campaigns, including personas and messaging, the actors researched ways to increase the efficacy of campaigns, including automating distribution, using search engine optimization (SEO) to extend the reach of campaigns, and improving operational security. As with government-backed groups, IO actors also used Gemini for translation and localization and for understanding the meanings or context of content.
Iran-Linked Information Operations Actors
Iran-based information operations (IO) groups used Gemini for a wide range of tasks, including general research, translation and localization, content creation and manipulation, and generating content with a specific bias or tone. We also observed Iran-based IO actors engage with Gemini about news events and ask Gemini to provide details on economic and political issues in Iran, the US, the Middle East, and Europe.
In line with their practice of mixing original and borrowed content, Iranian IO actors translated existing material, including news-like articles. They then used Gemini to explain the context and meaning of particular phrases within the given text.
Iran-based IO actors also used Gemini to localize content, seeking human-like translation and asking for help with tasks like making text sound like it was written by a native English speaker. They also used Gemini to manipulate text (e.g., asking for help rewriting existing text on immigration and crime in a specific style or tone).
Iran’s activity also included research about improving the reach of their campaigns. For example, they attempted to generate SEO-optimized content, likely in an effort to reach a larger audience. Some actors also used Gemini to suggest strategies for increasing engagement on social media.
At a Glance: Iran-Linked IO Actors Using Gemini
Eight Iran-linked IO groups observed using Gemini
Example use cases:
Content creation – text
Generate article titles
Generate SEO-optimized content and titles
Draft a report critical of Bahrain
Draft titles and hashtags in English and Farsi for videos that are catchy or create urgency to watch the content
Draft titles and descriptions promoting Islam
Translation – content in / out of native language
Translate provided texts into Farsi on a variety of topics, including the Iranian election, human rights, international law, and Islam
Translate Farsi-language idioms and proverbs to other languages
Translate news about the US economy, US government, and politics into Farsi, using a specified tone
Draft a French-language headline to get viewers to engage with specific content
Content manipulation – copy editing to refine content
Reformulate specific text about Sharia law
Paraphrase content describing specific improvements to Iran’s export economy
Rewrite a provided text about diplomacy and economic challenges with countries like China and Germany
Provide synonyms for specific words or phrases
Rewrite provided text about Islam and Iraq in different styles or tones
Proofread provided content
Content creation – biased text
Generate or reformulate text to criticize a government minister and other individuals for failures or other actions
Describe how a popular American TV show perpetuates harmful stereotypes
Generate Islam-themed titles for thumbnail previews on social media
General research – news and events
Provide an overview of current events in specific regions
Research about the Iran-Iraq war
Define specific terms
Suggest social media channels for information about Islam and the Quran
Provide information on countries’ policies toward the Middle East
Create persona – photo generation
Create a logo
PRC-Linked Information Operations Actors
IO actors linked to the People’s Republic of China (PRC) used Gemini primarily for general research on a wide variety of topics. The most prolific IO actor we track, the pro-China group DRAGONBRIDGE, was responsible for approximately three quarters of this activity. The majority of their usage was general research, ranging from details about the features of various social media platforms to questions about topics of strategic interest to the PRC government. Actors researched information on current events and politics in other regions, with a focus on the US and Taiwan. They also showed interest in assessing the impact and risk of certain events. In a handful of cases, DRAGONBRIDGE used Gemini to generate articles or content on specific topics.
At a Glance: PRC-Linked IO Actors Using Gemini
Three PRC-linked IO groups observed using Gemini
Example use cases:
General research – political and social topics
Research about specific countries, organizations, and individuals
Research relations between specific countries and China
Research on topics sensitive to the Chinese government (e.g., five poisons)
Research on Taiwanese politicians and their actions toward China
Research on US politics and political figures and their attitudes on China
Research foreign press coverage about China
Summarize key takeaways from a video
General research – technology
Compare functionality and features of different social media platforms
Explain technical concepts and suggestions for useful tools
Translation – content in / out of native language
Translate and summarize text between Chinese and other languages
Content creation – text
Draft articles on topics such as the use of AI and social movements in specific regions
Generate a summary of a movie trailer about a Chinese dissident
Create persona – text generation
Generate a company profile for a media company
DRAGONBRIDGE has experimented with other generative AI tools to create synthetic content in support of their IO campaigns. As early as 2022, the group used a commercial AI service in videos on YouTube to depict AI-generated news presenters. Their use of AI-generated videos and images continued through 2024 but has not resulted in significantly higher engagement from real viewers. Google detected and terminated the channels distributing this content immediately upon discovery.
Russia-Linked Information Operations Actors
Russian IO actors used Gemini for general research, content creation, and translation. Half of this activity was associated with the Russian IO actor we track as KRYMSKYBRIDGE, which is linked to a Russian consulting firm that works with the Russian government. Approximately 40% of activity was linked to actors associated with Russian state-sponsored entities formerly controlled by the late Russian oligarch Yevgeny Prigozhin. We also observed usage by actors tracked publicly as Doppelganger.
The majority of Russian IO actor usage was related to general research tasks, ranging from the Russia-Ukraine war to details about various tools and online services. Russian IO actors also used Gemini for content creation, rewriting article titles and planning social media campaigns. Translation to and from Russian was also a common task.
Russian IO actors focused on the generative AI landscape, which may indicate an interest in developing native capabilities in AI on infrastructure they control. They researched tools that can be used to create an online AI chatbot and developer tools for interacting with LLMs. One Russian IO actor used Gemini to suggest options for textual content analysis.
Pro-Russia IO actors have used AI in their influence campaigns in the past. In 2024, the actor known as CopyCop likely used LLMs to generate content, and some stories on their sites included metadata indicating an LLM was prompted to rewrite articles from genuine news sources with a particular political perspective or tone. CopyCop’s inauthentic news sites pose as US- and Europe-based news outlets and post Kremlin-aligned views on Western policy, the war in Ukraine, and domestic politics in the US and Europe.
At a Glance: Russia-Linked IO Actors Using Gemini
Four Russia-linked IO groups observed using Gemini
Example use cases:
General research
Research into the Russia-Ukraine war
Explain subscription plans and API details for online services
Research on different generative AI platforms, software, and systems for interacting with LLMs
Research on tools and methods for creating an online chatbot
Research tools for content analysis
Translation – content in / out of native language
Translate technical and business terminology into Russian
Translate text to/from Russian
Content creation – text
Draft a proposal for a social media agency
Rewrite article titles to garner more attention
Plan and strategize campaigns
Develop content strategy for different social media platforms and regions
Brainstorm ideas for a PR campaign and accompanying visual designs
Building AI Safely and Responsibly
We believe our approach to AI must be both bold and responsible. To us, that means developing AI in a way that maximizes the positive benefits to society while addressing the challenges. Guided by our AI Principles, Google designs AI systems with robust security measures and strong safety guardrails, and we continuously test the security and safety of our models to improve them. Our policy guidelines and prohibited use policies prioritize safety and responsible use of Google’s generative AI tools. Google’s policy development process includes identifying emerging trends, thinking end-to-end, and designing for safety. We continuously enhance safeguards in our products to offer scaled protections to users across the globe.
At Google, we leverage threat intelligence to disrupt adversary operations. We investigate abuse of our products, services, users, and platforms, including malicious cyber activities by government-backed threat actors, and work with law enforcement when appropriate. Moreover, our learnings from countering malicious activities are fed back into our product development to improve safety and security for our AI models. Google DeepMind also develops threat models for generative AI to identify potential vulnerabilities, and creates new evaluation and training techniques to address the misuse they enable. In conjunction with this research, DeepMind has shared how they’re actively deploying defenses within AI systems along with measurement and monitoring tools, one of which is a robust evaluation framework used to automatically red team an AI system’s vulnerability to indirect prompt injection attacks. Our AI development and Trust & Safety teams also work closely with our threat intelligence, security, and modeling teams to stem misuse.
The potential of AI, especially generative AI, is immense. As innovation moves forward, the industry needs security standards for building and deploying AI responsibly. That’s why we introduced the Secure AI Framework (SAIF), a conceptual framework to secure AI systems. We’ve shared a comprehensive toolkit for developers with resources and guidance for designing, building, and evaluating AI models responsibly. We’ve also shared best practices for implementing safeguards, evaluating model safety, and red teaming to test and secure AI systems.
About the Authors
Google Threat Intelligence Group brings together the Mandiant Intelligence and Threat Analysis Group (TAG) teams, and focuses on identifying, analyzing, mitigating, and eliminating entire classes of cyber threats against Alphabet, our users, and our customers. Our work includes countering threats from government-backed attackers, targeted 0-day exploits, coordinated information operations (IO), and serious cyber crime networks. We apply our intelligence to improve Google’s defenses and protect our users and customers.
Editor’s note: Today’s post is by Travis Naraine, IT Infrastructure Engineer, and Harel Shaked, Director of IT Services and Support, both for Outbrain, a leading technology platform that drives business results by engaging people across the open internet. Outbrain adopted Chrome Enterprise and integrations from Spin.AI to create policies for secure app and extension use and manage automatic updates for its dispersed workforce.
With a workforce as dispersed as ours, security is always a challenge. We standardized on Chrome Enterprise browser two years ago, and it’s become the linchpin of our cloud-first strategy, giving us a way to manage all of our users and stay secure. But we had concerns about browser extensions and we felt it was time to find a solution.
The value of extension management
We know people like to use browser extensions to improve their productivity and to access the tools and features they need to do their jobs. We also know there are malicious extensions available online. But vetting, testing, and blocking extensions manually was time-consuming and not 100% effective because it didn’t give us visibility into which extensions and apps were already in our environment.
Our process was reactive instead of proactive, raising concerns over missed opportunities to detect and block risky extensions. We needed a more automated way to enable employees to safely install Chrome Enterprise extensions.
Tools for extension risk assessment
As we explored solutions for another security project, we came across Spin.AI’s SpinOne platform, which includes the SaaS Security Posture Management (SSPM) solution for third-party application security. SSPM had several points in its favor, including features for continuous app assessment for browser extensions and the ability to easily integrate with Chrome Enterprise. The SpinOne platform met several of our SaaS security needs, and we like to stay with one vendor whenever possible.
Now we use Chrome Enterprise extension risk assessment, powered by Spin.AI, to generate risk scores and comprehensive risk assessment reports that assist in decisions about allowing or blocking extensions. In addition, with Chrome Enterprise Core’s extension workflow, Outbrain employees can easily submit extension requests for IT and security teams to review and allow or deny use of the extensions.
The automated process through Chrome Enterprise saves significant time compared with manual reviews. The new policies and the combined Chrome Enterprise and Spin.AI solution have created an environment that nudges users to think more carefully about anything they install, extensions and other apps alike.
Using extensions securely and safely
Chrome Enterprise makes management and control easy, enforcing policies for the browser and extensions with less complexity. We even develop our own in-house extensions for Chrome Enterprise for tasks like inspecting widgets within the company.
In addition to setting browser policies through the Google Admin console, we can manage automatic updates to ensure our employees are using the newest version of Chrome with the latest security patches, further reducing our exposure to vulnerabilities.
We definitely have fewer worries about browser security today. We know that Spin.AI and Chrome Enterprise are doing their job in the background, so we’re not constantly concerned that a user is installing something malicious. We can set it and forget it.
AWS Marketplace now offers a self-service listing experience for sellers listing or managing Amazon Machine Image (AMI) products with CloudFormation templates (CFT). This launch expands the self-service listing capability previously available for single-AMI, software as a service (SaaS), and container products.
With this release, sellers can now create and manage AMI with CloudFormation listings using a new UI experience, replacing the manual spreadsheet process. During listing creation, sellers are guided through a step-by-step workflow to fill in required information about their listings. All changes are initially visible only to the sellers, allowing them to preview and test the product. Sellers who are ready to publish a product publicly can request a visibility change through the UI, prompting a final validation review by the AWS Marketplace team.
Sellers can access this experience through the AWS Marketplace Management Portal or they can programmatically access the new functionality through AWS Marketplace Catalog API. For many submitted requests, such as an update to a product description, the AWS Marketplace catalog system automatically validates the requested changes and updates the listings.
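For illustration, a change such as a description update can be submitted programmatically as a change set. The following TypeScript sketch uses the AWS SDK for JavaScript v3; the product identifier and the Details payload are hypothetical placeholders, and the exact Details schema for AMI products is defined in the Catalog API reference.

```typescript
import {
  MarketplaceCatalogClient,
  StartChangeSetCommand,
} from "@aws-sdk/client-marketplace-catalog";

// Sketch: submit a change set that updates an AMI product's description.
// "example-product-id" and the Details fields are hypothetical placeholders.
const client = new MarketplaceCatalogClient({ region: "us-east-1" });

async function updateDescription(): Promise<void> {
  const response = await client.send(
    new StartChangeSetCommand({
      Catalog: "AWSMarketplace",
      ChangeSet: [
        {
          ChangeType: "UpdateInformation",
          Entity: { Type: "AmiProduct@1.0", Identifier: "example-product-id" },
          // Details is a JSON string; its fields follow the AMI product schema.
          Details: JSON.stringify({
            Description: { ShortDescription: "Updated listing description." },
          }),
        },
      ],
    })
  );
  // Validation happens asynchronously; poll DescribeChangeSet to track status.
  console.log("ChangeSetId:", response.ChangeSetId);
}

updateDescription().catch(console.error);
```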
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) M7g instances are available in the AWS Middle East (UAE) region. These instances are powered by AWS Graviton3 processors, which provide up to 25% better compute performance compared to AWS Graviton2 processors, and are built on the AWS Nitro System, a collection of AWS designed innovations that deliver efficient, flexible, and secure cloud services with isolated multi-tenancy, private networking, and fast local storage.
Amazon EC2 Graviton3 instances also use up to 60% less energy for the same performance than comparable EC2 instances, reducing your cloud carbon footprint. For increased scalability, these instances are available in 9 different instance sizes, including bare metal, and offer up to 30 Gbps networking bandwidth and up to 20 Gbps of bandwidth to the Amazon Elastic Block Store (EBS).
AWS CodeBuild announces support for codebuild:projectArn and codebuild:buildArn as IAM condition keys. These two new condition keys can be used in IAM policies to restrict access based on the ARN of the project or build that originated the request. Starting today, CodeBuild will automatically add the new codebuild:projectArn and codebuild:buildArn condition keys to the request context of all AWS API calls made within the build. You can use the Condition element in your IAM policy to compare the codebuild:projectArn condition key in the request context with values that you specify in your policy.
This capability allows you to implement advanced security controls for the AWS API calls originating from within your builds. For example, you can write conditional policies using the new codebuild:projectArn condition key to grant permissions to AWS API calls only if those originate from inside a build for the specified project.
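As a rough sketch of what such a policy can look like, the following TypeScript policy-document literal grants s3:GetObject only when the API call originates from a build of one specific project; the bucket name, account ID, and project name are hypothetical placeholders.

```typescript
// Sketch of an IAM policy document that scopes s3:GetObject to calls
// originating from builds of a single CodeBuild project. The bucket,
// account ID, and project name are hypothetical placeholders.
const buildScopedPolicy = {
  Version: "2012-10-17",
  Statement: [
    {
      Effect: "Allow",
      Action: "s3:GetObject",
      Resource: "arn:aws:s3:::example-build-artifacts/*",
      Condition: {
        ArnEquals: {
          "codebuild:projectArn":
            "arn:aws:codebuild:us-east-1:111122223333:project/example-project",
        },
      },
    },
  ],
};

export default buildScopedPolicy;
```

A similar statement using the codebuild:buildArn key can narrow permissions further, down to calls from an individual build.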
This feature is available in all regions where CodeBuild is offered. For more information about the AWS Regions where CodeBuild is available, see the AWS Regions page.
To learn more about CodeBuild’s condition keys, please visit our documentation. To learn more about how to get started with CodeBuild, visit the AWS CodeBuild product page.
Starting today, Amazon Elastic Compute Cloud (Amazon EC2) C8g instances are available in the AWS Europe (Stockholm) region. These instances are powered by AWS Graviton4 processors and deliver up to 30% better performance compared to AWS Graviton3-based instances. Amazon EC2 C8g instances are built for compute-intensive workloads, such as high performance computing (HPC), batch processing, gaming, video encoding, scientific modeling, distributed analytics, CPU-based machine learning (ML) inference, and ad serving. These instances are built on the AWS Nitro System, which offloads CPU virtualization, storage, and networking functions to dedicated hardware and software to enhance the performance and security of your workloads.
AWS Graviton4-based Amazon EC2 instances deliver the best performance and energy efficiency for a broad range of workloads running on Amazon EC2. These instances offer larger instance sizes with up to 3x more vCPUs and memory compared to Graviton3-based Amazon EC2 C7g instances. AWS Graviton4 processors are up to 40% faster for databases, 30% faster for web applications, and 45% faster for large Java applications than AWS Graviton3 processors. C8g instances are available in 12 different instance sizes, including two bare metal sizes. They offer up to 50 Gbps enhanced networking bandwidth and up to 40 Gbps of bandwidth to the Amazon Elastic Block Store (Amazon EBS).
AWS DataSync now supports Kerberos authentication for self-managed file servers that use the Server Message Block (SMB) network protocol. This update provides enhanced security options for connecting to SMB file servers commonly found in Microsoft Windows environments.
DataSync is a secure, high-speed data transfer service that simplifies and accelerates moving data over a network. It automates copying files and objects between AWS Storage services, on-premises storage, and other clouds. DataSync uses protocols like SMB to transfer data to and from network storage systems. With this launch, you can configure your DataSync SMB locations to authenticate access to your storage using Kerberos, in addition to existing support for NT LAN Manager (NTLM) authentication. DataSync supports any Kerberos server, such as Microsoft Active Directory, that implements Kerberos protocol version 5.
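As a rough illustration of the new option, the sketch below creates an SMB location with Kerberos authentication using the AWS SDK for JavaScript v3. The hostname, principal, agent ARN, and file paths are placeholders, and the Kerberos-related parameter names are assumptions based on this description; confirm them against the CreateLocationSmb API reference.

```typescript
import { readFileSync } from "node:fs";
import {
  DataSyncClient,
  CreateLocationSmbCommand,
} from "@aws-sdk/client-datasync";

// Sketch: create a DataSync SMB location that authenticates via Kerberos
// instead of NTLM. All identifiers and paths are hypothetical placeholders,
// and the Kerberos parameter names are assumptions to verify against the docs.
const client = new DataSyncClient({ region: "us-east-1" });

async function createKerberosSmbLocation(): Promise<void> {
  const response = await client.send(
    new CreateLocationSmbCommand({
      ServerHostname: "files.example.corp",
      Subdirectory: "/exports",
      AgentArns: ["arn:aws:datasync:us-east-1:111122223333:agent/agent-0example"],
      AuthenticationType: "KERBEROS",
      KerberosPrincipal: "datasync@EXAMPLE.CORP",
      KerberosKeytab: readFileSync("/etc/datasync/datasync.keytab"),
      KerberosKrb5Conf: readFileSync("/etc/krb5.conf"),
    })
  );
  console.log("LocationArn:", response.LocationArn);
}

createKerberosSmbLocation().catch(console.error);
```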
AWS Amplify now enables developers to use the Amplify Data client within AWS Lambda functions. This new capability allows you to leverage the same type-safe data operations you use in your frontend applications directly in your Lambda functions, eliminating the need to write raw GraphQL queries.
The Amplify Data client in Lambda functions brings a consistent data access pattern across your entire application stack. Instead of managing separate GraphQL implementations, you can now use the same familiar client-side syntax to query and mutate data with full TypeScript support. This unified approach reduces development time, minimizes errors, and makes your codebase more maintainable.
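As a minimal sketch of what this looks like, the handler below follows the pattern described in the Amplify documentation for giving a function access to the data resource; the function name in the env import, the schema import path, and the Todo model are placeholders from a hypothetical project.

```typescript
import { env } from "$amplify/env/my-function"; // hypothetical function name
import { getAmplifyDataClientConfig } from "@aws-amplify/backend/function/runtime";
import { Amplify } from "aws-amplify";
import { generateClient } from "aws-amplify/data";
import type { Schema } from "../data/resource"; // path depends on project layout

// Configure Amplify inside the Lambda runtime, then create a typed client
// that exposes the same model API used in frontend code.
const { resourceConfig, libraryOptions } = await getAmplifyDataClientConfig(env);
Amplify.configure(resourceConfig, libraryOptions);

const client = generateClient<Schema>();

export const handler = async () => {
  // "Todo" is an illustrative model name from a hypothetical schema.
  const { data: todos, errors } = await client.models.Todo.list();
  if (errors) throw new Error(JSON.stringify(errors));
  return todos;
};
```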
This feature is now available in all AWS regions where AWS Amplify is supported.
Amazon Redshift announces enhanced security defaults to help you adhere to best practices in data security and reduce the risk of potential misconfigurations. These changes include disabling public accessibility, enabling database encryption, and enforcing secure connections by default when creating a new data warehouse.
The enhanced security defaults bring three key changes: First, public accessibility is disabled by default for all newly created provisioned clusters and clusters restored from snapshots. In this configuration, connections to clusters will only be permitted from client applications within the same Virtual Private Cloud (VPC). Second, database encryption is enabled by default for provisioned clusters. If you don’t specify an AWS KMS key when creating a provisioned cluster, the cluster is now automatically encrypted with an AWS-owned key. Third, Amazon Redshift now enforces secure, encrypted connections by default: a new default parameter group named “default.redshift-2.0” will be introduced for all newly created or restored clusters, with the “require_ssl” parameter set to “true” by default. This default change will also apply to new serverless workgroups.
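Although these are now the defaults, creation scripts can state the settings explicitly so the intent survives any future default changes. A minimal sketch with the AWS SDK for JavaScript v3 follows; the cluster identifier, node type, and credentials are placeholders.

```typescript
import { RedshiftClient, CreateClusterCommand } from "@aws-sdk/client-redshift";

// Sketch: create a provisioned cluster that states the new secure defaults
// explicitly. Identifiers and credentials are hypothetical placeholders.
const client = new RedshiftClient({ region: "us-east-1" });

async function createSecureCluster(): Promise<void> {
  await client.send(
    new CreateClusterCommand({
      ClusterIdentifier: "example-cluster",
      ClusterType: "single-node",
      NodeType: "ra3.xlplus",
      MasterUsername: "adminuser",
      MasterUserPassword: "REPLACE_ME", // prefer AWS Secrets Manager in practice
      PubliclyAccessible: false, // now the default
      Encrypted: true, // now the default; AWS-owned key if no KMS key is given
      ClusterParameterGroupName: "default.redshift-2.0", // require_ssl = true
    })
  );
}

createSecureCluster().catch(console.error);
```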
Review your data warehouse creation configurations, scripts, and tools to align with the new default settings to avoid any potential disruption. While these security features are enabled by default, you will still have the ability to modify cluster or workgroup settings to change the default behavior. Your existing data warehouses will not be impacted by these security enhancements.
These new default changes are implemented in all AWS regions where Amazon Redshift is available. For more information, please refer to our documentation.
We are excited to announce new capabilities for Amazon Lex Global Resiliency. Building on our existing regional replication framework, we now support replication of existing aliases and enabling bot replication through CloudFormation. These new features enhance the existing automation that synchronizes your Lex V2 bots, associated resources, versions, and aliases to paired AWS regions in near real time, while maintaining hot standby resources for immediate failover or an active-active setup.
For contact center customers, this update streamlines disaster recovery by automatically keeping regional configurations in sync. The feature preserves existing alias ARNs during replication and removes the need to update contact flows in multiple places when modifying your bots. With support across the console, CLI, CDK, and CloudFormation, implementing robust disaster recovery solutions is more streamlined than ever.
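For teams scripting this setup, replication can also be enabled programmatically. The sketch below assumes the CreateBotReplica operation in the Lex Models V2 API; the bot ID is a hypothetical placeholder.

```typescript
import {
  LexModelsV2Client,
  CreateBotReplicaCommand,
} from "@aws-sdk/client-lex-models-v2";

// Sketch: replicate an existing bot to its paired region. The bot ID is a
// hypothetical placeholder; replication then runs in near real time.
const client = new LexModelsV2Client({ region: "us-east-1" });

async function enableReplication(): Promise<void> {
  const { replicaRegion, botReplicaStatus } = await client.send(
    new CreateBotReplicaCommand({
      botId: "EXAMPLEBOT1", // hypothetical bot ID
      replicaRegion: "us-west-2", // paired region for us-east-1
    })
  );
  console.log(`Replication to ${replicaRegion}: ${botReplicaStatus}`);
}

enableReplication().catch(console.error);
```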
Global Resiliency for Amazon Lex is available in the following AWS region pairs: us-east-1 (N. Virginia)/us-west-2 (Oregon), and eu-west-2 (London)/eu-central-1 (Frankfurt).
To get started with these new capabilities, contact your Amazon Connect Solutions Architect or Technical Account Manager. Visit the Amazon Lex Global Resiliency documentation to learn more about implementing Global Resiliency for your Lex bots.