The four building blocks of responsible generative AI in banking
The future of banking is generative. While headlines often exaggerate how generative AI (gen AI) will radically transform finance, the truth is more nuanced.
At Google Cloud, we’re optimistic about gen AI’s potential to improve the banking sector for both banks and their customers. We also believe this can be done responsibly.
There’s work to be done to ensure that this innovation is developed and applied appropriately. This is the moment to lay the groundwork and discuss—as an industry—what the building blocks for responsible gen AI should look like within the banking sector.
That’s because some concerns about gen AI’s accuracy and security are particularly acute in regulated industries such as banking. In finance, even a small error can have a ripple effect, leaving institutions open to new scrutiny from customers and regulators. It’s worth taking the extra time now to avoid a path that increases the likelihood of these negative outcomes.
While it’s important to understand the risks of gen AI, banks and technology providers can, and must, work together to mitigate those risks rather than simply accept them. That’s an essential prerequisite as we look to the incredible opportunities gen AI can bring, such as greater productivity, significant time savings, improved customer experiences, and faster responses to regulatory and compliance demands. Our view is that gen AI can herald a safer and more efficient banking system for everyone involved.
In this first piece in our series, we’ll identify and delve into what we believe are four critical building blocks for gen AI in banking: explainability, regulation, privacy, and security.
#1: Explainability
Imagine you’re an analyst conducting research or a compliance officer looking for trends among suspicious activities. You need answers that are backed by evidence, and that evidence must be easily retrievable and verifiably accurate. This requires a combination of AI and human intelligence, along with a well-thought-out, risk-based approach to gen AI usage.
The good news is that we have made great strides toward AI explainability.
New gen AI tools can direct a large model, whether a large language model (LLM) or a multimodal model, toward a specific corpus of data and, as part of the process, show its work and rationale. This means that for every judgment or assessment produced, the model can footnote or link directly back to a piece of supporting data.
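To make this concrete, below is a minimal, self-contained sketch of that grounding pattern in Python. The word-overlap retriever is a toy stand-in for a real search index, and the `llm` argument is a placeholder for whatever model client a bank uses; every name here is an illustrative assumption, not a specific product API.

```python
# Sketch: grounded generation with citations. The retriever is a toy
# word-overlap scorer and `llm` is any callable that takes a prompt
# string and returns text -- placeholders, not a specific vendor API.
from dataclasses import dataclass

@dataclass
class Passage:
    doc_id: str  # identifier used as the citation/footnote
    text: str

def retrieve(question: str, corpus: list[Passage], k: int = 2) -> list[Passage]:
    """Rank passages by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.text.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def answer_with_citations(question: str, corpus: list[Passage], llm) -> dict:
    """Constrain the model to retrieved passages and return their IDs."""
    sources = retrieve(question, corpus)
    context = "\n".join(f"[{p.doc_id}] {p.text}" for p in sources)
    prompt = (
        "Answer using ONLY the sources below, and cite the bracketed "
        f"source ID after each claim.\n\nSources:\n{context}\n\n"
        f"Question: {question}"
    )
    return {
        "answer": llm(prompt),                     # output with inline citations
        "citations": [p.doc_id for p in sources],  # evidence a reviewer can pull up
    }
```

Because every answer carries the IDs of the passages the model was allowed to draw on, a human reviewer can pull up the exact supporting evidence, which is the verification step discussed next.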
Of course, no one should take gen AI’s explanations as gospel, especially when it comes to something as critical as banking. Even explainable models require human verification. The process for this verification should be part of a robust risk management process around the use of gen AI.
For all the promise of the technology, gen AI may not be appropriate for all situations, and banks should conduct a risk-based analysis to determine when it is a good fit and when it’s not. Like any tool, it’s safest and most effective when used by the right people in the right situation.
#2: Regulation
AI will be critical to our economic future, enabling current and future generations to live in a more prosperous, healthy, secure, and sustainable world. Governments, the private sector, educational institutions, and other stakeholders must work together to capitalize on AI’s benefits.
If not developed and deployed responsibly, AI systems could amplify societal issues. Tackling these challenges will again require a multi-stakeholder approach to governance. Some of these challenges will be more appropriately addressed by standards and shared best practices, while others will require regulation – for example, requiring high-risk AI systems to undergo expert risk assessments tailored to specific applications.
Many countries and international organizations have already begun to act — the OECD has created its AI Policy Observatory and Classification Framework, the UK has advanced a pro-innovation approach to AI regulation, and Europe is progressing work on its AI Act. Similarly, Singapore has released its AI Verify framework, Brazil’s House and Senate have introduced AI bills, and Canada has introduced the AI and Data Act. In the United States, NIST has published an AI Risk Management Framework, and the National Security Commission on AI and National AI Advisory Council have issued reports.
Understanding the future role of gen AI within banking would be challenging enough if regulations were fairly clear, but there is still a great deal of uncertainty. As a result, those creating models and applications need to be mindful of changing rules and proposed regulations.
We work with policymakers to promote an enabling legal framework for AI innovation that can support our banking customers, including regulation and policies that encourage responsible development and deployment. We also urge policymakers to adopt or maintain proportionate privacy laws that protect personal information and enable trusted data flows across national borders.
For the past few years, financial regulatory agencies around the world have been gathering insight into financial institutions’ use of AI and considering how they might update existing Model Risk Management (MRM) guidance to cover AI of any type. We shared our perspective on applying existing MRM guidance in a blog post earlier this year.
In the US, the Commerce Department’s National Institute of Standards and Technology (NIST) established a Generative AI Public Working Group to provide guidance on applying the existing AI Risk Management Framework to address the risks of gen AI. Congress has also introduced various bills that address elements of the risks that gen AI might pose, but these are in relatively early stages.
Some challenges can be addressed through regulation, ensuring that AI technologies are developed and deployed in line with responsible industry practices and international standards. Others will require fundamental research to better understand AI’s benefits and risks and how to manage them, along with the development and deployment of new technical innovations in areas like interpretability. Still others may require new groups, organizations, and institutions, as we are seeing at agencies like NIST.
We also believe that sectoral regulators are best positioned to update existing oversight and enforcement regimes to apply to AI systems, including clarifying how existing authorities apply to the use of AI and how to demonstrate an AI system’s compliance with existing regulations using international, consensus-based multistakeholder standards like the ISO/IEC 42001 series. In the EU, enabling mechanisms exist to instruct regulatory agencies to issue regular reports identifying capacity gaps that make it difficult both for covered entities to comply with regulations and for regulators to conduct effective oversight.
#3: Privacy
Data is vital to the growth of gen AI because LLMs require massive amounts of it to learn. But that data is often tied to individuals and their unique behaviors, or is proprietary internal data. Access to it is one of the most pressing concerns as banks deploy gen AI.
So how to thread the needle? Is there a way to feed models with enough data to be accurate without undercutting critical data protections?
The answer lies in transparency. In conjunction with sound data governance practices, privacy-by-design principles, and architectures with built-in privacy safeguards, existing tools can anonymize, mask, or obfuscate sensitive data before it feeds into these systems and models.

In enterprise gen AI implementations, banks retain control over where their data is stored and how, or whether, it is used. During fine-tuning, the bank’s data remains in its own instance while the LLM itself stays “frozen”: what the model learns from the bank’s data is stored in an adaptive layer within that instance, and it is not used to train our own models without permission. In other words, the bank’s fine-tuned data is the bank’s data.
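To illustrate the “frozen model plus adaptive layer” pattern described above, here is a minimal PyTorch sketch. The tiny linear network stands in for a pretrained LLM, and the sizes, names, and training objective are illustrative assumptions rather than any product’s actual implementation; adapter methods such as LoRA apply the same idea inside each transformer layer.

```python
# Sketch: frozen base model + trainable adapter. The base layer stands
# in for a pretrained LLM; only the adapter's weights ever change, so
# everything learned from the bank's data lives in the adapter.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Small bottleneck module that holds all task-specific learning."""
    def __init__(self, dim: int, bottleneck: int = 8):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x):
        return x + self.up(torch.relu(self.down(x)))  # residual update

base = nn.Linear(64, 64)            # stand-in for the pretrained ("frozen") LLM
for p in base.parameters():
    p.requires_grad = False         # base weights are never updated

adapter = Adapter(64)               # the "adaptive layer" kept in the bank's instance
optimizer = torch.optim.Adam(adapter.parameters(), lr=1e-3)

x = torch.randn(4, 64)              # placeholder for the bank's private training data
loss = adapter(base(x)).pow(2).mean()  # toy objective, for illustration only
loss.backward()                     # gradients flow only into the adapter
optimizer.step()
```

Discarding or exporting the adapter never touches the base model, which is why the fine-tuned weights, like the data behind them, stay the bank’s own.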
#4: Security
For all industries, but particularly within financial services, gen AI security needs to be airtight to prevent data leakage and interference from nefarious actors.
Dialogue on multiple levels is necessary to establish reasonable expectations and clear up any potential misconceptions about the risks that gen AI models pose. Identifying and engaging with key stakeholders in the cloud and cybersecurity space will facilitate better security requirements. The industry has a constructive role to play in fostering dialogue with various government institutions.
Central to this issue is the difference between consumer LLMs and enterprise LLMs. In the case of the former, once proprietary data or intellectual property is uploaded into an external model, retrieving or gating that information is exceptionally difficult. Conversely, with enterprise LLMs developed internally, this risk is minimized because the data is contained within the enterprise responsible for it.
Looking ahead, gen AI is likely to develop unanticipated capabilities that may affect a bank’s cybersecurity posture. These capabilities will inevitably cut both ways, facilitating attacks as well as defenses against them. Understanding the nature of the models and tools will only help in bolstering defenses.
When it comes to using gen AI in highly regulated sectors like banking, the onus is on us in the industry to shape the conversation in a constructive way. And we’ve chosen the term “conversation” intentionally, because partnership and dialogue among banks and gen AI technology providers are essential: all sides can learn, and have learned, from one another and, in doing so, can help address the challenges ahead.
Overall, this is a conversation worth having as gen AI continues to drive public discourse. By laying out the fundamental building blocks of explainability, regulation, privacy, and security, we hope to take a critical step together in conveying how gen AI can be a transformative force for good in the world of banking.