Shadow AI in banking: What financial institutions must know now
Published by Wanda Rich
Posted on November 26, 2025


Across the banking sector, artificial intelligence is now embedded in daily operations. From improving customer interactions to supporting credit decisions and detecting fraud, financial institutions increasingly rely on AI to automate processes and enhance decision-making. Much of this AI activity, however, takes place out of sight, running in the background and outside officially approved systems.
The hidden use of AI, now commonly referred to as shadow AI, comes with real risks. Banks face potential data leaks, regulatory violations and operational blind spots whenever employees use AI without oversight, leaving institutions exposed in ways management may not even realize.
Global Banking & Finance Review recently sat down with Ofer Klein, CEO of Reco, to discuss the hidden risks of unregulated AI in banking and financial services.
How would you describe the current use of AI in the operations of banks and other financial services companies today?
AI has become essential in financial services, with most organizations integrating AI into their systems for fraud detection, credit decisions and customer service. However, what is concerning is the split between governed and ungoverned adoption.
While institutions carefully vet official AI systems, over a third of client interactions are already AI-powered, and much of that activity happens outside official channels.
AI is no longer coming to banking; it is already embedded throughout operations, often in ways leadership does not fully understand.
How are banking and financial services professionals commonly using AI in their daily operations — both formally and informally?
Formally, sanctioned applications like automated credit scoring and fraud detection go through rigorous vetting. Informally, we see that nearly two-thirds of UK financial services professionals admit employees are using unapproved AI tools to communicate with customers. Some familiar examples include analysts summarizing reports with ChatGPT, compliance officers drafting policies with AI and relationship managers using AI transcription in client meetings.
What are the main risks banks and financial institutions face when employees use AI tools without authorization or oversight?
There are three critical risks: data exposure, regulatory violations and operational integrity.
Many employees, unaware of the risk, regularly paste company data into AI tools. Since GenAI models can learn from every interaction, there is a risk they will expose sensitive information to unauthorized users.
At Reco, we recently discovered a Fortune 100 firm with over 1,000 unauthorized AI integrations, including a transcription tool recording every customer call for months.
From a regulatory standpoint, what obligations do financial institutions have when AI platforms process, store or learn from sensitive data?
Financial institutions must comply with existing frameworks now applied to AI. For instance, the SEC will assess whether firms have adequate policies to monitor and supervise AI use in trading, record-keeping, fraud prevention, back-office operations and anti-money laundering, while the OCC requires examiners to assess explainability if banks use AI models in risk assessment.
In Europe, the EU AI Act categorizes credit scoring as high-risk, requiring the highest compliance level, and DORA, effective January 2025, requires monitoring, logging and reporting of ICT-related incidents, including AI. The EU AI Act carries fines of up to €35 million or 7% of total worldwide annual turnover, with shadow AI driving fines averaging €5 million across Europe.
The fundamental truth is this: if AI makes decisions affecting customers' financial lives, banks must explain those decisions, prove fairness, protect data and maintain comprehensive records. Shadow AI makes this impossible.
Many productivity and collaboration apps now include embedded or default AI features. How should financial institutions evaluate the hidden risks lurking inside the tools they already use?
This is the most insidious shadow AI: authorized SaaS applications that integrate new AI features without security review. This creates invisible data exposure paths when tools like Salesforce, Microsoft Copilot and Zoom add AI capabilities to previously approved applications.
Financial institutions must shift from point-in-time assessments to continuous monitoring, asking: What AI capabilities have been added? What permissions do they request? Where is data processed?
The 'we approved this three years ago' approach is dangerously obsolete. Institutions need real-time visibility into how applications behave today.
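To make the shift from point-in-time approval to continuous monitoring concrete, here is a minimal sketch, not drawn from the interview, of the kind of check Klein describes: it diffs the permission set an app held at its last security review against what it holds today and flags anything new. The app names, scopes and baseline format are entirely hypothetical; a real inventory would be pulled from each SaaS vendor's admin API.

# Minimal sketch of a baseline-vs-today permission diff. Everything here is
# hypothetical: real scope inventories would come from each SaaS vendor's
# admin API, and the baseline would live in an approval-tracking system.
from datetime import date

# Point-in-time approval records (the "we approved this three years ago" problem).
BASELINE = {
    "zoom": {"scopes": {"meeting:read", "recording:read"}, "reviewed_on": "2022-11-01"},
    "salesforce": {"scopes": {"api", "refresh_token"}, "reviewed_on": "2023-03-15"},
}

def audit(app: str, current_scopes: set[str]) -> list[str]:
    """Return scopes the app holds today that were never security-reviewed."""
    record = BASELINE.get(app, {})
    added = sorted(current_scopes - record.get("scopes", set()))
    if added:
        print(f"{date.today()}: {app} gained {added} "
              f"since last review on {record.get('reviewed_on', 'never')}")
    return added

# Example: Zoom later ships an AI meeting-summary feature behind a new scope.
audit("zoom", {"meeting:read", "recording:read", "ai_companion:read"})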
What cybersecurity and data-privacy challenges arise when AI models have access to proprietary data, customer information or internal systems?
AI creates fundamentally different security challenges. The most common: many GenAI tools retain conversations for model training, meaning sensitive data shared with a chatbot could reappear in future interactions available to other users.

Further, many shadow AI tools embed themselves in approved applications via assistants and agents, which makes them harder to discover because they share IP addresses with sanctioned software. AI also introduces new attack vectors such as prompt injection and training-data poisoning, and once proprietary data has been incorporated into a model's training, it cannot simply be deleted; the knowledge becomes embedded.

Finally, many AI platforms operate across jurisdictions with varying data-residency requirements, potentially creating compliance violations institutions do not even know are happening.
Where does Reco fit into this evolving AI-risk and governance ecosystem?
Reco addresses the fundamental challenge: you can't govern what you can't see. We estimate that 91% of AI tools operate without IT oversight.
We use AI-based graph technology to discover shadow AI by integrating with Active Directory and analyzing email metadata to detect unauthorized tools. We then continuously scan for OAuth grants, third-party apps and browser extensions, showing which users installed them, what permissions they hold and whether behavior looks suspicious.
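As an illustration of the OAuth-grant scanning Klein mentions, and not Reco's actual implementation, the sketch below uses Microsoft Graph's oauth2PermissionGrants endpoint to list which third-party apps hold delegated permissions in a Microsoft 365 tenant, then flags app names or scopes that look AI-related. The tenant credentials and keyword list are placeholders, and a real deployment would need admin consent for the Directory.Read.All application permission.

# A minimal sketch (not Reco's implementation) of one discovery technique:
# enumerating OAuth grants in a Microsoft 365 tenant and flagging third-party
# apps whose names or scopes suggest AI functionality. Credentials and the
# keyword list below are illustrative placeholders.
import requests
import msal

TENANT_ID = "<tenant-id>"          # placeholder
CLIENT_ID = "<client-id>"          # placeholder
CLIENT_SECRET = "<client-secret>"  # placeholder
AI_KEYWORDS = ("gpt", "copilot", "ai", "transcribe", "assistant")  # illustrative

def graph_token() -> str:
    app = msal.ConfidentialClientApplication(
        CLIENT_ID,
        authority=f"https://login.microsoftonline.com/{TENANT_ID}",
        client_credential=CLIENT_SECRET,
    )
    return app.acquire_token_for_client(
        scopes=["https://graph.microsoft.com/.default"]
    )["access_token"]

def get_all(url: str, headers: dict) -> list[dict]:
    """Follow @odata.nextLink paging until the collection is exhausted."""
    items = []
    while url:
        page = requests.get(url, headers=headers, timeout=30).json()
        items.extend(page.get("value", []))
        url = page.get("@odata.nextLink")
    return items

def find_suspect_ai_grants() -> None:
    headers = {"Authorization": f"Bearer {graph_token()}"}
    base = "https://graph.microsoft.com/v1.0"

    # Map service principal object IDs to app display names.
    sps = get_all(f"{base}/servicePrincipals?$select=id,displayName", headers)
    names = {sp["id"]: sp.get("displayName", "") for sp in sps}

    # Each grant records which app (clientId) holds which delegated scopes.
    for grant in get_all(f"{base}/oauth2PermissionGrants", headers):
        app_name = names.get(grant["clientId"], "")
        scopes = grant.get("scope") or ""
        if any(k in f"{app_name} {scopes}".lower() for k in AI_KEYWORDS):
            who = grant.get("principalId") or "all users (admin consent)"
            print(f"Review: {app_name!r} holds scopes [{scopes.strip()}] for {who}")

if __name__ == "__main__":
    find_suspect_ai_grants()

A production system would go further, correlating these grants with email metadata, browser extensions and usage patterns as the answer describes, but the core visibility question starts with an inventory like this one.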
For financial institutions, this means finally answering regulators' questions: What AI tools are operating? What data do they access? Can you prove compliance? We're not blocking AI; we're enabling institutions to embrace it safely by providing the governance layer that makes responsible adoption possible.
What strategic advantages can financial institutions unlock by embracing AI safely and responsibly instead of trying to block or restrict it?
Institutions that get this right gain significant advantages. When employees have approved AI tools, they respond to market opportunities faster, while banks that lock down AI get outpaced by competitors.
Here's the paradox: financial institutions trying hardest to block AI often have the most shadow AI because employees find workarounds.
The benefits of generative AI outweigh the risks when managed effectively through governance, risk assessment and ethical implementation. The strategic advantage goes to institutions treating AI governance as an innovation enabler and not an impediment.
Do you have any final thoughts for financial institutions looking to address shadow AI in a way that supports innovation while maintaining trust, security and compliance?
Shadow AI is not a problem to solve; rather, it's a symptom of unmet needs. Our advice is to provide better alternatives rather than crack down.
Start by discovering the scope, then prioritize by data sensitivity. From there, we recommend providing sanctioned alternatives with proper controls and creating clear guardrails through education. Continuous monitoring must also be implemented: regulators emphasize that organizations must prepare for AI-related incidents through regular risk assessments and response protocols. Remember that AI capabilities evolve monthly, so governance must be equally dynamic.
The banks and financial institutions that will thrive view AI governance not as a block to innovation, but as the enabler that makes bold innovation possible.
