
Posted By Jessica Weisman-Pitts

Posted on May 15, 2024

The AI Data Gap causes problems for Small Financial Institutions

By Al Pascual, Cybercrime Expert, Founder, Inventor, and Advisor for BioCatch

Artificial intelligence (AI) has been a crucial tool for financial institutions (FIs) in the fight against fraud for the last thirty years. Once the purview of card networks and payment processors decisioning transactions, AI is now delivering value in detecting malicious transactions of all types across a wide range of organisations and institutions. Yet as highlighted in a recent study, not all organisations are equal in their ability to deploy and leverage AI.

This gap in capabilities makes it more difficult for smaller FIs to cost-effectively detect and prevent fraud as criminals move downstream. As a result, these FIs are forced to turn to vendors that hold significant volumes of data from across their client base, even if that data isn’t an exact fit for their institution.

With the advent of new AI tools that bad actors are only just beginning to leverage, this disparity in capability will expose smaller FIs to an even more lopsided degree of risk – which is driving governments and vendors alike to push for AI to be better utilised by FIs of all sizes. Worst of all, the consequences of this threat will extend not only to fraud but also to financial crime, creating an outsized risk of fraud losses and fines for the institutions that can least afford them.

The Haves and the Have-Nots

According to a recent study by BioCatch, 73% of FIs globally use AI for fraud detection. But small FIs face a chicken-and-egg problem: AI is only as effective as the data used to train it, and smaller institutions simply have less data to work with. And with less data than their peers, the case for prioritising investment in the internal development of AI is weaker than it would otherwise be.
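To illustrate the point (a toy sketch on synthetic data, not drawn from the BioCatch study or any institution’s systems), the snippet below trains the same fraud classifier on a small slice and a large slice of the same synthetic transaction history; the model trained on the smaller slice typically catches a noticeably lower share of fraud on a held-out test set. All sample sizes, features, and the “small FI”/“large FI” labels are hypothetical.

    # Illustrative only: synthetic data and hypothetical sizes, not any vendor's model.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    # Synthetic "transactions" in which roughly 1% of records are fraudulent.
    X, y = make_classification(n_samples=200_000, n_features=20,
                               weights=[0.99], random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.5, stratify=y, random_state=0)

    # Train the same model on a small and a large slice of the same history.
    for label, n_rows in [("small FI", 2_000), ("large FI", 100_000)]:
        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train[:n_rows], y_train[:n_rows])
        recall = recall_score(y_test, model.predict(X_test))
        print(f"{label}: trained on {n_rows:,} rows, share of fraud caught = {recall:.2f}")

The difference in recall is the AI Data Gap in miniature: the same algorithm, applied with the same effort, simply performs better for the institution that holds more data.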

This in turn drives smaller FIs to rely far more heavily on third-party providers to apply AI to detecting fraud and financial crime, both of which are on the rise. In some ways, this gap – let’s call it the AI Data Gap – is similar to the wealth gap, in that lower-income consumers are forced to turn to more expensive credit options than affluent consumers, who have access to better terms by virtue of their wealth. This dynamic shows no sign of changing: about half of FIs expect fraud and financial crime to increase relative to 2023, inevitably leading many smaller FIs to direct an increasing share of their budget to third-party AI companies.

Adversarial AI Makes Bankers Sweat

One of the biggest benefits of AI for an FI is the ability to detect activity that a human being would otherwise miss. It is this fact that makes the level of interest criminals have displayed in new AI tools, such as generative AI, so disconcerting. These tools have demonstrated immense potential to improve both the quality and the quantity of malicious activity, a fact not lost on bankers.

Fraud and financial crime professionals recognise that AI will contribute not only to activities that increase the rate of fraud, but also to the more difficult challenge of scams:

  • 45% expect scam tactics to become more automated
  • 42% expect AI to be used to locate more customer PII
  • 36% expect that scam messages will become more convincing

These figures don’t include other threats, such as the use of deepfake tools to create images, voices, or videos that bypass identity and authentication controls. Artificial intelligence is a force multiplier for criminals across the board – meaning the volume of all types of attacks will increase.

In the face of growing AI adoption by criminals, smaller FIs will suffer the ironic indignity of being far less likely to have enough data to justify a significant investment in internal AI resources. Without the ability to bolster internally developed AI, smaller institutions will feel the adverse effects of AI-enhanced fraud, scams, and financial crime more acutely than their larger peers, who are collecting far more data, far faster – enabling them to detect and mitigate threats more quickly.

What Has to Happen

To be clear, this isn’t an argument for reducing reliance on AI to detect malicious activity, but rather for supplementing it with tools that are agnostic to the use cases to which they are applied and more effective at addressing the threats created by adversarial AI. That can only happen by taking a closer look at fraud- and financial-crime-fighting budgets and making decisions that take the long view, with the anticipated effects of adversarial AI in mind.

Consider that the investments smaller FIs may be weighing in newer identity verification and authentication controls could become obsolete sooner rather than later. Instead, bankers should turn to solutions such as behavioural biometric intelligence, which can be applied to both fraud and scam detection. Further still, despite the advances that generative AI will bring to criminal capabilities, none of them gives criminals an advantage over behavioural biometric intelligence – leaving the bad guys with their new AI toys worse off than they were yesterday.
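As a simplified sketch of the general idea behind behavioural signals (synthetic numbers, hypothetical feature names, and not BioCatch’s actual models, features, or thresholds), the snippet below fits an unsupervised anomaly detector to a customer’s normal session behaviour and flags a new session whose typing and mouse patterns deviate sharply from that baseline.

    # Illustrative only: made-up behavioural features, not a vendor implementation.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(0)

    # Hypothetical per-session features for one customer:
    # [mean keystroke interval (ms), mouse speed (px/s), touch pressure (0-1)]
    normal_sessions = rng.normal(loc=[180.0, 420.0, 0.55],
                                 scale=[15.0, 40.0, 0.05],
                                 size=(500, 3))

    # Learn a baseline of how this customer normally behaves.
    detector = IsolationForest(contamination=0.01, random_state=0)
    detector.fit(normal_sessions)

    # A session driven by a bot, remote-access tool, or coached victim can look very different.
    sessions = {
        "typical session": np.array([[185.0, 410.0, 0.57]]),
        "suspicious session": np.array([[60.0, 900.0, 0.20]]),
    }
    for name, features in sessions.items():
        verdict = "flag for review" if detector.predict(features)[0] == -1 else "allow"
        print(f"{name}: {verdict}")

Because the baseline is each customer’s own behaviour rather than a static credential, a convincing deepfake or AI-written scam message does little to help a criminal reproduce it.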

Quality Over Quantity

The AI Data Gap is real, and its consequences will become more dire as the criminal application of AI technology grows. Smaller FIs can either invest ever more in third-party AI solutions and watch their other investments be rendered obsolete, or they can adapt. Applying behavioural biometric intelligence helps level the playing field, making smaller FIs harder targets. And in the difference between the AI haves and have-nots, it is the results that really matter – not the hype.
