Posted By Jessica Weisman-Pitts

Posted on June 28, 2024

Compliance Constraints: Why the finance sector is unable to leverage GenAI

By James Sherlow, Systems Engineering Director, EMEA, Cequence Security

We’ve seen generative AI (GenAI) deployed in the finance sector across numerous business use cases. It’s being used for document preparation, aggregation and analysis; in customer-facing processes, such as chatbots that alleviate the load on customer service representatives; in internal processes, where it can summarise information and present possible courses of action; and in a cybersecurity context, to detect suspicious and potentially fraudulent activity. In fact, McKinsey estimates that banking alone could benefit from a boost of $200bn-$340bn a year if these use cases are applied.

However, the sector is also facing some considerable obstacles in deploying the technology due to the requirements of industry regulations. Regulators demand transparency and traceability, which most GenAI outputs currently cannot provide. Bank loan approvals, for example, need to demonstrate that Know Your Customer (KYC) processes have been observed to prove affordability, which means the system must be able to explain and justify how each decision was reached. This creates a conflict that, at present, can only be resolved by falling back to pre-AI processes, leading organisations to deactivate AI functionality.
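To illustrate what traceability means in practice, the sketch below shows a deliberately simple, rule-based affordability check that records a human-readable reason for every decision. The function, thresholds and figures are hypothetical, not regulatory values or any particular bank's process; the point is that each output can be justified line by line, which an opaque GenAI output typically cannot.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class LoanDecision:
    """Auditable record: every decision carries the reasons that produced it."""
    approved: bool
    reasons: list
    decided_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def assess_affordability(monthly_income: float, monthly_outgoings: float,
                         requested_repayment: float) -> LoanDecision:
    """Rule-based affordability check with explicit reason codes.
    Thresholds are illustrative only, not regulatory figures."""
    reasons = []
    disposable = monthly_income - monthly_outgoings
    if disposable <= 0:
        reasons.append("Declared outgoings meet or exceed declared income")
    elif requested_repayment > 0.35 * disposable:
        reasons.append(
            f"Repayment {requested_repayment:.2f} exceeds 35% of disposable income {disposable:.2f}"
        )
    approved = not reasons
    if approved:
        reasons.append("Repayment within the affordability threshold")
    return LoanDecision(approved=approved, reasons=reasons)

# The reasons field is the audit trail a regulator or customer can inspect.
print(assess_affordability(monthly_income=3200, monthly_outgoings=2100, requested_repayment=450))
```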

Not just a compliance issue

Switching off AI processing will hobble the sector’s ability to benefit from AI and could see it fall behind the curve in adoption. This isn’t just a concern in terms of productivity, but also in terms of how the sector learns to use AI in a safe and ethical manner, and it has repercussions for the way it defends against AI-driven attacks, which are becoming more prevalent. We’ve already seen a spate of deepfake attacks and business email compromise (aka CFO fraud), such as the case of the UK engineering firm Arup, where a finance worker based in Hong Kong transferred £20m after taking part in a deepfake video call.

According to the NCSC, we are just 18 months away from the near certainty of an increase in the volume and impact of cyber attacks fuelled by AI. At present, offensive GenAI is the preserve of malicious actors with access to quality datasets with which to train AI, requiring significant expenditure and resources; but in less than two years it will become more widely available and commercialised, placing offensive AI within reach of organised criminal gangs (OCGs) and nation-state actors. The concern is that the finance sector in particular will be a key target due to the rewards on offer, with the US Department of the Treasury issuing a warning to this effect earlier this year.

We’ve seen some efforts made to regulate AI risk in the form of the EU’s AI Act, but this is not expected to come into force until 2026. That leaves financial organisations in the position of having to wait while the AI threat escalates. Indeed, suspicions are that we could well see high-volume, self-learning attacks by year-end, often targeted at Application Programming Interfaces (APIs), the glue that connects applications and services in the digital economy.

APIs as a prime target

Defending against automated attacks is already problematic because most security and bot management solutions will simply ban the attackers’ IP addresses. However, because today’s attackers often use compromised residential IPs, blocking those has the potential to lock out legitimate customers. To detect and block attacks like these, it’s necessary to go beyond simple identifiers like IP addresses and look at the tools or software, infrastructure and credentials being used, as well as the attacker’s behaviour.
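As a rough illustration of what going beyond the IP address can look like, the sketch below scores a request across several signals (a TLS client fingerprint, the source network, the credential presented and the request rate) before deciding how to respond. The fingerprint values, ASN list and weights are hypothetical, and this is a generic sketch rather than a description of any vendor's detection logic.

```python
# Minimal, illustrative scoring of a request on multiple signals rather than source IP alone:
# tooling (client fingerprint), infrastructure (source network), credentials and behaviour.
from dataclasses import dataclass

@dataclass
class RequestContext:
    ip: str
    user_agent: str
    tls_fingerprint: str       # e.g. a JA3-style hash of the TLS client hello
    asn: str                   # autonomous system of the source address
    credential_id: str         # account or API key presented
    requests_last_minute: int

KNOWN_AUTOMATION_FINGERPRINTS = {"ja3:automation-tool-example"}   # hypothetical threat intel
HOSTING_ASNS = {"AS0000-EXAMPLE-HOSTING"}                         # hypothetical list

def risk_score(ctx: RequestContext, failed_logins_for_credential: int) -> int:
    score = 0
    if ctx.tls_fingerprint in KNOWN_AUTOMATION_FINGERPRINTS:
        score += 40   # tooling signal: the client looks like an automation framework
    if ctx.asn in HOSTING_ASNS:
        score += 20   # infrastructure signal: traffic from a hosting provider, not a home ISP
    if failed_logins_for_credential > 5:
        score += 25   # credential signal: the account itself is under attack
    if ctx.requests_last_minute > 60:
        score += 15   # behavioural signal: request rate far above a human baseline
    return score      # block or challenge above a chosen threshold
```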

Aside from volumetric attacks, we can also expect AI attacks against APIs to be crafted to fly under the radar of security solutions, using reconnaissance and engineering techniques to focus on specific targets. Often APIs are exploited not through poor coding but through the attacker studying the role of the API, the calls it makes and the information it can access. Known as business logic abuse, this enables the attacker to subvert the API’s legitimate processes and use it to perform content scraping or commit fraud, for example through Account Takeover (ATO) attacks. Such attacks are unlikely to trigger conventional detection mechanisms and can only be spotted by monitoring activity on that API. Once detected, attacks can be blocked or throttled, or deception techniques can be employed to frustrate and exhaust the attacker’s resources, even if AI is used to alter the course of, i.e. pivot, the attack.
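A minimal sketch of that kind of monitoring is shown below, assuming a sliding window per credential: each API call is individually valid, so the monitor watches the pattern, in this case how many distinct resources a single credential touches, and escalates from allowing to throttling to blocking. The class, thresholds and response labels are illustrative only.

```python
# Illustrative behavioural monitor for business logic abuse: track per-credential activity
# in a sliding window and flag enumeration or scraping patterns. Thresholds are hypothetical.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 300
ENUMERATION_THRESHOLD = 200   # distinct resource IDs touched within the window

class ApiActivityMonitor:
    def __init__(self):
        self.events = defaultdict(deque)   # credential -> deque of (timestamp, resource_id)

    def record(self, credential: str, resource_id: str) -> str:
        now = time.time()
        events = self.events[credential]
        events.append((now, resource_id))
        # Drop events that have fallen outside the sliding window.
        while events and now - events[0][0] > WINDOW_SECONDS:
            events.popleft()
        distinct_resources = len({rid for _, rid in events})
        if distinct_resources > ENUMERATION_THRESHOLD:
            return "block"      # or serve deceptive responses to exhaust the attacker
        if distinct_resources > ENUMERATION_THRESHOLD // 2:
            return "throttle"
        return "allow"
```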

Going forward, it’s clear that the financial sector will need to make significant changes both to harness AI and to defend against attacks. The US Department of the Treasury has made a series of recommendations in this regard, including calling for data sharing to build anti-fraud AI models and close the gap that has emerged between the fraud detection capabilities of small and large institutions. It also calls for the NIST AI Risk Management Framework to be revised to incorporate a section on AI governance specific to the financial sector.

Disabling AI functionality in business processes therefore has some very real ramifications for how the sector moves forward. It’s a backwards step and sends the wrong signal to malicious actors, who will see it as an indicator that financial organisations are unprepared for, and unable to utilise, AI. It’s therefore imperative that steps are taken to risk-manage AI across the organisation in all its capacities.
