Posted By Jessica Weisman-Pitts
Posted on February 8, 2022
By James Briers, CTO, Intelligent Delivery Solutions
The rapid advance of technology has raised the finance sector to levels only sci-fi fans could previously have envisioned. McKinsey estimates that AI technologies have the potential to deliver up to $1 trillion in added value for the global banking sector each year. The days of shouting stockbrokers on Wall Street are well behind us; the organic, chaotic throng of the trading floor has been replaced by a handful of quants and their computers. But don’t be fooled by their Harvard degrees: the real hero is the AI, and the data quality management techniques that make it so powerful.
There are a few areas in particular that have driven this seismic shift: data lakes and cloud computing, data cleansing techniques, and AI and machine learning. When this trio of technologies comes together, curated in a new world of augmented intelligence, it creates a technological tour de force that no human alone could ever compete with.
The Importance of Cloud Data and Data Lakes
If you view these three technologies as a funnel, leading from the data’s ingestion across multiple platforms and channels to its final output in decisions, then data lakes are the initial pooling of the data. Data lakes can store the data ‘raw’, without the need for it to be explicitly organized. Forget big data; we are now staring squarely into the face of infinite data streams, along with huge costs, duplicate and dirty data, and vast volumes of data constantly shifting and moving as transactions and decisions are made and the data changes in real time.
This opens opportunities for highly efficient computing, with a need for accurate data to be collected and utilized in real time – think dashboards with constantly updating dials and graphs as strategies are implemented down to the nanosecond, with unprecedented levels of proactive control.
Data Quality and Management Techniques in Finance
While AI is the media poster child for technological advances, making the ‘final call’ on the data fed into it, and data lakes create a space for the data to accumulate, technology for ensuring high data quality has become one of the fastest-growing areas of investment in the banking and finance sector.
Though often the somewhat forgotten middle child that makes everything possible, data quality is the real unsung hero of the day. Even within the sanitary confines of the Swiss banks, so often associated with efficiency and precision, ‘dirty data’ still leads to inaccuracies, blemishing the bottom line of the balance sheets.
If the data fed into your AI algorithms is unclean, the output could lead to bad decision-making and catastrophic reputational damage if not caught early on and at the source.
Choosing a Data Quality Solution Fit for the Modern Age
The world of data quality is only becoming exponentially more complex and more highly regulated, with data parameters less easily definable. Thanks to advances in data quality techniques, however, navigating this often-murky world and turning it to your benefit has become more accessible than ever.
Profiling tools and investigative techniques included in software such as IDS’s intuitive iData toolkit offer an end-to-end solution incorporating everything from ingestion, ETL (extract, transform and load), and migration to data obfuscation and synthesis and test-data management, all in one tool.
With a data quality market worth billions, expanding at a rate of 18% annually, finding a data quality solution that is actually fit for purpose can be challenging, to say the least. Common functionalities of data quality software include the ability to fix structural errors, filter unwanted outliers, handle missing data, validate data and provide an indication of quality assurance of the data. But few tools are able to assure more than a tiny fraction of the data, and many require huge amounts of manual handling as data is extracted from one tool and moved to another – all of which is time-consuming, prone to error, and can expose the data to the risk of a breach.
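To make those common functionalities concrete, here is a minimal, self-contained sketch in Python (not IDS’s iData, and not any particular vendor’s implementation) of the checks described above: catching structural errors, handling missing data, validating values, and filtering statistical outliers. The record layout and the z-score threshold are illustrative assumptions.

```python
# Illustrative sketch of common data quality checks: structural
# validation, missing-data handling, type validation, and outlier
# filtering. Field names and thresholds are hypothetical.
from statistics import mean, stdev


def clean_transactions(records, amount_field="amount", z_threshold=3.0):
    """Return (clean, rejected) lists after basic quality checks."""
    valid = []
    rejected = []
    for rec in records:
        # Structural error: record must be a dict containing the field
        if not isinstance(rec, dict) or amount_field not in rec:
            rejected.append((rec, "structural error"))
            continue
        value = rec[amount_field]
        # Missing data: drop records with a null amount
        if value is None:
            rejected.append((rec, "missing value"))
            continue
        # Validation: amount must be numeric (coerce strings like "100.50")
        try:
            rec = {**rec, amount_field: float(value)}
        except (TypeError, ValueError):
            rejected.append((rec, "invalid value"))
            continue
        valid.append(rec)

    # Outlier filtering: reject amounts more than z_threshold standard
    # deviations from the mean (only meaningful with 2+ valid records)
    if len(valid) > 1:
        amounts = [r[amount_field] for r in valid]
        mu, sigma = mean(amounts), stdev(amounts)
        clean = []
        for r in valid:
            if sigma and abs(r[amount_field] - mu) / sigma > z_threshold:
                rejected.append((r, "outlier"))
            else:
                clean.append(r)
        return clean, rejected
    return valid, rejected


records = [
    {"amount": "100.50"},   # valid after coercion
    {"amount": None},       # missing value
    {"id": 1},              # structural error: no amount field
    {"amount": 99.0},
    {"amount": 101.0},
]
clean, rejected = clean_transactions(records)
# Three records survive; two are rejected with a labelled reason.
```

In a production pipeline these checks would run continuously at the point of ingestion rather than as a one-off batch pass, which is precisely the “caught early on and at the source” principle the article describes.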
Our proprietary technology is programmed to constantly scour data for quality breaches before they get out of control, offering peace of mind when your organization has a constant flow of data to and from multiple data lakes.
A recent Deloitte survey showed that 49% of respondents were ‘very concerned’ about ‘risk data’ (data which can demonstrate levels of risk), with 69% identifying that enhancing the quality, availability, and timeliness of risk data is a top priority. With such interest in the assurance of data quality, those without an understanding of how data management techniques can benefit decision-making will quickly find themselves on the losing side.
AI and Machine Learning: the Practical Implementation of Data Quality Techniques in Banking
Few would disagree that we are now in an AI-powered digital age, facilitated by falling costs for data storage and processing and rapid advances in AI technologies. Those that fail to embrace AI and make it central to strategy and operations (adopting an ‘AI-first’ approach) will quickly be overtaken by the competition and deserted by their customers.
McKinsey’s Global AI Survey showed that nearly 60% of financial-services respondents had already embedded at least one AI capability. The most widely used AI technologies were robotic process automation for structured and predictable tasks; virtual assistants or conversational interfaces for customer service divisions; and machine learning techniques for quickly detecting fraud and managing risk.
None of this, however, would be possible without the use of high quality, reliable data.
A regular speaker at industry events, and an expert in data assurance, James Briers worked with specialist consultancy firms and delivered on major programs for Barclays, HSBC and the NHS before launching IDS and pioneering iData and the Kovenant™ Methodology.