Posted By Gbaf News
Posted on June 16, 2015
Mike Lines, Head of Perceptiv at Recommind
The financial services industry is currently drowning in data. With increasing product complexity and multi-jurisdiction risk reporting creating major headaches for banks, the struggle to connect the dots across siloed systems and formats to drive actionable insights will only get worse. However, surfacing trusted data to uncover value and manage risk is only one part of the challenge – banks are also faced with the overwhelming task of optimising the balance sheet.
Add to that the diverse regulatory landscape – in particular, the passage of the Dodd-Frank Act and over-the-counter (OTC) clearing reforms, which make further demands of financial institutions – and the data problem only intensifies.
With siloed technology systems to navigate and strained infrastructures nearing capacity, the silver bullet for banks seeking to control their information is implementing the right information management practices. These span data categorisation, data collection, defensible deletion and compliance monitoring, and they are crucial to helping banks surface the necessary data and prepare for the next wave of regulation.
Putting these processes in place requires three data management steps.
Knowing What You’ve Got
As banks develop a sustainable solution to their growing data problem, the first step is to identify their information: determining what data can be deleted, what must be kept as a business record and which items sit in the grey area in between. For many chief information officers, organising data into these three buckets represents a valuable step toward controlling the surge in data.
While manual classification has historically been the norm, automated classification systems are growing in popularity as data volumes increase. Employees simply cannot devote the time needed to review mountains of data and make individual decisions on content categorisation, placement and classification. With automated systems, however, banks can apply rules to their information. These rules can provide either mass classification or specific attribute capture, streamlining data retrieval for regulatory reporting, fire drills, eDiscovery processes and litigation.
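As a rough illustration of the rules-based approach, the Python sketch below sorts documents into retain, delete and review buckets using a handful of keyword and age rules. The patterns, thresholds and field names are illustrative assumptions, not any vendor's actual classification logic.

```python
# Minimal sketch of rule-based classification into retain / delete / review buckets.
# The patterns and the seven-year cut-off are assumptions for illustration only.
import re
from dataclasses import dataclass

@dataclass
class Document:
    path: str
    text: str
    age_days: int

# Documents matching these patterns are treated as business records (assumed examples).
BUSINESS_RECORD_PATTERNS = [
    r"\bISDA\b",
    r"\bcredit support annex\b",
    r"\btrade confirmation\b",
]

def classify(doc: Document) -> str:
    """Return 'retain', 'delete' or 'review' for a single document."""
    if any(re.search(p, doc.text, re.IGNORECASE) for p in BUSINESS_RECORD_PATTERNS):
        return "retain"    # looks like a business record
    if doc.age_days > 7 * 365 and not doc.text.strip():
        return "delete"    # stale and empty: a candidate for deletion
    return "review"        # grey area: route to a reviewer or to finer-grained rules

docs = [Document("confirmations/fx_001.txt", "Trade confirmation ...", 400)]
print({d.path: classify(d) for d in docs})
```

In practice the rule set would be far larger and maintained alongside the bank's retention schedule; the point is simply that rules, once written, can be applied to millions of documents without individual human decisions.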
To Keep or Not to Keep?
Once banks have categorised their data and isolated the insignificant information, the next step is to delete it. It is natural for banks to want to retain everything, especially as regulations force them to become ever more transparent, but retention creates its own problem: risk. The more data is retained, the more personally identifiable information is held, and the greater the exposure if a breach occurs. Equally, retaining this data wastes valuable time that could be spent reviewing the information that matters.
Having tools in place that sort through and automatically delete petabytes of data helps ensure organisations keep the right data: the information that will drive business insight and meet compliance and regulatory guidelines.
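To make the idea concrete, here is a minimal Python sketch of policy-driven deletion. It assumes the three buckets above and hypothetical retention periods, and it runs in dry-run mode by default so nothing is removed until the policy has been signed off.

```python
# Illustrative sketch of defensible deletion driven by a retention policy.
# Category names, retention periods and the dry-run default are assumptions.
import os
import time

# Retention in days per category; None means "never purge".
RETENTION_DAYS = {"retain": None, "review": 3 * 365, "delete": 0}

def purge(path: str, category: str, dry_run: bool = True) -> bool:
    """Delete a file whose retention period has expired; only log if dry_run."""
    limit = RETENTION_DAYS.get(category)
    if limit is None:
        return False                                    # business records are never purged
    age_days = (time.time() - os.path.getmtime(path)) / 86400
    if age_days < limit:
        return False                                    # still inside its retention window
    if dry_run:
        print(f"would delete {path} ({category}, {age_days:.0f} days old)")
    else:
        os.remove(path)
    return True
```

The dry-run default reflects the "defensible" part of defensible deletion: every purge decision can be reviewed and logged before any file is actually removed.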
Data Analysis and Risk Management
The final step to better data management lies in surfacing valuable data for business users. Machine-learning tools enable banks to understand their most important client agreements through the collection of data from highly complex derivatives contracts, removing the need for manual processes.
By automating the collection of legal and eligibility data from OTC derivatives agreements, banks can surface highly granular data at low cost to multiple data consumers – legal, credit, collateral and front office.
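The snippet below gives a flavour of attribute capture from agreement text, pulling a few fields (governing law, base currency, threshold) that credit and collateral teams typically need. The field names and regular expressions are simplified assumptions that stand in for the machine-learning extraction described above.

```python
# A minimal sketch of attribute capture from a credit support annex (CSA).
# The fields and patterns are illustrative assumptions, not a production extractor.
import re

FIELD_PATTERNS = {
    "governing_law":       r"governed by the laws of ([A-Za-z ]+?)[\.,]",
    "base_currency":       r"Base Currency\s*[:\-]?\s*([A-Z]{3})",
    "threshold_amount":    r"Threshold\s*[:\-]?\s*([A-Z]{3}\s?[\d,]+)",
    "eligible_collateral": r"Eligible Collateral\s*[:\-]?\s*(.+)",
}

def extract_terms(agreement_text: str) -> dict:
    """Capture key legal and eligibility terms from raw agreement text."""
    terms = {}
    for field, pattern in FIELD_PATTERNS.items():
        match = re.search(pattern, agreement_text, re.IGNORECASE)
        terms[field] = match.group(1).strip() if match else None
    return terms

sample = ("This Annex is governed by the laws of England. "
          "Base Currency: USD. Threshold: USD 5,000,000.")
print(extract_terms(sample))
```

Once captured in this structured form, the same terms can be served to legal, credit, collateral and front-office systems without each team re-reading the underlying agreements.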
While in the past banks may have questioned whether this automation was necessary, recent events have highlighted that manual error has become too much of a risk. There have even been instances in which banks have lost over $25 million in a single trade, due to issues such as using the wrong interest rates, posting the wrong type of collateral, or being arbitraged by competitors.
Moving forward, knowing how to make the most effective use of data capture tools will be critical if financial institutions are to manage this risk and extract the data they need for collateral optimisation. This will enable the front office to price trades more competitively and help treasury and credit managers better understand liquidity and the exposures that regulations require them to report. Only then can banks be confident that manual errors will not creep in and put the business at risk.