Reconfiguring old models of data management could help the financial services sector meet evolving global regulatory requirements. The trick is to shift emphasis away from building huge data repositories, and to concentrate on developing a data supply chain that gets the right data to the right place at the right time. By Richard Petti, CEO, Asset Control
The critical shift in perspective is to recognize that change is a constant and to move away from a monolithic, hard-wired data warehouse model towards a dynamic data supply chain. In markets requiring constant business innovation, products, processes and organizations must become more agile in how they support the business and meet regulators’ needs. Financial data models need to be dynamic, adjusting quickly to capture new products created to solve client needs in new ways, and the same data must flow quickly through the middle and back office to minimize the risk of exception bottlenecks or reconciliation errors.
Increasingly, proactive organizations are deploying strategies that regard data management as a dynamic logistics activity. The most effective have placed a data management platform at the center of the complex multi-source, multi-system distribution process – taking inputs from vendor feeds and departmental sources, testing them for quality and routing them through the platform to downstream systems and users. As data flows through the system, the platform provides the framework for auditing activity and monitoring performance against critical SLAs.
Such systems simplify the technical challenges significantly. Because they eliminate potentially hundreds of point-to-point connections, they make the administration, control and delivery of reference, market and risk data much more manageable. Moreover, workflows become more efficient, enabling organizations to save time and money. Crucially, the centralized approach, built around the effective development of a data supply chain, is helping companies mitigate risk and meet the growing demands of regulatory compliance.
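As a rough illustration of this hub-and-spoke pattern, the Python sketch below shows a central platform that validates an incoming record once, fans it out to registered downstream consumers, and keeps an append-only audit trail of every decision. The field names, validation rule and consumer names are assumptions made for the sake of the example, not features of any particular product.

```python
from datetime import datetime, timezone

# Illustrative only: field names, the validation rule and the consumer names
# below are assumptions made for this sketch.

AUDIT_LOG = []  # append-only record of every action the hub takes


def validate(record):
    """Basic quality gate: required fields present and price is positive."""
    required = ("instrument_id", "price", "source")
    return all(k in record for k in required) and record["price"] > 0


def audit(event, record, destination=None):
    """Log what happened to which record, when, and where it went."""
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "instrument_id": record.get("instrument_id"),
        "source": record.get("source"),
        "destination": destination,
    })


def route(record, subscribers):
    """Central hub: validate once, then fan out to every downstream consumer."""
    if not validate(record):
        audit("rejected", record)
        return
    for name, deliver in subscribers.items():
        deliver(record)                       # push to the downstream system
        audit("delivered", record, destination=name)


subscribers = {
    "risk_engine": lambda r: print("risk engine received", r["instrument_id"]),
    "back_office": lambda r: print("back office received", r["instrument_id"]),
}

route({"instrument_id": "XS1234567890", "price": 101.25,
       "source": "vendor_feed_a"}, subscribers)
```

Because every consumer connects to the hub rather than to each source, adding a feed or a downstream system means registering one new connection instead of rewiring many, and the audit log provides the evidence needed to monitor delivery against SLAs.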
So how can organizations achieve this?
The first step in the process is to ask the right questions to identify and address each organization’s specific data management challenges. Once the challenges are understood, the critical internal SLAs become clear and the organization gains a picture of the workflows needed to get the right data to the right people at the right time. Working with a specialist data management team, an organization can also be guided towards best practice in formulating the appropriate workflows and addressing these issues.
The next step is to implement a robust system, supported by a dedicated data management team, to put in place the workflows that compel organizations to address their procedural concerns and allow compliance and reporting to become more highly automated.
The Principles for effective risk data aggregation and risk reporting, known as BCBS 239 and issued by the Basel Committee on Banking Supervision, require banks to impose strong data governance over the organization, assembly and production of risk information. The principles, similar in spirit to Dodd-Frank in the US, begin with traditional notions of soundness: risk reporting should be transparent, and the sourcing, validation, cleansing and delivery of data should be tightly controlled and auditable. But the new regulatory model also makes timeliness and adaptability fundamental requirements. This is a significant change from Basel II, which addressed the formulation of risk models in detail but, in retrospect, failed to identify the need for accurate data. Without that data, models and analyses tended to underestimate the frequency of major portfolio losses and, consequently, the capital needed to absorb them.
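In practice, "controlled and auditable" means each data item can show where it was sourced, how it was validated and cleansed, and where it was delivered. The minimal sketch below illustrates per-record lineage stamping; the stage names and details are illustrative assumptions, not a regulatory format.

```python
from datetime import datetime, timezone


def stamp(record, stage, detail):
    """Append an auditable lineage entry to the record for one processing stage."""
    record.setdefault("lineage", []).append({
        "stage": stage,          # e.g. sourced / validated / cleansed / delivered
        "detail": detail,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return record


quote = {"instrument_id": "XS1234567890", "price": 101.25}
stamp(quote, "sourced", "vendor_feed_a")
stamp(quote, "validated", "required fields present; price > 0")
stamp(quote, "cleansed", "price rounded to 4 decimal places")
stamp(quote, "delivered", "risk_engine")

# The lineage list is the end-to-end audit trail a reviewer or regulator can inspect.
print(quote["lineage"])
```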
The data supply chain approach shifts the focus away from the accumulation of data and towards delivery. Every activity becomes focused on ensuring the right package of data is delivered to the customer at the right time; everything works backwards from that primary objective. This is a challenge to incumbent models, which largely focus on aggregating and organizing huge volumes of data into a monolithic, fixed schema.
In this new approach, the core components of data management (capture, validation and delivery) remain constant. But the process begins from the end user’s perspective, with Chief Data Officers considering two key questions: who am I delivering this data to, and under what Service-Level Agreement (SLA)? By adopting an SLA-led approach and focusing on the end goal of delivery, it becomes much easier to work backwards and align performance (and costs) with business needs. With the overarching SLA as the starting point, data management becomes a logistics exercise whose primary objective is to get the right data to the right people in time to meet their local SLAs: in effect, a data supply chain. The new approach also makes changes to data requirements much easier to absorb; bypassing the need to change a data schema and incorporating the change directly into the rules that govern the data package provides an agile and transparent mechanism for on-the-move data changes.
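One way to picture this SLA-led, rules-driven packaging is sketched below: each consumer is configured with a delivery cut-off and a set of rules that select and shape its data package, so a new requirement is met by editing the rules rather than altering a central schema. The consumer names, cut-off times, fields and filters are assumptions made for the example.

```python
from datetime import time

# Hypothetical configuration: consumer names, cut-off times, fields and filters
# are illustrative assumptions, not a prescribed format.
CONSUMERS = {
    "risk_engine": {
        "sla_cutoff": time(7, 0),            # agreed delivery time: 07:00
        "rules": {
            "include_fields": ["instrument_id", "price", "currency", "rating"],
            "filter": lambda r: r.get("asset_class") == "bond",
        },
    },
    "regulatory_reporting": {
        "sla_cutoff": time(9, 30),           # agreed delivery time: 09:30
        "rules": {
            "include_fields": ["instrument_id", "price", "source", "validated_at"],
            "filter": lambda r: True,        # everything, with sourcing detail
        },
    },
}


def build_package(records, consumer):
    """Shape the data package for one consumer by applying its rules.

    A changed data requirement becomes a rule edit here, not a schema change.
    """
    rules = CONSUMERS[consumer]["rules"]
    selected = [r for r in records if rules["filter"](r)]
    return [{k: r.get(k) for k in rules["include_fields"]} for r in selected]


records = [
    {"instrument_id": "XS1234567890", "price": 101.25, "currency": "EUR",
     "rating": "AA", "asset_class": "bond", "source": "vendor_feed_a",
     "validated_at": "2014-04-30T06:45:00Z"},
]

print(build_package(records, "risk_engine"))
```

The cut-off values make each delivery obligation explicit, so performance against every consumer’s SLA can be monitored rather than assumed.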
With the right rules and workflow in place, not only will challenges be resolved and risk mitigated, but the organization will also be in the best possible position to adopt an open and transparent enterprise-wide data management strategy, one that incorporates the entire data supply chain to deliver users the data they want, when and how they want it.
Although we may have survived the consequences of regulatory and information failures that characterized the financial crisis, organizations cannot afford to be complacent. A reliance on inefficient legacy models will no longer suffice.
To progress, Chief Risk Officers and Chief Data Officers must drive the reconfiguration of financial data management – and establish it as a logistical exercise.