Posted By Gbaf News
Posted on July 13, 2018
Steve Wilcockson, Financial Services Industry Lead, MathWorks
The use of machine learning in model governance is, understandably, in its infancy. While the technology presents challenges and frequently raises red flags, its long-term advantages outweigh, and will continue to outweigh, the short-term obstacles.
Models run the world, and given the shocks of 2007 and 2008 it is right that financial models be managed and governed well. In the decade since, initiatives such as the ECB's TRIM and the UK Prudential Regulation Authority's recent focus on stress test model management have been put in place to promote good model development and use. Yet despite this progress, financial services, risk management included, lags other industries in 'risk-aware' model governance. Banks and other systemically important financial institutions currently take years to submit new or changed models: timescales that are simply too long, applied inconsistently across model proposers and supervisory reviewers, and with insufficient attention to good process compared with other industries.
Compared with firms working in 'high integrity', stringently regulated industries, e.g. aerospace, medical, robotics and automotive, regulated financial services firms are less agile in their development, test, audit and submission processes. This is partly because of throw-over-the-wall processes, but also because of the industry's longstanding desire to play with the latest and greatest tools and technologies, which adds risk and complexity. Its resource- and skill-challenged regulators struggle to keep pace with those same latest and greatest technologies.
On the haphazard use of bleeding- and leading-edge technologies, the industry faces a paradox: on one hand, a need to be seen to be moving with the times by embracing the latest and greatest; on the other, the continuing challenge of maintaining once-"hip", now-legacy technologies and applying them across technical and business silos. Risk models, for example, need to be assessed by lawyers, accountants, quants, data scientists and IT. In addition, courtesy of directives like the SM&CR, executives with reputations stung by decades of misconduct must also show willing in understanding risk model processes.
The paradox of machine learning in risk management
At the heart of current digitisation fashions lies a culture clash between traditional risk management and machine learning. Making the case for machine learning's applicability to risk, one bullish CRO suggests machine learning can improve risk model accuracy by 25-30%, including in credit models. Other senior risk managers have claimed that credit risk model shelf lives of three to five years can increase to five to eight years, primarily due to the "adaptability" of machine learning models. On the other hand, critics describe the methods as non-transparent, "black box" and unreproducible, and thus problematic in validation, audit and regulatory scrutiny workflows. Such criticisms, while true in part, are over-simplistic, verging on myth. One machine learning approach, the classification tree, for example, has an easily explainable, observable model structure.
An example
To demonstrate how machine learning models can improve accuracy while tackling the perceived challenge of replication, we present a nonlinear credit scoring application comparing a traditional scorecard model to a neural network and a classification tree. The task here was to predict default (a 0 or 1 value), modelled as a classification problem. Looking at the receiver operating characteristic (ROC) curve, we see the traditional scorecard model performs well compared with a shallow neural network and a standard logistic regression. The best performing model, by a healthy margin, was the classification tree. This type of improvement in predictive capability is why machine learning attracts attention. Of the models explored, the neural network is the hardest to explain to regulators, since the learned features (the parameters) are not easy to map to explanatory features in the data. Unlike a neural network, however, the successful classification tree can be viewed and explained. For example, you can zoom in to show the logic of the different branches for a scenario where an employed homeowner would be predicted to default (a 1 on the leaf node). Classification trees, with a known structure, lend themselves naturally to explanation. They are easily explored for sensitivity to changes in parameters by directly examining specific regions of the tree. Thus, a classification tree's dominant features and parameter sensitivities can be readily incorporated into risk reports for model reviewers to understand and judge the validity of the model results.
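A comparison of this kind can be sketched in a few lines. The sketch below is illustrative only: it uses hypothetical synthetic data and scikit-learn (not the original study's tooling or data), and the AUC values it prints are not the article's results.

```python
# Illustrative sketch: compare a logistic "scorecard", a shallow neural
# network and a classification tree on a synthetic 0/1 default label.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for borrower features and a binary default outcome
X, y = make_classification(n_samples=5000, n_features=10, n_informative=6,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "logistic scorecard": LogisticRegression(max_iter=1000),
    "shallow neural net": MLPClassifier(hidden_layer_sizes=(8,),
                                        max_iter=2000, random_state=0),
    "classification tree": DecisionTreeClassifier(max_depth=5, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    # Area under the ROC curve on held-out data
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

The shallow tree (`max_depth=5`) keeps the structure small enough to plot and walk through branch by branch, which is exactly the explainability property discussed above.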
Augmenting traditional processes with machine learning
One related challenge organisations face is model bias. Suppose a recently hired credit risk analyst completed their PhD in neural networks, a technique others on the credit risk team have little experience of. The team may be disposed to defer to the expertise of the new hire, who may promote a neural network classification process as the latest, best and greatest. This hypothetical situation represents model bias, and thereby risk, since the decision to use neural networks was the subjective choice of an individual with a limited perspective, approved by an unengaged team. This is a fictitious example, but you get the point: human model bias can elevate imperfect model and feature selection, and is therefore a risk. Regulated firms must mitigate it, and supervisors should be concerned by it.
Machine learning can infuse objectivity into model and data governance. Model selection involves choosing a statistical model from a set of candidates. In the machine learning selection stable sit Lasso regularisation methods, which can be applied to generalised linear models, least squares and proportional hazards models, all common methods in credit risk.
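As a minimal sketch of the idea, assuming hypothetical synthetic data, an L1 (Lasso) penalty on a logistic model shrinks uninformative coefficients to exactly zero, so the surviving features are the objectively "selected" ones rather than an analyst's subjective pick:

```python
# Illustrative only: Lasso-style (L1) regularisation as an objective
# feature-selection step. Synthetic data stands in for real credit features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=20, n_informative=5,
                           random_state=1)

# The L1 penalty drives coefficients of weak features to exactly zero;
# C controls the strength of the penalty (smaller C = sparser model).
lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
lasso.fit(X, y)

selected = np.flatnonzero(lasso.coef_[0])  # indices of surviving features
print(f"{len(selected)} of {X.shape[1]} features retained:", selected)
```

The retained-feature list, together with the regularisation path, can be logged as part of the model-governance evidence pack.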
Organisations could also apply a hybrid modelling approach, in which machine learning helps identify sound features for traditional scorecard models, offering better and still explainable models. In addition, the neural network approach of our enthusiastic new hire may be useful to "challenge" the dominant methodologies, and it may be appropriate for a bank's Model Validation Unit to pose it as an insightful alternative. As a side effect, this activity could help the bank learn about neural networks, whether for credit risk or other tasks, without making them the mainstay approach.
New risk categories
Machine learning helps banks and other financial institutions deal with new risk categories such as fraud, money laundering and misconduct. The technology is invaluable in preventing banks from unknowingly servicing criminals as part of know-your-customer strategies, and in voice recognition technologies that detect conduct issues in Skype and phone selling. Where the problem is "new" and draws on alternative data (voice, geolocation, text, sentiment, social media) and big data, machine learning and deep learning naturally form part of the toolkit. Try opening a bank account in a foreign country, and watch those algorithms scour your submitted details, trying to work out whether you really are who you say you are and whether your profile matches those of disreputable others.
It’s not all about machine learning
Let's be clear: while this article has focused on machine learning, it is just one tool in the toolbox, albeit a highly fashionable one. Lasso methods, while useful, are not the be-all and end-all of model selection. The key thing for a regulated institution is the flexibility to build, adapt and monitor multiple models consistently, identify key parameters and features with intelligence and transparency, apply appropriate models from the library, and capture comprehensive output. Ideally, the output should be a "package": model outcomes with accompanying data and model references, including automatically compiled details of who (bot or human) did what, why, how and when. That pack can be augmented at each lifecycle stage, through validation and audit, and passed on to the regulator as a cohesive entity, ideally with replication capabilities. Machine learning is not in isolation the answer, but it can help. The good news is that the tooling banks, and supervisors, can use to drive the model lifecycle can also invoke machine learning, helping to counter perceived non-transparency and non-replicability in some cases, and in all cases offering a means to challenge and validate.
The 10-year anniversary of the global financial crisis offers a stark reminder of the importance of model governance. The abuse and misuse of models within siloed, technologically complex and overly competitive cultures caused the crisis, but models also provided solutions. Machine learning, in the wrong hands, could be the Gaussian copula of the next crisis, and we should absolutely consider the technology's model risk. However, it can also help mitigate risk and improve model processes, and it is the model process we must continue to lavish attention on, bringing financial services into line with other industries.