Artificial intelligence is set to change the face of the finance industry in the years ahead, whether we like it or not. David Wylie asks whether lenders are fully aware of the regulatory and reputational risks it poses.
Glance through the marketing material of software platforms and it will not be long before you notice a new emphasis on the use of artificial intelligence.
Some technology providers present it as the new frontier for finance providers looking to further optimise their decision making. What better way to introduce efficiencies than to delegate the whole process to a hyper-sophisticated algorithm?
The marketing pitch is seductive, but it bears some investigation.
If you drill down into the products that such software companies are promoting, you often discover that they do not really amount to what most would define as artificial intelligence. It’s easy to co-opt the ‘AI’ moniker to make something appear more impressive than it actually is.
Perhaps this is just as well, because the technology is not something regulators are too enthusiastic about. In the UK, US and Australia, they have expressed misgivings about the use of AI to generate lending decisions.
Their fear is that, if adopted prematurely, it may not improve decision-making at all, and lenders had better beware of the consequences.
The US’s Consumer Financial Protection Bureau has cautioned lenders and intermediaries that ‘agency’ cannot be attributed to AI systems, since doing so risks shifting accountability for decision-making away from firms.
Companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions, it cautions. The law gives every applicant the right to a specific explanation if their application for credit is denied, and that right is not diminished simply because a company takes credit decisions using a complex algorithm that it doesn’t understand.
The bottom line is that complex algorithms must provide specific and accurate explanations for denying credit applications, it says.
Reading between the lines, the implication is that many AI platforms cannot do this and therefore run the risk of future liability claims.
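To make that requirement concrete, the sketch below shows one way a lender with a fully transparent scorecard could attach specific reasons to a decline. It is a minimal, hypothetical illustration only: the features, weights and threshold are invented, and real decision engines are considerably more involved.

```python
# Hypothetical sketch: deriving specific decline reasons from a transparent
# (linear) credit scorecard. Feature names, weights and the threshold are
# invented for illustration only.

WEIGHTS = {
    "debt_to_income": -2.0,       # higher DTI lowers the score
    "months_since_default": 0.8,  # longer clean history raises the score
    "credit_utilisation": -1.5,   # higher utilisation lowers the score
}
INTERCEPT = 1.0
APPROVAL_THRESHOLD = 0.0  # approve if the linear score is at least zero


def score(applicant: dict) -> float:
    """Linear score: intercept plus weighted, standardised inputs."""
    return INTERCEPT + sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)


def decline_reasons(applicant: dict, top_n: int = 2) -> list:
    """Rank the features that pulled the score down the most."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    most_negative = sorted(contributions.items(), key=lambda kv: kv[1])
    return [name for name, value in most_negative[:top_n] if value < 0]


applicant = {
    "debt_to_income": 0.9,
    "months_since_default": 0.1,
    "credit_utilisation": 0.7,
}
if score(applicant) < APPROVAL_THRESHOLD:
    print("Declined. Principal reasons:", decline_reasons(applicant))
```

The point is that this kind of attribution is straightforward when the model is a simple, documented scorecard; producing an equally specific and accurate list of reasons from an opaque, heavily engineered AI model is a far harder task.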
In the UK, this issue was identified in a recent Bank of England/FCA report, Machine Learning in UK Financial Institutions, which suggested that a ‘lack of AI explainability’ posed a potential reputational and regulatory hazard. The implicit question, again, is whether a company would be able to justify its decision when facing a mis-selling claim.
It is not that the AI necessarily made the wrong decision; it may well have made the right one. The question is whether the lender can demonstrate to the client how that decision was arrived at in the first place.
Comprehensive evidencing of the decision is particularly important because AI is already known to be prone to what is referred to as AI bias or AI model risk, or, in everyday parlance, the law of unintended consequences. Model bias arises during the AI training process and can ‘bake in’ certain outcomes. Automated model-selection tools can exacerbate the risk, as can incomplete datasets.
For example, the historical gender data gap, which has given us more male-oriented data than female, could well lead to lender affordability decisions skewed by sex.
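As a rough illustration of how that can happen, the hypothetical sketch below fits a single ‘typical spending’ ratio to a sample in which one group supplies 90 per cent of the records. Every figure and group label is invented; the point is only that a rule calibrated on the majority can misjudge the affordability of the under-represented group.

```python
# Hypothetical sketch of how an under-represented group can be misjudged by a
# rule calibrated on a skewed sample. All figures are invented.
import random

random.seed(1)

# Training data: 90% of records come from group A, 10% from group B.
# Suppose group B's essential spending is genuinely lower (40% of income
# versus 50%), but the pooled model learns one ratio dominated by group A.
incomes_a = [random.gauss(2500, 400) for _ in range(900)]
incomes_b = [random.gauss(2500, 400) for _ in range(100)]
spend_ratio = {"A": 0.50, "B": 0.40}

records = [(i, i * spend_ratio["A"]) for i in incomes_a] + \
          [(i, i * spend_ratio["B"]) for i in incomes_b]

# Pooled model: a single spending ratio fitted to the skewed sample.
learned_ratio = sum(s for _, s in records) / sum(i for i, _ in records)
print(f"Learned spending ratio: {learned_ratio:.2f}")  # close to 0.50, not 0.40

# Affordability check for a group-B applicant earning 2,000/month with an
# 1,100/month repayment: the model under-states their disposable income.
income, repayment = 2000, 1100
predicted_disposable = income * (1 - learned_ratio)   # about 1,020
actual_disposable = income * (1 - spend_ratio["B"])   # 1,200
print("Model decision:", "decline" if predicted_disposable < repayment else "approve")
print("True position:", "affordable" if actual_disposable >= repayment else "unaffordable")
```

In this contrived case the applicant is declined on predicted affordability even though their true position is affordable, purely because the calibration data under-represented their group.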
Furthermore, the monitoring of such risk within AI is often only introduced ‘post-implementation’, by which point ‘hyper-tuned’ models can be highly susceptible to data drift and left without meaningful oversight.
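By way of example, one common post-implementation check is the Population Stability Index, which compares the distribution of scores seen at deployment with the distribution seen today. The sketch below is a bare-bones version: the bucketing scheme, the invented data and the conventional 0.25 alert level are illustrative rules of thumb, not anything prescribed by a regulator.

```python
# Minimal sketch of a post-implementation drift check using the Population
# Stability Index (PSI). Buckets, data and the 0.25 alert level are
# illustrative conventions only.
import math


def psi(expected, actual, buckets=10):
    """Compare two score distributions bucket by bucket."""
    lo, hi = min(expected), max(expected)
    edges = [lo + (hi - lo) * i / buckets for i in range(buckets + 1)]

    def share(sample, i):
        top_inclusive = (i == buckets - 1)
        count = sum(
            edges[i] <= x < edges[i + 1] or (top_inclusive and x == edges[-1])
            for x in sample
        )
        return max(count / len(sample), 1e-6)  # floor avoids log(0)

    return sum(
        (share(actual, i) - share(expected, i))
        * math.log(share(actual, i) / share(expected, i))
        for i in range(buckets)
    )


# Scores captured at deployment versus scores seen this month (invented data).
baseline = [0.1 * i for i in range(1, 100)]
current = [0.1 * i + 2.0 for i in range(1, 100)]  # the population has shifted

value = psi(baseline, current)
print(f"PSI = {value:.2f}:", "investigate drift" if value > 0.25 else "stable")
```

A check of this sort only flags that something has moved; it does not by itself explain a decision, which is why it complements rather than replaces documented decision logic.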
Such biases would be next to impossible to defend if they were found to underpin poor decision-making. Should their existence be established, for example at a mis-selling tribunal, any lender would be vulnerable to further actions, with everyone who feels similarly affected having a right to redress.
For these and other reasons, the FCA has intimated that it may not be possible to manage AI within the existing regulatory framework. Fine-tuning what we already have may not be enough, it suggests, so a new approach could be needed.
It seems to be making it clear that while lenders might like the idea of AI, they should be very careful not to lose the ability to explain to the regulator and the customer precisely why credit was (or was not) granted. Trying to reverse-engineer an AI algorithm-based decision in front of a tribunal will not cut it.
Given the level of uncertainty surrounding the use of AI, we at LendingMetrics certainly think caution should be exercised. It may be wise to wait for more visibility around the level of risk posed by the technology, and further clarity from the regulator.
Lending that is not underpinned by rigorous, documentable decision making has always been unwise. The finance industry has had to learn that lesson the hard way.
It is undoubtedly one we should not forget.