The latest Economist Intelligence Unit (EIU) report, ‘Overseeing AI: Governing artificial intelligence in banking’, sponsored by Temenos, takes a deep dive into the complex world of artificial intelligence (AI). Like many new technologies before it, AI is still at the stage where it has to prove itself useful to bank customers. There is still healthy scepticism among customers about banks’ use of AI, and it will take time for that scepticism to be broken down.
The COVID-19 pandemic has seen banks’ use of AI increase, because they had to keep providing customers with essential services while adapting to the difficulties of home-working and closed branches. There were simply not enough people to go around. But the disruption to businesses and households has only just begun, and banks will need to adapt to rapidly changing customer needs.
The criticality of AI adoption is only likely to increase in the post-pandemic era: its safe and ethical deployment is now more urgent than ever. However, paradoxically, banks have to tread warily and keep an eye on the regulators.
The ability to extract value from AI will sort the winners from the losers in banking, according to 77% of bank executives surveyed by The EIU in February and March 2020. AI platforms were the second-highest priority area of technology investment, the survey found, behind only cybersecurity.
The main findings of the report are:
- 77% of bank executives agree that AI will separate the winning banks from the losers
- Covid-19 may intensify the use of AI, making effective governance more urgent
- A review of regulatory guidance reveals significant concerns including data bias, “black box” risk and a lack of human oversight
- Guidance and regulation have so far been “light touch”, but firmer rules may be required as the use of AI intensifies
Excluding humans from processes involving AI weakens oversight of the models and could threaten their integrity. At the root of these risks is AI’s increasing complexity, says Prag Sharma, senior vice-president and emerging technology lead at Citi Innovation Labs: “Some AI models can look at millions or sometimes billions of parameters to reach a decision,” he told the EIU. “Such models have a complexity that many organisations, including banks, have never seen before.”
Andreas Papaetis, a policy expert with the European Banking Authority (EBA), believes this complexity, and especially the obstacles it poses to explainability, is among the chief constraints on European banks’ use of AI to date.
For regulators, supreme among the ethical standards must be fairness: ensuring that decisions in lending and other areas do not unjustly discriminate against individuals or specific groups of people. Concerns about this can only have increased since the UK exam-grading debacle, in which an algorithm discriminated against students based on where they lived and which school they attended.
De Nederlandsche Bank (DNB, the central bank of the Netherlands) emphasises the need for regular reviews of AI model decisions by domain experts, what it calls the “human in the loop”, to help guard against unintentional bias. The Hong Kong Monetary Authority (HKMA) advises that model data be tested and evaluated regularly, including with the use of bias-detection software.
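The bias-detection testing the HKMA describes can start with a simple automated fairness metric run over a model’s decisions. A minimal sketch in Python, using the widely cited “four-fifths” disparate-impact threshold; the applicant groups, approval data, and 0.8 cut-off here are illustrative assumptions, not figures from the report or any regulator’s guidance:

```python
# Sketch of an automated bias check: the disparate-impact ratio
# ("four-fifths rule") applied to hypothetical loan-approval decisions.

def approval_rate(decisions):
    """Share of applicants approved (decisions is a list of True/False)."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower approval rate to the higher one.

    A ratio below 0.8 (the four-fifths threshold often used in
    fairness testing) flags the model for human review.
    """
    rate_a = approval_rate(group_a)
    rate_b = approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical decisions for two applicant groups
group_a = [True, True, True, False, True]    # 80% approved
group_b = [True, False, False, False, True]  # 40% approved

ratio = disparate_impact_ratio(group_a, group_b)
if ratio < 0.8:
    print(f"Disparate impact ratio {ratio:.2f}: "
          "flag for human-in-the-loop review")
```

In practice, such a check would be one of many metrics reviewed regularly by domain experts, in line with the DNB’s “human in the loop” recommendation, rather than an automated pass/fail gate.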
In Europe, bank adoption of AI-based systems is described by the EIU as ‘broad but shallow.’ The EBA found that about two-thirds of the 60 largest EU banks had begun deploying AI, but in a limited fashion and “with a focus on predictive analytics that rely on simple models”. This is one reason why the EBA’s Andreas Papaetis believes it is too early to develop new AI-focused governance rules for the EU’s banks.
Explainable AI could be as revolutionary as blockchain technology, but both are at an early stage, and both need to prove their worth to the people who matter: customers.