A recent Economist Intelligence Unit (EIU) report, ‘Overseeing AI: Governing artificial intelligence in banking’, sponsored by Temenos, takes a deep dive into the complex world of artificial intelligence (AI). Like many new technologies before it, AI is still at the stage where it has to prove itself useful to bank customers. There is still healthy scepticism among customers about banks’ use of AI, and it will take time for that to be broken down.
The COVID-19 pandemic has seen banks’ use of AI increase, because they had to enhance their ability to provide customers with essential services very quickly while adapting to the difficulties of home working and closed branches. There were simply not enough people to go around. But the disruption to businesses and households has only just begun, and banks will need to adapt to rapidly changing customer needs or be left behind.
AI adoption is only likely to become more critical in the post-pandemic era, making its safe and ethical deployment more urgent than ever. Paradoxically, however, banks also have to tread warily and keep an eye on the regulators. In Europe, bank adoption of AI-based systems is described by the EIU as ‘broad but shallow.’ The European Banking Authority (EBA) found that about two-thirds of the 60 largest EU banks had begun deploying AI, but in a limited fashion and “with a focus on predictive analytics that rely on simple models”. This is one reason why Andreas Papaetis, a policy expert with the EBA, believes it is too early to consider developing new AI-focused rules of governance for the EU’s banks.
The ability to extract value from AI will sort the winners from the losers in banking, according to 77% of bank executives surveyed by the EIU in February and March 2020. AI platforms were the second-highest priority area of technology investment, the survey found, behind only cybersecurity.
The main findings of the report are:
- AI will separate the winning banks from the losers, and 77% of industry executives agree
- Covid-19 may intensify the use of AI, making effective governance more urgent
- A review of regulatory guidance reveals significant concerns including data bias, “black box” risk and a lack of human oversight
- Guidance and regulation have so far been “light touch”, but firmer rules may be required as the use of AI intensifies
Excluding humans from AI-driven processes weakens oversight and could threaten the integrity of models. At the root of these risks is AI’s increasing complexity, says Prag Sharma, senior vice-president and emerging technology lead at Citi Innovation Labs: “Some AI models can look at millions or sometimes billions of parameters to reach a decision,” he told the EIU. “Such models have a complexity that many organisations, including banks, have never seen before.”
For regulators, supreme among the ethical standards must be fairness, ensuring that decisions in lending and other areas do not unjustly discriminate against individuals or specific groups of people.
De Nederlandsche Bank (or DNB, the central bank of the Netherlands) emphasises the need for regular reviews of AI model decisions by domain experts, what they call the “human in the loop,” to help guard against unintentional bias. The Hong Kong Monetary Authority (HKMA) advises that model data be tested and evaluated regularly, including with the use of bias-detection software.
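To make the kind of bias testing the HKMA describes more concrete, here is a minimal illustrative sketch (not drawn from the report, and far simpler than production bias-detection software): it computes the gap in approval rates between two groups of applicants, a basic group-fairness signal that a “human in the loop” could use to decide when a model’s decisions warrant closer review.

```python
# Minimal sketch of a group-fairness check on lending decisions.
# The data below is invented for illustration: 1 = approved, 0 = declined,
# with applicants grouped by a protected attribute.

def approval_rate(decisions):
    """Fraction of applications approved in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in approval rates between two groups.
    Values near 0 suggest similar treatment; large gaps warrant human review."""
    return abs(approval_rate(group_a) - approval_rate(group_b))

group_a = [1, 1, 0, 1, 1, 0, 1, 1]  # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # 37.5% approved

gap = demographic_parity_difference(group_a, group_b)
print(f"approval-rate gap: {gap:.3f}")  # prints: approval-rate gap: 0.375
```

A real deployment would, as the HKMA suggests, run checks like this regularly against fresh decision data, across every protected attribute, and alongside richer metrics than this single one.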
Handelsbanken’s Chief Digital Officer, Stephan Erne, talking to PA Consulting recently, said: “If you centralise, you ensure efforts aren’t duplicated and the infrastructure is efficient. But if you are too centralised, you risk stifling innovation and can be slow to react.” The only answer to this dilemma, Erne says, is to be very transparent: “You have to create ways of interacting and knowledge sharing. For example, we use digital tools like community sites when we scan fintech start-ups: everyone inputs who they have met and what their thoughts were.”
He continues: “The worst thing you can do is to take away responsibility from people. If you put in a central steering model, you will kill the engagement. From my experience, this is the biggest problem in a lot of companies.” At Handelsbanken, Erne feels that engagement is already high, so the challenge is to find the best ways to channel the energy.
Erne’s challenge is to keep the bank focused through this cycle of hype and frustration: “The problem is often not solved by the technology itself. Instead, emerging technologies like AI and blockchain are a catalyst to challenge existing processes, responsibilities and ways of working. It is beyond the technology. It is a real business transformation question.”
At Temenos we see explainable artificial intelligence (XAI) as providing the actionable insights and transparency that allow a bank’s customers and staff to make informed decisions, and as enabling smart automation. Hani Hagras, Temenos’ Chief Science Officer, said: “As more employees are stretched further to cope with illness and childcare, freeing up their time to focus on the work that cannot be undertaken by explainable artificial intelligence (XAI) will be crucial to business continuity. By adopting new XAI technologies today, businesses aren’t only investing in their future, they are investing in their bottom line now and enabling their human workforce and their business’ resilience.”
Many organisations are realising that XAI is not just a technology for their business’s future; it is vital to their business today, and one that can play a key role in helping them navigate these turbulent times. XAI not only supports increased efficiency and automation but, by virtue of being entirely transparent, provides a model that businesses can safely trust to support their operations.
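The transparency that distinguishes XAI from “black box” models can be illustrated with a minimal sketch (the feature names and weights are invented for the example, and this is not Temenos’ XAI technology): a linear scoring model whose output decomposes into per-feature contributions, so a reviewer can see exactly which factors drove a decision and by how much.

```python
# Illustrative sketch of a transparent scoring model.
# Feature names, weights and the threshold are invented for this example.
WEIGHTS = {"income": 0.5, "debt_ratio": -0.8, "years_as_customer": 0.3}
THRESHOLD = 0.6

def explain_score(applicant):
    """Return the overall score plus each feature's signed contribution,
    making the decision fully auditable by a human reviewer."""
    contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
    return sum(contributions.values()), contributions

applicant = {"income": 1.2, "debt_ratio": 0.4, "years_as_customer": 0.5}
score, contributions = explain_score(applicant)

print(f"score={score:.2f}, approved={score >= THRESHOLD}")
for feature, contribution in contributions.items():
    print(f"  {feature}: {contribution:+.2f}")
```

Because every contribution is visible, a declined applicant (or a regulator) can be told precisely which factor, such as a high debt ratio, weighed against approval, which is exactly the kind of oversight the “black box” concerns in the report call for.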