The latest Economist Intelligence Unit (EIU) report, ‘Overseeing AI: Governing artificial intelligence in banking’, sponsored by Temenos, looks in depth at the complex world of artificial intelligence (AI). As I pointed out in my previous blog on the report, there is still healthy scepticism among customers about banks’ use of AI, and it will take time for that to be broken down.
Data bias, ‘black box’ risk and a lack of human oversight are among the main governance issues for banks using AI, according to the report.
The report is based on a review of global regulatory guidance on AI risks and governance in banking carried out by the EIU on behalf of Temenos.
The report says that the guidance regulators have offered so far can be described as “light touch”, taking the form of information and recommendations rather than rules or standards. This is a potential minefield for banks that see AI as a kind of wonder-solution. The report suggests that this light-touch approach is intended to avoid stifling innovation. Another reason is uncertainty over how AI will evolve: the technology is still in its early years, and few vendors fully understand its potential or likely direction. As the report points out, there is no single expert who can answer every query about how AI works.
The documents that banking regulators have published on AI range from the concise (an 11-page statement of principles by MAS, the Monetary Authority of Singapore) to the comprehensive (a 195-page report by BaFin, Germany’s Federal Financial Supervisory Authority), but the guidance they offer is similar.
Banks are advised to establish ethical standards for their use of AI and check that their models comply. The European Banking Authority (EBA) suggests using an “ethical by design” approach to embedding these principles in AI projects. It recommends establishing an ethics committee to validate AI use cases and monitor their adherence to ethical standards.
A recent KPMG report said: “As with any innovative technologies used by banks, regulators are keen to understand how banks are managing the associated risks and creating human teams with the skills required to tackle them.”
AI and Cybercrime
The rapid advance of digitalisation into our everyday lives means that individuals and institutions are becoming more and more vulnerable to cybercrime.
The ability of AI to quickly spot patterns in large and unstructured datasets has huge potential to improve the accuracy of crime detection, and also to vastly enhance data-intensive activities such as regulatory reporting. This lowers risks while reducing costs.
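To make the pattern-spotting idea concrete, here is a minimal, illustrative sketch of statistical anomaly detection on transaction amounts. It is not the report's methodology; the data and threshold are invented, and real fraud-detection systems use far richer features and models.

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Flag amounts that deviate from the mean by more than
    `threshold` standard deviations (a simple z-score test)."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

# Mostly routine payments, with one unusually large transfer.
transactions = [120, 95, 130, 110, 105, 98, 125, 50_000]
print(flag_outliers(transactions))  # [50000]
```

A production system would use robust statistics (the single large outlier here inflates the mean and standard deviation) or a trained model, but the principle — surfacing records that deviate from learned patterns — is the same.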
The European Central Bank (ECB) has set up its own dedicated SupTech Hub to supervise the use of AI. The Hub is designed to connect internal and external stakeholders, helping national supervisors to understand AI’s newest developments.
As KPMG points out in a recent report on AI, the use of any new technology poses potential challenges, and understanding how banks are actually using AI is high on the ECB’s agenda. So far, most banking applications of AI have focused on automating repetitive processes such as data reconciliations.
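Data reconciliation, the kind of repetitive process mentioned above, can be sketched in a few lines. This is a toy illustration with invented transaction IDs and amounts, comparing an internal ledger against a counterparty statement:

```python
# Toy reconciliation: two record sets keyed by transaction ID.
ledger    = {"T1": 100.0, "T2": 250.0, "T3": 75.0}
statement = {"T1": 100.0, "T2": 255.0, "T4": 40.0}

def reconcile(a, b):
    """Return IDs missing from either side, plus IDs whose amounts differ."""
    missing_in_b = sorted(a.keys() - b.keys())
    missing_in_a = sorted(b.keys() - a.keys())
    mismatched = sorted(k for k in a.keys() & b.keys() if a[k] != b[k])
    return missing_in_b, missing_in_a, mismatched

print(reconcile(ledger, statement))  # (['T3'], ['T4'], ['T2'])
```

In practice AI adds value on top of rule-based matching like this, for example by suggesting probable matches when identifiers or amounts differ slightly.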
However, regulators are concerned about the widening use of deep learning, which allows algorithms to change the way banks operate with limited human input. Some of the potential risks that could arise from AI – whatever uses it is put to – according to KPMG include:
- Data bias: The risk of errors or interference arising from the inherent features of datasets.
- Privacy breaches: The desire to reduce risks must not override the protection of sensitive personal and commercial data.
- Data loss: Shared criteria for data preservation will be vital in maintaining the accessibility of big data.
- Regulation: GDPR sets out limitations on automated decision-making, which could limit the efficiency of AI.
- Malicious manipulation: As AI use grows, the potential for malicious manipulation of big datasets may also increase.
- Opacity: The more advanced AI algorithms become, the harder it can be to understand and monitor the conclusions that they draw.
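The first risk above, data bias, is easy to demonstrate with a toy example. Everything here is invented — the groups, scores and decisions are hypothetical — but it shows how a model trained naively on historically skewed decisions simply reproduces the skew:

```python
from collections import defaultdict

# Hypothetical historical lending decisions: (group, credit score, approved).
# Group B applicants were approved less often than equally creditworthy
# Group A applicants — a bias baked into the data, not a reflection of risk.
history = [
    ("A", 700, True), ("A", 650, True), ("A", 620, False),
    ("B", 700, False), ("B", 650, False), ("B", 620, False),
]

def approval_rate_by_group(records):
    """Summarise past decisions per group; a model fitted to this data
    would learn and perpetuate the historical skew."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, _score, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rate_by_group(history)
print(rates)  # Group B's approval rate is zero despite identical scores.
```

This is why the report and regulators stress auditing training data, not just model outputs: the bias enters before any algorithm runs.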
Managing these risks demands that banks have a practical governance framework and can build teams with strong scientific, engineering and economic skills.
AI may still be in its early days, but using and managing it effectively will be of growing importance for banks and supervisors during the decade that lies ahead.
Most regulators consider banks’ existing governance guidelines sufficient to address the issues raised by AI. Rather than creating new AI-specific regimes, most agree it is more important to update governance practices and structures to deal with AI. Ensuring that those responsible for oversight have sufficient AI expertise will be vital, as will the choice of AI software to deploy. Getting this wrong could have major effects on a bank’s regulatory and customer relations.