Artificial Intelligence is now in widespread use among financial institutions globally, but a major gap has opened up between the use of AI* and the governance of these sometimes 'life-changing' algorithms, according to a new report from leading global law firm Baker McKenzie.
Ghosts in the Machine: Revisited is based on a survey of 355 financial services leaders** from around the world, and is a follow-up to Baker McKenzie's 2016 research Ghosts in the Machine: Artificial intelligence, risks and regulation in financial markets. By comparing data across both surveys, the research derives unique insights into the development of AI in financial markets to date and its potential for the future.
The research finds a financial sector more confident in the transformative powers of AI than in 2016. More than half (52%) of respondents expect to see increased efficiency in their organisations as a result of AI, compared to 39% in 2016, and the number of FI leaders who expect the customer experience to improve has more than doubled, from 20% to 42%. Interestingly, confidence in AI to improve risk management remains about the same: 41% in 2016 versus 40% this year.
According to the report, vast improvements in processing power mean that the performance of today's neural networks far surpasses that of previous algorithms. This has coincided with the availability of vast amounts of data, facilitated by rapid advances in the ability of banks, insurers and other FIs to store this data cheaply.
“A seismic shift has occurred over the last two years in terms of what it takes to succeed as a financial business,” Jesse McWaters, Financial Innovation Lead at the World Economic Forum, says. “We are seeing a shift from scale of capital being critically important to scale of data.”
In many financial institutions AI-driven models have now moved out of the research phase and into the real world. By harnessing data, AI and machine learning technology has come to play an integral role in a range of operations, from portfolio management to fraud prevention. It is also transforming the customer experience. Chatbots are interacting with customers and solving problems before human staff get involved. Automation is also being used to execute trades more efficiently and progress is being made towards a future where customisable solutions will be the norm.
One sector where this new wealth of data is turbocharging the uptake of AI is insurance.
“The big difference now is the sheer volume of the data available and the complexity of some of that data,” says Andrew Rear, Chief Executive of Digital Partners at Munich Re. “Using unstructured data to underwrite insurance forces you to use technologies at the cutting edge of data science.”
However, insurers, banks and other FIs have also come to recognise the current limitations of AI and machine learning solutions, and the fact that legal teams and regulators have a long way to go to catch up with current and future usage.
Only 32% of survey respondents believe financial regulators have a sufficient understanding of financial technologies and their impact on the current financial services sector. When asked if existing regulation is sufficient to address the issues posed by AI and machine learning, a majority (59%) of survey respondents see gaps. On a more positive note, this represents a 10-point decrease compared with the previous survey.
Meanwhile, only just over a third (38%) of financial institutions are confident they themselves even understand the legal risks of the AI technologies they are adopting, and less than one in ten (8%) have a policy regarding the ethics of AI tech.
Sue McLean, Partner, Technology at Baker McKenzie, said: "Even for those financial institutions who have created ethical principles for AI, the challenge is how to operationalise those principles. Firms are looking at how to embed those values in an effective manner, for example by carrying out ethical impact assessments when they kick off an AI project and carrying out ongoing monitoring and testing of their AI solutions. They are also looking for suppliers who share this commitment to responsible AI."
Practical challenges, from a shortage of skilled AI developers to the cost of implementation, have also inhibited the roll-out of AI solutions, while cyber security and data protection also remain a major concern.
In fact, there is a striking contrast between small to mid-sized companies and those with a turnover of more than $2 billion in terms of barriers to AI roll-out.
For smaller companies, the biggest obstacle to adopting AI is cost; for larger companies, it is securing the right talent. This gives large companies a natural advantage if they are willing to invest in proven AI platforms, as more become available off the shelf, at a time when smaller disruptors may be limited to developing their own technology.
Notes to editors:
*AI is an umbrella term encompassing several fields of research in computer science, all of which seek to enable computer systems to perform tasks normally requiring human intelligence, such as visual perception and decision-making. Machine learning is a branch of AI that provides computer systems with the ability to learn and adapt independently, based on algorithms and the analysis of data.
**For this report, commissioned by Baker McKenzie, Euromoney Thought Leadership Consulting surveyed 355 senior executives working for financial institutions globally. Fifty-nine percent of respondents work in C-level positions; the others are senior decision makers in a variety of roles including data, technology, legal and compliance.