While AI has been around in capital markets for 20 years or so, its time has now come, with generative AI (GenAI) and large language models (LLMs) able to handle vast volumes of compliance data and achieve outcomes beyond human reach. GenAI apps are not, however, a silver bullet, and compliance teams are, on the whole, not yet ready to use the technologies.
A panel session at A-Team Group’s recent RegTech Summit London considered these issues in the context of the risks, challenges and opportunities of GenAI and LLMs. The panel was moderated by Andrew Delaney, president and chief content officer at A-Team Group, and joined by Marili Anderson, chief compliance officer at William Blair International; Chris Beevor, UK MLRO and group compliance COO at GAM; Vall Herard, co-founder and CEO at Saifr; and Shaun Hurst, principal regulatory advisor at Smarsh.
The panel tracked the history of AI, noting the incremental increase in its capabilities and the real-world potential offered by GenAI and LLMs. “This is an opportunity to upscale compliance and develop skills other than understanding data,” said one panellist. Another commented: “AI and compliance go hand in hand. Compliance officers’ role is to find exceptions. AI can do this and will have a big impact in compliance.”
Use cases
The panel noted early use cases of GenAI including transaction and communications surveillance, financial crime issues such as anti-money laundering, customer onboarding and screening, KYC, and financial advisory chatbots.
Explaining the onboarding use case, a speaker said: “If you give the name of a person to a GenAI tool you should be able to spontaneously surface whether you should be doing business with this person.” Another added: “The core is finding focused models. GenAI models can do cool things, but you need something specific to test them, perhaps communications surveillance, where you can interrogate the data faster than ever before.”
Risks
Looking at the risks of GenAI and LLMs, an audience poll asked Summit delegates, ‘What do you consider to be the biggest risk around adopting GenAI and LLMs?’ More than half the delegates (52%) noted explainability as the biggest risk. This was followed by potential misuse/risk of misinformation, data quality, data privacy and managing bias.
The speakers concurred with the poll results, highlighting the need for explainability, but also the difficulty of achieving it. “Models with many parameters cannot explain everything they do,” said a speaker. Another added: “Explainability is very important, but getting a framework and model governance right is a struggle. Then you need to make sure the model doesn’t drift.”
A third speaker noted: “Vendors need to take more responsibility for AI. They need to make models explainable – black boxes are no good anymore.”
Data quality was acknowledged as a common challenge across financial institutions, yet key to ensuring AI models learn from the right data. Whether internal or external, the data also needs to be trusted. One solution for surveillance is to continue using a lexicon to flag words of interest while using AI to understand their sentiment, as sketched below. Bias can begin to be addressed by including a diversity of people in labelling the data used by AI models and by keeping a human in the loop when building the models.
Acknowledging that AI is a journey, the panel noted that the fundamentals must be in place before building models. Transparent policies and procedures around AI are key, along with understanding the business case, selecting trusted data, testing the data, putting governance in place, and being ready to scrap models and start again when necessary.
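To make the lexicon-plus-AI approach concrete, here is a minimal sketch in Python: a cheap lexicon scan flags candidate messages, a model-style scorer rates their sentiment, and high-scoring messages are escalated to a human reviewer. The lexicon, the score_sentiment stub and the threshold are illustrative assumptions for this article, not any panellist’s or vendor’s actual implementation.

```python
# Minimal sketch of lexicon-plus-AI communications surveillance.
# The lexicon, threshold, and sentiment stub below are illustrative
# assumptions, not a production implementation.

import re

# Step 1: a lexicon of risk terms flags candidate messages cheaply.
RISK_LEXICON = {"guarantee", "off the books", "delete this", "side letter"}

def lexicon_hits(message: str) -> set[str]:
    """Return the risk terms found in a message (case-insensitive)."""
    text = message.lower()
    return {term for term in RISK_LEXICON if term in text}

# Step 2: score flagged messages for sentiment/intent.
# Hypothetical stand-in for a call to an LLM or trained sentiment model.
def score_sentiment(message: str) -> float:
    """Return a risk score in [0, 1]; higher means more concerning."""
    # Toy heuristic only: a real system would call a model here.
    urgency = len(re.findall(r"\b(now|urgent|quiet|nobody)\b", message.lower()))
    return min(1.0, 0.3 + 0.2 * urgency)

REVIEW_THRESHOLD = 0.5  # assumed cut-off for human escalation

def triage(messages: list[str]) -> list[tuple[str, set[str], float]]:
    """Keep a human in the loop: escalate flagged, high-scoring messages."""
    escalations = []
    for msg in messages:
        hits = lexicon_hits(msg)
        if not hits:
            continue  # lexicon found nothing; skip the model call
        score = score_sentiment(msg)
        if score >= REVIEW_THRESHOLD:
            escalations.append((msg, hits, score))  # route to a reviewer
    return escalations

if __name__ == "__main__":
    sample = [
        "Let's keep this side letter quiet, delete this after reading.",
        "Meeting moved to 3pm, see you there.",
    ]
    for msg, hits, score in triage(sample):
        print(f"ESCALATE ({score:.2f}) terms={sorted(hits)}: {msg}")
```

The two-stage design keeps the expensive model call away from messages the lexicon never flags, while the escalation step keeps a human in the loop, echoing the panel’s advice.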
In conclusion, the panel said GenAI and LLMs will offer massive benefits in the long term, but there is still a lot to learn. Don’t rush to be first to market and, as one speaker put it: “If you are thinking of putting the technology in, but don’t understand the data – stop.”