UK Finance has today published a report examining how generative Artificial Intelligence (AI) is being utilised by financial services firms to enhance efficiency, improve customer engagement, and foster innovation. The report also outlines the sector’s commitment to managing the associated risks responsibly.
The financial services sector has demonstrated a strong track record in responsibly adopting new technologies, and this report highlights how firms are applying generative AI across seven key areas: customer engagement and personalised marketing, knowledge management, software development, intelligent workflows, fraud prevention, legal analysis, and productivity tools.
Jana Mackintosh, managing director of payments and innovation at UK Finance, said: “Generative AI has created a lot of interest among the public and policymakers. It is an exciting new technology that has real potential, but also brings potential risks that will need to be managed. The financial services sector is currently focused on areas that involve active human oversight and is taking a careful approach. The good news is the sector has a strong track record of innovating responsibly with new technologies, positioning it well to harness the potential of generative AI.”
The report identifies three primary risks linked to generative AI and outlines how firms are mitigating them:
Reliability of outputs: Generative AI models, particularly Large Language Models (LLMs), can produce biased, erroneous, or inappropriate outputs. Firms are addressing this by selecting models carefully, fine-tuning them with specific datasets, and continuously testing outputs.
Data privacy and security: Generative AI systems carry risks such as the potential misuse of personal data. Firms are implementing strong data protection and cybersecurity measures, including personal information filters, to safeguard privacy.
Third-party considerations: Reliance on external AI providers can reduce a firm’s control over its operations. Firms are strengthening their third-party risk management to address this challenge.
The report also stresses the importance of active human oversight in areas such as model training, decision-making, and interpretation. Transparency, high-quality data, and customer trust are highlighted as essential factors for maximising the potential of AI while maintaining compliance and ethical standards.
Looking ahead, the report underscores the need for ongoing collaboration with regulators, government, and customers to support innovation. Building trust through transparency, education, and ethical practices will be critical to fully realising the opportunities presented by generative AI.