By Huw Jones
LONDON (Reuters) – Banks and investment firms in the European Union cannot shirk boardroom responsibility and a legal obligation to protect customers when using artificial intelligence (AI), the bloc’s securities watchdog said in its first statement on AI.
The European Securities and Markets Authority (ESMA) on Thursday set out how financial firms regulated in the 27-country bloc can use AI in day-to-day operations without falling foul of the EU’s MiFID securities law.
While AI holds promise in enhancing investment strategies and client services, it also presents inherent risks, and the potential impact on retail investor protection is likely to be significant, ESMA said.
“Importantly, firms’ decisions remain the responsibility of management bodies, irrespective of whether those decisions are taken by people or AI based tools,” ESMA said.
“Central to the use of AI in investment services is the unwavering commitment to act in clients’ best interest, an overarching requirement which applies irrespective of the tools that the firm decides to adopt in the provision of services.”
The statement covers not just instances where AI tools are developed or adopted by a bank or investment firm itself, but also the use of third-party AI technologies, such as ChatGPT and Google Bard, with or without the direct knowledge and approval of senior management, ESMA said.
“The firm’s management body should have an appropriate understanding of how AI technologies are applied and used within their firm and should ensure appropriate oversight of these technologies,” ESMA said.
The statement focuses on compliance with MiFID, and is separate from the EU’s landmark rules on AI that come into force next month, setting a potential global benchmark for a technology used in business and everyday life.
Efforts are also underway at the global level by the Group of Seven economies (G7) to put guardrails in place so the rapidly evolving technology can be developed safely.
(Reporting by Huw Jones; Editing by Sharon Singleton)