EU Parliament Approves Landmark Artificial Intelligence Act

The EU Parliament has approved the Artificial Intelligence Act, the world’s first comprehensive regulation of AI. The regulation establishes obligations for AI systems based on their potential risks and level of impact, and is designed to ensure safety and compliance with fundamental rights, democracy, the rule of law and environmental sustainability, while boosting innovation.

The act needs to be formally endorsed by the European Council and will come into force 20 days after its publication in the Official Journal. It will be fully applicable 24 months later, with staggered exceptions: bans on prohibited practices will apply six months after the regulation comes into force, codes of practice after nine months, general-purpose AI rules including governance after 12 months, and obligations for high-risk systems after 36 months.

The regulation covers all types of AI including generative AI and is, no doubt, being scrutinised by capital markets participants as they continue to extend their use of the technology – more on this coming soon.

The act sets out key measures including:

  • Safeguards on general purpose artificial intelligence
  • Limits on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations

It also covers high-risk AI systems that are not specifically identified but are likely to include those used in capital markets. These systems must assess and reduce risks, maintain use logs, be transparent and accurate, and ensure human oversight. Citizens will have a right to submit complaints about AI systems and receive explanations about decisions based on high-risk AI systems that affect their rights.
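
As an illustrative sketch only, the Python snippet below shows one way a firm might approach the use-logging, explanation and human-oversight obligations described above. Every name in it (AIDecisionRecord, score_credit_risk, decide_with_oversight, the review threshold) is hypothetical and is not drawn from the Act, a regulator or any vendor tool.

```python
# Hypothetical sketch: recording each decision of a high-risk AI system in a
# structured use log, with a plain-language explanation and a flag that routes
# borderline cases to a human reviewer. Names and thresholds are illustrative.
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_use_log")


@dataclass
class AIDecisionRecord:
    system_id: str            # identifier of the high-risk AI system
    timestamp: str            # UTC time of the decision
    inputs: dict              # features the model saw, kept for traceability
    output: float             # the model's score or decision
    explanation: str          # plain-language rationale for the decision
    needs_human_review: bool  # flag routing the case to a human overseer


def score_credit_risk(inputs: dict) -> tuple[float, str]:
    """Stand-in for a high-risk AI model; returns a score and a rationale."""
    score = min(1.0, inputs.get("exposure", 0) / 1_000_000)
    return score, f"Score driven mainly by exposure of {inputs.get('exposure', 0)}"


def decide_with_oversight(system_id: str, inputs: dict,
                          review_threshold: float = 0.8) -> AIDecisionRecord:
    score, rationale = score_credit_risk(inputs)
    record = AIDecisionRecord(
        system_id=system_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        inputs=inputs,
        output=score,
        explanation=rationale,
        needs_human_review=score >= review_threshold,  # human oversight gate
    )
    log.info(json.dumps(asdict(record)))  # append a structured use-log entry
    return record


if __name__ == "__main__":
    decide_with_oversight("credit-risk-model-v2", {"exposure": 950_000})
```

Logging each decision as structured JSON keeps the records machine-readable for later audits, while the review flag is just one possible mechanism for escalating decisions to a human.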

To encourage innovation across the board, regulatory sandboxes and real-world testing will have to be established at the national level and made accessible to SMEs and start-ups to develop and train innovative AI before it goes to market.

Related content

WEBINAR

Recorded Webinar: How to leverage Generative AI and Large Language Models for regulatory compliance

Generative AI (GenAI) and Large Language Models (LLMs) offer huge potential for change across capital markets, not least in regulatory compliance where they have the capability to help firms understand and interpret regulations, automate compliance, monitor transactions in real time, and flag anomalies in the same timeframe. They also present challenges including explainability, responsibility, model...

BLOG

S&P Global’s Cappitech Extends Pirum Collaboration for SEC Rule 10c-1a

Pirum, the London-based post-trade services and securities financing automation vendor, and the Cappitech unit of S&P Global Market Intelligence have extended the functionality of their joint Securities Financing Transaction Reporting (SFTR) solution to address the upcoming SEC Rule 10c-1a, which covers the reporting of securities lending transactions. The two originally partnered in 2021 to...

EVENT

TradingTech Summit London

Now in its 14th year, the TradingTech Summit London brings together the European capital markets trading technology industry to examine the latest changes and innovations in trading technology and to explore how technology is being deployed to create an edge at sell-side and buy-side financial institutions.

GUIDE

Entity Data Management Handbook – Fourth Edition

Welcome to the fourth edition of A-Team Group’s Entity Data Management Handbook sponsored by entity data specialist Bureau van Dijk, a Moody’s Analytics company. As entity data takes a central role in business strategies dedicated to making the customer experience markedly better, this handbook delves into the detail of everything you need to do to...