How to comply with the EU AI Act

Amanda Khatri

Editorial Manager

The European Union’s (EU) Artificial Intelligence Act, hailed as the world’s first harmonised framework for intelligent technology, ushers in a new era of governance and stringent compliance measures.


The Act will serve as an “effective, proportionate” regulation for tackling risks, said Thierry Breton, the EU’s internal market commissioner, with guardrails for businesses deploying AI across their products and services.


Breaching the requirements could lead to fines of up to €35m or 7% of global annual turnover (whichever is higher), more severe than the penalties imposed under the onerous General Data Protection Regulation (GDPR).


The EU has “taken an important step” to ensure businesses use AI responsibly and “put people first”, added Margrethe Vestager, EU competition commissioner and digital age executive vice-president. 


What is the aim of the EU AI Act?


AI is rapidly transforming many aspects of financial services, from fraud detection, process automation and chatbots to supporting broader organisational decision-making.


However, EU leaders feel these developments have also created new risks for consumers, businesses and the wider market, including data protection issues and algorithmic biases leading to discrimination.


The EU’s legislation aims to address these risks and increase trust and acceptance of AI by consumers.  


Under the Act, AI systems are placed into categories based on their risk levels. 


Unacceptable-risk systems are considered a threat to people and will be banned. High-risk systems are those that could negatively impact safety or fundamental rights, and they must meet strict requirements before they can be placed on the market.


In financial services, there are two high-risk use cases: systems that assess creditworthiness, and systems used for risk assessments and pricing in life and health insurance.


The EU AI Act summarised 


The AI Act introduces a risk-based approach and identifies two high-risk use cases for financial services firms:


  • Assessing the creditworthiness of a person.
  • Risk assessments and pricing for life and health insurance for people.


The term ‘AI system’ is defined in the AI Act as: “a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.


AI systems will be placed in one of these four categories (illustrated in the code sketch after the list):


  1. Unacceptable: An outright ban on AI systems that pose an unacceptable risk. 
  2. High risk: Stringent requirements for AI systems classified as high risk (Parliament has proposed that fraud detection should not be considered high risk). These are systems that could have a detrimental impact on people’s safety or fundamental rights, for example credit scoring or risk assessments. 
  3. Limited risk: AI systems that pose far lower risks but raise transparency concerns. For example, chatbots using AI must make it clear to humans that they are communicating with a machine, and published AI-generated text must be labelled as artificially generated. 
  4. Minimal risk: The EU is allowing the free use of minimal-risk AI, such as AI-enabled video games or spam filters. If individuals can choose to stop using the AI, for example a chatbot, it is not considered high risk.
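To make the tiering concrete, here is a minimal Python sketch of how a compliance team might map internal use-case labels to the four tiers. The labels and the classify helper are hypothetical illustrations, not terms from the Act, and any real classification requires legal analysis against the Act’s annexes.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"   # banned outright
    HIGH = "high"                   # strict pre-market requirements
    LIMITED = "limited"             # transparency obligations
    MINIMAL = "minimal"             # free use

# Hypothetical internal use-case labels, for illustration only.
BANNED_USE_CASES = {"social_scoring", "subliminal_manipulation"}
HIGH_RISK_USE_CASES = {"credit_scoring", "life_health_insurance_pricing"}
TRANSPARENCY_USE_CASES = {"chatbot", "ai_generated_text"}

def classify(use_case: str) -> RiskTier:
    """Map an internal use-case label to a risk tier (illustrative heuristic)."""
    if use_case in BANNED_USE_CASES:
        return RiskTier.UNACCEPTABLE
    if use_case in HIGH_RISK_USE_CASES:
        return RiskTier.HIGH
    if use_case in TRANSPARENCY_USE_CASES:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(classify("credit_scoring"))  # RiskTier.HIGH
```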


High-risk systems will be judged on various factors (captured in the checklist sketch after this list), including:


  • The quality of data sets used
  • Technical documentation 
  • Record-keeping 
  • Transparency and the provision of information to users 
  • Human oversight 
  • Robustness, accuracy and cybersecurity
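As one way to track these factors internally, below is an illustrative sketch of a simple checklist structure; the field names paraphrase the list above and are not official regulatory terms.

```python
from dataclasses import dataclass

@dataclass
class HighRiskChecklist:
    """Illustrative internal tracker for the factors listed above.

    Field names paraphrase the Act's criteria; they are not
    official terms from the regulation.
    """
    data_quality: bool = False
    technical_documentation: bool = False
    record_keeping: bool = False
    transparency_to_users: bool = False
    human_oversight: bool = False
    robustness_accuracy_cybersecurity: bool = False

    def gaps(self) -> list[str]:
        """Return the factors that are not yet evidenced."""
        return [name for name, done in vars(self).items() if not done]

checklist = HighRiskChecklist(data_quality=True, human_oversight=True)
print(checklist.gaps())  # the four factors still outstanding
```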


Low- and minimal-risk systems will not need to follow additional obligations under the Act; however, they are encouraged to adopt voluntary codes of conduct.


How to kickstart compliance with the EU AI Act 


Identify where AI is currently used


The first step is mapping out the areas where AI is currently embedded in your business to build a holistic view of your strategy.


Begin by identifying whether each of your AI systems is classed as high risk, focusing on whether it is designed with robustness, cybersecurity and accuracy in mind.


Providers of high-risk systems are required to report serious incidents, obtain a conformity assessment certification before taking the product or service to market, and provide the relevant technical documentation.
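A minimal sketch of what such an inventory might look like in code is shown below; the system names, vendors and field choices are hypothetical placeholders, not prescribed by the Act.

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of a hypothetical internal AI inventory."""
    name: str
    business_unit: str
    use_case: str                    # e.g. "credit_scoring", "chatbot"
    third_party_vendor: str | None   # vendor dependency, relevant to DORA
    risk_tier: str                   # outcome of the classification exercise

inventory = [
    AISystemRecord("LoanScore", "Retail Lending", "credit_scoring", "VendorX", "high"),
    AISystemRecord("HelpBot", "Customer Service", "chatbot", None, "limited"),
]

# Surface the systems that trigger the Act's strictest obligations:
# incident reporting, conformity assessment and technical documentation.
for system in (s for s in inventory if s.risk_tier == "high"):
    print(f"{system.name}: schedule conformity assessment and technical docs")
```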


Digital Operational Resilience Act’s (DORA) requirements 


Consider how DORA’s mandatory requirements align and interact with the AI Act’s rules, focusing on governance and management of ICT risks including third-party risk management. 


Financial services organisations’ reliance on third-party ICT services will need to be examined in the context of the AI regulation.


AI-powered regulatory technology equips businesses with the knowledge to achieve compliance without duplicating effort, and to manage multiple overlapping requirements.


CUBE’s RegPlatform identifies and enriches every single AI-related regulatory requirement, providing insights so compliance teams can stay ahead of changes. 


Reporting  


As with any new regulatory change, new reporting obligations arise. The AI Act brings with it a surge in required documentation and transparency efforts.  


Manually tackling this mammoth task can prove difficult for under-resourced compliance teams. Delegating this type of work to AI can greatly reduce the time spent on ensuring compliance and increase the time available for high-level thinking such as strategy.
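For teams building their own tooling, a minimal sketch of a timestamped audit-trail record is shown below; the schema is an assumption for illustration, not a format prescribed by the Act.

```python
import json
from datetime import datetime, timezone

def audit_entry(requirement_id: str, status: str, evidence: str) -> str:
    """Build one timestamped audit-trail record.

    The field names are hypothetical, not a prescribed EU AI Act schema.
    """
    record = {
        "requirement_id": requirement_id,   # internal reference to the rule
        "status": status,                   # e.g. "compliant", "in_progress"
        "evidence": evidence,               # link to supporting documentation
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record)

print(audit_entry("AIA-record-keeping", "in_progress",
                  "docs/model-cards/loanscore-v3.md"))
```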


CUBE’s RegPlatform monitors regulations specific to AI, ensuring your business has met every requirement under the EU AI Act. It also produces timely reports with clear audit trails to demonstrate compliance.  


CUBE comment 


Compliance shouldn’t be an afterthought. Planning ahead and taking key steps to comply with the EU AI Act will reduce regulatory risks. Initial compliance with the Act’s general provisions and prohibited practices is expected as early as 2 February 2025.


CUBE’s ARI leverages the latest technological advancements to provide a holistic approach to compliance management, from identifying relevant requirements and mapping these to your policies to flagging important updates on the horizon and boosting operational efficiency.


With CUBE, your organisation can anticipate and react to emerging AI trends by capturing, translating and enriching regulatory issuances in near real-time.