
EU Regulation 2024/1689 and How It Will Affect the Use of AI in Europe 

Artificial intelligence has become a key tool in strategic sectors and in decision-making. Until now, Europe lacked a common regulatory framework, but the approval of Regulation (EU) 2024/1689, also known as the Artificial Intelligence Act (AI Act), marks a historic milestone. Its objective is to combine innovation with the protection of fundamental rights.

What Changes with the New Regulation? 

EU Regulation 2024/1689 classifies artificial intelligence according to risk level: unacceptable, high, limited, and minimal. 

Unacceptable-risk systems are prohibited outright. These include AI that manipulates human behaviour, performs social scoring, infers emotions in the workplace or in education, or carries out real-time remote biometric identification in publicly accessible spaces, save for narrow law-enforcement exceptions subject to prior judicial or administrative authorisation. The untargeted scraping of facial images to build biometric databases is also prohibited.

High-risk systems, used in critical sectors such as health, education, or security, may only operate if they comply with safety requirements and conformity assessments. Limited-risk systems must meet transparency obligations, and minimal-risk systems may be used freely, with codes of conduct recommended. 

Furthermore, all content generated or manipulated by AI, including deepfakes, must be clearly labelled. This risk-based approach protects fundamental rights and promotes safe and innovative AI in Europe. 
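
To make the four tiers easier to handle in practice, for example when inventorying systems, the following minimal Python sketch maps each tier to the headline obligation described above. The RiskTier enum and HEADLINE_OBLIGATIONS mapping are illustrative names of ours, not terms from the regulation.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers defined by Regulation (EU) 2024/1689."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # conformity assessment required
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # free use, codes of conduct recommended

# Illustrative summary of each tier's headline obligation; the wording
# paraphrases the article above, not the regulation's legal text.
HEADLINE_OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Prohibited: may not be placed on the EU market.",
    RiskTier.HIGH: "Permitted only after safety requirements and conformity assessment.",
    RiskTier.LIMITED: "Permitted subject to transparency obligations.",
    RiskTier.MINIMAL: "Free use; voluntary codes of conduct recommended.",
}

print(HEADLINE_OBLIGATIONS[RiskTier.HIGH])
```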

Key Dates and Penalties Under the EU AI Act

The EU AI Act establishes a clear timeline: from February 2025, the unacceptable-risk practices described above are prohibited; in August 2026, the obligations for high-risk systems come into effect; and from August 2027, registration of these systems in the public registry becomes mandatory.
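
As a quick way to track these milestones internally, here is a small Python sketch; the day-level dates are assumptions of ours, so verify the exact application dates against the regulation itself.

```python
from datetime import date

# Milestones as summarised above; the exact days are assumptions.
MILESTONES = [
    (date(2025, 2, 1), "unacceptable-risk practices prohibited"),
    (date(2026, 8, 1), "obligations for high-risk systems apply"),
    (date(2027, 8, 1), "public registration of high-risk systems mandatory"),
]

def milestones_in_force(today: date) -> list[str]:
    """Return every milestone already applicable on the given date."""
    return [label for deadline, label in MILESTONES if today >= deadline]

print(milestones_in_force(date(2026, 9, 1)))
# ['unacceptable-risk practices prohibited', 'obligations for high-risk systems apply']
```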

Each Member State must lay down its own rules on fines and penalties, within the ceilings the regulation sets (up to EUR 35 million or 7% of worldwide annual turnover for the most serious infringements). Each must also designate at least one national authority responsible for supervising the application of the regulation and for market surveillance, ensuring compliance with the AI Act.

How Should Companies Using AI Systems Act?

Companies implementing artificial intelligence systems must establish robust governance and risk-management frameworks based on ethics, efficiency, and legality. This is articulated through the GRC model (Governance, Risk, and Compliance); a minimal data-structure sketch follows the list:

  • Governance: Align processes and decisions with the organisation’s strategic objectives. 
  • Risk: Identify, assess, and manage risks associated with AI. 
  • Compliance: Ensure that all activities comply with legal and regulatory requirements. 
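
The sketch below shows one way the three pillars could be captured per system as a simple record; every field name here is a hypothetical illustration, not a term the regulation prescribes.

```python
from dataclasses import dataclass, field

@dataclass
class GRCRecord:
    """Hypothetical per-system record covering the three GRC pillars."""
    system_name: str
    strategic_objective: str                                        # Governance
    identified_risks: list[str] = field(default_factory=list)      # Risk
    obligations_met: dict[str, bool] = field(default_factory=dict)  # Compliance

    def open_compliance_items(self) -> list[str]:
        """Legal or regulatory obligations not yet satisfied."""
        return [item for item, done in self.obligations_met.items() if not done]

record = GRCRecord(
    system_name="cv-screening-assistant",
    strategic_objective="reduce time-to-hire",
    identified_risks=["discriminatory outcomes", "data leakage"],
    obligations_met={"conformity assessment": False, "technical documentation": True},
)
print(record.open_compliance_items())  # ['conformity assessment']
```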

For high-risk systems, the regulation additionally requires the following (see the logging sketch after the list):

  • Up-to-date technical documentation and automatic activity logs. 
  • Clear information to users about capabilities, requirements, scope, accuracy, and human oversight. 
  • The possibility of human oversight during use. 
  • An appropriate level of accuracy, robustness, and cybersecurity, reflected in the documentation. 
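
As a sketch of how automatic activity logging and a human-oversight hook might fit together, the following Python example logs each automated decision and flags low-confidence outputs for review. The field names, the system identifier, and the 0.8 confidence threshold are assumptions for illustration, not requirements taken from the regulation.

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-activity")

def record_decision(system_id: str, inputs: dict, output: str,
                    confidence: float, review_threshold: float = 0.8) -> bool:
    """Append one automated decision to the activity log and flag
    low-confidence cases for human review (threshold is illustrative)."""
    needs_review = confidence < review_threshold
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,
        "output": output,
        "confidence": confidence,
        "needs_human_review": needs_review,
    }
    log.info(json.dumps(entry))  # in production, write to append-only storage
    return needs_review

# A low-confidence output is routed to a human reviewer.
if record_decision("triage-model-v2", {"symptom": "chest pain"}, "low priority", 0.55):
    print("Escalated for human oversight.")
```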

It is crucial to foster a risk-aware culture by training and raising awareness among all stakeholders about the potential impacts of AI. Companies must know exactly where their systems fall within the risk classification, apply the corresponding governance frameworks, and rigorously comply with the obligations established by the EU AI Act.