EU AI Act Enforcement Begins: New Rules for General-Purpose Models
The European Union’s AI Act entered into force on 1 August 2024, and its obligations are now taking effect in stages. This article explains the law’s risk-based approach, the new obligations for general-purpose AI models, and what businesses need to know.
Europe’s AI Act, the world’s first comprehensive AI regulation, has officially moved from negotiation to enforcement. Prohibited practices, general-purpose models, and high-risk systems face strict requirements, phased in through 2027, aimed at ensuring safety, transparency, and the protection of fundamental rights.
Overview of the EU AI Act
On 1 August 2024, the European Union’s Artificial Intelligence Act (AI Act) came into force. This landmark legislation establishes a common legal framework for AI across Europe.
The law applies broadly across sectors but excludes:
Systems used exclusively for military, defence, or national security purposes
AI developed and used solely for scientific research and development
Purely personal, non-professional use
By creating a unified framework, the EU aims to both safeguard citizens and provide regulatory clarity for businesses.
A Risk-Based Approach
The AI Act classifies AI systems into four risk tiers:
Unacceptable Risk: Practices such as social scoring by public authorities or manipulative techniques that exploit vulnerabilities are banned outright.
High Risk: Systems used in sensitive areas (e.g., healthcare, law enforcement, credit scoring, critical infrastructure) must meet strict requirements for:
Transparency
Human oversight
Risk assessment
Technical documentation
Limited Risk: Applications such as chatbots must clearly disclose to users that they are interacting with AI, and AI-generated content must be labeled as such.
Minimal Risk: Everyday AI tools face few obligations.
This tiered approach keeps regulation proportionate to the potential harm posed; the sketch below illustrates the idea.
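As a rough mental model (not legal advice), the tiers can be thought of as a lookup from use case to obligations. The following minimal Python sketch makes that concrete; the use-case names and their mappings are illustrative assumptions, not the Act’s legal definitions:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict requirements (transparency, oversight, documentation)"
    LIMITED = "disclosure obligations"
    MINIMAL = "few or no obligations"

# Illustrative mapping only -- the Act defines these categories in detail,
# and real classification requires legal analysis of the specific use case.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "credit_scoring": RiskTier.HIGH,
    "medical_triage": RiskTier.HIGH,
    "customer_chatbot": RiskTier.LIMITED,
    "spam_filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Look up the illustrative risk tier; default to HIGH so unknown
    use cases are reviewed rather than silently waved through."""
    return USE_CASE_TIERS.get(use_case, RiskTier.HIGH)

if __name__ == "__main__":
    for case in ("social_scoring", "customer_chatbot", "unknown_tool"):
        print(f"{case}: {classify(case).value}")
```

Defaulting unknown use cases to the high-risk tier is a deliberately conservative choice for this sketch: in practice, classification depends on legal analysis of the specific deployment.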
General-Purpose AI Models
A key innovation of the Act is its treatment of general-purpose AI models (GPAIs).
Obligations for GPAI providers apply from 2 August 2025, twelve months after entry into force; models already on the market before that date have until 2 August 2027 to be brought into compliance.
During this period, providers must prepare for compliance with obligations such as:
Publishing a sufficiently detailed summary of the content used to train the model
Documenting known or estimated energy consumption
Outlining potential risks and limitations
The goal is to ensure transparency and accountability without stifling innovation in foundational AI technologies; a sketch of what such documentation might look like follows.
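As one way a provider might organize this information internally (the record structure and field names below are assumptions for illustration, not a format prescribed by the Act):

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class GPAIModelDocumentation:
    """Illustrative transparency record for a general-purpose AI model.
    Field names are assumptions for this sketch, not the Act's schema."""
    model_name: str
    provider: str
    training_data_summary: str           # summary of content used for training
    estimated_energy_kwh: float | None   # known or estimated training energy
    known_limitations: list[str] = field(default_factory=list)
    identified_risks: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)

# Hypothetical model and provider, used only to show the structure.
doc = GPAIModelDocumentation(
    model_name="example-gpai-7b",
    provider="Example AI GmbH",
    training_data_summary="Public web text crawled through 2024; licensed news corpora.",
    estimated_energy_kwh=1.2e6,
    known_limitations=["May produce inaccurate answers about recent events"],
    identified_risks=["Generation of misleading or biased content"],
)
print(doc.to_json())
```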
Scope and Provider Responsibilities
Unlike laws centered on consumer rights, the AI Act primarily regulates providers (those who develop and place AI systems on the market) and deployers (those who use them in a professional capacity).
Key responsibilities include:
Ongoing post-market monitoring of deployed AI systems (see the sketch after this list)
Maintaining risk and compliance documentation
Providing clear information to end users about AI involvement in decision-making
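The Act itself does not spell out what ongoing monitoring means in engineering terms. As a minimal sketch, assuming a provider keeps an auditable log of automated decisions and flags those that bypassed human review, it might start as simply as this (the fields and the review rule are illustrative assumptions):

```python
import datetime as dt
import logging

# Minimal post-market monitoring sketch. Assumes the provider logs each
# automated decision and flags those lacking human review; the fields
# and the review rule are illustrative, not mandated by the Act.
logging.basicConfig(level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("ai_monitoring")

def record_decision(system_id: str, input_summary: str,
                    output_summary: str, human_reviewed: bool) -> None:
    """Append one decision record; warn when human oversight was skipped."""
    log.info("system=%s input=%r output=%r human_reviewed=%s",
             system_id, input_summary, output_summary, human_reviewed)
    if not human_reviewed:
        log.warning("system=%s unreviewed decision at %s", system_id,
                    dt.datetime.now(dt.timezone.utc).isoformat())

# Hypothetical high-risk credit-scoring system.
record_decision("loan-scoring-v2", "applicant features", "declined",
                human_reviewed=False)
```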
The law emphasizes protecting fundamental rights while still enabling innovation and competitiveness within the EU.
Implications and Next Steps
The enforcement of the AI Act sets a global precedent in AI regulation.
For Europe: It signals a strong commitment to mitigating risks while fostering trust in AI adoption.
For Businesses: Companies developing general-purpose models must urgently prepare compliance roadmaps.
For the World: Other jurisdictions are expected to look to the EU’s framework as a model for their own regulations.
As the remaining provisions phase in through 2027, further regulatory guidance and technical standards will likely emerge, shaping how AI technologies are developed, deployed, and monitored in Europe and beyond.