
EU AI Act enforcement begins: new rules for general-purpose AI models

Exploring the EU AI Act’s new obligations for general‑purpose AI models, which entered into force in August 2025


EU AI Act enters force for General‑Purpose AI models

On 2 August 2025, the European Union brought the second wave of its AI Act into force. The new provisions apply specifically to general‑purpose AI (GPAI) models—large models trained on enormous data sets that are capable of performing many different tasks. Companies developing such models must meet a set of binding rules aimed at boosting transparency and accountability.

Under the AI Act, a GPAI model is one trained using more than 10²³ floating‑point operations and capable of generating text, code, images or other outputs for a wide variety of applications. These foundational models underpin chatbots, coding assistants and other generative AI tools used across industries.
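Whether a model crosses the Act’s compute thresholds can be roughly estimated from its parameter count and training‑token count. A minimal sketch, using the widely cited approximation that training compute is about 6 × parameters × tokens (a heuristic from the scaling‑laws literature, not part of the regulation itself; the model sizes below are illustrative, not real filings):

```python
# Rough estimate of training compute against the AI Act's FLOP thresholds.
# The 6 * N * D rule of thumb approximates FLOPs for one pass of training
# a dense transformer with N parameters on D tokens.

GPAI_THRESHOLD = 1e23           # general-purpose AI model threshold
SYSTEMIC_RISK_THRESHOLD = 1e25  # systemic-risk model threshold

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs via the 6*N*D heuristic."""
    return 6 * params * tokens

def classify(params: float, tokens: float) -> str:
    """Map an estimated compute budget onto the Act's two thresholds."""
    flops = training_flops(params, tokens)
    if flops >= SYSTEMIC_RISK_THRESHOLD:
        return "systemic-risk GPAI"
    if flops >= GPAI_THRESHOLD:
        return "GPAI"
    return "below GPAI threshold"

# Illustrative (hypothetical) model sizes:
print(classify(7e9, 2e12))     # 7B params, 2T tokens  -> ~8.4e22 FLOPs
print(classify(70e9, 15e12))   # 70B params, 15T tokens -> ~6.3e24 FLOPs
```

Note that the legal classification depends on the actual cumulative compute used, which providers must assess themselves; this sketch only illustrates the order‑of‑magnitude arithmetic behind the thresholds.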

Key obligations for GPAI providers

  • Provide detailed technical documentation and integration instructions so that downstream developers understand the model’s capabilities, limitations and appropriate uses.

  • Publish a policy describing how the model complies with EU copyright law.

  • Publish a sufficiently detailed summary of the training data used, and implement risk‑management, incident‑reporting and cybersecurity measures to mitigate potential harms.

Compliance timeline and deadlines

The obligations took effect on 2 August 2025, but developers whose GPAI models were already on the market before that date have until 2 August 2027 to comply (digital-strategy.ec.europa.eu). To smooth the transition, the European Commission published a voluntary Code of Practice that providers can sign to demonstrate responsible conduct. Enforcement will be risk‑based, with regulators giving priority to high‑impact models and focusing on transparency and safety measures.

Additional obligations for systemic‑risk models

These are the largest foundation models, trained using more than 10²⁵ floating‑point operations, that can influence markets or public discourse. Providers of systemic‑risk models must carry out rigorous independent testing, conduct detailed risk assessments, implement incident reporting and share summaries of any serious mishaps. They must also perform “red‑teaming” exercises to expose vulnerabilities and demonstrate that safeguards are in place to mitigate misuse and harmful outputs such as hallucinations.

In summary, the EU AI Act aims to create a balanced framework that encourages innovation while ensuring that AI systems are transparent, safe and accountable. Its staged implementation signals Europe’s commitment to regulating general‑purpose AI and may influence how other regions address powerful generative models.

Topics:
AI Act, EU Regulation, General-Purpose AI, AI Compliance, AI Policy
