The World's First Comprehensive AI Law Takes Effect

The European Union has officially begun enforcement of the AI Act, the world's first comprehensive legislation governing artificial intelligence, with compliance auditors now conducting assessments of major technology companies operating in the EU market. The European AI Office, established last year to oversee implementation, confirmed that formal audit processes are underway for more than 40 companies, with the first enforcement actions and potential fines expected by July 2026.

The AI Act, which was adopted in 2024, establishes a risk-based regulatory framework that categorizes AI systems by their potential for harm. Systems classified as "high-risk" — including those used in healthcare, law enforcement, education, and employment — face the strictest requirements, including mandatory conformity assessments, transparency obligations, and human oversight provisions.

Key Requirements Now Enforceable

The first phase of enforcement, which began on April 1, 2026, focuses on obligations for providers of general-purpose AI models, including transparency around training data, documentation of model capabilities, and systemic risk assessments.

"The AI Act is not about slowing innovation — it is about ensuring that AI innovation serves European citizens and respects their fundamental rights. Companies that have prepared will find compliance manageable. Those that have not should prepare for consequences," said Thierry Breton, European Commissioner for Internal Market.

Companies Under Scrutiny

The European AI Office has confirmed that compliance audits are underway for all major AI providers operating in the EU market, including OpenAI, Google DeepMind, Meta, Anthropic, and Microsoft. The audits focus initially on general-purpose AI model compliance, including transparency requirements for training data, model capabilities documentation, and systemic risk assessments.

Companies classified as providers of "general-purpose AI with systemic risk" — a category that includes the most powerful foundation models — face additional obligations including adversarial testing, incident reporting, and cybersecurity measures. The threshold for systemic risk classification is models trained with more than 10^25 FLOPs of compute.
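The compute threshold is a bright-line rule, though estimating a model's training compute is not. As a purely illustrative sketch, the check might look like the following; the 6 × parameters × tokens estimate is a common rule of thumb from the scaling-law literature, not part of the Act, and the example figures are hypothetical:

```python
# Illustrative sketch: checking a model's estimated training compute
# against the AI Act's systemic-risk presumption threshold of 1e25 FLOPs.
# The 6*N*D estimate (~6 FLOPs per parameter per training token) is a
# common rule of thumb, not something the regulation prescribes.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    """True if estimated compute exceeds the 1e25 FLOP threshold."""
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, below the threshold.
print(presumed_systemic_risk(7e10, 1.5e13))  # False

# A hypothetical 500B-parameter model trained on 20T tokens:
# 6 * 5e11 * 2e13 = 6e25 FLOPs, above the threshold.
print(presumed_systemic_risk(5e11, 2e13))  # True
```

In practice the classification also turns on regulatory judgment, not just the compute figure, but the threshold makes the default presumption mechanical.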

Industry Response

Technology companies have had mixed responses to the enforcement timeline. Some, particularly European-based AI companies, have publicly supported the regulation as creating a level playing field and building public trust. Others, especially US-based companies, have expressed concerns about compliance costs, competitive disadvantages relative to unregulated markets, and the challenge of meeting transparency requirements without revealing trade secrets.

OpenAI has established a dedicated EU compliance team of over 50 people and published a detailed compliance report for GPT-5. Google has created a public AI Act compliance dashboard showing the status of its various AI services. Meta has expressed concerns about the transparency requirements' impact on its open-source Llama model strategy, arguing that some requirements are better suited to proprietary systems.

Potential Fines

The AI Act includes substantial penalty provisions. For prohibited AI practices, the most serious category of violation, fines can reach 35 million euros or 7% of global annual turnover, whichever is higher; most other violations carry lower caps of 15 million euros or 3% of turnover.

For context, a 7% fine applied to a company the size of Google or Microsoft could exceed $10 billion. While initial enforcement actions are expected to be more moderate, regulators have signaled that they intend to demonstrate the regulation's teeth early to encourage industry-wide compliance.
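The "whichever is higher" structure means the percentage cap binds large companies while the fixed amount binds small ones. A minimal sketch of the arithmetic, using hypothetical turnover figures rather than real company data:

```python
# Illustrative only: the maximum fine under the "35 million euros or 7%
# of global annual turnover, whichever is higher" rule for prohibited
# practices. All turnover figures below are hypothetical.

def max_fine_eur(global_annual_turnover_eur: float,
                 pct: float = 0.07,
                 floor_eur: float = 35e6) -> float:
    """Whichever is higher: the percentage of turnover or the fixed amount."""
    return max(pct * global_annual_turnover_eur, floor_eur)

# A hypothetical large provider with 300 billion euros of annual turnover:
print(max_fine_eur(300e9))  # ~21 billion euros (the 7% cap binds)

# A hypothetical small company with 100 million euros of turnover:
# 7% would be only 7 million, so the 35-million-euro figure binds instead.
print(max_fine_eur(100e6))  # 35 million euros
```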

Global Implications

The EU's enforcement of the AI Act is being closely watched by regulators worldwide. The United States, which has taken a lighter regulatory approach through executive orders and voluntary commitments, faces growing pressure to develop comparable legislation. The UK, Japan, Canada, and Brazil are all developing their own AI regulatory frameworks, with many drawing inspiration from the EU model.

The concept of a "Brussels Effect" — where EU regulations become de facto global standards because companies find it easier to comply globally than to maintain separate products for different markets — is expected to apply to the AI Act just as it has to GDPR for data protection.

As enforcement begins in earnest, the AI Act will serve as the first real-world test of whether comprehensive AI regulation can effectively govern a technology that is evolving faster than any previous subject of regulation.