EU’s AI Act Takes Effect: First Compliance Deadline Enforces Strict Bans on Unacceptable-Risk AI


Starting this Sunday, regulators in the European Union will have the authority to prohibit AI systems deemed to pose an “unacceptable risk” or cause harm under the new AI Act.

February 2 marks the first compliance deadline for the EU’s AI Act, a landmark regulatory framework that was approved by the European Parliament last March following years of deliberation. The act officially became law on August 1, and now, the first set of enforcement measures is being implemented.

The AI Act covers applications spanning consumer tools to industrial use cases and classifies AI systems into four distinct risk levels (modeled informally in the sketch after the list):

  1. Minimal risk – Includes AI applications such as spam filters, which require no regulatory intervention.
  2. Limited risk – Covers AI-powered tools like customer service chatbots, which must adhere to transparency requirements.
  3. High risk – Encompasses critical applications such as AI used in medical diagnoses or financial assessments, which will be subject to stringent regulatory scrutiny.
  4. Unacceptable risk – AI systems falling under this category are now banned outright under the new compliance requirements.
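Purely as an illustration, the tiered structure behaves like a lookup from risk category to regulatory consequence. The sketch below is a hypothetical model of the four tiers; the tier names and obligation strings paraphrase the summary above and are not an official schema from the act.

```python
from enum import Enum

class RiskTier(Enum):
    """Hypothetical model of the AI Act's four risk tiers (illustrative only)."""
    MINIMAL = "minimal"            # e.g., spam filters
    LIMITED = "limited"            # e.g., customer service chatbots
    HIGH = "high"                  # e.g., medical diagnosis, financial assessments
    UNACCEPTABLE = "unacceptable"  # e.g., social scoring, subliminal manipulation

# Paraphrased regulatory consequences per tier, per the summary above.
OBLIGATIONS = {
    RiskTier.MINIMAL: "no regulatory intervention",
    RiskTier.LIMITED: "transparency requirements",
    RiskTier.HIGH: "stringent regulatory scrutiny",
    RiskTier.UNACCEPTABLE: "banned outright as of February 2",
}

def obligation_for(tier: RiskTier) -> str:
    """Return the paraphrased obligation attached to a risk tier."""
    return OBLIGATIONS[tier]

print(obligation_for(RiskTier.UNACCEPTABLE))  # banned outright as of February 2
```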

AI Activities Deemed Unacceptable

Article 5 of the act explicitly prohibits AI applications that manipulate individuals, exploit vulnerabilities, or raise significant ethical concerns. These include:

  • AI-driven social scoring systems that assess individuals based on behavioral data.
  • AI techniques designed to subliminally influence or deceive individuals.
  • AI that exploits people’s vulnerabilities related to age, disability, or economic status.
  • Predictive AI models that attempt to forecast criminal behavior based on a person’s appearance.
  • Biometric AI systems that infer characteristics such as sexual orientation.
  • AI tools that collect real-time biometric data in public spaces for law enforcement purposes.
  • Emotion recognition AI deployed in schools or workplaces.
  • AI systems that expand facial recognition databases by scraping images from online sources or surveillance cameras.

Organizations found using these banned AI applications within the EU will face heavy penalties, regardless of where they are headquartered. Fines can reach €35 million (~$36 million) or 7% of a company’s global revenue from the previous fiscal year, whichever is higher, as the sketch below illustrates.
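To make the “whichever is higher” rule concrete, here is a minimal sketch of the penalty ceiling calculation; the function name and the €1 billion revenue figure are hypothetical, chosen only to show when the 7% prong overtakes the €35 million floor.

```python
def max_fine_eur(prior_year_global_revenue_eur: float) -> float:
    """Upper bound on fines for prohibited AI practices:
    the greater of EUR 35 million or 7% of prior-year global revenue."""
    FLAT_CAP_EUR = 35_000_000
    REVENUE_SHARE = 0.07
    return max(FLAT_CAP_EUR, REVENUE_SHARE * prior_year_global_revenue_eur)

# Hypothetical example: 7% of EUR 1 billion is EUR 70 million, which exceeds
# the EUR 35 million floor, so the revenue-based figure applies.
print(f"EUR {max_fine_eur(1_000_000_000):,.0f}")  # EUR 70,000,000
```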

When Will the Penalties Take Effect?

Despite the February 2 compliance deadline, fines and enforcement measures will not be immediate. Rob Sumroy, head of technology at UK law firm Slaughter and May, emphasized in an interview with TechCrunch that the next major deadline falls in August. By then, the EU will have designated the authorities responsible for enforcement, and financial penalties will begin to apply.

Voluntary Compliance and Industry Response

In anticipation of the AI Act’s implementation, over 100 companies, including Amazon, Google, and OpenAI, voluntarily signed the EU AI Pact last September. This non-binding agreement encourages early adoption of the AI Act’s principles, particularly regarding high-risk AI classification. However, notable holdouts included Apple and Meta, as well as French AI startup Mistral, one of the regulation’s harshest critics.

That said, organizations that did not join the pact must still comply with the AI Act’s restrictions. According to Sumroy, the prohibited practices outlined in the act are ones most companies are unlikely to engage in anyway, which limits the potential for violations.

Exceptions to the AI Act’s Restrictions

While the AI Act imposes strict limitations, there are certain exemptions. Law enforcement agencies can deploy AI-driven biometric surveillance tools in public spaces if they are conducting a “targeted search” for missing persons or addressing imminent threats to life. However, such uses require prior authorization from the appropriate governing body, and a decision that produces an adverse legal effect on a person cannot be based solely on these systems’ outputs.

Additionally, AI-driven emotion recognition systems may be permitted in workplaces and educational institutions if there is a valid medical or safety justification, such as therapeutic applications.

Looking Ahead: Unanswered Questions and Future Regulations

The European Commission pledged to release additional guidelines in early 2025, following a stakeholder consultation last November, but those guidelines have yet to be published.

A key concern for organizations remains the interaction between the AI Act and existing legal frameworks, including the General Data Protection Regulation (GDPR), the NIS2 Directive on cybersecurity, and the Digital Operational Resilience Act (DORA). Sumroy highlighted the importance of understanding how these regulations overlap, especially regarding incident reporting requirements and compliance obligations.

As enforcement deadlines approach, companies operating in the EU must stay vigilant and ensure their AI deployments align with the evolving regulatory landscape. The coming months will clarify how the AI Act integrates with broader digital governance policies, setting the tone for future AI regulations worldwide.