EU AI Act Will Be World’s First Comprehensive AI Law

On March 13, 2024, the European Parliament formally approved the EU AI Act, making it the world’s first major set of regulatory ground rules to govern artificial intelligence (AI) technology, including generative AI. After passing final checks and receiving endorsement from the European Council, the EU AI Act is expected to become law in spring 2024, likely May or June.

The EU AI Act will take a phased-in approach. For example, regulations governing providers of generative AI systems are expected to go into effect one year after the law enters into force, while prohibitions on AI systems posing an “unacceptable risk” to the health, safety, or fundamental rights of the public will go into effect six months after entry into force. The complete set of regulations in the EU AI Act is expected to be in force by mid-2026.

Organizations Subject to the EU AI Act

Even if your organization does not have a physical presence in the EU, the act could still apply.

While the EU AI Act impacts businesses operating within the EU, including providers, users, importers, distributors, and manufacturers of AI systems, it also applies to businesses providing services to EU citizens or processing their data. Drawing from the General Data Protection Regulation (GDPR), the law applies extraterritorially to companies that supply goods or services to EU consumers or process data relating to individuals located in the EU. As a result, companies operating outside the EU may be subject to the compliance requirements imposed by the law if they carry out AI-related activities that involve EU users or data.

Key Compliance Requirements

Under the EU AI Act, compliance and IT executives within an organization will be responsible for the AI models they develop and deploy. The law calls for a heightened level of transparency and disclosure concerning the risks AI models present, as well as the governance and oversight that will be applied when the models are in operation.

Businesses subject to the EU AI Act will need to take an inventory of their current AI models and classify them in accordance with the risk ratings set forth in the law (more on the risk ratings in the next section); a simple sketch of what such an inventory might look like appears after the list below. Other compliance requirements include:

  • Conducting AI system assessments
  • Implementing AI system safeguards
  • Establishing effective governance mechanisms
  • Adhering to transparency and disclosure requirements
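
For illustration only, below is a minimal Python sketch of how an organization might structure such an AI model inventory. The risk tiers mirror the four categories described in the next section; the class names, field names, and example entry are hypothetical, not anything prescribed by the law.

    from dataclasses import dataclass, field
    from enum import Enum

    class RiskTier(Enum):
        """The four risk categories established by the EU AI Act."""
        UNACCEPTABLE = "unacceptable"    # banned outright
        HIGH = "high"                    # strict compliance obligations
        GPAI = "general_purpose"         # separate GPAI obligations
        MINIMAL = "minimal"              # transparency obligations only

    @dataclass
    class AIModelRecord:
        """One entry in a hypothetical AI model inventory."""
        name: str
        owner: str                       # accountable team or executive
        intended_use: str
        risk_tier: RiskTier
        assessment_completed: bool = False
        safeguards: list[str] = field(default_factory=list)

    # Example entry: a CV-screening tool, presumed high risk as an
    # employment-related system under the Act.
    inventory = [
        AIModelRecord(
            name="cv-screening-v2",
            owner="HR Technology",
            intended_use="Rank job applicants",
            risk_tier=RiskTier.HIGH,
        ),
    ]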

EU AI Act’s Regulatory Framework

The EU AI Act takes a “risk-based approach” to AI-focused products and services: generally, the riskier the AI application, the stricter the rules and regulatory requirements imposed under the law. The risk-based approach is also evident in the law’s categorization model, under which AI systems are classified by the level of risk they pose. The law establishes four main risk categories:

  • Unacceptable risk
  • High risk
  • General-purpose AI models
  • Minimal risk

Let’s discuss each.

AI Systems Posing an Unacceptable Risk

The new law bans certain AI applications that pose an unacceptable risk to the fundamental rights of EU citizens. Examples of AI systems posing such an unacceptable risk include:

  • Social scoring systems
  • Biometric categorization systems based on sensitive characteristics
  • Untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases
  • Emotion recognition in the workplace and schools
  • Predictive policing (when based solely on profiling an individual or assessing their characteristics)
  • AI applications capable of manipulating human behavior or exploiting people’s vulnerabilities

Although biometric categorization systems are banned as unacceptably risky, the EU AI Act contains a law enforcement exemption under which “real-time” remote biometric identification may be deployed by law enforcement when strict safeguards are met. For example, law enforcement must ensure its use is limited in time and geographic scope and subject to specific prior judicial or administrative authorization.

AI Systems Posing a High Risk

Under the EU AI Act, an AI system is considered high risk in two scenarios:

  • The AI system is a product covered by certain EU harmonization legislation, or is a safety component of such a covered product (for example, toys, medical devices, or machinery), where that legislation mandates a third-party conformity assessment
  • The AI system falls within a list of presumed high-risk uses, such as biometrics, safety components for critical infrastructure, and education- or employment-related systems. It is worth noting, especially for organizations operating in the financial services industry, that credit scoring and pricing for life and/or health insurance are classified as high risk under the law.

If an organization’s AI system is classified as high risk because it falls within a “presumed” high-risk use, the organization may rebut this presumption by demonstrating that the system does not pose a significant risk of harm to people’s health, safety, or fundamental rights. If the presumption cannot be rebutted and the AI system remains classified as high risk, it will need to comply with numerous regulatory requirements, including:

  • Maintaining comprehensive technical documentation
  • Maintaining risk and quality management systems throughout the AI system’s lifecycle
  • Utilizing quality datasets
  • Ensuring the AI system is capable of automatic event recording for traceability and monitoring (a brief logging sketch appears below)

In addition to the requirements listed above, high-risk AI systems will need to complete a conformity assessment before being placed on the EU market.
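
To make the automatic event recording requirement concrete, here is a brief, illustrative sketch of structured decision logging for an AI system using Python’s standard logging module. The event fields and names shown are assumptions chosen for illustration; the Act does not prescribe a specific log format.

    import json
    import logging
    from datetime import datetime, timezone

    # Illustrative only: a structured "event record" logger that captures
    # who invoked the AI system and what it decided, so that individual
    # decisions can be traced and monitored after the fact.
    logger = logging.getLogger("ai_system.events")
    logging.basicConfig(level=logging.INFO)

    def record_event(system_id: str, user_id: str, decision: str, confidence: float) -> None:
        """Append one traceability record for a single AI system decision."""
        event = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "user_id": user_id,
            "decision": decision,
            "confidence": confidence,
        }
        logger.info(json.dumps(event))

    # Example: log one decision made by a hypothetical credit-scoring model.
    record_event("credit-scoring-v1", "user-4821", "application_declined", 0.87)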

If a company makes specific modifications to a high-risk AI system, or places its name or trademark on an existing high-risk AI system, that company could be categorized as a provider of a high-risk AI system and therefore become subject to the above-described compliance requirements.

General-Purpose AI Models (GPAI)

The EU AI Act establishes a separate risk category for GPAI models, which include large-scale generative AI models. Providers of GPAI models are required to maintain detailed technical documentation related to their models. They must also share certain information with providers of AI systems who intend to integrate a GPAI model into their own AI systems.

In addition, GPAI providers must be prepared to disclose information about the content used to train their models, and they must comply with EU copyright law. These requirements are intended to bolster protection for copyright holders. Free and open-source GPAI models (specifically, those that do not pose any systemic risks) are exempt from most of the above-described obligations.

Minimal Risk AI

AI systems that present minimal or no risk (e.g., AI-enabled recommender systems and spam filters) will need only to meet specific transparency obligations, such as informing consumers that they are interacting with an AI system or flagging artificially generated content (a simple disclosure sketch appears below). In addition, organizations deploying minimal-risk AI systems will need to ensure that the personnel managing those systems possess sufficient AI literacy.
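
As a purely illustrative example, a deployer might meet the “inform the consumer” obligation with a simple disclosure wrapper like the following sketch. The function name and message wording are our assumptions, not language from the Act.

    def with_ai_disclosure(response_text: str) -> str:
        """Prefix an AI system's output with a plain-language notice
        so the user knows they are interacting with an AI system."""
        disclosure = "Notice: this response was generated by an AI system.\n\n"
        return disclosure + response_text

    print(with_ai_disclosure("Here are three recommended articles..."))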

Enforcement Authority

When it comes to enforcing the requirements imposed on various AI systems under the EU AI Act, each EU country is expected to designate its own AI “watchdog,” with which citizens will be able to file complaints if they believe they have been the victim of a regulatory violation.

In addition, a new body within the EU Commission – the European AI Office – is expected to manage various administrative, standard-setting, and enforcement tasks related to the law.

The European AI Board, composed of member states’ representatives, will serve as a coordination platform and advise the EU Commission.

Penalties for Noncompliance

Companies subject to the EU AI Act that are deemed out of compliance face hefty fines. The size of the monetary penalty will depend on the type of AI system found to violate the law, the relative size of the company, and the severity of the infringement. Noncompliance penalties are expected to fall within the following ranges (a sketch of the penalty arithmetic follows the list):

  • For violations of the EU AI Act’s obligations, the penalty could be 15 million euros or 3% of a company’s total worldwide annual turnover, whichever is higher
  • For violations involving banned AI applications, the penalty could be 35 million euros or 7% of a company’s total worldwide annual turnover, whichever is higher
  • If a company responds to a formal request from a designated regulatory body with incorrect, incomplete, or misleading information, the penalty could be 7.5 million euros or 1.5% of a company’s total worldwide annual turnover, whichever is higher
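
Because each tier is “whichever is higher” of a fixed cap and a percentage of worldwide annual turnover, the arithmetic reduces to a simple maximum. The following Python sketch is illustrative only; the function name and the example turnover figure are ours, while the caps and percentages come from the tiers above.

    def eu_ai_act_penalty(fixed_cap_eur: float, turnover_pct: float, annual_turnover_eur: float) -> float:
        """Return the maximum possible fine: the fixed cap or the
        percentage of worldwide annual turnover, whichever is higher."""
        return max(fixed_cap_eur, turnover_pct * annual_turnover_eur)

    # Example: a company with 2 billion euros in worldwide annual turnover
    # deploys a banned AI application (the 35M-euro / 7% tier).
    fine = eu_ai_act_penalty(35_000_000, 0.07, 2_000_000_000)
    print(f"Maximum fine: {fine:,.0f} euros")  # 140,000,000 euros (7% > 35M)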

As mentioned, the EU AI Act calls for a case-by-case evaluation when determining whether a company should be penalized. Multiple factors are to be considered in the evaluation, including:

  • The nature, gravity, and duration of the violation
  • The intentional or negligent character of the infringement(s)
  • Any actions taken by the company to mitigate the adverse effects of the violation
  • Any history of prior penalties
  • The size, annual turnover, and market share of the company
  • Any financial gain or loss that resulted from the violation
  • Whether the use of the AI system was for professional or personal activity

The EU AI Act enables individuals to report instances of non-compliance to a relevant market surveillance authority.

How to Comply with the EU AI Act

The time is now for organizations subject to the EU AI Act to assess their AI systems and gauge their level of compliance with the law. Below are recommended action items for strengthening your organization’s compliance posture with the EU AI Act:

  • Take a global approach to compliance. The EU AI Act is leading the way in establishing a formal regulatory framework for AI technology, but it likely won’t be the only law worldwide that tackles AI-related risks. Using the act to set the global baseline for your compliance program (similar to what many companies did when implementing GDPR-related requirements) may save you from the headaches that can emerge as additional AI laws are proposed.
  • Compile existing AI models into a repository. Your organization may be able to use an existing catalogue of software/applications to begin this process. If such a catalogue is not available, consider conducting surveys among different departments in your organization, notably IT and risk departments.
  • Implement an AI governance strategy that aligns with your organization’s key objectives and identifies areas within the organization where AI will most benefit strategic goals. A robust AI governance strategy will also necessitate aligning with initiatives focused on managing both personal and non-personal data assets.
  • Establish policies, procedures, and internal trainings for the assessment of new AI models and systems. Assessments should include properly identifying and mitigating risks with sufficient monitoring throughout the AI system lifecycle. Your organization may be able to take existing risk management processes (e.g., data protection risk assessments, vendor due diligence, audits, etc.) and tailor them to properly assess risks associated with new AI models and systems.
  • Determine what resources, both internal and external, will be required to support your organization’s AI governance activities.

AI governance is a new and quickly changing area of law. If you need assistance with AI-related issues, contact a member of the WRVB Cybersecurity & Data Privacy team.
