An Introduction to the EU AI Act 

In October 2020, the European Council held a special meeting to address the EU’s digital transition, among other topics, and emphasized the importance of defining high-risk artificial intelligence (AI) systems within a clear regulatory framework. This laid the groundwork for the European Commission’s proposal for harmonized rules on artificial intelligence, published on April 21, 2021. After extensive negotiations and revisions, the European Parliament adopted the text in March 2024, and the Council of the European Union gave its final approval on May 21, 2024, marking a major milestone in establishing a comprehensive AI regulatory framework across Europe. 

The EU AI Act aims to harmonize AI regulations across the EU, applying directly to all member states without the need for transposition into national laws. It follows a risk-based approach, meaning that the higher the risk of harm to society, the stricter the rules.  

Overview of the EU AI Act 

The EU AI Act officially took effect on August 1, 2024, though most provisions will only be enforced starting August 2, 2026. Some key provisions apply sooner: the prohibitions on AI practices posing unacceptable risk (together with the Act’s AI literacy obligations) apply from February 2, 2025, and the rules for general-purpose AI models apply from August 2, 2025. 

Key aspects of the EU AI Act include: 

  • Prohibition of Dangerous AI Applications: Certain AI systems, identified as posing unacceptable risks, are banned under the Act. 
  • Technical Standards for High-Risk AI: The Act mandates strict technical requirements for high-risk AI systems, focusing on security, transparency, and accuracy. 
  • Obligations for Supply Chain Participants: Specific role players within the AI supply chain—such as providers, deployers, and distributors—must also meet various compliance obligations. 
  • Rules for General-Purpose AI Models: The Act introduces transparency obligations for all general-purpose AI models and additional risk management obligations (e.g., self-assessments, incident reporting, and cybersecurity requirements) for the most capable and impactful models, which the Act treats as posing systemic risk. 
  • Transparency for Interactive AI: Transparency requirements apply to AI systems that generate content or engage with human users, ensuring users are aware of AI interactions. 

What Is an AI System? 

An AI system is defined as a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. 

To understand an AI system, it’s helpful to start with its core component: the AI model. This model is the “brain” that processes data to identify patterns, make predictions, or extract insights. While the model focuses on data interpretation, the full AI system includes additional components—such as data input, processing, and output delivery—that enable these insights to drive actions or results in real or digital environments. In essence, an AI system uses one or more models to produce meaningful results or actions. 

In simpler terms, an AI system is a machine-based tool capable of operating autonomously to some extent. These systems can often adjust or adapt after deployment, learning from inputs to generate outputs like predictions, content, recommendations, or decisions that impact real or digital environments. To be classified as an AI system under the Act, a system must infer insights or outcomes from its inputs; this means that conventional automation software does not qualify as AI unless it interprets or analyzes data to create new outputs. 
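The inference criterion is easiest to see in a side-by-side comparison. Below is a minimal, hypothetical Python sketch (the lending scenario, figures, and function names are invented for illustration, and the legal classification of any real system depends on the Act’s full definition, not on this code): the first function applies fixed, human-written rules and infers nothing, while the second learns its decision logic from data, which is the kind of inference the Act’s definition targets.

    # Hypothetical illustration only; not a test for legal classification.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # 1) Rule-based automation: the output is fully determined by
    #    human-written rules; nothing is inferred from data. Software
    #    like this generally falls outside the Act's definition.
    def rule_based_loan_check(income: float, debt: float) -> bool:
        return income > 30_000 and debt / income < 0.4

    # 2) An AI model in the Act's sense: it infers, from training data,
    #    how to map new inputs to outputs (here, a prediction).
    X = np.array([[25_000, 5_000], [60_000, 10_000],
                  [40_000, 30_000], [80_000, 12_000]])  # [income, debt]
    y = np.array([0, 1, 0, 1])               # past outcomes: 1 = repaid
    model = LogisticRegression(max_iter=1_000).fit(X, y)

    # The surrounding AI *system* wraps the model with input handling
    # and output delivery so its inference can influence decisions.
    applicant = np.array([[45_000, 9_000]])
    print("Prediction:", model.predict(applicant)[0])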

Who Does the EU AI Act Apply To? 

The Act applies to providers, deployers, importers, and distributors of AI systems when there is a connection with the EU. Specifically, it applies to: 

  • Providers that place AI systems on the market or put them into service in the EU, or that place general-purpose AI models on the EU market, regardless of whether the provider is established or located within the EU or in a third country; 
  • Deployers of AI systems that have their place of establishment or are located within the EU; 
  • Providers and deployers of AI systems that have their place of establishment or are located outside the EU, where the output produced by the AI system is used in the EU; 
  • Importers and distributors of AI systems; 
  • Product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark; 
  • Authorized representatives of providers that are not established in the EU; 
  • Affected persons that are located in the EU. 

Extraterritorial Compliance Requirements 

The EU AI Act imposes compliance obligations on certain organizations outside the EU. Foreign companies that place AI systems or general-purpose AI models on the EU market, or put them into service within the Union, must adhere to the Act’s requirements. Additionally, if an AI system’s output is used within the EU, the provider or deployer of that system—regardless of location—must also ensure compliance with the Act. This extraterritorial reach is designed to protect EU users and uphold standards across borders. The Act’s territorial scope is anticipated to be broadly interpreted, similar to the approach taken with the GDPR. 

Exceptions to the EU AI Act 

The EU AI Act contains some exceptions. For example, it does not apply to AI systems that are marketed, put into service, or used exclusively for military, defense, or national security purposes, regardless of the type of entity carrying out those activities. The Act also does not apply to AI systems or models specifically developed and put into service for the sole purpose of scientific research and development. There is a partial exception for AI released under free and open-source licenses, although it does not apply where the model or system is prohibited, classified as high-risk, or subject to the Act’s transparency obligations. 

Understanding Risk Levels 

The EU AI Act covers the following risk levels: 

  • Unacceptable Risk (Prohibited Systems): AI systems that negatively affect safety or fundamental rights. They include systems designed for social scoring or predictive policing, or that employ certain types of manipulative or exploitative techniques. The category also covers systems used for untargeted scraping of facial images to build facial recognition databases, as well as biometric categorization and identification based on sensitive characteristics in certain circumstances. 
  • High-Risk: AI systems intended for use as safety components of products, or AI systems that are themselves products, covered by EU harmonization legislation and required to undergo third-party conformity assessments. High-risk systems also include other systems specifically listed by the AI Act, including certain systems used in critical infrastructure, migration, motor vehicles, education, employment, essential private or public services and benefits, the administration of justice or democratic processes, or law enforcement. 
  • Limited Risk: This category includes interactive systems like chatbots as well as systems that generate or manipulate content (for example, ChatGPT). 
  • Minimal or No Risk: This includes systems such as spam filters, video games, online shopping recommendations, or inventory management systems. 

Rules That Apply to Each Risk Level 

The following rules apply to each risk level: 

  • Unacceptable Risk (Prohibited Systems): These systems are banned. 
  • High-Risk: Adequate risk assessments must be done on these systems, and mitigation measures must be put in place. Input datasets must be of high quality. Activity must be logged so that results are traceable (a sketch of such logging follows this list). There must be detailed documentation regarding the system and its purpose so that authorities can assess its compliance. Appropriate human oversight is required to limit risk. These systems must also provide a high level of robustness, security, and accuracy. 
  • Limited Risk: Transparency obligations apply. For example, AI-generated content must be labeled as such, and users must be notified that they are interacting with AI systems. 
  • Minimal or No Risk: There are few or no restrictions on these systems. 
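
As an illustration of the logging and transparency obligations above, a provider or deployer might keep a structured, timestamped record of every inference so that results remain traceable and AI involvement is disclosed. The following Python sketch is hypothetical: the Act requires record-keeping for high-risk systems, but it does not prescribe these field names or this file format.

    # Hypothetical traceability log for a high-risk AI system.
    import json
    import uuid
    from datetime import datetime, timezone

    def log_inference(model_version: str, inputs: dict, output: dict,
                      operator: str, path: str = "ai_audit_log.jsonl"):
        """Append one traceable, timestamped record per model decision."""
        event = {
            "event_id": str(uuid.uuid4()),       # unique reference
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,      # ties output to a model
            "inputs": inputs,                    # what the system received
            "output": output,                    # what it produced
            "human_overseer": operator,          # supports oversight duties
        }
        with open(path, "a", encoding="utf-8") as f:
            f.write(json.dumps(event) + "\n")

    # Example: record a decision and flag the output as AI-generated.
    log_inference("credit-scorer-1.3",
                  {"income": 45_000, "debt": 9_000},
                  {"decision": "approve", "ai_generated": True},
                  operator="analyst@example.com")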

Key Obligations Regarding High-Risk AI Systems 

Providers of high-risk AI systems must make sure that these systems undergo conformity assessments and comply with the EU AI Act’s requirements. In some cases, they need to involve conformity assessment bodies. Standalone AI systems will also need to be registered in a dedicated EU database. Before a high-risk AI system can be placed on the market, a provider will need to sign a declaration of conformity, and the system must bear the CE mark. 

Deployers of high-risk AI systems have specific obligations too. They must take appropriate technical and organizational measures to ensure the system is used in accordance with the provider’s documented instructions. They are also required to monitor the functioning of the system and keep records of this monitoring. They must ensure that the persons assigned to implement the instructions and provide human oversight have the necessary competence to do so. Deployers must also inform natural persons that they are subject to the use of a high-risk AI system when it is used to make, or help make, decisions concerning them. In addition, where required, deployers must carry out a data protection impact assessment as defined by Article 35 of the GDPR.

EU AI Act Representative 

If a provider of a high-risk AI system or general-purpose AI model is not established in the EU, it will need to appoint a representative that is established in the EU. The representative can be an individual or an entity such as a company. 

The representative’s main role will be to serve as a contact point for EU authorities. It will need to make sure that all required documentation is prepared and that compliance remains in place. This requirement is comparable to the GDPR’s provision requiring certain entities outside the EU to appoint an EU representative to ensure accountability and compliance with its regulations. 

Next Steps for Business 

If an organization develops, supplies, imports, or uses AI, it should ascertain whether the EU AI Act applies to it. If the Act applies, the organization must determine which risk category its AI system falls into. It should then develop a plan to ensure that it complies with the Act’s requirements and that compliance is maintained. 

Organizations outside the EU should also determine whether they will need to appoint representatives in the EU. VeraSafe can help you interpret the EU AI Act and its requirements. Contact us to learn more.

