Last updated: 1 January 2026
Overview
The EU's Artificial Intelligence Act (Regulation (EU) 2024/1689) (“AI Act”) is a comprehensive legal framework that aims to foster the development of AI and support its ethical use. The AI Act is now in force, although, in general, its provisions will not apply before 2 August 2026. By way of exception, the provisions under Chapter I (“General Provisions”) and Chapter II (“Prohibited AI Practices”) of the AI Act have been applicable since 2 February 2025, and some other chapters (e.g., Chapter V on general-purpose AI models) have been applicable since 2 August 2025. Until the AI Act is fully applicable, the European Commission is encouraging companies to sign voluntary pledges in connection with the AI Act (see some brief comments on the AI Pact below).
The AI Act will lead to significant changes in the way companies develop, market and use smart digital technologies that include AI. Given AI’s reliance on data, the regulation draws extensively on existing data protection and cybersecurity rules (most notably, the General Data Protection Regulation), echoing, among other concepts, data processing transparency, data retention limits, the implementation of appropriate safeguards to protect data and data breach notification duties.
Who does the AI Act apply to?
The AI Act has a broad scope, including some extraterritorial effect, and applies to the following:
(i) Providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the EU, irrespective of whether the providers are established or located within the EU or not
(ii) Deployers of AI systems that have their place of establishment in the EU or that are located within the EU
(iii) Providers or deployers of AI systems that have their establishment or location outside the EU where the output produced by the AI system is used in the EU
(iv) Importers and distributors of AI systems
(v) Product manufacturers placing on the market or putting into service AI systems, together with their product and under their own name or trademark
(vi) Authorized representatives of providers that are not established in the EU
(vii) Affected persons that are located in the EU
This is not a complete description of the AI Act’s scope; it highlights the main categories of actors to which the AI Act applies. The sections below focus on some of the key obligations under the AI Act, particularly those that apply to providers of AI systems.
The AI Act does not affect the application of the provisions on the liability of providers of intermediary services set out in the Digital Services Act (Regulation (EU) 2022/2065). Further, the AI Act is without prejudice to any applicable EU rules on consumer protection and product safety, with which companies will need to ensure they comply.
AI Act in focus
Definition: Article 3(1) of the AI Act defines an AI system as the following:
A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
This definition is intentionally very broad, with the aim of helping to “future-proof” the applicability of the regulation and of differentiating AI from other software whose output is already pre-determined.
Risk-based approach: The AI Act takes a risk-based approach to regulating AI. Article 5 of the AI Act sets out a number of AI practices that are considered to pose an unacceptable risk and are therefore prohibited.
These include, among others, the following:
i) AI systems that exploit people’s vulnerabilities due to their age, disability or social or economic situation
ii) AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage
iii) Real-time remote biometric identification in publicly accessible places by law enforcement (with some exceptions that apply)
iv) AI systems that deploy subliminal techniques beyond a person’s consciousness or use purposefully manipulative or deceptive techniques
The AI Act further distinguishes between high-risk, limited-risk and minimal- or no-risk AI systems.
- High-risk AI systems are the most heavily regulated systems under the AI Act and are subject to strict obligations before they can be put on the market.
- Limited-risk AI systems are subject to transparency obligations (e.g., deployers of an AI system that generates or manipulates image, audio or video content constituting a deep fake are required to disclose that the content has been artificially generated or manipulated).
- Minimal- or no-risk AI systems are largely left unregulated, although cross-cutting obligations (such as the AI literacy requirements, which apply to providers and deployers of all AI systems) still apply.
Regulatory framework for high-risk AI systems: An AI system is considered high risk when both of the following conditions are met:
i) The AI system is intended to be used as a safety component of a product, or the AI system itself is a product, which is covered by legislation in Annex I (including machinery, toys and medical devices).
ii) The product is required to undergo a third-party conformity assessment to place it on the market or to put it into service under the legislation in Annex I (see above).
AI systems listed in Annex III (e.g., systems relating to the management of critical infrastructure) are also considered high risk, provided that they pose a “significant risk” of harm to the health, safety or fundamental rights of individuals (this would not include, for example, an AI system that is intended to perform only a narrow procedural task). Annex III covers AI systems deployed in the following specific areas: biometrics, critical infrastructure, education, employment, access to self-employment, access to public services, law enforcement, migration and border control, and the administration of justice and democratic processes. A simplified sketch of this classification logic appears below.
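For illustration only, the two routes to high-risk classification described above can be expressed as a short decision function. This is a simplified, non-authoritative sketch: the function and variable names are hypothetical, the inputs assume the relevant legal assessments have already been made, and actual classification requires case-by-case legal analysis.

```python
# Hypothetical, simplified sketch of the two high-risk classification routes
# described above. It is not a substitute for legal analysis.

ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "access_to_self_employment", "access_to_public_services",
    "law_enforcement", "migration_and_border_control",
    "justice_and_democratic_processes",
}

def is_high_risk(
    safety_component_or_annex_i_product: bool,  # condition i) above
    third_party_conformity_assessment: bool,    # condition ii) above
    annex_iii_area: str | None,                 # Annex III deployment area, if any
    poses_significant_risk: bool,               # assumed prior legal assessment
) -> bool:
    # Route 1: both Annex I conditions must be met.
    if safety_component_or_annex_i_product and third_party_conformity_assessment:
        return True
    # Route 2: Annex III systems, unless they do not pose a significant risk
    # (e.g., an AI system performing only a narrow procedural task).
    if annex_iii_area in ANNEX_III_AREAS and poses_significant_risk:
        return True
    return False

# Example: an Annex III employment system assessed as posing a significant risk.
print(is_high_risk(False, False, "employment", True))  # True
```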
Inevitably, there are heightened requirements for high-risk AI systems. For example, these high-risk AI systems must meet the following requirements:
- They must have risk management systems in place.
- They must be developed based on training, validation and testing data that meets certain quality criteria (if they make use of techniques involving the training of AI models with data).
- They must have technical documentation drawn up before they are placed on the market or put into service (and this documentation must be kept up to date).
- They must be designed and developed in such a way that their operation is sufficiently transparent to enable deployers to interpret their output and use it appropriately.
- They must be subject to human oversight.
- They must allow automatic recording of events over the lifetime of the AI system (e.g., to identify situations that may result in a substantial modification).
- They must achieve an appropriate level of accuracy, robustness and cybersecurity.
Regulatory sandboxes: Article 57 of the AI Act provides that EU Member States shall ensure that their competent authorities establish at least one AI regulatory sandbox at the national level to foster AI innovation. These sandboxes will provide a controlled environment to facilitate the development, testing and validation of innovative AI systems for a limited time before the systems are placed on the market or put into service.
Potentially high penalties: Each Member State is required to set out rules on penalties and other enforcement measures. However, the AI Act itself sets certain requirements for the penalties that can be imposed. For example, for noncompliance with the prohibition on AI practices in Article 5 of the AI Act, a company can be subject to a fine of up to EUR 35 million or 7% of its total worldwide turnover for the preceding financial year, whichever is higher.
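To make the penalty ceiling concrete, the following short sketch computes it for a given turnover figure. It assumes, as the AI Act provides for undertakings, that the cap is whichever of the two amounts is higher; the function name is hypothetical.

```python
# Hypothetical sketch of the Article 5 penalty ceiling described above:
# the higher of EUR 35 million and 7% of worldwide annual turnover.

def max_article_5_fine(worldwide_turnover_eur: float) -> float:
    """Maximum possible fine for a prohibited-practice breach (EUR)."""
    return max(35_000_000.0, 0.07 * worldwide_turnover_eur)

# Example: at EUR 1 billion turnover, 7% (EUR 70 million) exceeds the
# EUR 35 million floor, so the ceiling is EUR 70 million.
print(f"EUR {max_article_5_fine(1_000_000_000):,.0f}")  # EUR 70,000,000
```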
Obligations of “providers” in relation to high-risk AI systems
A “provider” is defined to include a person that develops an AI system or a general-purpose AI model (or has one developed) and places it on the market or puts it into service under its own name or trademark. Pursuant to the AI Act, a provider of an AI system has various obligations in relation to high-risk AI systems, including the following (this is not a complete list):
- To ensure that high-risk AI systems comply with the requirements set out in the AI Act (see, for example, the requirements set out above)
- To put a quality management system in place (e.g., with a strategy for regulatory compliance)
- To keep technical and other documentation and records (e.g., logs automatically generated by a high-risk AI system)
- To ensure that the systems undergo the conformity assessment procedure, to draw up a declaration of conformity and to affix a CE marking
- To comply with registration obligations
- To inform users that they are interacting with an AI system (unless this is obvious)
- To establish and document a post-market monitoring system in a manner that is proportionate to the nature of the AI technologies and the risks of the high-risk AI system
- To take corrective action if a high-risk AI system is not in conformity with the AI Act — if the provider becomes aware that a high-risk AI system presents a risk, it is required to immediately investigate the causes and inform the market surveillance authority of the nature of the noncompliance and any relevant corrective action taken
- To report any serious incident to the market surveillance authorities of the Member States where the incident occurred (no later than 15 days after the provider or deployer becomes aware of the serious incident) — a “serious incident” means an incident or malfunctioning of an AI system that directly or indirectly leads to: (i) the death of a person or serious harm to a person’s health; (ii) serious and irreversible disruption of the management or operation of critical infrastructure; (iii) infringement of obligations under EU law intended to protect fundamental rights; or (iv) serious harm to property or the environment
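As a minor illustration of the reporting timeline above, the following sketch computes the 15-day outer limit from the date of awareness. The function name is hypothetical, and the sketch assumes simple calendar days.

```python
from datetime import date, timedelta

# Hypothetical sketch: the 15-day outer limit for serious incident reports,
# counted in calendar days from the day the provider becomes aware.
def serious_incident_deadline(awareness_date: date) -> date:
    return awareness_date + timedelta(days=15)

print(serious_incident_deadline(date(2026, 9, 1)))  # 2026-09-16
```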
What is the current status?
The AI Act entered into force on 1 August 2024. Most of its provisions will apply from 2 August 2026. However, the following is important to note:
- Some rules (e.g., those on prohibited practices) have been applicable since 2 February 2025.
- Some rules have been applicable since 2 August 2025 (e.g., those relating to most penalties and to general-purpose AI models).
- Some rules relating to high-risk AI systems will apply only from 2 August 2027.
In September 2024, the European Commission announced that over 100 companies had joined the AI Pact, agreeing to voluntarily work towards future compliance with the AI Act (e.g., by identifying AI systems likely to be categorized as high risk under the AI Act and by promoting AI literacy and awareness among staff to ensure ethical and responsible AI development).
On 4 February 2025, the European Commission published guidelines on prohibited AI practices.
On 19 November 2025, the European Commission published a proposal regarding the simplification of the implementation of harmonized rules on AI: the “Digital Omnibus on AI”. Major elements of this proposal are as follows:
- Simplified implementation measures: Align high-risk AI rules with the availability of standards and support tools, and clarify the interplay between the AI Act and other EU legislation
- Extended SME benefits: Expand regulatory simplifications (e.g., lighter documentation and more lenient penalties) to small mid-cap companies (SMCs)
- Flexibility and reduced burden: Remove mandatory harmonized post-market monitoring plans and ease registration for providers whose AI systems are used in high-risk areas but only for narrow tasks
- Centralized oversight and compliance support: Assign oversight of general-purpose AI and large platforms to the AI Office, and allow processing of special categories of personal data for bias detection with safeguards
- Innovation facilitation: Broaden AI regulatory sandboxes and real-world testing, including an EU-level sandbox from 2028, benefiting key industries such as automotive
This proposal is currently being debated by EU legislators.
Next steps
Businesses that are involved in the development of AI systems (of any sort) should consider what action they need to take to comply with any applicable provisions of the AI Act (particularly given the high penalties that can apply for noncompliance).