The EU AI Act: Where Do We Stand in 2025?

BSR discusses the status of the EU AI Act—in terms of provisions that have come into force and those yet to do so—as well as potential changes to the AI Act in the future.


08.05.2025

Sponsored

Richard Wingfield, BSR

The European Union’s Artificial Intelligence Act (EU AI Act) is the most significant piece of legislation in the world regulating artificial intelligence. In 2024, we published a two-part series on the AI Act looking at what it means for businesses and recommendations to help companies start preparing. Twelve months on, this new blog post looks at the status of the AI Act—both in terms of provisions that have come into force and those yet to do so—as well as potential changes to the AI Act in the future. 

What is the status of the EU AI Act? 

The EU AI Act officially entered into force in August 2024; however, the provisions did not immediately apply. Instead, the legislation took a staggered approach, with different requirements coming into force at different times over a three-year period. 

The first provisions to come into effect, on February 2, 2025, prohibited certain AI practices. Most of these prohibitions concern uses of AI that are largely hypothetical rather than widespread within the EU, such as the use of AI to manipulate or deceive people into changing their behavior in ways that cause harm, or the use of AI to predict the likelihood of a person committing a criminal offense. Others focus on AI likely to be used by government actors, including law enforcement agencies, rather than by the private sector.

A prohibition more directly relevant to companies is the ban on using AI systems to determine or predict people's emotions in workplace settings (with exceptions for safety reasons), an area where commercially available software already exists. In February 2025, the EU Commission published guidelines on these prohibited AI practices. These provide further clarity on all of the prohibited practices, including workplace emotion detection or prediction (e.g., clarifying that "emotions" does not include gestures, facial expressions, or whether a person might be in pain or tired). Companies using any sort of technology or software that determines or predicts people's emotions in the workplace should have stopped this already, or risk fines of up to €35 million or 7 percent of global annual turnover for the preceding year, whichever is higher.
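To make that penalty ceiling concrete, the short sketch below works through the "whichever is higher" rule described above. It is a minimal illustration only, not legal advice: the function name and the example turnover figure are hypothetical, and actual fines are set by regulators and may be far lower than the cap.

```python
# Illustrative sketch of the AI Act's fine cap for prohibited practices:
# the higher of EUR 35 million or 7% of the preceding year's global
# annual turnover. Hypothetical example, not legal advice.

FIXED_CAP_EUR = 35_000_000
TURNOVER_RATE = 0.07  # 7 percent of global annual turnover


def max_fine(global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a prohibited-practice breach."""
    return max(FIXED_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)


# Example: a company with EUR 1 billion in turnover faces a cap of
# EUR 70 million, since 7% of turnover exceeds the EUR 35 million floor.
print(f"{max_fine(1_000_000_000):,.0f}")  # 70,000,000
```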

The remaining provisions are not yet in force; the next set of requirements will apply from August 2, 2025. First, EU member states will at that point need to designate the independent organizations ("notified bodies") responsible for assessing the conformity of high-risk AI systems before they can be placed on the EU market.

Second, there will be new rules for General-Purpose AI (GPAI) models, namely models (like large language models) that can be adapted to a wide range of tasks. Providers will need to keep up-to-date technical documentation, provide information and documentation to downstream providers integrating the models into their own AI systems, establish a policy to respect EU copyright law (particularly regarding training data), and publish a detailed summary of the content used for training. Additional, more stringent obligations will apply to GPAI models identified as posing systemic risks, defined as including "actual or reasonably foreseeable negative effects on...fundamental rights." These include requirements to perform model evaluations, assess and mitigate possible systemic risks, ensure adequate cybersecurity protection, and report serious incidents.

Third, the EU will establish an AI Office and European Artificial Intelligence Board to oversee the enforcement of the legislation, and each member state will designate a national authority with the competence to enforce the legislation at the national level.

After this, the third wave of requirements will come into force on August 2, 2026. These cover a broad range of areas, including measures to promote innovation (such as regulatory sandboxes, where innovators can test new products or services in a controlled environment under the supervision of regulators) and the establishment of an EU database of high-risk AI systems. Two requirements in particular will be important to companies. 

First, there will be new transparency requirements aimed at ensuring that individuals know when they are interacting with AI. These include informing people interacting with AI systems (e.g., chatbots) that they are doing so, labeling synthetic content generated by AI (such as text, images, video, or audio) as well as deepfakes, and informing individuals subjected to emotion recognition or biometric categorization. These requirements will be particularly relevant for companies using emotion recognition or biometric categorization outside of the workplace (such as in stores) or creating content using generative AI.

Second, the AI Office and member states will encourage and facilitate the development of codes of conduct for AI systems that are not high risk. While voluntary, these will aim to encourage the adoption of measures similar to those required for high-risk systems, including the ethical development and use of AI and assessing and minimizing the environmental impact of AI systems. Companies looking to position themselves as leaders in responsible AI may want to comply with these voluntary codes of conduct.

The final requirements will come into force on August 2, 2027. These relate solely to high-risk AI systems, which include biometrics and the use of AI in critical infrastructure, education and vocational training, employment, essential private and public services, law enforcement, and the administration of justice. For these systems, there will be a range of requirements relating to their development, including establishing risk management systems, maintaining technical documentation, and ensuring accuracy and robustness, as well as human oversight. There are also rules relating to placing these systems on the EU market, putting them into service, and using them, such as establishing quality management systems, keeping documentation, cooperating with national authorities, and complying with conformity assessments. Part two of our series last year highlighted the many ways in which the requirements for high-risk AI systems necessitate a human rights-based approach, with many of the requirements explicitly incorporating consideration of the potential risks to human rights posed by a system.

Might anything change? 

The EU has shown itself willing to modify even recently adopted regulations when faced with pressure from EU member states and other stakeholders. Notably, a range of sustainability-focused regulations are likely to be amended through the Omnibus Simplification Package, limiting their applicability, scope, and requirements.

In recent months, there have been similar calls for the EU's AI Act to be amended to reduce its requirements. The new U.S. administration has been critical of much of the EU's technology regulation, given its impact on U.S.-based technology companies. While the EU has not made any formal statements on whether amendments to the AI Act are planned, the EU Commission has indicated that there will be a public consultation on challenges in the Act's implementation, a "fitness check" on legislation in the area of digital policy, and a digital "simplification package" by the end of 2025. Depending on the outcome of the consultation and the "fitness check," amendments could be made to the AI Act, whether by removing requirements, narrowing the scope of companies that need to comply, or extending existing deadlines to give companies more time to prepare.

For more information on the AI Act, or to discuss its implications for your business, please contact our Tech and Human Rights team.

This article was originally published on the BSR website "Sustainability Insights" and was written by Richard Wingfield, Director, Technology Sectors, at BSR.
