Breaking down the EU Artificial Intelligence Act: What Businesses Need to Know and Do

On March 13, 2024, the European Union passed the Artificial Intelligence Act (AI Act), a first-of-its-kind piece of AI-specific legislation that aims to regulate the development and use of artificial intelligence. The act specifies clear requirements and obligations for specific uses of AI, organized around defined risk thresholds ranging from minimal or no risk to unacceptable risk. The act entered into force on August 1, 2024, and is rolling out in phases, with the first milestones having taken effect in February 2025. Noncompliance with the act can result in hefty fines: up to 35 million euros or 7% of a company’s global annual revenue, whichever is higher.
If this feels like déjà vu, you’re not wrong. As it did with the General Data Protection Regulation (GDPR), which went into effect in 2018, the EU is once again paving the way for other nations to implement similar regulations and penalties. And just as GDPR affected any company doing business in the EU, the AI Act applies to developers and deployers of AI systems in the EU, as well as to developers and deployers of AI systems whose output is used in the EU. These organizations need to take measures now so they’re prepared to comply ahead of the deadlines.
The EU AI Act in Brief
First introduced in 2021, the EU AI Act aims to “foster trustworthy AI in Europe.” The goal is to establish “a clear set of risk-based rules for AI developers and deployers regarding specific uses of AI.” Together with the AI Innovation Package, AI Factories, and Coordinated Plan on AI, this set of measures seeks to “guarantee safety, fundamental rights and human-centric AI, and strengthen uptake, investment, and innovation in AI across the EU.”
The AI Act went into effect on August 1, 2024, and full compliance is required by August 2, 2026, with a few notable exceptions:
- February 2, 2025: Prohibitions and AI literacy obligations entered into application.
- August 2, 2025: Governance rules and obligations for general-purpose AI (GPAI) took effect.
- August 2, 2027: Extended transition period ends for the rules governing high-risk AI systems embedded in regulated products.
At its core, the EU AI Act establishes risk thresholds that define the level of oversight needed to ensure regulatory compliance. Companies that develop or use AI must understand where their applications fall within the regulatory framework so they can take steps to comply with the requirements and obligations aligned with their risk level.
How to Prepare Your Organization to Comply with the EU AI Act
Just like GDPR prompted organizations to rethink and retool their internal processes and policies, complying with the AI Act will require many of the same actions. Organizations will need to evaluate and revise existing policies, or in some cases establish new ones, related to the use of AI-powered technologies, as well as data governance, data quality, and data processes. And they will need to assess exposure to the potential ethical and reputational risks associated with using AI throughout the organization. Simple, right? Not exactly.
While establishing a plan to comply with the AI Act may seem daunting, the best place to start is with a few foundational steps.
Assess the risk level for your AI apps
First and foremost, it’s important to understand where your AI applications fall within the regulatory framework. Not only will this assessment define what you need to do, but it will also guide when it must be done.
The four risk levels are minimal risk, limited risk, high risk, and unacceptable risk, and each comes with its own set of regulatory obligations.
- Unacceptable: AI applications that are considered a threat to safety, livelihoods, and human rights. Examples include AI systems that categorize individuals based on sensitive biometric characteristics such as race, religion, or political views, or systems that use social scoring in ways that lead to unfavorable treatment of individuals.
- High risk: AI applications that can put health, safety, or fundamental human rights at risk. Examples include AI systems used in medical devices, for recruitment and performance evaluations, and to manage critical infrastructure such as energy grids or transportation systems.
- Limited risk: AI applications that require transparency and identification related to the use of AI. Examples include chatbots, generative AI content, and AI systems that manipulate images, audio, or video.
- Minimal risk: All other low-risk AI applications such as AI-enhanced video games and spam filters.
You can see a full description of each risk level and a comprehensive list of examples here.
Based on the defined classification, your organization will need to take specific steps in order to comply, with the exception of minimal risk applications, which are unregulated.
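One practical way to operationalize this assessment is to maintain an internal register of AI systems tagged with the act’s risk tiers, so each application’s obligations and review dates stay visible. The sketch below is illustrative only; the field names, example system, and dates are hypothetical and are not prescribed by the regulation.

```python
# Minimal sketch of an internal AI-system register tagged with the AI Act's
# four risk tiers. All names and example values are hypothetical.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited uses (e.g., social scoring)
    HIGH = "high"                  # e.g., recruitment, medical devices, critical infrastructure
    LIMITED = "limited"            # transparency obligations (e.g., chatbots, generated content)
    MINIMAL = "minimal"            # unregulated (e.g., spam filters, AI-enhanced games)


@dataclass
class AISystemRecord:
    name: str
    business_owner: str
    purpose: str
    risk_tier: RiskTier
    uses_gpai_model: bool    # flags exposure to the GPAI obligations
    next_review_date: str    # re-assess when the system or its data changes


# Example entry in the register (hypothetical)
register = [
    AISystemRecord(
        name="resume-screening-assistant",
        business_owner="HR",
        purpose="Ranks applicants for open roles",
        risk_tier=RiskTier.HIGH,
        uses_gpai_model=True,
        next_review_date="2026-02-01",
    )
]

# Systems with obligations to address ahead of the 2026/2027 deadlines
in_scope = [r for r in register if r.risk_tier in (RiskTier.HIGH, RiskTier.LIMITED)]
print([r.name for r in in_scope])
```

However your organization chooses to structure it, the point is the same: every AI application should have a recorded classification, an owner, and a date by which it will be re-assessed.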
Further, new transparency and copyright-related rules aimed at GPAI models entered into application in August 2025. The AI Act defines a GPAI model as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications.”
Providers that use GPAI models should assess their applications against these rules and the potential for systemic risk using three key tools provided by the Commission:
- Guidelines on the scope of obligations for providers of GPAI models
- GPAI code of practice
- Template for the public summary of training content
Complete a gap analysis
If your assessment deems your systems limited or high risk, you’ll need to establish a plan of action to meet the required obligations according to the defined timelines. But before you begin to craft a plan, it’s important to audit your current policies and processes. Determine what exists today that works as-is, what is in place but needs modification, and what is missing altogether.
Once you have a handle on your current state, the next step is to decide how in-depth your plan will be. Some organizations will invest only the minimum time and effort needed to comply with the act, while for others, compliance will serve as an opportunity to establish a more defined, in-depth set of AI policies for use across the organization.
Decide who is in charge and who needs to be involved
Adhering to regulations such as the EU AI Act requires cross-functional collaboration between data and analytics, IT, risk and compliance, and potentially others such as finance and product development, depending on the nature of your organization. But it also requires someone to lead the effort and hold the team accountable for reaching its milestones within the defined timelines.
As AI becomes more pervasive throughout organizations, we’re seeing a new role emerge: the chief artificial intelligence officer (CAIO). This individual is often responsible for demonstrating where and how AI can add value, defining the appropriate measures and oversights to ensure ethical use, and overseeing AI-related compliance efforts.
Depending on your organization, it may or may not be ideal for your chief data officer (CDO) to also take on the role of the CAIO. Since it’s likely that your organization has already undertaken GDPR compliance efforts, your CDO should be familiar with the steps necessary to achieve and maintain this level of regulatory compliance.
Outline your plan and the steps you must take to operationalize it
Creating an EU AI Act compliance program involves defining not just what you need to do today, but also how you will maintain your efforts as your business, your data, and the regulations evolve. Core components of a comprehensive AI compliance program include not just the elements needed to achieve regulatory compliance, but also clear definitions of the ethical and reputational requirements your business deems important as they relate to the responsible use of data and AI. Once defined, the best way to operationalize these components is by rolling out an AI policy that everyone across the organization can reference and adopt.
Establish a process for continuous monitoring of AI throughout the organization
Once your compliance program is established, it’s important to define how the business will continuously monitor operations and identify new applications and areas that fall within the purview of the AI Act. For example:
- New application development or the adoption of new, AI-powered technologies, general-purpose AI models, or generative AI (GenAI) technologies
- Mergers and acquisitions
- New data and new data sources that will train AI models
Each of these scenarios calls for an evaluation of its impact on regulatory compliance and may require you to create a new plan or modify existing plans and policies to remain compliant. Companies that develop or deploy AI systems should establish a standing process to evaluate any new or modified AI technology against the EU AI Act.
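In practice, this monitoring step can be as simple as a trigger list wired into your change-management process: when one of the events above occurs, the affected systems in your register are flagged for re-assessment. The sketch below illustrates the idea under stated assumptions; the event names, matching rules, and register fields are hypothetical and not prescribed by the act.

```python
# Illustrative sketch: flag registered AI systems for re-assessment when a
# change event occurs. Event names and matching rules are hypothetical.
from datetime import date

# Events that should prompt a compliance re-assessment (per the list above)
REVIEW_TRIGGERS = {
    "new_ai_application",
    "new_gpai_or_genai_adoption",
    "merger_or_acquisition",
    "new_training_data_source",
}


def systems_needing_review(event: str, register: list[dict]) -> list[dict]:
    """Return register entries affected by a trigger event."""
    if event not in REVIEW_TRIGGERS:
        return []
    if event == "new_training_data_source":
        # New training data mainly affects systems that train or fine-tune models
        return [r for r in register if r.get("trains_models")]
    return register  # conservative default: review everything


register = [
    {"name": "support-chatbot", "risk_tier": "limited", "trains_models": False},
    {"name": "credit-scoring", "risk_tier": "high", "trains_models": True},
]

for system in systems_needing_review("new_training_data_source", register):
    print(f"{date.today()}: flag '{system['name']}' for AI Act re-assessment")
```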
Looking Ahead: Building a Culture of AI Compliance
In August 2025, the AI Act hit a major milestone with the obligations related to GPAI models going into effect, “bringing more transparency, safety and accountability to AI systems on the market.” From August 2, 2025, providers placing new GPAI models on the market must comply with these obligations. Providers of GPAI models already on the market before that date have until August 2, 2027 to become compliant.
Further, in August 2026, the main body of the EU AI Act becomes applicable, including compliance requirements for developers and deployers of high-risk AI systems. That means organizations that develop and deploy these systems have less than one year to ensure their solutions will comply with the regulation.
While meeting the compliance deadlines is critical, true success comes from creating a lasting culture of compliance that keeps organizations accountable and ready for what’s next. If we learned anything from GDPR, it’s that navigating regulations such as the AI Act demands strong leadership, meticulous planning, and well-defined policies. By embracing a culture of compliance, you’ll not only mitigate the risks associated with using AI models, but you’ll also foster trust in AI throughout your organization.
Important note: The information in this post reflects a summary of the latest publicly available information on the EU AI Act. However, as with any regulation, specific details about the regulation as well as compliance timelines can change over time. Be sure to check official sources such as the EU Artificial Intelligence Act site or the European Commission site for the latest updates and details.
Get a free, no-obligation 30-minute demo of Tamr.
Discover how our AI-native MDM solution can help you master your data with ease!