Four Ways to Kickstart Compliance with the EU AI Act

May 28, 2025
U.S. companies face new obligations under the EU’s landmark AI legislation—here’s how to assess exposure, classify risk, and lay the groundwork for compliance before key deadlines hit.

The European Union’s Artificial Intelligence Act (AI Act) represents the world’s first comprehensive AI regulation, and it is poised to affect companies far beyond Europe’s borders. This complex law governs the development, deployment, and use of AI systems and general-purpose AI models within the EU, but it also has extraterritorial scope and will impose obligations on many U.S.-based organizations that develop, sell, or use AI technologies.

How U.S. Companies Can Prepare for Compliance

The AI Act takes a staggered approach to application. The first set of obligations, on prohibited AI practices and AI literacy, took effect in February 2025. The requirements for providers of general-purpose AI models (GPAI) will take effect on August 2, 2025, with many of the remaining rules scheduled to take effect on August 2, 2026. Now is the time for U.S. companies to kickstart their compliance journeys. Below are four key steps to prepare for this new law.

1.    Determine whether your organization is in scope

The first step is determining whether your organization falls within the scope of the EU AI Act. The AI Act applies broadly to providers, deployers, importers, and distributors of AI systems operating in the EU. It also has extraterritorial reach, extending to organizations outside the EU that either (1) place AI systems on the EU market or put them into service within the EU, or (2) enable AI outputs to be used by individuals located in the EU (subject to certain exceptions). Given this expansive scope, compliance obligations may apply to a wide range of U.S.-based entities across various industries, including cloud service providers, security vendors, and companies offering services such as identity verification, HR tools, customer service chatbots, threat detection, and AI-driven decision-making systems.

2.    Identify where your AI systems land on the risk spectrum and whether they are GPAI

If your organization is covered, the next step is to understand where your AI systems fall on the AI Act’s risk spectrum, which categorizes AI systems based on whether they pose unacceptable risk, high risk, limited risk, or minimal risk. This classification is essential because each tier carries distinct obligations. Organizations should inventory the AI systems currently in use or under development and assess which tier each system falls into in order to identify the applicable requirements. Organizations should take special care to flag products that could be considered high-risk or potentially prohibited under the Act.

A brief description of the tiers and their respective obligations follows:

  • The Act prohibits the use of AI systems in connection with specific practices that entail unacceptable risk, such as social scoring by governments or certain real-time biometric surveillance.
  • High-risk systems, such as those used in critical infrastructure, recruitment, employment, education, life insurance, law enforcement, or identity verification, are subject to strict regulatory requirements under the AI Act. Obligations for providers of these systems include mandatory risk assessments, detailed technical documentation, human oversight mechanisms, and cybersecurity and data governance controls.
  • Limited-risk AI systems are subject to certain transparency and/or marking requirements. These obligations primarily apply to AI systems designed to interact directly with natural persons.
  • Minimal-risk AI systems may be covered by voluntary codes of conduct to be established in the future.

The Act also has specific rules for GPAI, which take effect on August 2, 2025, and apply even to models placed on the market or put into service before that date. GPAI models are trained on large amounts of data using self-supervision at scale, display significant generality, and are capable of competently performing a wide range of distinct tasks. The AI Act imposes stricter rules (e.g., model evaluations, risk mitigation plans, incident reporting, and enhanced cybersecurity measures) for models that pose “systemic risks.”

3.    Design a governance program and take steps toward compliance

With the applicability analysis and AI system inventory complete, your organization should perform a gap assessment against its existing compliance measures. From there, you can identify the steps that still need to be taken. Creating cross-functional playbooks for classification, transparency, and oversight will pay dividends as the Act takes effect and enforcement begins. Examples of the steps that organizations should undertake for compliance readiness include:    

  • Internal Governance: Stand up internal AI governance committees to track use cases, update risk registers, and engage legal, compliance, and security teams.
  • Risk Documentation and Technical Controls: Maintain detailed documentation for AI systems, particularly those categorized as high-risk. Implement technical controls and perform regular risk assessments in line with the AI Act’s requirements.
  • Human Oversight Mechanisms: Ensure qualified personnel can understand, monitor, and, where necessary, override automated decision-making processes to fulfill the AI Act’s human oversight requirements.
  • Third-Party Oversight and AI Literacy: Engage vendors, partners, and contractors to confirm that they maintain appropriate levels of AI governance and literacy, especially where their tools or services are either in-scope of the AI Act or are integrated within your own AI systems or GPAI.
  • Training and Awareness Programs: Implement an organization-wide AI training program with enhanced modules tailored to employees directly involved in AI development, deployment, or oversight.
  • Cyber Readiness: Although the Act does not prescribe specific data protection measures, this exercise provides a good opportunity to review and update your organization’s data and cybersecurity practices. Organizations may have existing obligations regarding EU data protection principles such as data minimization, purpose limitation, and lawful data sourcing, particularly when handling data from EU residents. Adding AI to the mix of products and services may add complexity, as additional security measures may be necessary to prevent adversarial attacks, model manipulation, and unauthorized access, particularly for high-risk systems.

4.    Keep an eye on the U.S. (and other jurisdictions)

The U.S. does not yet have a national AI regulation analogous to the AI Act; however, that does not mean companies can ignore domestic developments. While both the Biden Administration and the Trump Administration have issued executive orders on AI, federal policymaking remains in the early stages. States, by contrast, have been far more active. A robust compliance program should track emerging federal and state laws and consider how they interact with the AI Act.

One notable example of a U.S. state law is the Colorado Artificial Intelligence Act, passed in 2024 and scheduled to take effect in 2026. Like the AI Act, it uses a risk-based approach and imposes obligations on developers and deployers of high-risk AI systems. But there are key differences from the AI Act: the Colorado law is more limited in scope, and it defines high risk more generally rather than codifying specific uses as high risk.

Organizations should also keep an eye on other markets, as additional jurisdictions could follow the EU’s lead on regulating AI.

Conclusion

Preparation should begin now, as the August 2, 2025, compliance deadline is fast approaching. These steps will help in-house professionals operationalize the Act’s requirements and stay compliant amid a fast-moving legal landscape.

About the Author

Mark Brennan | partner in Hogan Lovells’ Washington, D.C. office

Mark Brennan is a partner in Hogan Lovells’ Washington, D.C., office and leads the firm’s global AI working group. In addition to AI issues, he advises clients on online safety, age verification, cloud services, calling/texting laws, and other technology and consumer protection matters. Global internet, technology, and video game companies, as well as household-name clients in the financial, health care, pharmaceutical, and automotive sectors, all rely on Mark to help solve their most pressing challenges.

About the Author

Dan Whitehead | partner and leading practitioner in the fields of AI, cybersecurity, and privacy regulation at Hogan Lovells

Dan Whitehead is a partner and leading practitioner in the fields of AI, cybersecurity, and privacy regulation at Hogan Lovells. He acts for many of the world’s largest technology companies, alongside leading life sciences and financial services clients, advising them on digital regulations, including the AI Act, NIS2, and the GDPR, and supporting organizations with developing digital governance solutions, responding to security incidents, and addressing regulatory investigations.

About the Author

Katy Milner | partner in Hogan Lovells' Global Regulatory practice in Washington, DC.

Katy Milner is a partner in Hogan Lovells' Global Regulatory practice in Washington, DC. Katy has a wealth of experience providing clients with practical guidance and valuable counsel on communications and technology policy issues, including cutting-edge topics such as AI. Katy also assists clients with matters involving wireless services and satellite regulation, cybersecurity and data privacy, spectrum acquisition and utilization, public safety, international telecommunications, and broadband and Internet policy.

About the Author

Ryan Thompson | senior associate in Hogan Lovells’ Global Regulatory practice in Washington, DC

Ryan Thompson is a senior associate in Hogan Lovells’ Global Regulatory practice in Washington, DC, focusing on emerging technology policy, including AI. He advises leading tech companies on regulatory strategy and legal risk across federal agencies, on Capitol Hill, and at the state level. His practice also includes representing wireless operators, satellite providers, and other telecom clients before the FCC, NTIA, FTC, and DOJ.