The European Union's ("EU") Artificial Intelligence Act (the "AI Act" or "Act"), enacted in Aug. 2024, is the first comprehensive legal framework governing artificial intelligence with global reach. The Act aims to ensure that AI systems are human-centric, trustworthy and safe, with particular emphasis on impacts to health, safety and fundamental human rights. The Act takes a risk-based approach, with obligations varying based on whether an AI system is considered to pose minimal, limited, high or unacceptable risk. Integral aims of the AI Act include transparency and accountability at all stages and uses of AI systems. Organizations located both within and outside the EU must adopt new, or preserve existing, practices to ensure data quality, reduce bias and protect user privacy.

Once Again, EU Regulation Has Extraterritorial Impact

Having déjà vu moments related to the GDPR? Just like the EU's groundbreaking data privacy regulation, the AI Act applies to organizations based in the EU, as well as to U.S. and other non-EU organizations whose AI systems or outputs are used within the EU. Relevant to U.S. businesses, this includes AI services hosted in the U.S. but accessible to EU users, as well as systems whose automated outputs are used in the EU. In short, organizations based in New York and other U.S. states are not necessarily exempt from compliance simply because they are not physically within the EU's borders.

In Feb. 2025, the European Commission issued additional clarifying guidance on the definition of “AI system” and prohibited AI practices. Additional guidance and technical standards are forthcoming.

Categorical Risk Breakdown

The AI Act adopts a tiered, risk-based approach to regulating AI development, use and deployment.

  1. Unacceptable Risk: Certain AI applications are banned outright, such as real-time biometric surveillance in public spaces, social scoring, manipulative behavioral targeting and exploitative tools that harm vulnerable groups (e.g., minors).
  2. High Risk: Systems deemed high-risk include those used in education, employment, healthcare, law enforcement and critical infrastructure, fields and contexts in which consequential socioeconomic decisions are routinely made. Because these systems process sensitive data, they must meet rigorous requirements around risk management, transparency, data quality, non-discrimination, technical documentation and human oversight. They remain subject to continuous monitoring after completing a conformity assessment.
  3. Limited Risk: These AI systems must meet transparency obligations, including disclosures when users interact with AI (e.g., chatbots or deepfake generators). Content generated by AI must be labeled as such.
  4. Minimal or No Risk: Most AI systems (like email spam filters) fall into this category and are not subject to new obligations.

Implementation and Enforcement Timeline

Implementation and enforcement of the AI Act is already underway, with key dates still on the horizon and subject to change:

Industry Impacts for U.S. Organizations

How can your organization stay ahead of compliance deadlines?

While the world awaits further EU guidance spelling out exactly what organizations must do to fulfill their obligations under the AI Act, here are some actions you can take now:

Looking Beyond Europe

Much as the GDPR transformed global data privacy, the EU AI Act is likely to influence U.S. regulation as well. States such as California, Colorado and New York have already enacted AI-specific laws, some of which mirror the EU's risk-based approach. U.S. businesses that proactively align with the AI Act's principles will be well positioned to comply with future domestic laws and maintain competitive access to the global market.

The EU AI Act is a landmark regulation that reshapes the landscape for AI use and deployment. U.S. companies, particularly those in health care, manufacturing, financial services and education, must begin evaluating and updating their AI governance programs now. With steep financial penalties for noncompliance and broad extraterritorial reach, early and decisive action is imperative.

For more information or assistance with AI governance and privacy compliance, contact Bond Schoeneck & King PLLC's artificial intelligence or cybersecurity and data privacy practice groups.

*Special thanks to Summer Law Clerk Sarah Jiva for her assistance in the preparation of this memo.