The EU AI Act is the European Union's comprehensive AI regulation, in force since August 2024 with phased application through 2027. It is the first horizontal AI law in any major jurisdiction, and like GDPR before it, its territorial scope sweeps in non-EU companies whose AI systems are used in the EU or whose outputs are intended for use in the EU.
The Act classifies AI systems by risk. Prohibited practices are banned outright: social scoring, real-time remote biometric identification in publicly accessible spaces (with narrow law-enforcement exceptions), and manipulative AI that exploits vulnerabilities. High-risk systems (those used in employment, education, essential services, law enforcement, biometric categorization, and a list of safety-critical product areas) carry heavy compliance obligations: risk management, data governance, technical documentation, transparency, human oversight, accuracy and robustness testing, conformity assessments, and registration in an EU database. Limited-risk systems carry transparency obligations: users must be told they are interacting with AI. Minimal-risk systems face essentially no new obligations.
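The tiered structure above can be sketched in code. This is an illustrative toy, not a legal tool: the tier names, the obligation lists, and the `classify` heuristic are simplifications of the examples in this section, not the Act's actual annexes or definitions.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers described in the text, as a toy enumeration."""
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Headline obligations per tier, paraphrased from the section above.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["banned outright"],
    RiskTier.HIGH: [
        "risk management", "data governance", "technical documentation",
        "transparency", "human oversight",
        "accuracy and robustness testing", "conformity assessment",
        "EU database registration",
    ],
    RiskTier.LIMITED: ["disclose to users that they are interacting with AI"],
    RiskTier.MINIMAL: [],  # essentially no new obligations
}

# Hypothetical domain list drawn from the high-risk examples in the text;
# the real classification turns on the Act's annexes, not a keyword match.
HIGH_RISK_DOMAINS = {
    "employment", "education", "essential services",
    "law enforcement", "biometric categorization",
}

def classify(domain: str) -> RiskTier:
    """Toy classifier: map a use-case domain to a risk tier."""
    return RiskTier.HIGH if domain in HIGH_RISK_DOMAINS else RiskTier.MINIMAL
```

A vendor triage script built on this shape would, for example, flag `classify("employment")` as high-risk and surface the full obligation list, while a weather-forecasting feature would fall through to minimal risk.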
The Act also regulates general-purpose AI (GPAI) models, the foundation models behind tools like ChatGPT, Claude, and Gemini. Providers of GPAI models face documentation, training-data summary, copyright-policy, and transparency obligations. Models deemed to present "systemic risk" face additional requirements around evaluations, incident reporting, and cybersecurity.
For SaaS vendors building on AI, the practical question is whether any of their offerings fall into the high-risk classification. Most do not, but some that operate at the boundary (HR tools, education tools, credit and insurance underwriting tools) will. The compliance lift for high-risk systems is significant, and starting the analysis early is the only way to avoid a scramble against a regulatory deadline.