The EU AI Act, adopted in 2024, is the world's first comprehensive regulatory framework for artificial intelligence. It classifies AI systems into four risk categories: unacceptable (banned), high-risk (heavily regulated), limited risk (transparency obligations), and minimal risk (largely unregulated). The bans on prohibited practices — including social scoring and real-time remote biometric identification in public spaces, subject to narrow law-enforcement exceptions — took effect in February 2025.
The Act requires high-risk AI systems — those used in hiring, credit scoring, law enforcement, and healthcare — to meet standards for transparency, human oversight, data quality, and documentation. Providers of general-purpose AI models (the Act's term covering foundation models) must publish summaries of their training content and maintain technical documentation; models deemed to pose systemic risk, identified partly by training compute thresholds, face additional evaluation and reporting obligations.
As with GDPR, the Brussels Effect is already visible: companies building AI systems for global markets are designing to EU standards rather than maintaining separate compliant and non-compliant versions. Canada, Brazil, and other jurisdictions are studying the AI Act as a template for their own regulations. The EU is once again exporting its regulatory standards — shaping how AI is developed and deployed globally by setting the most demanding compliance bar.