Regulatory Landscape: EU AI Act Post-2026
The EU AI Act entered into force in August 2024, with obligations phasing in through 2025-2026: prohibitions in February 2025, general-purpose AI rules in August 2025, and most high-risk requirements in August 2026. Here is what it actually requires for general-purpose AI providers and the companies building on them.
What the Act covers
The EU AI Act regulates AI systems placed on the EU market. It applies to providers (who develop a system and place it on the market), deployers (who use one professionally), and importers/distributors. The Act is technology-neutral but specific about use cases.
The risk tiers
- Unacceptable: social scoring, predictive policing of individuals, certain biometric uses. Banned.
- High-risk: hiring, credit, education, critical infrastructure, medical devices. Heavy obligations: risk management, documentation, transparency, human oversight, accuracy testing.
- Limited risk: chatbots, AI-generated content. Disclosure obligations: users must be told they are interacting with an AI, and synthetic media must be labeled.
- Minimal risk: most consumer AI features. Largely unregulated.
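To make the tiering concrete, here is a minimal sketch of the mapping as a lookup. The category names paraphrase the Act and the example use cases come from the list above; the function name and the fallback-to-minimal default are illustrative assumptions, not legal advice.

```python
# Illustrative mapping of the Act's risk tiers to example use cases.
# Tier names paraphrase the Act; the lookup structure is an assumption
# for illustration only, not a compliance tool.

RISK_TIERS = {
    "unacceptable": {"social scoring", "individual predictive policing"},
    "high": {"hiring", "credit", "education",
             "critical infrastructure", "medical devices"},
    "limited": {"chatbot", "generated content"},
    "minimal": {"spam filter", "autocomplete"},
}

def risk_tier(use_case: str) -> str:
    """Return the first tier whose example set contains the use case."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return tier
    return "minimal"  # assumption: default when no higher category applies
```

In practice the classification turns on the Act's annexes and your specific deployment context, not a string match; the point is only that the tier, once determined, drives everything downstream.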
Foundation-model rules
General-purpose AI providers (Anthropic, OpenAI, Mistral, etc.) face a separate tier: disclosure of training data sources, copyright-compliance policies, technical documentation, and evaluations against systemic risks. Models trained with more than 10^25 FLOPs of compute are presumed to pose systemic risk and face deeper obligations, including adversarial safety testing and incident reporting.
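As a rough illustration of the 10^25 FLOPs threshold, the widely used 6·N·D heuristic (training compute ≈ 6 × parameters × training tokens) gives a ballpark estimate. The heuristic and the example model sizes are assumptions for illustration; the Act counts actual cumulative training compute, not this approximation.

```python
# Ballpark training-compute estimate using the common 6*N*D heuristic
# (FLOPs ~ 6 * parameter count * training tokens). Illustrative only:
# the Act's threshold applies to actual cumulative training compute.

SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, per the Act's GPAI systemic-risk tier

def estimated_training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    return estimated_training_flops(params, tokens) >= SYSTEMIC_RISK_THRESHOLD

# Hypothetical example: a 70B-parameter model trained on 15T tokens
# lands at 6.3e24 FLOPs, just under the threshold.
print(estimated_training_flops(70e9, 15e12))   # 6.3e24
print(presumed_systemic_risk(70e9, 15e12))     # False
```

The takeaway: frontier-scale training runs sit near this line, so providers at that scale have to track compute deliberately rather than discover their tier after the fact.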
Practical compliance for builders
If you’re building on top of an LLM API:
- Determine your application’s risk tier. Most are limited or minimal.
- If high-risk: document the system, evaluate accuracy, ensure human oversight is real (a person can actually intervene, not just a checkbox), and register the system in the EU database.
- If limited risk: add disclosure (“you are interacting with an AI”) and content labels for AI-generated media.
The practical 2026 stance: most applications don't need heavy compliance work. The ones that do (HR, healthcare, lending) need a real, funded compliance workstream, not a one-time checklist.