2026-04-01
EU AI Act: What It Means for SaaS Buyers
On 2 August 2026, the EU AI Act's rules for high-risk AI systems kick in. If you buy SaaS with AI features, you already have obligations.
The EU AI Act has been in force since August 2024. Most of it has been quiet. That changes on 2 August 2026, when the rules for high-risk AI systems under Annex III become enforceable. If you're a business that buys SaaS, this matters — even if you never wrote a line of AI code.
The timeline in brief
The Act rolled out in phases:
- 2 February 2025 — Prohibited AI practices banned (e.g. social scoring, subliminal manipulation). Also when AI literacy obligations kicked in.
- 2 August 2025 — Rules for general-purpose AI (GPAI) models became applicable.
- 2 August 2026 — High-risk AI systems under Annex III must comply. This is the deadline most B2B software buyers need to care about.
- 2 August 2027 — Extended deadline for high-risk AI embedded in regulated products (medical devices, machinery, vehicles).
Provider vs deployer: which one are you?
The Act assigns obligations based on your role.
A provider is an entity that places an AI system on the market under its own name: your CRM vendor with a built-in lead scoring model, your recruitment platform with automated CV screening, your credit-scoring tool. They built it, so they are responsible for conformity assessments, CE marking, registration in the EU database, and providing technical documentation.
A deployer is an entity that uses an AI system in a professional context. That's most SaaS buyers. You didn't build the model, but you're using it in your business operations.
Being a deployer doesn't mean you have no obligations. It means your obligations are lighter — but not zero.
What deployers must do for high-risk systems
If a SaaS tool you're using qualifies as high-risk under Annex III, you have obligations under Article 26:
- Follow the provider's instructions for use
- Keep logs of AI system operation for at least 6 months
- Inform and train employees who interact with the system
- Maintain meaningful human oversight — a human must be able to review, override, or stop decisions
- Not use the system for purposes beyond what the provider intended
Some deployers must also complete a fundamental rights impact assessment before first use (Article 27): bodies governed by public law, private entities providing public services, and deployers using high-risk AI for credit scoring or for risk assessment and pricing in life and health insurance. The assessment covers how the system might affect people's rights, who is affected, and how human oversight will work.
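The six-month log retention floor from the deployer obligations above is easy to check against your tool inventory. A minimal Python sketch, with invented tool names and retention settings standing in for real product data:

```python
# Sketch: flag AI tools whose configured log retention falls short of the
# Article 26 minimum of six months. All tool names and retention values
# below are hypothetical examples, not real vendor settings.
MIN_RETENTION_DAYS = 183  # roughly six months

tools = [
    {"name": "hr-screening-saas", "log_retention_days": 365},
    {"name": "credit-scoring-api", "log_retention_days": 90},
]

def retention_gaps(inventory, minimum=MIN_RETENTION_DAYS):
    """Return the names of tools whose log retention is below the minimum."""
    return [t["name"] for t in inventory if t["log_retention_days"] < minimum]

print(retention_gaps(tools))  # ['credit-scoring-api']
```

Running a check like this against your real inventory is a quick way to find tools where you need to adjust settings or export logs to your own storage.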
Which Annex III categories hit SaaS buyers
Annex III lists eight domains where AI is classified as high-risk. Several of these map directly onto common SaaS tools:
Employment and HR — AI for screening job applications, evaluating candidates in interviews or tests, making promotion or termination decisions. If your HR platform uses automated scoring, it's high-risk.
Essential private services — AI that evaluates creditworthiness or calculates credit scores. If your fintech or lending software uses AI for risk scoring, it's in scope.
Education and vocational training — AI that determines access to education, evaluates student performance, or monitors behavior during exams.
Biometrics — Remote biometric identification systems and AI that categorizes people by sensitive attributes. Exceptions exist for pure identity verification.
If you're uncertain whether a tool qualifies, the European Commission's AI Act Service Desk is the right place to start.
Article 50: transparency for all AI tools
Even if none of your tools are high-risk, Article 50 applies from 2 August 2026 to any AI that interacts with users or generates content:
- Chatbots deployed on your site must tell users they're talking to an AI
- AI-generated text, images, or video must be labeled as AI-generated
- Voice synthesis that imitates a real human voice must be identified as synthetic
If you're embedding third-party chatbots or using AI content tools, check what disclosure controls the vendor provides.
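If the vendor exposes raw responses rather than handling disclosure itself, the logic can live in a thin wrapper on your side. A sketch only: the function names and disclosure wording here are mine, not from the Act or any vendor SDK, and the Act requires clear disclosure rather than this specific text.

```python
# Sketch: prepend an AI disclosure on the first turn of a chat session and
# label AI-generated content. Wording and placement are illustrative.
AI_DISCLOSURE = "You are chatting with an AI assistant, not a human."
AI_CONTENT_LABEL = "[This text was generated by AI.]"

def wrap_reply(bot_reply: str, first_turn: bool) -> str:
    """Return the bot reply, with the disclosure prepended on the first turn."""
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{bot_reply}"
    return bot_reply

def label_generated(text: str) -> str:
    """Append an AI-generated label to content produced by an AI tool."""
    return f"{text}\n\n{AI_CONTENT_LABEL}"

print(wrap_reply("How can I help you today?", first_turn=True))
```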
What to ask your SaaS vendors now
Before August 2026, work through your AI-enabled tools and ask each vendor:
- Is this system classified as high-risk under Annex III? They should know.
- Have you completed a conformity assessment? High-risk providers are required to do this.
- Is the system CE-marked and registered in the EU AI database? Required for high-risk systems.
- Can you provide technical documentation under Annex IV? You're entitled to documentation that supports your own compliance.
- What instructions for use have you published? Article 13 requires providers to document intended use and limitations.
- Are you compliant with GPAI obligations? If the vendor uses an underlying model (GPT-4, Claude, Mistral, etc.) and has substantially customized it, they may have taken on GPAI provider obligations.
If a vendor can't answer these questions, that's useful information.
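The questions above can be tracked as a simple per-vendor checklist. A sketch, assuming invented vendor names and answer data:

```python
# Sketch: track which EU AI Act due-diligence questions each vendor has
# answered. Vendor names and recorded answers are invented for illustration.
QUESTIONS = [
    "high_risk_under_annex_iii",
    "conformity_assessment_done",
    "ce_marked_and_registered",
    "annex_iv_docs_available",
    "instructions_for_use_published",
    "gpai_obligations_addressed",
]

answers = {
    "acme-crm": {"high_risk_under_annex_iii": True,
                 "conformity_assessment_done": True},
    "hirebot": {},
}

def open_items(vendor_answers):
    """Return the questions a vendor has not answered at all."""
    return [q for q in QUESTIONS if q not in vendor_answers]

for vendor, recorded in answers.items():
    print(f"{vendor}: {len(open_items(recorded))} open questions")
```

A vendor with a long list of open items before August 2026 is a procurement risk worth flagging early.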
The penalty structure
The Act's maximum fines exceed GDPR's (€20 million or 4%). Each tier is the higher of the fixed amount or the turnover percentage:
- Up to €35 million or 7% of global annual turnover for violations of the prohibited AI practices
- Up to €15 million or 3% of global annual turnover for non-compliance with high-risk obligations
- Up to €7.5 million or 1% of global annual turnover for supplying incorrect information to authorities
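For undertakings, each tier caps at the higher of the fixed amount and the turnover percentage. The arithmetic, with an invented turnover figure:

```python
# Sketch: the fine ceiling for an undertaking is the higher of the fixed
# amount and the percentage of global annual turnover. The €2bn turnover
# used below is an invented example, not a real case.
def fine_ceiling(fixed_eur: float, pct: float, turnover_eur: float) -> float:
    """Return the maximum possible fine for the given tier and turnover."""
    return max(fixed_eur, pct * turnover_eur)

# High-risk tier for a company with €2bn global turnover: 3% of turnover
# (€60m) exceeds the €15m fixed amount, so the ceiling is €60m.
print(fine_ceiling(15_000_000, 0.03, 2_000_000_000))  # 60000000.0
```

For small companies the fixed amount usually dominates; for large ones the percentage does, which is what makes these ceilings bite at any scale.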
Enforcement is handled by national market surveillance authorities in each EU member state, with the European Commission's AI Office overseeing GPAI models directly.
European AI vendors and the Act
European AI companies — Mistral (France), Aleph Alpha (Germany), SambaNova's European operations — are building their products with the Act in mind from the start. That's not an automatic compliance guarantee, but it does mean product decisions around documentation, oversight, and transparency are shaped by the regulatory context from the beginning, rather than retrofitted later.
For buyers evaluating AI-powered SaaS, asking European vendors these compliance questions often produces clearer answers than asking US-headquartered vendors who are adjusting to EU law as a secondary requirement.
The full text of the Act is available at artificialintelligenceact.eu, an unofficial AI Act Explorer that is considerably more readable than the Official Journal version.