TL;DR
The EU AI Act’s high-risk obligations become enforceable on 2 August 2026. If your business uses AI for hiring, credit scoring, customer service chatbots, or automated decision-making, you likely have compliance obligations. Here’s what SMEs need to know and do before the deadline hits.
The Clock Is Ticking
On 2 August 2026, the EU AI Act’s high-risk provisions become enforceable. That’s less than five months away. If your business operates in the EU, sells to EU customers, or processes EU citizen data with AI systems, this affects you.
The EU AI Act is the world’s first comprehensive legal framework for artificial intelligence. Adopted in May 2024, it takes a risk-based approach: the higher the potential harm of your AI system, the stricter the rules. Prohibited practices (like social scoring) have been banned since February 2025. General-purpose AI model obligations kicked in August 2025. Now comes the big one: high-risk AI systems.
Most SMEs we speak to have one of two reactions: “This doesn’t apply to us” or “We’ll deal with it when it matters.” Both are dangerous assumptions.
Does This Apply to Your Business?
If you use AI in any of these ways, you almost certainly have obligations under the Act:
- Hiring and recruitment: AI-powered CV screening, candidate ranking, or automated interview analysis
- Credit and insurance: Automated credit scoring, risk assessment, or insurance pricing
- Customer service: AI chatbots that make decisions affecting customer outcomes (not just FAQ bots)
- Access to services: AI systems that determine eligibility for services, benefits, or opportunities
- Workplace monitoring: AI-driven performance evaluation or behaviour monitoring
- Biometric identification: Facial recognition, emotion detection, or biometric categorisation
The Act classifies these as “high-risk” because they directly impact people’s fundamental rights, employment, financial access, or safety. Even if you’re using a third-party AI tool (not building your own), you may still be classified as a “deployer” with specific obligations.
What’s Actually Required
The high-risk requirements fall into several categories. Here’s what matters most for SMEs:
1. Risk Management System
You need a documented process for identifying, analysing, and mitigating risks associated with your AI system. This isn’t a one-off exercise. It must be continuously updated throughout the system’s lifecycle.
In practice: document what your AI system does, what could go wrong, who’s affected, and what safeguards you have in place.
2. Data Governance
Training data must meet quality criteria. You need to demonstrate that the data used to train or fine-tune your AI systems is relevant, sufficiently representative, and examined for biases that could lead to discriminatory outcomes.
If you’re using off-the-shelf AI tools, you’ll need documentation from your provider about their training data practices. Start asking for this now.
3. Transparency and Documentation
Users must be informed when they’re interacting with an AI system. Technical documentation must be maintained, including the system’s intended purpose, limitations, accuracy metrics, and foreseeable risks.
This means your chatbot needs to identify itself as AI. Your automated hiring tool needs to tell candidates that AI is involved in the process. No pretending.
4. Human Oversight
High-risk AI systems must be designed to allow meaningful human oversight. A human must be able to understand the system’s outputs, override decisions, and intervene when necessary.
Fully autonomous decision-making with no human in the loop is essentially off the table for high-risk applications.
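What meaningful oversight looks like in code will vary by system, but the core pattern is simple: route low-confidence outputs to a human reviewer, and keep every automated outcome overridable. Here’s a minimal sketch; the threshold, field names, and outcomes are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass

# Illustrative sketch of a human-oversight gate. The 0.85 threshold
# and the outcome labels are assumptions for the example.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Decision:
    subject_id: str
    outcome: str          # e.g. "shortlist" / "reject"
    confidence: float
    needs_human_review: bool

def gate(subject_id: str, outcome: str, confidence: float) -> Decision:
    """Flag any low-confidence AI output for mandatory human review."""
    return Decision(
        subject_id=subject_id,
        outcome=outcome,
        confidence=confidence,
        needs_human_review=confidence < CONFIDENCE_THRESHOLD,
    )

def human_override(decision: Decision, new_outcome: str) -> Decision:
    """A reviewer can always replace the AI's outcome."""
    return Decision(decision.subject_id, new_outcome,
                    decision.confidence, False)
```

The point isn’t the threshold value; it’s that the override path exists, is documented, and is exercised in practice.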
5. Accuracy, Robustness, and Cybersecurity
Systems must achieve appropriate levels of accuracy. They must be resilient against errors and attacks. Cybersecurity measures must be proportionate to the risk level.
The Penalties Are Serious
Non-compliance isn’t a slap on the wrist. Fines scale with the violation, capped at whichever of the two figures is higher:
- Prohibited practices: Up to €35 million or 7% of global annual turnover
- High-risk non-compliance: Up to €15 million or 3% of global annual turnover
- Providing incorrect information: Up to €7.5 million or 1% of global annual turnover
For SMEs and startups, the Act caps each fine at the lower of the two figures rather than the higher, but even the lower bounds are significant enough to warrant attention.
The Omnibus Complication
There’s been some chatter about the EU’s Omnibus Simplification Package potentially delaying certain obligations. The reality is nuanced: while some high-risk requirements for systems listed in Annex III may see conditional deferrals, the core framework and prohibited practices are firmly in place.
Don’t bank on a delay. The businesses that prepared for GDPR early were in far better shape than those that scrambled at the last minute. The same pattern will play out here.
A Practical Compliance Checklist for SMEs
Here’s what you should be doing right now:
- Audit your AI systems: List every AI tool, model, or automated system your business uses. Include third-party SaaS tools with AI features.
- Classify the risk level: For each system, determine whether it falls under prohibited, high-risk, limited-risk, or minimal-risk categories. When in doubt, assume high-risk and work backwards.
- Request documentation from vendors: If you use third-party AI tools, ask your providers for their EU AI Act compliance documentation now. Reputable vendors should already be preparing this.
- Implement human oversight: Ensure no high-risk AI system operates without meaningful human review. Document the oversight process.
- Update your privacy notices: Add AI-specific transparency disclosures. Tell users when and how AI is used in decisions that affect them.
- Establish a risk management process: Document risks, mitigations, and review cycles. This doesn’t need to be complex for SMEs, but it does need to exist.
- Train your team: Staff who deploy or manage AI systems need to understand their obligations. This is explicitly required under the Act.
- Appoint responsibility: Someone in your organisation needs to own AI compliance. For smaller businesses, this might be your CTO or DPO. For larger teams, consider a dedicated AI governance role.
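The first two checklist steps (audit and classify) work best as a machine-readable inventory rather than a one-off spreadsheet. A minimal sketch in Python; the risk tiers come from the Act, but the example systems and fields are placeholders:

```python
# Illustrative AI system inventory for the audit and classification steps.
# Risk tiers mirror the Act's categories; the entries themselves are made up.
RISK_TIERS = ("prohibited", "high", "limited", "minimal")

inventory = [
    {"name": "CV screening tool", "vendor": "third-party", "risk": "high"},
    {"name": "FAQ chatbot", "vendor": "in-house", "risk": "limited"},
    {"name": "Spam filter", "vendor": "third-party", "risk": "minimal"},
]

def high_risk_systems(systems):
    """Return the systems that trigger the Act's high-risk obligations."""
    assert all(s["risk"] in RISK_TIERS for s in systems), "unknown risk tier"
    return [s for s in systems if s["risk"] == "high"]
```

Keeping this inventory in version control means the audit stays current as tools are added or retired, instead of going stale the day after it’s written.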
How This Intersects with Your Tech Stack
If you’re building products with AI features, compliance needs to be baked into your architecture from the start, not bolted on afterwards. This means:
- Logging and auditability: Every AI decision that affects a user should be logged with enough context to explain why that decision was made.
- Feature flags and kill switches: You need the ability to disable AI features instantly if issues arise.
- Bias monitoring: Regular testing for discriminatory outputs, particularly in hiring, credit, and access-to-services applications.
- Documentation-as-code: Keep your AI system documentation in version control alongside your codebase. It evolves with the system.
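The first two points above, decision logging with context and a kill switch, can be sketched in a few lines. Everything here (the flag name, log fields, and hard-coded outcome) is illustrative, not a prescribed implementation:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai-decisions")

# Illustrative feature flag acting as a kill switch for the AI path.
FEATURE_FLAGS = {"ai_screening_enabled": True}

def record_decision(subject_id: str, model_version: str,
                    inputs: dict, outcome: str, reason: str) -> dict:
    """Log one AI-influenced decision with enough context to explain it later."""
    entry = {
        "subject_id": subject_id,
        "model_version": model_version,
        "inputs": inputs,
        "outcome": outcome,
        "reason": reason,
    }
    logger.info(json.dumps(entry))
    return entry

def screen(subject_id: str, inputs: dict) -> dict:
    if not FEATURE_FLAGS["ai_screening_enabled"]:
        # Kill switch: fall back to manual handling instantly.
        return {"subject_id": subject_id, "outcome": "manual_review",
                "reason": "ai_disabled"}
    # ... call the model here; outcome is hard-coded for the sketch ...
    return record_decision(subject_id, "model-v1", inputs,
                           "shortlist", "score above threshold")
```

Recording the model version alongside each decision is what makes the log useful later: when a regulator or a data subject asks why a decision was made, you can tie it to the exact system that made it.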
Don’t Panic, but Do Prepare
The EU AI Act isn’t designed to kill innovation. It’s designed to ensure AI is used responsibly. For businesses already committed to ethical AI practices, much of the compliance work will formalise what you’re already doing.
But the deadline is real, the penalties are substantial, and “we didn’t know” won’t fly as an excuse. Five months is enough time to get your house in order. It’s not enough time to start from scratch and rush through it.
At REPTILEHAUS, we help businesses build AI-powered systems with compliance and governance designed in from day one. If you’re unsure where your AI systems sit under the Act, or need help building compliant architectures, get in touch.
📷 Photo by Markus Winkler on Unsplash