I train and advise teams so they know exactly what to do, by when, for their role and level of risk.
I help legal, data, and product teams prepare for the EU AI Act in a way that’s practical and actionable.
Legal & Compliance → responsibilities, documentation, vendor questions
Data & Engineering → model transparency, testing, logging
Product & Leadership → what’s in scope, key dates, budget planning
Focused sessions to assess your use case, compliance risk, and readiness. Includes a plan mapped to each role, with concrete next steps.
Formats:
Strategy calls
“Gap check” audits (2–4 weeks)
AI Literacy for Employees (Article 4 Compliance): 1–2 hours
EU AI Act for Developers & Data Scientists: 1 day
EU AI Act for Leaders & Executives: 2 hours
EU AI Act for Product Owners: 3–6 hours
EU AI Act for Legal & Compliance Professionals: 1 day
EU AI Act Preparedness for Business Professionals: 3–6 hours
The EU AI Act doesn’t take effect all at once. Here’s a simple breakdown:
🟥 Feb 2, 2025 — Banned AI practices are already illegal
Things like social scoring, emotion recognition in schools or workplaces (unless medical or safety-related), and untargeted scraping of facial images from the internet or CCTV to build facial recognition databases are off-limits. These rules are live.
🟧 Aug 2, 2025 — Rules for general-purpose AI kick in
Obligations for general-purpose AI models (think large language models) start to apply, along with the Act’s governance and penalty provisions.
🟨 Aug 2, 2026 — Main obligations start
If you're building or using high-risk AI systems (e.g., in hiring, education, finance, or public services), most of your responsibilities begin now: documentation, testing, logging, human oversight. The transparency rules also apply from this date: label AI-generated content (like deepfakes) and tell users when they’re interacting with AI (e.g., chatbots).
🟩 Aug 2, 2027 — The classification rule fully applies
The remaining classification rule (Article 6(1)) kicks in: AI that serves as a safety component of products already regulated under EU law (think medical devices or machinery) also counts as high-risk, so more use cases fall into scope from this date.
🧾 Extra notes (if this affects you):
Already using a system? If it was on the market before Aug 2026 and isn’t significantly changed afterwards, it may stay out of scope.
Using open-source models? Some duties are lighter—unless the model is labelled “systemic risk.”
Large public IT systems have extra time until Dec 31, 2030.
*This is not legal advice (for information only).
What exactly is a “high-risk” AI system?
If your AI is used in sensitive areas—like hiring, credit scoring, insurance, education, policing, or healthcare—it might be considered “high-risk.” There’s a formal list in Annex III, but we can check it together.
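To make that question a bit more concrete, here’s a minimal, purely illustrative sketch of what a first-pass screening could look like. The area list and the might_be_high_risk helper are made up for this example; they are not the Annex III test itself and are no substitute for reading the Act (or for legal advice).

```python
# Illustrative only, not legal advice: this just asks whether a use case touches
# one of the sensitive areas named in the answer above. The real Annex III test
# is more detailed (specific use cases, exemptions), so always confirm against the Act.

# Sensitive areas mentioned in this FAQ; Annex III lists the authoritative categories.
SENSITIVE_AREAS = {
    "hiring", "credit scoring", "insurance",
    "education", "policing", "healthcare",
}

def might_be_high_risk(use_case_areas: set[str]) -> bool:
    """First-pass flag: does the use case touch any sensitive area?"""
    return bool(use_case_areas & SENSITIVE_AREAS)

# Example: a CV-screening tool touches "hiring", so it warrants a closer Annex III review.
print(might_be_high_risk({"hiring"}))           # True
print(might_be_high_risk({"music playlists"}))  # False
```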
We use a US-based model. Are we still responsible?
Yes, often. If you place it on the EU market under your own name or brand, or substantially modify it, you’re likely the “provider.” If you just use it in your own operations, you’re probably the “deployer.” Each role has different rules.
Are some AI practices already banned?
Yes. For example:
Social scoring
Real-time biometric ID in public spaces
Emotion recognition at school or work
Predictive policing based on profiling
These bans have applied since Feb 2, 2025.
Do we have to label AI-generated content or chatbots?
Yes—if it’s not obvious, you must disclose when users are talking to a machine.
Synthetic content (e.g., deepfakes) must also be labelled unless a specific exemption applies.
Do open-source models have fewer rules?
Yes, if:
The model is genuinely open-source (open licence, with weights and architecture publicly available), and
It’s not designated as a “systemic risk” model by the EU
In that case, fewer obligations apply to the provider.
What’s a “fundamental rights impact assessment”?
If you’re a public body or provide public services (e.g., education, social support) and you deploy high-risk AI, or you use high-risk AI for credit scoring or insurance, you’ll need to assess how it affects people’s rights, similar to a GDPR DPIA. This applies before first use.
Can we get help checking if our AI system is in scope?
Yes. Try the Use Case Checker below or book a quick call and we’ll figure it out in 15 minutes.
See how the Act classifies your AI use case. Check whether your system is considered high-risk and what that means for your team.