EU AI Act
Compliance Testing
The EU AI Act (Regulation 2024/1689) mandates technical cybersecurity testing for high-risk AI systems. Article 15 explicitly requires resilience against adversarial attacks, data poisoning, and model manipulation. The August 2026 deadline is not far off.
Article 15 Mandates Security Testing
The EU AI Act explicitly names adversarial attacks, data poisoning, and model manipulation as threats high-risk AI systems must be resilient against. That's a penetration testing mandate written into law.
Who Needs to Comply
The EU AI Act applies to providers, deployers, importers, and distributors — anywhere AI affects EU residents, regardless of where your company is based.
AI Providers & Developers
Any company developing or commercializing an AI system deployed in the EU. You bear full responsibility for conformity assessment, technical documentation, and CE marking. Third-party testing evidence is your proof of compliance.
Enterprises Deploying AI
Organizations using third-party AI tools — for HR, credit decisioning, fraud detection, or customer service — are deployers with their own obligations. You must ensure human oversight, monitor performance, and conduct Fundamental Rights Impact Assessments.
US Companies with EU Customers
Non-EU companies are fully in scope if their AI systems affect EU residents. If you sell AI-powered products or services into the EU — even from Charlotte — you must comply or appoint an EU authorized representative.
GPAI / Foundation Model Providers
General-purpose AI model providers (LLMs, multimodal models) face a separate compliance track. Systemic risk GPAI providers must conduct explicit adversarial red-teaming before and after model release — mandated in Article 55.
Financial Services & Insurance
Credit scoring, fraud detection, insurance risk assessment, and loan underwriting AI are high-risk by default under Annex III. Full technical documentation, bias testing, and adversarial robustness assessments are mandatory.
HR & Recruitment Platforms
Any AI used for candidate screening, CV ranking, psychometric evaluation, or employment decision support is high-risk under Annex III. Bias testing across protected characteristics and adversarial robustness testing are both required.
AI Risk Classification Framework
The EU AI Act places every AI system into one of four risk tiers. Your tier determines your compliance obligations and whether you need security testing.
Prohibited — Banned Now
These systems are completely prohibited since February 2025. No compliance path exists.
- Real-time facial recognition in public spaces
- Social scoring by governments
- Subliminal behavior manipulation
- Predictive policing by profile alone
- Emotion recognition at work/school
Full Compliance Required
Most enterprise AI falls here. Full technical requirements including Article 15 cybersecurity testing apply from August 2026.
- HR/recruitment AI systems
- Credit & insurance scoring
- Law enforcement & border control
- Critical infrastructure management
- Educational assessment systems
Transparency Obligations
Must disclose AI nature to users. Deepfakes must be labeled. Chatbots must identify as AI. Lower compliance burden but obligations still apply.
- Customer service chatbots
- AI-generated synthetic content
- Deepfake video/audio systems
- Recommendation engines (some)
Voluntary Standards
Encouraged but not required to follow voluntary codes of conduct. Most consumer AI falls here with no mandatory compliance requirements.
- AI spam filters
- Basic recommendation algorithms
- AI-powered games
- Inventory management AI
What We Assess
Our EU AI Act compliance testing covers every technical requirement mandated by Chapter III — from Article 9 risk management to Article 15 cybersecurity resilience.
Adversarial Robustness Testing
Article 15(3)Direct technical testing against every AI-specific attack class cited in Article 15 and Recital 77 of the regulation.
- Adversarial example generation and evasion attacks
- Prompt injection and indirect prompt injection
- Jailbreaking and safety guardrail bypass
- Backdoor detection and trigger analysis
- Goal hijacking in agentic AI systems
- Adversarial patch and physical-world attacks
Data Pipeline Security Assessment
Article 10 + Article 9Article 10 mandates data governance for training, validation, and testing datasets. We assess the entire ML supply chain for compromise.
- Training data poisoning vulnerability assessment
- Third-party dataset and model provenance audit
- MLOps pipeline security review
- Feature store and model registry access controls
- Data preprocessing integrity verification
- Bias introduction pathway analysis
Bias & Fairness Testing
Article 10 + Article 15(1)Article 10 requires datasets be free from biases, and Article 15 requires consistent accuracy across demographic subgroups — including protected characteristics.
- Disaggregated performance benchmarking by demographic
- Statistical, historical, and proxy bias detection
- Membership inference for training data exposure
- Model inversion and privacy leakage testing
- Disparate impact analysis for high-risk decisions
- FRIA technical evidence generation (Article 27)
AI Infrastructure Penetration Testing
Article 9 Risk ManagementAI systems run on infrastructure that faces conventional cybersecurity threats. Article 9 risk management must cover the entire attack surface.
- AI inference API security testing
- Cloud ML environment assessment (SageMaker, Vertex AI, Azure ML)
- Model serving infrastructure (Kubernetes, GPU clusters)
- Model extraction and intellectual property theft testing
- Training environment access control review
- Monitoring and logging integrity assessment
GPAI & LLM Security Assessment
Article 55 — Explicit Red-TeamingArticle 55 explicitly mandates adversarial red-teaming for general-purpose AI model providers. This is not implied — it is written into law for systemic risk GPAI.
- Prompt injection and system prompt extraction
- Jailbreak resistance and safety filter bypass evaluation
- Multi-modal attack testing (vision + language)
- RAG pipeline exploitation and knowledge base attacks
- Agentic AI tool-use and function-call security
- Output integrity and downstream injection risk
Conformity Assessment Support
Articles 43–48 + Annex IVMost high-risk AI uses self-assessment, but you still need documented evidence to populate your Annex IV technical documentation package. We generate that evidence.
- Gap analysis against all Chapter III technical requirements
- Annex IV technical documentation evidence generation
- EU Declaration of Conformity support materials
- Post-market monitoring program design (Article 72)
- Incident reporting procedure development (Article 73)
- Remediation roadmap with compliance milestones
Our Assessment Process
A structured five-phase approach that produces defensible compliance evidence, not just a checkbox exercise.
Classify
Inventory your AI systems and map each to the correct Annex III risk category. Identify provider vs. deployer obligations. Determine conformity assessment pathway.
Gap Analysis
Evaluate current technical controls against Chapter III requirements — Articles 9 through 17. Identify compliance gaps, missing documentation, and untested attack surfaces.
Technical Testing
Execute adversarial ML attacks, infrastructure penetration testing, data pipeline security review, and bias testing. All testing is performed manually by certified offensive security engineers.
Evidence Package
Compile test results, findings, and remediation evidence into an Annex IV-aligned technical documentation package suitable for regulatory review and conformity assessment.
Report & Roadmap
Deliver the full Annex IV evidence package, technical testing report, and a prioritized remediation roadmap with sequenced milestones tied to your compliance deadline.
What You'll Receive
Every EU AI Act engagement produces documentation built to withstand regulatory scrutiny — and designed to be actually useful to your engineering and legal teams.
Article 15 Technical Testing Report
A full adversarial security assessment report documenting every attack technique executed, vulnerabilities discovered, exploitability ratings, and prioritized remediation guidance. Written to align directly with Article 15 language for regulatory presentation.
Annex IV Compliance Evidence Package
A structured documentation package aligned with the Annex IV technical documentation requirements. Includes testing methodology, results, risk management evidence, and validation records — ready to support your conformity assessment filing.
Bias & Fairness Testing Results
Disaggregated performance analysis across demographic subgroups with statistical significance reporting. Membership inference and model inversion results. Formatted to support Fundamental Rights Impact Assessment (FRIA) obligations under Article 27.
Compliance Gap Analysis Report
A full inventory of your AI systems with risk classifications, identified compliance gaps against each Chapter III article, and a remediation roadmap with sequenced milestones designed to achieve compliance before your applicable deadline.
Post-Market Monitoring Plan
Article 72 requires ongoing post-deployment monitoring. We design a practical monitoring architecture to detect accuracy degradation, bias emergence, and security incidents — along with triggering criteria for incident reporting under Article 73.
Direct Engineer Access
Every EU AI Act engagement includes direct access to the engineers who performed the testing — not a project manager. Ask technical questions, get clarification on findings, and work through remediation decisions without intermediaries for the duration of the engagement.
The Cost of Non-Compliance
EU AI Act fines exceed GDPR at the highest tier. The business case for compliance investment is straightforward.
Prohibited AI Violations
Using banned AI systems (social scoring, real-time facial recognition in public spaces, subliminal manipulation). This tier exceeds GDPR's 4% maximum. Enforcement began February 2025.
High-Risk System Violations
Failing to meet conformity assessment, technical documentation, or Article 15 cybersecurity requirements for high-risk AI systems. The primary compliance risk for most enterprises after August 2026.
Incorrect Information Supplied
Providing false, incomplete, or misleading information to national market surveillance authorities or the EU AI Office. Beyond fines, enforcement also includes market withdrawal, product recall, and suspension of EU market access.
EU AI Act Compliance — Frequently Asked Questions
Answers to the questions we hear most from AI providers, enterprise deployers, and US companies navigating EU AI Act cybersecurity requirements.
Does the EU AI Act require cybersecurity testing?
Yes. Article 15(3) explicitly requires that high-risk AI systems be resilient against adversarial attacks, data poisoning, model poisoning, prompt injection, and adversarial examples. Recital 77 states that high-risk AI systems should be tested against adversarial attacks specific to machine learning. This is a technical security testing mandate written directly into EU law.
What exactly does Article 15 of the EU AI Act require?
Article 15 covers accuracy, robustness, and cybersecurity for high-risk AI systems. It requires: (1) performance maintained throughout the lifecycle; (2) resilience against errors and faults; and (3) resilience against adversarial attacks, data poisoning, model poisoning, prompt injection, adversarial examples, backdoor attacks, evasion attacks, and model extraction — all named explicitly in the regulation.
When is the EU AI Act deadline for high-risk AI systems?
August 2, 2026 is the primary compliance deadline for Annex III high-risk AI systems. Legacy systems already on the market without substantial modification have until August 2, 2027. AI embedded in regulated products such as medical devices may qualify for a further extension to December 31, 2030. Prohibited AI practices have been enforced since February 2, 2025.
Do US companies need to comply with the EU AI Act?
Yes. The EU AI Act applies to any company whose AI affects EU residents — regardless of where the company is based. If you sell AI-powered products to EU customers, or operate AI that processes data about EU individuals, you are in scope. Non-EU providers must appoint an EU authorized representative before placing high-risk AI on the EU market.
What AI systems are high-risk under EU AI Act Annex III?
Eight sectors: biometrics (remote identification, emotion recognition), critical infrastructure, education and training (admissions, assessment, exam monitoring), employment (recruitment, CV screening, performance evaluation), essential services (credit scoring, insurance risk, benefit eligibility), law enforcement (profiling, risk assessment, forensics), migration and border control, and administration of justice. AI embedded in safety-critical regulated products is also high-risk.
What is an EU AI Act conformity assessment?
The process by which providers demonstrate compliance before market placement. Most high-risk AI uses self-assessment: compile an Annex IV technical documentation package covering risk management, data governance, design, testing evidence, and human oversight — then issue an EU Declaration of Conformity, register in the EU AI database, and affix CE marking. Notified body third-party assessment is required only for biometric identification systems and AI in safety-critical regulated products.
What are the penalties for EU AI Act non-compliance?
Fines are the greater of a fixed amount or a percentage of global annual turnover: up to €35M or 7% of global turnover for prohibited AI violations (exceeding GDPR's 4%); up to €15M or 3% for high-risk system violations including Article 15 failures; up to €7.5M or 1% for supplying incorrect information. Additional enforcement includes market withdrawal, deployment suspension, product recall, and public naming.
Does the EU AI Act explicitly require red-teaming for GPAI?
Yes — Article 55 explicitly mandates adversarial testing in a red-teaming format for systemic risk GPAI providers (models trained with more than 10²⁵ FLOPs, or designated by the EU AI Office). This applies before model release and on an ongoing post-deployment basis. It is one of the few places in the regulation where "red-teaming" is named directly rather than implied through technical requirements.
Related Services
Explore other security assessments that complement this service.
ML/AI Penetration Testing
Specialized offensive assessment targeting AI/ML systems, LLMs, RAG pipelines, and agent-based workflows aligned with OWASP ML Top 10.
Learn moreWeb Application Security
Comprehensive OWASP WSTG-aligned testing of authentication, authorization, business logic, and input validation vulnerabilities.
Learn moreCloud Security Assessment
Configuration review of AWS, Azure, and GCP environments — including the AI/ML services where your models live.
Learn moreAugust 2026 is Closer Than You Think
Don't start your EU AI Act compliance journey with a rush job. Get a customized assessment proposal within 24 hours and build toward the deadline with confidence.
Get Started