Aguardic
Article 50 Transparency Deadline (December 2, 2026) — 202 days away

The EU AI Act Is Already Being Enforced. Are You Ready?

Regulation (EU) 2024/1689 carries fines of up to €15M or 3% of global turnover for transparency violations alone, enforced by national market surveillance authorities. Aguardic enforces your EU AI Act policies in real time and generates the Article 11 technical documentation and Article 12 logs regulators will request.

14-day free trial · No credit card · Free EU AI Act policy pack

Enforcement Timeline

What's Already in Effect — and What's Coming

February 2, 2025Already in Effect

Prohibited AI Practices

AI systems posing unacceptable risk are banned: social scoring, manipulative AI, and most real-time biometric identification are prohibited. Omnibus VII added a new prohibition on AI used to generate non-consensual sexual content and CSAM.

Penalty: Up to €35M or 7% of global revenue

August 2, 2025Already in Effect

GPAI Model Obligations

Providers of general-purpose AI models must comply with transparency requirements, copyright policies, and systemic risk assessments.

Penalty: Up to €15M or 3% of global revenue

December 2, 2026Coming in 202 days

Article 50 Transparency (tightened by Omnibus VII)

AI-generated content disclosure obligations apply with a shortened 3-month grace period (was 6 months). Deepfakes, AI-authored text, chatbots, and emotion-recognition systems must disclose and label their outputs as AI-generated.

Penalty: Up to €15M or 3% of global revenue

August 2, 2027Future

National AI Regulatory Sandboxes

Member states must operationalize their national AI regulatory sandboxes. Postponed by one year under Omnibus VII (was August 2, 2026).

December 2, 2027Future (delayed by Omnibus VII)

High-Risk Standalone AI (Annex III)

Full compliance required: quality management systems, risk management frameworks, technical documentation under Article 11, automatic logging under Article 12, conformity assessments, EU database registration, human oversight under Article 14. Postponed from August 2, 2026.

Penalty: Up to €15M or 3% of global revenue

August 2, 2028Future (delayed by Omnibus VII)

High-Risk AI Embedded in Products

High-risk AI systems built into regulated products (Annex I — toys, machinery, medical devices, aviation, marine equipment, etc.) must meet the same conformity-assessment regime. Postponed from August 2, 2027.

Penalty: Up to €15M or 3% of global revenue

Does This Apply to You?

The EU AI Act Applies If You Serve EU Users — Regardless of Where You're Based

You're a Provider if you:

  • Develop AI systems placed on the EU market
  • Build AI products used by EU customers
  • Offer GPAI models (LLMs, foundation models) to EU deployers

You're a Deployer if you:

  • Use AI systems in your EU operations
  • Deploy AI for decisions affecting EU citizens (hiring, credit, healthcare)
  • Integrate third-party AI into products serving EU users

Both providers and deployers have compliance obligations. If your AI product has a single EU user, the Act applies to you.

Organizations that proactively demonstrate EU AI Act compliance gain a competitive advantage in enterprise sales — turning regulation into a trust signal.

The Cost of Non-Compliance

EU AI Act Penalties Exceed GDPR

€35M or 7%

Maximum penalty for prohibited AI practices

Already enforceable

€15M or 3%

Maximum penalty for high-risk AI system non-compliance

December 2027

€7.5M or 1%

Maximum penalty for providing incorrect information to authorities

December 2027

Penalties are calculated as the higher of the fixed amount or the percentage of total worldwide annual turnover.
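The "higher of" rule can be sketched in a few lines. This is illustrative only; `max_penalty` is a hypothetical helper, not part of any official calculator:

```python
# Illustrative sketch of the penalty rule: the fine is the HIGHER of a
# fixed amount or a percentage of total worldwide annual turnover.
def max_penalty(turnover_eur: float, fixed_eur: float, pct: float) -> float:
    """Return the applicable maximum fine for one penalty tier."""
    return max(fixed_eur, turnover_eur * pct)

# A company with €2B turnover facing the €35M / 7% tier:
print(max_penalty(2_000_000_000, 35_000_000, 0.07))  # 140000000.0
# A company with €100M turnover: the fixed €35M floor applies instead.
print(max_penalty(100_000_000, 35_000_000, 0.07))    # 35000000
```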

Requirements Coverage

EU AI Act Coverage Matrix

No single tool covers every EU AI Act requirement. This is the full article-to-control reference — what Aguardic enforces, the evidence it produces, and the judgment work your counsel and operators still own.

5 Covered · 10 Partial · 1 Not Covered · Total: 16
Covered

Article 9

Risk Management System

Establish and maintain a risk management system across the AI system lifecycle for high-risk AI.

How Aguardic helps

The Lifecycle Management pack's AI Risk Register rule flags registers missing any of the Article 9 iterative elements (identification, estimation, mitigation, post-deployment evaluation). AI System Registry captures risk classification, and continuous policy evaluation provides the iterative control surface.

Evidence produced

Risk register completeness detections · risk classification records · continuous evaluation logs · violation trend reports

What you handle

Define the risk appetite, sign off on residual-risk acceptance, and approve mitigations outside policy-as-code.
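The kind of completeness check described above can be sketched as follows — a hypothetical illustration built from the four Article 9 iterative elements, not Aguardic's actual rule schema:

```python
# Hypothetical Article 9 completeness check. Element names follow the
# four iterative steps named above; the register format is assumed.
REQUIRED_ELEMENTS = {
    "identification",              # identify known and foreseeable risks
    "estimation",                  # estimate and evaluate those risks
    "mitigation",                  # adopt risk management measures
    "post_deployment_evaluation",  # evaluate risks observed in operation
}

def missing_article9_elements(risk_register: dict) -> set:
    """Return the iterative elements absent (or empty) in a register record."""
    present = {key for key, value in risk_register.items() if value}
    return REQUIRED_ELEMENTS - present

register = {"identification": ["bias in training data"], "estimation": ["high"]}
print(sorted(missing_article9_elements(register)))
# ['mitigation', 'post_deployment_evaluation']
```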

Partial

Article 10

Data Governance

Training, validation, and testing datasets must meet Article 10 quality and relevance criteria.

How Aguardic helps

The High-Risk Documentation pack detects missing data-governance documentation on registered AI systems. Aguardic enforces documentation discipline; your data-governance platform owns pipeline lineage and bias testing.

Evidence produced

Missing data-governance documentation detections · AI System Registry data-category exports

What you handle

Run data-governance tooling for dataset lineage, provenance, bias testing, and Article 10 quality criteria.

Partial

Article 11

Technical Documentation (Annex IV)

Maintain Annex IV technical documentation demonstrating compliance before the AI system hits the market.

How Aguardic helps

AI System metadata (purpose, risk tier, data categories, integrations) seeds Annex IV. Training methodology and validation still need manual authorship.

Evidence produced

AI System Registry exports · policy configuration records · risk classification documentation

What you handle

Author the remaining Annex IV sections (training methodology, validation, performance metrics) in your doc system.

Covered

Article 12

Logging and Traceability

Design high-risk AI systems to automatically log events for traceability throughout their lifetime.

How Aguardic helps

Every policy evaluation is logged with timestamp, inputs, result, violations, and trace ID. Exportable on demand.

Evidence produced

Exportable evaluation logs · violation records with full trace · decision-point audit trail

What you handle

Define the retention policy and integrate log exports with your central evidence repository.
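A record carrying the fields listed above might look like this — a hypothetical shape for illustration, not Aguardic's actual export format:

```python
# Hypothetical Article 12 evaluation log record: timestamp, inputs,
# result, violations, and a trace ID, serialized as one JSON log line.
import json
import uuid
from datetime import datetime, timezone

def evaluation_log_record(inputs: dict, result: str, violations: list) -> str:
    """Serialize one policy evaluation as an append-only JSON log line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "trace_id": str(uuid.uuid4()),
        "inputs": inputs,
        "result": result,        # e.g. "allow" | "warn" | "block"
        "violations": violations,
    }
    return json.dumps(record)

line = evaluation_log_record({"action": "credit_decision"}, "warn",
                             ["missing_human_review"])
```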

Partial

Article 13

Transparency for Deployers

Provide deployers with enough information to interpret outputs and use high-risk AI correctly.

How Aguardic helps

The Lifecycle Management pack's Deployer Information Package rule flags missing Article 13 elements (intended purpose, performance characteristics, limitations, oversight measures, expected lifetime). User-facing Article 13(3) disclosures still need implementation at the consumer-facing UI layer.

Evidence produced

Deployer information package detections · policy evaluation reports with decision reasoning · audit trail exports

What you handle

Draft and ship user-facing AI disclosures at every consumer-facing touchpoint where the high-risk system appears.

Covered

Article 14

Human Oversight

Design the system so natural persons can effectively oversee high-risk AI outputs during operation.

How Aguardic helps

Warn and escalate enforcement modes route decisions to humans before actions proceed. Review events are logged with timestamps.

Evidence produced

Escalation logs · human-review timestamps · policy evaluation reports

What you handle

Staff the review queue, train reviewers on Article 14 oversight criteria, and authorize the override policy.
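The warn and escalate modes described above might route decisions like this — the mode names and review queue are illustrative assumptions, not Aguardic's API:

```python
# Hypothetical sketch of warn/escalate enforcement: both modes put the
# decision in a human review queue; only escalate blocks the action.
from queue import Queue

review_queue = Queue()

def enforce(action: dict, mode: str) -> str:
    """Apply an enforcement mode to a flagged action."""
    if mode == "warn":
        review_queue.put({"action": action, "blocking": False})
        return "proceed_with_warning"   # action continues, reviewer notified
    if mode == "escalate":
        review_queue.put({"action": action, "blocking": True})
        return "held_for_review"        # action waits for human approval
    return "proceed"                    # no human routing in allow mode

print(enforce({"type": "automated_rejection"}, "escalate"))  # held_for_review
```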

Partial

Article 15

Accuracy, Robustness, Cybersecurity

Achieve appropriate accuracy, robustness, and cybersecurity for high-risk AI throughout its lifecycle.

How Aguardic helps

The High-Risk Documentation pack flags AI systems missing documented accuracy, robustness, and cybersecurity metrics. Executing the adversarial tests and red-teaming still requires a dedicated model-testing or AI red-teaming platform.

Evidence produced

Missing accuracy/robustness metrics detections · policy evaluation logs

What you handle

Run adversarial tests, penetration testing, and model-accuracy evaluations against Article 15 benchmarks. Feed results back into the documentation set.

Covered

Article 17

Quality Management System

Put in place a quality management system for development, validation, and ongoing compliance.

How Aguardic helps

The Lifecycle Management pack's Quality Management System rule flags QMS documents missing any Article 17 element (compliance strategy, design control, data management, risk management, post-market monitoring, incident reporting, accountability framework). Continuous policy enforcement with versioning demonstrates an active QMS.

Evidence produced

QMS element-completeness detections · compliance dashboard · violation trend reports · policy version history

What you handle

Designate a QMS owner, schedule management reviews, and maintain supplier or subcontractor controls.

Partial

Article 61

Post-Market Monitoring

Establish a documented post-market monitoring system proportionate to the AI technology and its risks.

How Aguardic helps

The Lifecycle Management pack's Post-Market Monitoring rule flags missing, insufficient, or disproportionate surveillance plans. Continuous evaluation feeds the ongoing monitoring signal; authoring and signing off on the Article 61 plan itself stays with your QMS owner.

Evidence produced

Post-market monitoring plan detections · continuous evaluation logs · anomaly detection records · violation dashboards

What you handle

Author and sign off on the Article 61 post-market monitoring plan and review cadence.

Not Covered

Article 62

Serious Incident Reporting

Report serious incidents and malfunctions to national market surveillance authorities per Article 62.

How Aguardic helps

Aguardic surfaces violation signals but does not file Article 62 notifications. An end-to-end incident workflow is on the roadmap.

What you handle

Build or procure an incident-management workflow that tracks severity and hits Article 62 notification windows.

Covered

Article 5

Prohibited AI Practices

Prohibit AI practices involving manipulation, exploitation of vulnerabilities, social scoring, untargeted scraping, and real-time biometric identification in public spaces (with narrow exceptions).

How Aguardic helps

The EU AI Act Compliance pack's Prohibited AI Practices rule blocks AI actions that match the Article 5 prohibited-use patterns before they execute.

Evidence produced

Prohibited-practice violation logs · blocked action records · policy evaluation trail

What you handle

Review your AI portfolio against the Article 5 prohibited categories up front and confirm none of your intended uses fall inside scope.
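A pre-execution gate of the kind described above can be sketched as follows — the pattern list is a simplified illustration of the Article 5 categories, not Aguardic's actual rule set:

```python
# Hypothetical pre-execution gate against Article 5 categories. The
# pattern names are a simplified illustration, not Aguardic's rules.
PROHIBITED_PATTERNS = {
    "social_scoring",
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "untargeted_facial_scraping",
    "realtime_public_biometric_id",
}

class ProhibitedPracticeError(Exception):
    """Raised when an action matches a prohibited-use pattern."""

def gate(action_tags: set) -> None:
    """Block the action before execution if it matches a prohibited category."""
    hits = action_tags & PROHIBITED_PATTERNS
    if hits:
        raise ProhibitedPracticeError(f"blocked: {sorted(hits)}")

gate({"chatbot_response"})     # passes: no prohibited category matched
# gate({"social_scoring"})     # would raise ProhibitedPracticeError
```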

Partial

Article 6 / Annex III

High-Risk AI Classification

Classify AI systems as high-risk when used in the Annex III domains (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice, democracy).

How Aguardic helps

The EU AI Act Compliance pack's High-Risk AI Use Detection rule flags AI systems deployed in Annex III use cases without a classification record on file. Final classification call still belongs to your compliance team.

Evidence produced

High-risk use detections · AI System Registry classification records

What you handle

Make and document the final Article 6 classification decision for each AI system in your portfolio.

Partial

Article 22

Authorized Representative (Non-EU Providers)

Non-EU providers of high-risk AI must appoint an authorized representative established in the Union before placing systems on the market.

How Aguardic helps

The Registration & Conformity pack flags non-EU providers operating in the EU market without an authorized representative designation. Appointing the representative is a legal engagement that lives outside the platform.

Evidence produced

Missing authorized-representative detections · provider jurisdiction records

What you handle

Engage an EU-established authorized representative and execute the mandate before any EU-market deployment.

Partial

Article 43

Conformity Assessment

Complete a conformity assessment — internal control or notified-body assessment — before placing high-risk AI on the market.

How Aguardic helps

The Registration & Conformity pack flags AI systems missing a conformity assessment reference before market placement. The assessment itself is a pre-market procedure you run internally or via a notified body.

Evidence produced

Missing conformity-assessment detections · AI System Registry pre-market status

What you handle

Run the Article 43 conformity assessment (internal control or notified body), maintain the technical file, and retain the assessment evidence for market surveillance.

Partial

Articles 48–49

CE Marking & EU Database Registration

Affix the CE marking before market placement and register the high-risk AI system in the EU database maintained by the Commission.

How Aguardic helps

The Registration & Conformity pack flags AI systems missing CE marking references and AI systems deployed without EU database registration. Applying the CE mark and completing the database registration are customer actions.

Evidence produced

Missing CE marking detections · missing EU database registration detections

What you handle

Affix the CE marking per Article 48, complete the EU database entry per Article 49, and keep both in sync with material modifications.

Partial

Article 50

Transparency for Certain AI Systems

Disclose AI nature in interactions with natural persons and label AI-generated content (chatbots, synthetic media, emotion recognition, biometric categorization).

How Aguardic helps

The EU AI Act Compliance and AI Transparency packs flag undisclosed AI interactions and unlabeled AI-generated media. Product engineering still renders the user-facing disclosure text and content labels.

Evidence produced

AI disclosure detections · AI-generated media labeling detections · automated decision review detections

What you handle

Ship the user-facing AI disclosure copy and synthetic-content labels at every consumer touchpoint where Article 50 triggers.
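An undisclosed-interaction check of the kind the packs run might look like this — the payload field names ("generator", "ai_disclosure") are assumptions for illustration:

```python
# Hypothetical Article 50 disclosure check on an outgoing payload.
# Field names ("generator", "ai_disclosure") are assumed, not real.
def needs_disclosure(payload: dict) -> bool:
    """Flag AI-generated output shipped to a person without a label."""
    ai_generated = payload.get("generator") == "ai"
    labeled = payload.get("ai_disclosure") is True
    return ai_generated and not labeled

print(needs_disclosure({"generator": "ai", "text": "Hello!"}))       # True
print(needs_disclosure({"generator": "ai", "ai_disclosure": True}))  # False
```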

Browse the EU AI Act Policy Pack

Coverage mappings reflect Aguardic's current product capabilities mapped to EU AI Act (Regulation (EU) 2024/1689) requirements for high-risk AI systems. Validate with qualified EU AI Act counsel for your specific use case. The EU AI Act is subject to delegated acts, implementing acts, and harmonized standards still under development.

EU vendor questionnaire?

Answer with EU AI Act controls Aguardic enforces

Upload it. We draft answers citing Art. 9 / 12 / 14 / 15 controls (risk management, logging, oversight, cybersecurity) — describing what Aguardic enforces continuously. Once your workspace is active, your answers reference live audit evidence, not hypotheticals.

Upload questionnaire

December 2026 Is 202 Days Away. Start Now.

Register your AI systems, install the EU AI Act policy pack, and start generating compliance evidence automatically.

14-day free trial
No credit card required
Free EU AI Act policy pack
Start Free Trial

Or explore the documentation

This page summarizes key provisions of the EU AI Act (Regulation (EU) 2024/1689) for informational purposes only. Aguardic is not a law firm and this is not legal advice. Consult qualified EU AI Act counsel to assess your specific compliance obligations. Coverage mappings reflect Aguardic's current product capabilities as of April 2026 and are subject to change as delegated acts, implementing acts, and harmonized standards are adopted.
