
Most Companies Get Their EU AI Act Classification Wrong. This Free Tool Gets It Right.

Avoid EU AI Act misclassification in 10 minutes with a free tool that outputs verdicts, deadlines, penalties, and a board-ready PDF report.

Aguardic Team·April 16, 2026·9 min read


There are three ways companies currently figure out where they fall under the EU AI Act. They pay a law firm between €20,000 and €40,000 for a classification memo. They read 144 pages of regulation and try to self-assess. Or they ignore it and hope for the best.

The third option is the most popular. The first option is accurate but slow and expensive. The second option produces the most dangerous outcomes, because the regulation has several classification traps that look straightforward and are not. Companies confidently conclude they are minimal risk when they are actually high risk. Companies using GPT-4 in their product incorrectly classify themselves as GPAI providers. Companies operating AI resume screeners claim the Article 6(3) exemption because "a human reviews the output" and miss the profiling disqualifier that blocks that exemption entirely.

We built a free EU AI Act classification tool that answers the question in under 10 minutes with no signup required. It gives you a classification verdict with article citations, a compliance deadline with a countdown, a readiness score with gap analysis, penalty exposure calculated to your company size, and a downloadable PDF report you can hand to your legal team or your board. Here is what it does and why the common alternatives get it wrong.

The Classification Is Not Binary

Most self-assessment checklists treat the EU AI Act as a binary question: high-risk or not high-risk. The regulation defines seven distinct categories, and the compliance obligations, deadlines, and penalties differ significantly across them.

Prohibited systems under Article 5 face immediate enforcement. That has been live since February 2, 2025. Social scoring, manipulative AI, real-time biometric identification in public spaces for law enforcement without proper authorization, and five other categories are banned outright. Penalties reach €35 million or 7% of global annual turnover, whichever is higher.

High-risk systems under Annex III cover eight areas including biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration, and administration of justice. These face the heaviest compliance burden: quality management systems, technical documentation, human oversight, post-market monitoring, and conformity assessment. The deadline for listed high-risk systems is currently December 2, 2027 under the Parliament's proposed delay, a date that would become a hard backstop if the Council approves it.

GPAI with systemic risk applies to general-purpose AI models trained with compute exceeding 10^25 FLOPs. These face the strictest GPAI obligations including adversarial testing and serious incident reporting. GPAI below the systemic threshold still has obligations around technical documentation, downstream provider information, copyright compliance, and training data summaries.

Limited-risk systems trigger Article 50 transparency obligations. But Article 50 is not a single checkbox. It contains four distinct sub-obligations that fire based on what your system does: AI interaction disclosure if the system talks to people, emotion or biometric disclosure if it categorizes people, synthetic media labeling if it generates images or video, and AI-generated text labeling if it produces text on matters of public interest. Most self-assessments treat these as one requirement. They are four separate compliance items with different technical implementations.
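To make the point concrete, here is a minimal sketch of how those four sub-obligations can be derived independently from a system's capabilities. This is illustrative only, not the tool's actual code; the interface and function names are hypothetical.

```typescript
// Hypothetical sketch: each Article 50 sub-obligation fires independently.
interface SystemProfile {
  interactsWithPeople: boolean;          // chatbots, voice assistants
  emotionOrBiometricCategorization: boolean;
  generatesSyntheticMedia: boolean;      // images, audio, video
  generatesPublicInterestText: boolean;
}

function article50Obligations(p: SystemProfile): string[] {
  const obligations: string[] = [];
  if (p.interactsWithPeople) obligations.push("AI interaction disclosure");
  if (p.emotionOrBiometricCategorization) obligations.push("emotion/biometric disclosure");
  if (p.generatesSyntheticMedia) obligations.push("synthetic media labeling");
  if (p.generatesPublicInterestText) obligations.push("AI-generated text labeling");
  return obligations; // each entry is a separate compliance item
}
```

A generative chatbot that also produces images would return two separate items here, each with its own technical implementation, which is exactly what a single-checkbox self-assessment misses.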

Minimal-risk systems have no specific obligations under the Act. Out-of-scope systems have no EU nexus under Article 2 and fall outside the regulation entirely. Knowing which category you actually belong to determines everything that follows.

Three Classification Mistakes That Cost Companies

Three errors show up repeatedly in self-assessments, and each one creates real legal exposure.

The first is the Article 6(3) exemption trap. Article 6(3) provides an exemption for certain Annex III systems that perform narrow procedural tasks, improve previously completed human activities, detect patterns without replacing human assessment, or serve as preparatory input for a human decision. Many companies with AI hiring tools or lending models claim this exemption because their system includes human review of the output.

The exemption has a disqualifier most companies miss. If the AI system profiles natural persons as defined in GDPR Article 4(4), the exemption is automatically blocked regardless of whether any of the four conditions are met. An AI resume screener that ranks candidates is profiling natural persons. A credit scoring model that evaluates borrowers is profiling natural persons. The "human in the loop" does not matter once profiling is established. This is the single most common classification error in the market right now, and it turns a company that thinks it is exempt into a company with full Annex III high-risk obligations.
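The logic of the disqualifier is easy to express and easy to get wrong in a checklist. Here is a simplified sketch, with hypothetical names, of how the check has to be ordered: the profiling question is evaluated before any of the four exemption conditions, not alongside them.

```typescript
// Hypothetical sketch of the Article 6(3) exemption check -- illustrative only.
interface ExemptionInput {
  profilesNaturalPersons: boolean;        // profiling per GDPR Article 4(4)
  narrowProceduralTask: boolean;
  improvesCompletedHumanActivity: boolean;
  detectsPatternsWithoutReplacingHumans: boolean;
  preparatoryInputForHumanDecision: boolean;
}

function article63ExemptionApplies(i: ExemptionInput): boolean {
  // Profiling blocks the exemption outright, regardless of the four conditions.
  if (i.profilesNaturalPersons) return false;
  return (
    i.narrowProceduralTask ||
    i.improvesCompletedHumanActivity ||
    i.detectsPatternsWithoutReplacingHumans ||
    i.preparatoryInputForHumanDecision
  );
}
```

A resume screener that ranks candidates satisfies the "preparatory input" condition and still fails the check, because ranking candidates is profiling.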

The second mistake is the GPAI provider and deployer confusion. Companies building products on top of GPT-4, Claude, Gemini, or Llama routinely ask whether they need to comply with GPAI obligations under Articles 53 through 55. They do not. GPAI provider obligations apply to the organizations that develop, train, and distribute foundation models to third parties. If you are using a third-party model through an API in your product, you are a deployer. Your classification depends on your use case domain, not the underlying model. A company using Claude to power a hiring assistant is not a GPAI provider. It is a deployer of a high-risk system in the employment domain under Annex III.

The third mistake is treating Article 2 extraterritoriality as a single question. "Do you do business in the EU?" is insufficient. Article 2 defines four distinct paths to jurisdiction: providers placing AI systems on the EU market, deployers established in the EU, providers or deployers outside the EU whose system output is used in the EU, and importers or distributors. The third path is the one most non-EU companies miss. If your AI system's output reaches EU users, even if your company and your servers are entirely outside the EU, the regulation applies to you.
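Expressed as logic, jurisdiction is a disjunction of four independent tests, any one of which brings you in scope. A minimal sketch, with hypothetical field names:

```typescript
// Hypothetical sketch of the four Article 2 jurisdiction paths.
// Any single path being true puts the organization in scope.
function inScopeUnderArticle2(c: {
  placesOnEUMarket: boolean;        // provider placing systems on the EU market
  deployerEstablishedInEU: boolean; // deployer established in the EU
  outputUsedInEU: boolean;          // the path non-EU companies miss
  importsOrDistributesInEU: boolean;
}): boolean {
  return (
    c.placesOnEUMarket ||
    c.deployerEstablishedInEU ||
    c.outputUsedInEU ||
    c.importsOrDistributesInEU
  );
}
```

Note that a company with no EU entity, no EU market presence, and no EU servers still comes back in scope if the third flag is true.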

What the Tool Does Differently

The classification tool is a deterministic engine, not a chatbot. Every article number, obligation text, penalty figure, and deadline comes from a static article registry sourced from the EUR-Lex Official Journal text. The classification logic is pure TypeScript. No AI model is involved in determining your risk category or obligations. The only LLM-generated content is two optional prose paragraphs in the PDF report (the executive summary and the business context), and even those are grounded in the deterministic output.

This matters because the worst possible outcome of a classification tool is a hallucinated article citation. If you make compliance decisions based on a fabricated regulation reference, you have worse than no assessment. You have a confidently wrong one. A deterministic engine cannot hallucinate article numbers. It can only return what the regulation actually says.

The tool implements the full classification cascade: Article 2 jurisdiction and extraterritoriality, then Article 5 prohibited practices, then Annex III high-risk domains, then the Article 6(3) exemption check with the profiling disqualifier, then GPAI detection with the 10^25 FLOPs threshold, then Article 50 transparency sub-obligations, then minimal-risk fallthrough. Each step narrows the classification with the same logic a specialized lawyer would apply, except it does it in 10 minutes instead of 10 billable hours.
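The cascade described above can be sketched as a single ordered function. This is a deliberately simplified illustration under stated assumptions, not the tool's actual engine; every field name here is hypothetical, and the real logic asks many more questions per step.

```typescript
// Hypothetical, simplified version of the classification cascade.
type Verdict =
  | "out-of-scope" | "prohibited" | "high-risk"
  | "gpai-systemic" | "gpai" | "limited-risk" | "minimal-risk";

interface Answers {
  hasEUNexus: boolean;             // Article 2 jurisdiction
  prohibitedPractice: boolean;     // Article 5
  annexIIIDomain: boolean;         // listed high-risk domain
  exemptionConditionMet: boolean;  // any Article 6(3) condition
  profilesNaturalPersons: boolean; // the exemption disqualifier
  isGPAIModelProvider: boolean;    // develops/distributes the model itself
  trainingFlops: number;           // training compute for GPAI providers
  article50Trigger: boolean;       // any transparency sub-obligation
}

const SYSTEMIC_RISK_FLOPS = 1e25;

function classify(a: Answers): Verdict {
  if (!a.hasEUNexus) return "out-of-scope";
  if (a.prohibitedPractice) return "prohibited";
  if (a.annexIIIDomain) {
    const exempt = a.exemptionConditionMet && !a.profilesNaturalPersons;
    if (!exempt) return "high-risk";
  }
  if (a.isGPAIModelProvider) {
    return a.trainingFlops >= SYSTEMIC_RISK_FLOPS ? "gpai-systemic" : "gpai";
  }
  if (a.article50Trigger) return "limited-risk";
  return "minimal-risk";
}
```

The ordering is the point: a system is checked against prohibition before Annex III, and the Article 6(3) exemption is only reachable from inside an Annex III match.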

The output includes the classification verdict with a confidence level and the specific articles that drove it; the compliance deadline anchored to your category, with a days-remaining countdown; a compliance readiness score from 0 to 100 percent based on whether you have the required systems in place; the applicable obligations mapped to your specific role and classification; penalty exposure calculated using the correct formula for your company size (SME penalties use a different, significantly more favorable calculation under Article 99(6)); FRIA trigger analysis for deployers in public service or specific financial domains; and a usage drift warning that reminds you the classification is point-in-time and changes if the deployment context changes.

The PDF report is downloadable with no email required. You can hand it to your legal team, attach it to a board presentation, or use it as the starting point for a more detailed assessment with counsel.

When to Use This Tool and When to Call a Lawyer

This tool is a first-pass classification, not legal advice. It is accurate within the boundaries of what deterministic logic can assess: article mapping, exemption conditions, role-based obligation filtering, and penalty calculation. It does not replace counsel for ambiguous edge cases, cross-border regulatory interactions, or situations where the classification depends on facts that require legal judgment.

Use the tool when you need to answer "are we high-risk" before committing to a six-figure legal engagement. Use it when your CTO needs to understand what technical obligations apply to a specific system. Use it when a procurement team asks for your EU AI Act status and you need a structured answer in a day, not a quarter. Use it when you are a non-EU company trying to figure out whether the regulation even applies to you.

Call a lawyer when the classification comes back as high-risk and you need to design a conformity assessment strategy. Call a lawyer when you are claiming the Article 6(3) exemption and the profiling question is genuinely ambiguous for your use case. Call a lawyer when you operate in multiple EU member states and need to navigate national implementation differences.

The tool gives you the map. The lawyer helps you navigate the terrain.

Try It

The EU AI Act Classification Tool is free. No signup. No email gate. No sales follow-up. Three steps, roughly 15 questions, and you get a classification verdict with article citations, a compliance readiness score, penalty exposure, and a downloadable PDF report.

If you have already done a self-assessment, run your system through the tool and see whether the classification matches. If it does not, pay attention to where it diverges. The Article 6(3) profiling disqualifier and the GPAI provider/deployer distinction are the two most common places where self-assessments produce a different answer than the regulation requires.

The EU AI Act compliance deadline is moving, but the obligations are not. Knowing your classification is the first step to building a compliance program that survives contact with the regulation.

We're building Aguardic to enforce AI governance policies across every surface where AI work happens. The classification tool is free because knowing your risk category is step one. Step two is extracting enforceable rules from your compliance documents and turning them into checks that run continuously.
