UiPath just became the first enterprise platform to achieve AIUC-1 certification. If you haven't been tracking this standard, you should be. AIUC-1 is the first security, safety, and reliability certification designed specifically for AI agents, and it's positioned to become what SOC 2 is for cloud infrastructure: the baseline trust signal that enterprise buyers require before signing a contract.
This isn't another governance framework. It's an auditable certification with third-party evaluation, quarterly re-testing, and specific technical controls for how AI agents behave in production. For anyone building or deploying AI agents in enterprise environments, AIUC-1 is about to become part of your compliance vocabulary.
What AIUC-1 Actually Is
AIUC-1 was created by the Artificial Intelligence Underwriting Company, founded by people with experience at Anthropic and developed in partnership with Orrick, Stanford, the Cloud Security Alliance, MIT, and MITRE. The standard pulls together existing frameworks like the NIST AI Risk Management Framework, the EU AI Act, and ISO 42001 into a single, agent-specific certification.
The key distinction from broader AI governance frameworks: AIUC-1 focuses on how AI agents behave under pressure and in production, not just whether an organization has governance policies on paper. ISO 42001 validates that you have the right management system in place. AIUC-1 validates that your agents actually do what they're supposed to do when handling sensitive workflows.
The certification covers critical areas including data protection (does the agent properly handle sensitive data), operational boundaries (does the agent stay within its authorized scope), attack resistance (can the agent withstand prompt injection, jailbreaks, and adversarial manipulation), and error prevention (does the agent fail safely when things go wrong).
To achieve certification, UiPath subjected its AI products to over 2,000 enterprise risk scenarios evaluated by third-party testers, with ongoing quarterly evaluations to ensure safeguards evolve alongside capabilities and threats. Schellman, the same firm that handles SOC 2 and ISO audits for major enterprises, conducted the independent assessment.
Why This Matters More Than Another Framework
The AI governance space has no shortage of frameworks. NIST AI RMF, EU AI Act, ISO 42001, OWASP Top 10 for LLMs. Each serves a purpose. But none of them were designed to answer the specific question enterprise buyers are starting to ask: "Can you prove your AI agents are safe to deploy in our environment?"
AIUC-1 answers that question with a certification, not a self-assessment. The difference matters because enterprise procurement teams are trained to evaluate certifications. They know what a SOC 2 Type II report looks like. They know what ISO 27001 certification means. They understand the difference between "we follow the framework" and "an independent auditor verified our controls."
AIUC-1 gives AI agent vendors the same kind of verifiable trust signal. When an enterprise security team asks "how do we know your agents are safe?", a certified vendor can point to an independent evaluation rather than a governance policy document.
This is the pattern every compliance standard follows. SOC 2 started as something progressive companies pursued voluntarily. Within a few years, it became table stakes for selling to enterprises. AIUC-1 is at the beginning of that same curve. Early adopters get competitive advantage. Late adopters get blocked from deals.
What AIUC-1 Tests For
Based on the published information about UiPath's certification process, AIUC-1 evaluates agents across several risk categories that map directly to the threats enterprise deployments face.
Jailbreak and prompt injection resistance. Can the agent be manipulated into ignoring its instructions or operating outside its defined scope? This is the most commonly discussed AI security risk, and AIUC-1 includes adversarial testing specifically for it.
Hallucination and fabrication controls. Does the agent generate false information, invent citations, or present uncertainty as fact? In enterprise contexts where agents handle financial data, legal documents, or healthcare information, hallucination isn't just an inconvenience. It's a liability.
Data leakage prevention. Can the agent be tricked into exposing sensitive data from its training, its context window, or the systems it has access to? This covers both direct extraction (asking the agent to reveal data) and indirect leakage (data appearing in outputs where it shouldn't).
Operational boundary enforcement. Does the agent stay within its authorized scope? If an agent is designed to process invoices, does it refuse to execute wire transfers? If it has read access to a database, can it be convinced to attempt writes? Boundary enforcement is where agent security diverges most from traditional LLM safety.
Error handling and failure modes. When the agent encounters an unexpected situation, does it fail safely? Does it escalate to a human? Does it continue operating with reduced confidence? The difference between a well-governed agent and a dangerous one often comes down to what happens when things go wrong.
The Gap Between AIUC-1 and Full Governance
AIUC-1 is a significant step forward. It creates a verifiable baseline for agent security and reliability. But it's important to be precise about what it covers and what it doesn't.
AIUC-1 evaluates the agent itself. It tests whether the agent resists attacks, stays within boundaries, handles data properly, and fails safely. This is essential and it's the right place to start.
What AIUC-1 doesn't cover is the organizational context around the agent. An agent can pass every AIUC-1 test and still violate your organization's specific policies in production. AIUC-1 tests whether an agent can be jailbroken. It doesn't test whether the agent's outputs comply with your HIPAA minimum necessary standards, your brand voice guidelines, your contractual obligations to specific clients, or your internal data handling rules.
This isn't a criticism of AIUC-1. No certification can cover organization-specific rules because those rules are different for every organization. But it means AIUC-1 certification is a necessary floor, not a sufficient ceiling.
The complete picture looks like this: AIUC-1 certifies that the agent is technically safe and reliable. ISO 42001 certifies that the organization has a governance management system. And organizational policy enforcement ensures that every agent action in production complies with that specific organization's rules, continuously, with evidence.
Each layer serves a different function. AIUC-1 is the vendor trust signal. ISO 42001 is the organizational governance signal. Policy enforcement is the operational control that makes governance real at runtime.
What This Means If You're Building AI Agents
If you're an AI vendor selling to enterprises, start tracking AIUC-1 now. The first movers (UiPath being the first) get to set the narrative. But as more companies achieve certification, enterprise security teams will start asking "are you AIUC-1 certified?" the same way they ask "do you have SOC 2?"
Map your existing security controls against AIUC-1 requirements. Based on the published framework areas (data protection, operational boundaries, attack resistance, error prevention), many organizations already have partial coverage through existing security practices. The gap analysis tells you what you need to build.
If you're already enforcing policies on your AI agents (tool-level access control, output evaluation, session-aware governance), you likely cover a significant portion of what AIUC-1 tests for. The certification process formalizes and validates what good agent governance already looks like in practice.
If you're deploying AI agents in a regulated industry, ask your vendors about AIUC-1. Even if they're not certified yet, the conversation forces them to articulate their agent security posture. And when they can't answer the questions, you'll know exactly where the governance gaps are.
The Certification Stack for AI in 2026
The compliance landscape for AI is crystallizing around a layered model. Each certification or framework addresses a different scope.
SOC 2 covers your infrastructure and data handling. You probably already have this or are working on it.
ISO 42001 covers your AI management system. It proves you have governance processes, risk assessment procedures, and accountability structures for AI.
AIUC-1 covers your AI agents specifically. It proves your agents are technically safe, reliable, and resistant to adversarial manipulation under real-world conditions.
EU AI Act compliance covers your regulatory obligations if you operate in or sell to EU markets.
Organizational policy enforcement covers the gap between all of the above and your actual business rules. It's the runtime layer that turns frameworks and certifications into continuous, enforced, auditable governance.
None of these replace the others. Together, they form the trust stack that enterprise buyers and regulators will expect. The organizations that assemble this stack early will close deals faster. The ones that treat each certification as a separate checkbox will keep getting surprised by the next question on the security questionnaire.
We're building Aguardic as the organizational policy enforcement layer for AI agents, code, documents, and messaging. AIUC-1 certifies that agents are safe. Aguardic enforces that they comply with your specific rules in production. If you're thinking about how policy enforcement fits alongside AIUC-1 in your governance stack, take a look.



