Aguardic
AIUC-1 · 64 Rules · 6 Domains

The AI Agent Security Standard. Now Enforceable as Code.

AIUC-1 covers 6 domains — data privacy, security, safety, reliability, accountability, and societal impact. Aguardic implements its 49 requirements as 64 enforceable rules that run automatically on every AI output, code commit, and document.

14-day free trial · No credit card · Free AIUC-1 policy pack

Requirements Coverage

AIUC-1 Coverage Matrix

AIUC-1 defines 49 active requirements across 6 domains. Aguardic implements them as 64 granular rules. The table below shows representative requirements per domain, what Aguardic enforces, the evidence it produces, and the work your team still owns.

13 Covered · 1 Partial · 3 Not Covered · Total: 17
Covered

A003

Limit Agent Data Collection

Restrict AI agent data collection to what is necessary for the task at hand.

How Aguardic helps

Input filtering policies limit what data AI agents can access and collect. Policies enforce data minimization across every connected integration.

Evidence produced

Data collection policy logs · blocked access records

What you handle

Define which data fields each agent legitimately needs for its task.

Covered

A006

Prevent PII Leakage

Prevent personally identifiable information from being exposed in AI outputs.

How Aguardic helps

PII detection policies scan AI outputs, documents, emails, and messages. Block or redact PII before it reaches unauthorized recipients.

Evidence produced

PII detection logs · blocked output records

What you handle

Define your organization's PII classification (what counts, what's allowed where) and approve redaction rules.
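As a rough illustration of how detect-and-redact can work, here is a minimal Python sketch. It is illustrative only: the patterns and the `redact` helper are assumptions for this example, not Aguardic's implementation, and a real deployment would enforce your organization's approved PII classification.

```python
import re

# Illustrative regex patterns for two common PII types (not exhaustive).
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> tuple[str, list[str]]:
    """Replace detected PII with placeholders; return redacted text and hit types."""
    hits = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED:{label.upper()}]", text)
    return text, hits

redacted, hits = redact("Contact jane.doe@example.com, SSN 123-45-6789.")
# redacted: "Contact [REDACTED:EMAIL], SSN [REDACTED:SSN]."
```

Deterministic patterns like these are fast enough to run on every output; nuanced cases (names, addresses in free text) are where semantic evaluation takes over.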

Partial

A001

Input Data Policies

Define and enforce policies governing data provided to AI systems.

How Aguardic helps

Input filtering policies govern data entering AI systems. Full data classification taxonomy requires manual definition.

Evidence produced

Input policy evaluation logs

What you handle

Define your data classification taxonomy (public / internal / confidential / restricted) that Aguardic enforces against.

Covered

B002

Detect Adversarial Input

Detect and mitigate adversarial inputs designed to manipulate AI behavior.

How Aguardic helps

Semantic AI evaluation detects prompt injection, jailbreaking, and adversarial inputs. Deterministic rules catch known attack patterns.

Evidence produced

Adversarial detection logs · blocked input records

What you handle

Review novel adversarial patterns surfaced by Aguardic and decide how to update detection rules.

Covered

B006

Prevent Unauthorized Agent Actions

Ensure AI agents only perform authorized actions within defined boundaries.

How Aguardic helps

Agent behavior policies restrict tool calls, API access, and actions to authorized boundaries. Unauthorized actions are blocked before execution.

Evidence produced

Agent action logs · blocked tool-call records

What you handle

Author the allow-list of tool calls and approved actions per agent.

Not Covered

B001

Third-party Adversarial Testing

Conduct third-party adversarial testing of AI systems at defined intervals.

How Aguardic helps

Aguardic provides continuous automated testing but does not replace independent red-team assessments.

What you handle

Engage an independent red-team vendor on a defined cadence and act on their findings.

Covered

C003

Prevent Harmful Outputs

Prevent AI systems from generating harmful, dangerous, or toxic outputs.

How Aguardic helps

Output scanning policies detect and block harmful content including toxicity, dangerous instructions, and inappropriate material across every surface.

Evidence produced

Harmful content detection logs · blocked output records

What you handle

Define your org's safety policy (what's acceptable vs blocked) and approve enforcement thresholds.

Covered

C007

Flag High-Risk Outputs

Identify and flag AI outputs that carry elevated risk for human review.

How Aguardic helps

Policy severity levels flag high-risk outputs for human review. Warn and escalate modes ensure flagged outputs receive attention.

Evidence produced

Flagged output logs · escalation records

What you handle

Staff the human review queue and set severity thresholds that require review.

Not Covered

C001

Risk Taxonomy Definition

Define a comprehensive taxonomy of risks for AI system outputs.

How Aguardic helps

Aguardic enforces policies based on defined taxonomies but does not create the taxonomy itself.

What you handle

Author your domain-specific risk taxonomy — what categories of harm matter for your product and users.

Covered

D001

Prevent Hallucinated Outputs

Detect and prevent AI systems from generating factually incorrect information.

How Aguardic helps

Knowledge RAG cross-references outputs against your knowledge bases. Semantic AI evaluates factual consistency. Deterministic rules catch known hallucination patterns.

Evidence produced

Hallucination detection logs · knowledge verification records

What you handle

Upload your authoritative knowledge sources to the knowledge base and keep them current.

Covered

D003

Restrict Unsafe Tool Calls

Prevent AI agents from making unsafe or unauthorized tool calls.

How Aguardic helps

Agent behavior policies validate tool calls against allow-lists and parameter constraints. Unsafe calls are blocked before execution.

Evidence produced

Tool call validation logs · blocked action records

What you handle

Author the allow-list and parameter constraints per tool exposed to agents.
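An allow-list with per-parameter constraints can be expressed compactly. The Python sketch below is illustrative only; the tool names, constraint style, and `validate_tool_call` helper are assumptions for this example, not Aguardic's actual API.

```python
# Hypothetical allow-list: tool name -> per-parameter constraint checks.
ALLOW_LIST = {
    "search_docs": {"query": lambda v: isinstance(v, str) and len(v) <= 500},
    "send_email": {
        "to": lambda v: isinstance(v, str) and v.endswith("@example.com"),
        "body": lambda v: isinstance(v, str),
    },
}

def validate_tool_call(tool: str, params: dict) -> tuple[bool, str]:
    """Return (allowed, reason). Unknown tools and parameters are blocked."""
    constraints = ALLOW_LIST.get(tool)
    if constraints is None:
        return False, f"tool '{tool}' not on allow-list"
    for name, value in params.items():
        check = constraints.get(name)
        if check is None:
            return False, f"parameter '{name}' not permitted for '{tool}'"
        if not check(value):
            return False, f"parameter '{name}' violates constraints"
    return True, "ok"
```

The key design point is default-deny: anything not explicitly on the allow-list is blocked before execution, which is what produces the blocked-action records listed above.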

Covered

E015

Log Model Activity

Maintain comprehensive logs of all AI model activity for audit purposes.

How Aguardic helps

Every policy evaluation is logged with timestamp, input, result, violations, and trace ID. Every AI interaction is logged and exportable.

Evidence produced

Exportable evaluation logs · model activity records · decision-point audit trail

What you handle

Define retention policies and integrate Aguardic log exports with your SIEM or evidence repository.
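For SIEM ingestion, records like the ones described above are typically exported as JSON lines. The sketch below shows what such a record could look like; the field names and the `make_evidence_record` helper are assumptions for this example, not Aguardic's actual export schema.

```python
import json
import uuid
from datetime import datetime, timezone

def make_evidence_record(control_id: str, input_text: str,
                         result: str, violations: list[str]) -> str:
    """Serialize one hypothetical evaluation record as a JSON line."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "control_id": control_id,   # AIUC-1 control the rule maps to, e.g. "E015"
        "input": input_text,
        "result": result,           # e.g. "pass" | "blocked" | "flagged"
        "violations": violations,
        "trace_id": str(uuid.uuid4()),
    }
    return json.dumps(record)

line = make_evidence_record("A006", "Customer SSN is 123-45-6789",
                            "blocked", ["pii.ssn"])
```

One line per evaluation, keyed by control ID and trace ID, is the shape most evidence repositories and SIEMs expect.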

Covered

E016

AI Disclosure Mechanisms

Implement mechanisms to disclose when AI is being used in interactions.

How Aguardic helps

AI disclosure policies enforce transparency requirements. Policies can require AI-generated content labeling.

Evidence produced

Disclosure enforcement logs · labeling compliance records

What you handle

Approve the disclosure copy your users see; Aguardic enforces that the disclosure is present.

Covered

E010

Enforce Acceptable Use Policy

Enforce organizational acceptable use policies for AI systems.

How Aguardic helps

Custom policies encode acceptable use rules. Enforcement across connected integrations ensures organization-wide compliance.

Evidence produced

Acceptable use policy logs · violation records

What you handle

Author the organizational AUP; Aguardic enforces the rules you define.

Not Covered

E001–E003

Incident Response Plans

Establish, test, and maintain incident response plans for AI-related incidents.

How Aguardic helps

Aguardic provides automated first-response but does not substitute for a documented, tested incident response program.

What you handle

Draft, test, and maintain the AI-specific incident response plan and tabletop exercises.

Covered

F001

Prevent AI Cyber Misuse

Prevent AI systems from being used for cyberattacks or malicious purposes.

How Aguardic helps

Policies detect and block attempts to use AI for code exploitation, social engineering, and other cyber misuse. Real-time enforcement prevents misuse.

Evidence produced

Misuse detection logs · blocked attempt records

What you handle

Define what counts as misuse for your product context and keep detection rules updated.

Covered

F002

Prevent Catastrophic Misuse

Prevent AI systems from being used for catastrophic harm.

How Aguardic helps

Critical severity policies immediately block outputs related to weapons, CBRN threats, and catastrophic harm. Zero-tolerance enforcement.

Evidence produced

Critical violation logs · blocked catastrophic content records

What you handle

Define catastrophic-misuse categories for your context (weapons, CBRN, etc.) and approve zero-tolerance thresholds.

Browse the AIUC-1 Policy Pack

Coverage mappings reflect Aguardic's current product capabilities mapped to representative AIUC-1 requirements. The full AIUC-1 pack covers all 49 active requirements via 64 granular rules. Validate these mappings for your specific deployment context.

The 6 AIUC-1 Domains

Every Domain Covered. Every Rule Enforceable.

Domain A: Data & Privacy

14 rules

Input and output data policies, limiting AI agent data collection, protecting IP and trade secrets, preventing cross-customer data exposure, PII leakage prevention, and IP violation prevention.

View in Marketplace

Domain B: Security

13 rules

Prompt injection defense, jailbreak detection, encoded payload blocking, scraping prevention, harmful input filtering, agent scope enforcement, unauthorized tool blocking, and output over-exposure limits.

View in Marketplace

Domain C: Safety

14 rules

Harmful output prevention, hostile and discriminatory content blocking, deceptive content detection, high-risk advice disclaimers, out-of-scope output prevention, output vulnerability scanning (SQLi, XSS, command injection), and risk monitoring.

View in Marketplace

Domain D: Reliability

10 rules

Hallucination prevention, fabricated citation and statistic detection, missing uncertainty flagging, unsafe tool call restrictions, privilege escalation detection, and operating schedule enforcement.

View in Marketplace

Domain E: Accountability

7 rules

Acceptable use policy enforcement, data extraction prevention, misrepresentation detection, audit trail completeness, AI disclosure in external communications, and autonomous action transparency.

View in Marketplace

Domain F: Society

6 rules

Malware generation blocking, attack planning prevention, vulnerability exploitation detection, CBRN instruction blocking, mass harm prevention, and dual-use research safeguards.

View in Marketplace

All 6 templates available as a free pack in the AIUC-1 marketplace

How It Works

From Standard to Continuous Enforcement

Step 1

Install the AIUC-1 Pack

One-click install from the marketplace. 6 templates covering all 64 rules across all AIUC-1 domains — deterministic regex patterns for speed, semantic LLM analysis for nuance.

Step 2

Connect Your AI Tools

Link OpenAI, Anthropic, Gemini, GitHub, Slack, and 10+ more integrations. Every AI output, code commit, and document is evaluated against AIUC-1 rules automatically.

Step 3

Enforce and Generate Evidence

Violations are caught in real-time with full audit trails. Each evaluation maps to specific AIUC-1 control IDs — export evidence for auditors, regulators, or internal review.

Already have internal AI governance documents? Upload them and extract enforceable rules automatically

Cross-Framework Coverage

AIUC-1 Covers What Other Frameworks Miss

AIUC-1 is comprehensive by design. Its 6 domains overlap significantly with other major AI governance frameworks — meaning one AIUC-1 implementation gives you a head start on multiple compliance requirements.

ISO 42001

Significant overlap

Annex A controls (A.7.2, A.7.3, A.8.4, A.9.3, A.9.4, A.10.2) map to AIUC-1 data privacy, safety, and accountability domains.

EU AI Act

Significant overlap

Articles 9, 11, 13, 14, 15, and 52 map to AIUC-1 across data privacy, safety, accountability, and transparency requirements.

NIST AI RMF

Significant overlap

NIST's Map, Measure, Manage, and Govern functions correspond to AIUC-1's safety, reliability, and accountability domains.

OWASP AI Security

Partial overlap

AI-specific security risks (model poisoning, prompt injection, data leakage) covered by AIUC-1 Domain B security controls.

CSA AI Control Matrix

Partial overlap

Cloud-native AI controls for data governance, model lifecycle, and operational security align with AIUC-1 Domains A and B.

Implement AIUC-1 once — get a head start on 5 major AI governance frameworks.

Audit Ready

Built for AIUC-1 Audit Readiness

Control-Mapped Evidence

Every evaluation result maps to a specific AIUC-1 control ID (A003, B002, C003, etc.). Export audit packages grouped by domain for streamlined reviews.

Continuous Compliance

No more point-in-time assessments. Every AI output, code commit, and document is evaluated against AIUC-1 rules in real-time — evidence generates itself.

Auto-Updating Rules

AIUC-1 is updated quarterly by the AIUC consortium. When the standard is updated, Aguardic updates the policy pack to match. Subscribers get new rules without manual intervention.

Frequently Asked Questions

AIUC-1 FAQ

What is AIUC-1?

AIUC-1 is the first security, safety, and reliability standard built specifically for AI agents. Created by the Artificial Intelligence Underwriting Company (AIUC) — a consortium of 100+ Fortune 500 CISOs with technical contributors from Cisco, MITRE, Stanford, Microsoft, and Anthropic — it defines 49 active requirements across 6 domains. Certification is independently audited by Schellman.

Is the AIUC-1 policy pack free?

Yes. The complete AIUC-1 policy pack is free on the Aguardic Marketplace. You can install all 6 domain templates with one click and start enforcing immediately. No subscription required for the policy pack itself.

How many rules does the AIUC-1 pack include?

The pack includes 64 enforceable rules across 6 templates — one for each AIUC-1 domain. Rules use a combination of deterministic evaluation (regex pattern matching for speed) and semantic evaluation (LLM analysis for nuanced checks).

What's the difference between enforced and documented controls?

Enforced requirements (23 of 49) can be validated technically — implemented as 64 granular rules covering output scanning, input filtering, agent behavior monitoring, and configuration checks. Documented requirements (26 of 49) are procedural (incident response plans, vendor due diligence, risk taxonomy definitions); Aguardic tracks and reports on them but cannot validate them automatically.

Does AIUC-1 overlap with other frameworks?

Yes, significantly. AIUC-1 publishes official crosswalks to ISO 42001, EU AI Act, NIST AI RMF, OWASP Top 10, and CSA AICM. Each AIUC-1 requirement page on aiuc-1.com shows the specific control mappings to these frameworks.

How do I install the AIUC-1 pack?

Sign up for Aguardic (free trial available), navigate to the Marketplace, find the AIUC-1 category, and click Install on any or all 6 domain templates. Connect your AI tools and enforcement starts automatically.

Will the pack update when AIUC-1 is revised?

Yes. When the AIUC consortium publishes updates to AIUC-1 or releases AIUC-2, Aguardic will update the policy pack. Subscribers receive updates automatically — no manual intervention required.

Received an AIUC-1 vendor assessment?

Answer AIUC-1 questions with controls Aguardic enforces

Upload the questionnaire. We draft answers citing the AIUC-1 control catalog — governance, data handling, model lifecycle, transparency — describing what Aguardic enforces in production, not just what your governance docs assert.

Upload questionnaire

Start enforcing AIUC-1 today.

Install the free AIUC-1 policy pack, connect your AI tools, and start generating audit-ready evidence in minutes.

14-day free trial
No credit card required
Free AIUC-1 policy pack
Start Free Trial

Or explore the documentation
