Aguardic

AIUC-1 Reliability Controls

by Aguardic · Official · v1.0.0

Enforce AIUC-1 Domain D requirements — prevent hallucinations and fabricated content, restrict unsafe tool calls, and ensure reliable AI agent operation.

About This Policy Template

Comprehensive AIUC-1 Domain D compliance pack covering controls D001 and D003. Prevents AI hallucinations including fabricated citations, fake statistics, fabricated legal references, and missing uncertainty acknowledgments. Restricts unsafe agent tool calls by detecting unauthorized tool access, blocking consequential actions without human approval, flagging excessive tool call rates, preventing unauthorized cross-system data transfers, detecting privilege escalation attempts, and monitoring agent operating schedules. Critical for organizations where AI accuracy and agent reliability are paramount — healthcare, legal, financial, and enterprise automation.

Policy Rules (10)

Critical Severity (4)

Consequential Action Without Approval

Detect consequential or irreversible agent actions without human approval (AIUC-1 D003)


Fabricated Legal Citation

Detect fabricated legal cases, statutes, or regulations (AIUC-1 D001)


Privilege Escalation Attempt

Detect AI agent attempting to escalate privileges or bypass restrictions (AIUC-1 D003)


Unauthorized Tool Access Attempt

Detect AI agent calling tools not authorized for its role or task (AIUC-1 D003)


High Severity (4)

Cross-System Data Transfer

Detect unauthorized data transfers between systems by AI agents (AIUC-1 D003)


Excessive Tool Call Rate

Detect excessive or rapid tool invocations indicating runaway behavior (AIUC-1 D003)


Fabricated Citation or Source

Detect fabricated citations, references, or sources in AI output (AIUC-1 D001)


Fabricated Statistics or Data

Detect precise statistics or quantitative claims that appear fabricated (AIUC-1 D001)


Medium Severity (2)

Missing Uncertainty Acknowledgment

Detect definitive claims without uncertainty acknowledgment (AIUC-1 D001)


Operating Outside Authorized Schedule

Detect AI agent actions performed outside authorized operating periods (AIUC-1 D003)


Enforcement by Integration

What happens when a violation is detected, based on the enforcement mode and integration type.

Version Control (GitHub, GitLab, Bitbucket)
  Block: Fail check run / merge request status
  Approval: Pending check run, held for review
  Warn: Neutral check run / comment on PR
  Monitor: Pass check run (silent)

Email (Gmail)
  Block: Quarantine label; violation label added (outbound)
  Approval: Quarantine label, held for review
  Warn: Add warning label
  Monitor: Log only

Email (Outlook)
  Block: Move to quarantine folder; flag added (outbound)
  Approval: Move to quarantine, held for review
  Warn: Flag + categorize
  Monitor: Log only

Messaging (Slack, Teams)
  Block: Post violation warning in channel
  Approval: Post 'held for review' warning
  Warn: Post warning in channel
  Monitor: Log only

Storage (Google Drive, Dropbox, OneDrive)
  Block: Move file to quarantine folder
  Approval: Quarantine file, held for review
  Warn: Log only
  Monitor: Log only

AI Proxy (OpenAI, Anthropic, Gemini, MCP, Agent)
  Block: Block request (return 403)
  Approval: Hold request, return review ID
  Warn: Allow request + audit trail
  Monitor: Log only

API (REST API)
  Block: Return BLOCK outcome (client decides)
  Approval: Return APPROVAL_REQUIRED + poll URL
  Warn: Return WARN outcome
  Monitor: Log only
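For the REST API integration, enforcement is advisory: the API returns an outcome and the client decides what to do. A minimal sketch of that client-side dispatch is below. The response shape (an "outcome" field with values BLOCK, APPROVAL_REQUIRED, WARN, or MONITOR, plus a "poll_url" on held requests) is an assumption for illustration, not Aguardic's documented schema.

```python
def next_action(response: dict) -> str:
    """Map a policy-scan API response to the client's next step.

    Hypothetical response fields: "outcome" and, for held
    requests, "poll_url" (both assumed, not a documented schema).
    """
    outcome = response.get("outcome")
    if outcome == "BLOCK":
        # BLOCK outcome: the client decides how to reject the content.
        return "reject"
    if outcome == "APPROVAL_REQUIRED":
        # Held for review: poll the supplied URL until a reviewer
        # releases or rejects the item.
        return "poll:" + response["poll_url"]
    if outcome == "WARN":
        # Proceed, but surface the warning to the user or audit trail.
        return "proceed-with-warning"
    # MONITOR (or any unrecognized outcome): log only, continue as normal.
    return "proceed"
```

In monitor mode every outcome degrades to "proceed", which matches the "Log only" column above: detection still happens server-side, but nothing is blocked.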

Version History

1 version published

v1.0.0 (Active), 3/21/2026

Initial release

Ready to Install AIUC-1 Reliability Controls?

Get started with pre-built governance policies in minutes.