Agent Integration
Govern autonomous AI agent actions with real-time policy evaluation and enforcement.
Overview
The Agent integration lets you evaluate AI agent actions, tool calls, and decisions against your policies before they execute. This is the core integration for governing autonomous AI systems.
How It Works
Your agent sends each action to Aguardic for evaluation before executing it. Aguardic evaluates the action against bound policies and returns one of four outcomes:
- ALLOW — The action is approved. Proceed.
- WARN — The action triggered a policy but is allowed. Log for review.
- BLOCK — The action violates a policy. Do not execute.
- APPROVAL_REQUIRED — The action is held for manual review. Poll for a decision.
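The APPROVAL_REQUIRED flow can be handled with a short polling loop. The sketch below assumes the evaluate response carries a `pollUrl` (as used later in this guide) and that the review resolves from a PENDING status to APPROVED or REJECTED — the status names are assumptions, so confirm them against your API. `doFetch` and `sleep` are injectable so the loop is easy to test offline:

```typescript
type ReviewDecision = { status: "PENDING" | "APPROVED" | "REJECTED" };

// Poll the review URL until a terminal decision arrives, or give up after
// maxAttempts. Status names and response shape are assumptions for this sketch.
async function pollReview(
  pollUrl: string,
  {
    intervalMs = 2000,
    maxAttempts = 30,
    doFetch = async (url: string): Promise<ReviewDecision> => {
      // Default uses the runtime's global fetch (Node 18+ / browsers).
      const res = await (globalThis as any).fetch(url);
      return (await res.json()).data;
    },
    sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms)),
  }: {
    intervalMs?: number;
    maxAttempts?: number;
    doFetch?: (url: string) => Promise<ReviewDecision>;
    sleep?: (ms: number) => Promise<void>;
  } = {},
): Promise<ReviewDecision> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const decision = await doFetch(pollUrl);
    if (decision.status !== "PENDING") return decision;
    await sleep(intervalMs);
  }
  // Timed out without a decision; the caller decides how to proceed.
  return { status: "PENDING" };
}
```

Bounding the attempts matters: an agent blocked on an approval that never comes should fail closed rather than hang.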
Setup
1. Create an Agent Integration
Navigate to Integrations in the Aguardic dashboard, click Add Integration, and select Agent. Give it a name and copy the API key.
2. Create a Session
Sessions group related evaluations together. Create one per agent conversation or task:
const sessionRes = await fetch("https://api.aguardic.com/v1/evaluation-sessions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    metadata: { agentId: "customer-support-bot" },
  }),
});
const { data: session } = await sessionRes.json();

If you don't provide a sessionId in the evaluate request, Agent integrations automatically create one. But explicitly creating sessions gives you more control over grouping and tracking related evaluations.
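Session creation and reuse can also be wrapped in a thin client so the first evaluate call creates the session and later calls share it. This is a sketch built on the endpoint paths and response shapes shown in this guide; the `doFetch` parameter exists only so the flow can be exercised without network access (in production, pass the global `fetch`):

```typescript
// Thin client sketch: lazily creates one evaluation session and reuses it
// for every subsequent evaluate call.
type FetchLike = (
  url: string,
  init: { method: string; headers: Record<string, string>; body: string },
) => Promise<{ json(): Promise<any> }>;

class AguardicClient {
  private sessionId: string | null = null;

  constructor(private apiKey: string, private doFetch: FetchLike) {}

  private headers() {
    return {
      Authorization: `Bearer ${this.apiKey}`,
      "Content-Type": "application/json",
    };
  }

  async evaluate(input: unknown, targetKey: string) {
    if (!this.sessionId) {
      // First call: create the session explicitly so later calls share it.
      const res = await this.doFetch("https://api.aguardic.com/v1/evaluation-sessions", {
        method: "POST",
        headers: this.headers(),
        body: JSON.stringify({ metadata: {} }),
      });
      this.sessionId = (await res.json()).data.id;
    }
    const res = await this.doFetch("https://api.aguardic.com/v1/evaluate", {
      method: "POST",
      headers: this.headers(),
      body: JSON.stringify({ sessionId: this.sessionId, input, targetKey }),
    });
    return (await res.json()).data;
  }
}
```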
3. Evaluate Actions
Before your agent executes a tool call or action, evaluate it:
const emailArgs = { to: "customer@example.com", subject: "Account Update", body: "..." };

const evalRes = await fetch("https://api.aguardic.com/v1/evaluate", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    sessionId: session.id,
    input: { tool: "send_email", args: emailArgs },
    targetKey: "send_email",
  }),
});
const { data: result } = await evalRes.json();

if (result.outcome === "BLOCK") {
  // Do not execute the action
  console.log("Action blocked:", result.violations);
  return { error: "This action violates company policy" };
}

// Safe to execute
await executeToolCall("send_email", emailArgs);

4. Handle Outcomes
Build your agent to respect all four evaluation outcomes:
async function governedAction(tool: string, args: Record<string, any>) {
  const evalRes = await fetch("https://api.aguardic.com/v1/evaluate", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      sessionId: currentSession.id,
      input: { tool, args },
      targetKey: tool,
    }),
  });
  const { data: result } = await evalRes.json();

  switch (result.outcome) {
    case "BLOCK":
      return {
        success: false,
        reason: "Action blocked by governance policy",
        violations: result.violations,
      };
    case "APPROVAL_REQUIRED": {
      // Poll for the review decision
      const decision = await pollReview(result.pollUrl);
      if (decision.status === "APPROVED") {
        return await executeToolCall(tool, args);
      }
      return { success: false, reason: "Action was not approved" };
    }
    case "WARN":
      // Execute but log the warning
      console.warn("Policy warning:", result.violations);
      return await executeToolCall(tool, args);
    case "ALLOW":
      return await executeToolCall(tool, args);
  }
}

Policy Examples
Prevent Sensitive Data Exposure
Block agent actions that include PII:
- Field: content
- Operator: CONTAINS
- Value: SSN,social security number,credit card
- Severity: CRITICAL
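A client-side mirror of this rule can catch obvious violations before the API round-trip. The sketch below assumes CONTAINS performs a case-insensitive substring match against any of the comma-separated values — that matching behavior is an assumption, so confirm it against your policy engine:

```typescript
// Hypothetical local pre-check mirroring a CONTAINS policy: true if the
// content includes any comma-separated value, matched case-insensitively
// (case-insensitivity is an assumption for this sketch).
function containsAny(content: string, values: string): boolean {
  const haystack = content.toLowerCase();
  return values
    .split(",")
    .some((value) => haystack.includes(value.trim().toLowerCase()));
}
```

A local pre-check is a complement, not a replacement: the server-side evaluation remains the source of truth and the audit record.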
Restrict Tool Access
Prevent dangerous tool calls:
- Field: content
- Operator: MATCHES
- Value: "tool":"(delete_account|drop_table|rm -rf)"
- Severity: HIGH
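To see why this pattern works, apply it to a serialized tool call. Serializing the action as JSON before matching is an assumption for illustration:

```typescript
// The MATCHES pattern above, applied to JSON-serialized tool calls.
const pattern = /"tool":"(delete_account|drop_table|rm -rf)"/;

const dangerous = JSON.stringify({ tool: "delete_account", args: { userId: "u_123" } });
const harmless = JSON.stringify({ tool: "send_email", args: {} });

const dangerousMatches = pattern.test(dangerous); // true
const harmlessMatches = pattern.test(harmless); // false
```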
Limit External Communication
Warn when agents try to send external communications:
- Field: content
- Operator: CONTAINS
- Value: send_email,send_message,post_to_slack
- Severity: MEDIUM
Next Steps
- Evaluation Sessions — Learn about session management
- REST API — Use the API directly for custom integrations
- MCP Server — Integrate with MCP-compatible agents