OpenAI Integration
Evaluate OpenAI GPT model inputs and outputs against your governance policies.
Overview
The OpenAI integration lets you evaluate GPT model inputs and outputs against your governance policies using the Aguardic SDK. Your application calls OpenAI directly — Aguardic never touches your LLM traffic. Policy evaluation happens in your code before and after each LLM call.
Setup
Create integration
Bind policies
Install the SDK
npm install @aguardic/sdk
Add evaluation calls
Store your Aguardic API key securely. It is shown only once. If you lose it, regenerate it from the integration settings.
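One way to load the key safely is to fail fast at startup instead of discovering a missing key on the first evaluation call. The helper below is an illustrative sketch, not part of the SDK:

```typescript
// Fail fast if the key is missing, rather than sending unauthenticated
// requests later. requireEnv is a hypothetical helper, not an SDK export.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage: const aguardic = new Aguardic(requireEnv("AGUARDIC_API_KEY"));
```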
Code Example
import Aguardic from "@aguardic/sdk";
import OpenAI from "openai";

const aguardic = new Aguardic(process.env.AGUARDIC_API_KEY);
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

const messages = [
  { role: "user" as const, content: "Summarize our compliance requirements" },
];

// 1. Evaluate input before sending to OpenAI
const inputCheck = await aguardic.evaluate({
  input: {
    provider: "openai",
    model: "gpt-4o",
    messages,
  },
  targetKey: "chat-completion",
});

if (inputCheck.enforcementAction === "BLOCK") {
  console.log("Blocked:", inputCheck.violations);
  // Do not call OpenAI
} else {
  // 2. Call OpenAI directly
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
  });
  const responseText = completion.choices[0]?.message?.content ?? "";

  // 3. Evaluate output after receiving from OpenAI
  const outputCheck = await aguardic.evaluate({
    input: {
      provider: "openai",
      model: "gpt-4o",
      response: responseText,
    },
    targetKey: "chat-completion-output",
  });

  if (outputCheck.enforcementAction === "BLOCK") {
    console.log("Output blocked:", outputCheck.violations);
  } else {
    console.log(responseText);
  }
}
What Gets Evaluated
Input Evaluation
Before sending a request to OpenAI, pass the request content to aguardic.evaluate():
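A small builder can keep that payload consistent across call sites. The field names (provider, model, messages, targetKey) come from the full example above; the helper itself is an illustrative sketch, not an SDK export:

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Assembles the argument for aguardic.evaluate() on the input side,
// mirroring step 1 of the full example above.
function buildInputEvaluation(model: string, messages: ChatMessage[]) {
  return {
    input: { provider: "openai", model, messages },
    targetKey: "chat-completion",
  };
}
```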
Output Evaluation
After receiving a response from OpenAI, evaluate the output content:
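The output-side payload differs only in carrying a response string and a different target key, as in step 3 of the full example above. Again, the helper is an illustrative sketch:

```typescript
// Assembles the argument for aguardic.evaluate() on the output side,
// mirroring step 3 of the full example above.
function buildOutputEvaluation(model: string, responseText: string) {
  return {
    input: { provider: "openai", model, response: responseText },
    targetKey: "chat-completion-output",
  };
}
```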
Output evaluation is optional but recommended. You control whether to block, log, or ignore output violations.
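The three options (block, log, ignore) can be expressed as a small policy switch. The handler below is an illustrative sketch of that choice, not part of the SDK:

```typescript
type OutputViolationMode = "block" | "log" | "ignore";

// Decide what to return to the caller given an output check result.
// Returns null when the response should be withheld.
function handleOutputCheck(
  responseText: string,
  check: { enforcementAction: string; violations?: unknown[] },
  mode: OutputViolationMode,
): string | null {
  if (check.enforcementAction !== "BLOCK") return responseText;
  if (mode === "block") return null; // withhold the response entirely
  if (mode === "log") {
    console.log("Output violations:", check.violations); // deliver, but record
  }
  return responseText; // "log" and "ignore" both deliver the response
}
```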
Enforcement Actions
The enforcementAction field in the evaluation response tells you what to do:
For BLOCK, the content violated a policy; do not proceed, and inspect violations for details. When a request is held for review, poll reviewRequestId for a decision.
Why Sidecar Over Proxy
Unlike proxy-based approaches that route all LLM traffic through a third party, Aguardic's sidecar model means:
- No added latency — Evaluation runs in parallel, not in the request path
- No single point of failure — Your LLM calls go direct; if Aguardic is down, you decide the fallback
- Your keys stay with you — Aguardic never sees your OpenAI API key
- Full control — You choose exactly when and what to evaluate
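Because your LLM calls go direct, the fallback when the evaluator is unreachable is an application decision. A minimal sketch of a fail-open/fail-closed wrapper, where evaluate stands in for a call to aguardic.evaluate():

```typescript
// Returns true when the request may proceed. failOpen controls what
// happens if the evaluator itself errors: true lets traffic through,
// false blocks it. This wrapper is illustrative, not an SDK feature.
async function safeEvaluate(
  evaluate: () => Promise<{ enforcementAction: string }>,
  failOpen: boolean,
): Promise<boolean> {
  try {
    const result = await evaluate();
    return result.enforcementAction !== "BLOCK";
  } catch {
    // Evaluator unavailable: apply your chosen fallback policy.
    return failOpen;
  }
}
```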
Next Steps
- Anthropic Integration — Same pattern for Claude models
- Gemini Integration — Same pattern for Google AI
- JavaScript SDK — Full SDK reference
- Your First Policy — Create policies to evaluate against