OpenAI Integration

Evaluate OpenAI GPT model inputs and outputs against your governance policies.

Overview

The OpenAI integration lets you evaluate GPT model inputs and outputs against your governance policies using the Aguardic SDK. Your application calls OpenAI directly — Aguardic never touches your LLM traffic. Policy evaluation happens in your code before and after each LLM call.

Setup

1. Create integration

   Navigate to Integrations in the Aguardic dashboard, click Add Integration, and select OpenAI. Give it a name and copy the API key.

2. Bind policies

   Go to Policy Bindings and bind your governance policies to the OpenAI integration.

3. Install the SDK

   npm install @aguardic/sdk

4. Add evaluation calls

   Use the SDK to evaluate inputs before sending them to OpenAI, and evaluate outputs after receiving them.

Store your Aguardic API key securely. It is shown only once. If you lose it, regenerate it from the integration settings.

Code Example

import Aguardic from "@aguardic/sdk";
import OpenAI from "openai";
 
const aguardic = new Aguardic(process.env.AGUARDIC_API_KEY);
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });
 
const messages = [
  { role: "user" as const, content: "Summarize our compliance requirements" },
];
 
// 1. Evaluate input before sending to OpenAI
const inputCheck = await aguardic.evaluate({
  input: {
    provider: "openai",
    model: "gpt-4o",
    messages,
  },
  targetKey: "chat-completion",
});
 
if (inputCheck.enforcementAction === "BLOCK") {
  console.log("Blocked:", inputCheck.violations);
  // Do not call OpenAI
} else {
  // 2. Call OpenAI directly
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages,
  });
 
  const responseText = completion.choices[0]?.message?.content ?? "";
 
  // 3. Evaluate output after receiving from OpenAI
  const outputCheck = await aguardic.evaluate({
    input: {
      provider: "openai",
      model: "gpt-4o",
      response: responseText,
    },
    targetKey: "chat-completion-output",
  });
 
  if (outputCheck.enforcementAction === "BLOCK") {
    console.log("Output blocked:", outputCheck.violations);
  } else {
    console.log(responseText);
  }
}

What Gets Evaluated

Input Evaluation

Before sending a request to OpenAI, pass the request content to aguardic.evaluate():

  • Messages: User and assistant message content
  • System prompt: The system instruction, if provided
  • Model: The model name, for model-specific policies
  • Tool calls: Function/tool definitions and arguments
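Taken together, an input payload covering these fields might look like the sketch below. The `provider`, `model`, and `messages` keys match the code example above; the `tools` key (mirroring OpenAI's Chat Completions tool shape) is an assumption here, so check the SDK reference for the exact field name.

```typescript
// Sketch of an input-evaluation payload covering each field listed
// above: messages (including a system prompt), the model name, and a
// tool definition. The `tools` key is illustrative, not confirmed SDK API.
const evaluationInput = {
  provider: "openai",
  model: "gpt-4o",
  messages: [
    { role: "system", content: "You are a compliance assistant." },
    { role: "user", content: "Summarize our compliance requirements" },
  ],
  tools: [
    {
      type: "function",
      function: {
        name: "lookup_policy", // hypothetical tool for illustration
        description: "Fetch a governance policy by id",
        parameters: { type: "object", properties: { id: { type: "string" } } },
      },
    },
  ],
};
```

You would pass an object like this as the `input` field of `aguardic.evaluate()`, exactly as in the code example above.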

Output Evaluation

After receiving a response from OpenAI, evaluate the output content:

  • Response text: The assistant's generated response
  • Tool call results: Any tool calls the model decided to make

Output evaluation is optional but recommended. You control whether to block, log, or ignore output violations.
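The block/log/ignore choice is application-level policy, not an SDK option. One way to sketch it, assuming a violation list shaped like the `violations` field in the code example above:

```typescript
// Decide what to do with a response that has output violations.
// `OutputMode` and this helper are illustrative application code,
// not part of the Aguardic SDK.
type OutputMode = "block" | "log" | "ignore";

function handleOutputViolation(
  mode: OutputMode,
  responseText: string,
  violations: string[],
): string | null {
  if (mode === "block") return null; // withhold the response entirely
  if (mode === "log") console.warn("Output violations:", violations);
  return responseText; // "log" and "ignore" both still show the response
}
```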

Enforcement Actions

The enforcementAction field in the evaluation response tells you what to do:

  • BLOCK: Do not send the request to OpenAI (input) or do not show the response to the user (output).
  • APPROVAL_REQUIRED: Hold the request until a reviewer approves it. Poll the reviewRequestId for a decision.
  • WARN: Allow the request but log the violation for review.
  • ALLOW: No violations, or the policy allows continuation.
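One way to handle all four actions in one place is a small dispatcher like the sketch below. The result shape (`enforcementAction`, `violations`, `reviewRequestId`) follows the fields named on this page, but the exact SDK types may differ:

```typescript
// Hypothetical shape of an evaluation result, based on the fields
// documented on this page; check the SDK's own types for the real shape.
type EnforcementAction = "BLOCK" | "APPROVAL_REQUIRED" | "WARN" | "ALLOW";

interface EvaluationResult {
  enforcementAction: EnforcementAction;
  violations?: string[];
  reviewRequestId?: string;
}

// Decide whether to proceed with the OpenAI call for a given result.
function shouldProceed(result: EvaluationResult): boolean {
  switch (result.enforcementAction) {
    case "BLOCK":
      console.error("Blocked:", result.violations);
      return false;
    case "APPROVAL_REQUIRED":
      // Hold the request; poll result.reviewRequestId for a decision.
      console.warn("Awaiting review:", result.reviewRequestId);
      return false;
    case "WARN":
      // Allowed, but log the violation for later review.
      console.warn("Policy warning:", result.violations);
      return true;
    case "ALLOW":
      return true;
    default:
      return false; // unknown action: fail closed
  }
}
```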

Why Sidecar Over Proxy

Unlike proxy-based approaches that route all LLM traffic through a third party, Aguardic's sidecar model means:

  • No forced latency — evaluation is a call from your own code, so you decide whether it runs in the request path or in parallel with other work
  • No single point of failure — Your LLM calls go direct; if Aguardic is down, you decide the fallback
  • Your keys stay with you — Aguardic never sees your OpenAI API key
  • Full control — You choose exactly when and what to evaluate
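Because your LLM calls go direct, the fallback when evaluation is unreachable is yours to define. A minimal fail-open/fail-closed wrapper, using illustrative names rather than SDK API, might look like:

```typescript
// Sketch of a fail-open/fail-closed fallback around policy evaluation.
// `evaluate` is any function that may throw when the evaluation service
// is unreachable; the types and names here are illustrative.
type Decision = "PROCEED" | "BLOCK";

async function evaluateWithFallback(
  evaluate: () => Promise<Decision>,
  fallback: Decision, // "PROCEED" = fail open, "BLOCK" = fail closed
): Promise<Decision> {
  try {
    return await evaluate();
  } catch (err) {
    console.error("Evaluation unavailable, applying fallback:", err);
    return fallback;
  }
}
```

Whether to fail open (keep serving users without checks) or fail closed (refuse until evaluation recovers) depends on how strict your governance requirements are.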

Next Steps