Gemini Integration
Evaluate Google Gemini model inputs and outputs against your governance policies.
Overview
The Gemini integration lets you evaluate Google AI model inputs and outputs against your governance policies using the Aguardic SDK. Your application calls Gemini directly — Aguardic never touches your LLM traffic. Policy evaluation happens in your code before and after each LLM call.
Setup
1. Create integration
Navigate to Integrations in the Aguardic dashboard, click Add Integration, and select Gemini. Give it a name and copy the API key.
2. Bind policies
Go to Policy Bindings and bind your governance policies to the Gemini integration.
3. Install the SDK
npm install @aguardic/sdk
4. Add evaluation calls
Use the SDK to evaluate inputs before sending them to Gemini, and evaluate outputs after receiving them.
Store your Aguardic API key securely. It is shown only once. If you lose it, regenerate it from the integration settings.
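Because the key is only shown once, it is easy to deploy with it missing. One way to fail fast at startup instead of on the first SDK call is a small guard; a sketch, where `requireEnv` is an illustrative helper (not part of the SDK) and the env var names match this guide's example:

```javascript
// Sketch: fail fast at startup if a required key is missing, instead of
// failing on the first SDK call. requireEnv is an illustrative helper;
// the env var names below match this guide's code example.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup (uncomment once the keys are configured):
// const aguardicKey = requireEnv("AGUARDIC_API_KEY");
// const geminiKey = requireEnv("GEMINI_API_KEY");
```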
Code Example
import Aguardic from "@aguardic/sdk";
import { GoogleGenerativeAI } from "@google/generative-ai";

const aguardic = new Aguardic(process.env.AGUARDIC_API_KEY);
const genAI = new GoogleGenerativeAI(process.env.GEMINI_API_KEY);

const prompt = "Summarize our compliance requirements";

// 1. Evaluate input before sending to Gemini
const inputCheck = await aguardic.evaluate({
  input: {
    provider: "gemini",
    model: "gemini-2.0-flash",
    prompt,
  },
  targetKey: "chat-completion",
});

if (inputCheck.enforcementAction === "BLOCK") {
  console.log("Blocked:", inputCheck.violations);
} else {
  // 2. Call Gemini directly
  const model = genAI.getGenerativeModel({ model: "gemini-2.0-flash" });
  const result = await model.generateContent(prompt);
  const responseText = result.response.text();

  // 3. Evaluate output
  const outputCheck = await aguardic.evaluate({
    input: {
      provider: "gemini",
      model: "gemini-2.0-flash",
      response: responseText,
    },
    targetKey: "chat-completion-output",
  });

  if (outputCheck.enforcementAction === "BLOCK") {
    console.log("Output blocked:", outputCheck.violations);
  } else {
    console.log(responseText);
  }
}
What Gets Evaluated
Input Evaluation
Prompt: User prompt and conversation history
System instruction: The system instruction, if provided
Model: The model name, for model-specific policies
Multimodal content: Text parts from content arrays
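When the prompt is a multimodal content array rather than a plain string, the text parts must be pulled out before evaluation. A minimal sketch of that extraction, assuming the standard Gemini content shape (`{ role, parts: [{ text } | { inlineData }] }`); the helper is illustrative, not part of the Aguardic SDK:

```javascript
// Collect the text parts from a Gemini-style content array so they can
// be passed to input evaluation; non-text parts (e.g. inlineData images)
// are skipped. Illustrative helper, not part of the Aguardic SDK.
function extractTextParts(contents) {
  return contents.flatMap((content) =>
    (content.parts ?? [])
      .filter((part) => typeof part.text === "string")
      .map((part) => part.text)
  );
}

// Example: join the text parts into one string for the `prompt` field.
const contents = [
  {
    role: "user",
    parts: [
      { text: "Summarize this image" },
      { inlineData: { mimeType: "image/png", data: "..." } },
    ],
  },
];
const promptText = extractTextParts(contents).join("\n");
```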
Output Evaluation
Response text: Generated text from the model response
Function calls: Any function calls the model produced
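To evaluate both, you need the text and any function calls out of the model response. A sketch, assuming the `candidates[].content.parts` response shape used by @google/generative-ai; `collectOutputs` is an illustrative helper, not part of the Aguardic SDK:

```javascript
// Pull both the generated text and any function calls out of a
// Gemini-style response so each can be passed to output evaluation.
// Assumes the candidates[].content.parts shape; illustrative helper,
// not part of the Aguardic SDK.
function collectOutputs(response) {
  const parts = response.candidates?.[0]?.content?.parts ?? [];
  return {
    text: parts
      .filter((p) => typeof p.text === "string")
      .map((p) => p.text)
      .join(""),
    functionCalls: parts.filter((p) => p.functionCall).map((p) => p.functionCall),
  };
}

// Example against a mock response object:
const mockResponse = {
  candidates: [
    {
      content: {
        parts: [
          { text: "Here are the requirements." },
          { functionCall: { name: "lookupPolicy", args: { id: "42" } } },
        ],
      },
    },
  ],
};
const outputs = collectOutputs(mockResponse);
```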
Enforcement Actions
The enforcementAction field in the evaluation response tells you what to do:
BLOCK: Do not send the request to Gemini (input), or do not show the response to the user (output).
APPROVAL_REQUIRED: Hold the request until a reviewer approves it. Poll the reviewRequestId for a decision.
WARN: Allow the request but log the violation for review.
ALLOW: No violations, or the policy allows continuation.
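The four actions can be handled in one dispatcher. A sketch: the `enforcementAction`, `violations`, and `reviewRequestId` fields come from the evaluation response described above, while `onBlock`, `onWarn`, and `pollReview` are hypothetical callbacks standing in for your own handling:

```javascript
// Map each enforcementAction to a behavior. onBlock/onWarn/pollReview
// are hypothetical placeholders for your own handling, not SDK features.
async function applyEnforcement(check, { onBlock, onWarn, pollReview }) {
  switch (check.enforcementAction) {
    case "BLOCK":
      onBlock(check.violations); // do not proceed with the request/response
      return { allowed: false };
    case "APPROVAL_REQUIRED":
      // Poll the review request until a reviewer decides.
      return { allowed: await pollReview(check.reviewRequestId) };
    case "WARN":
      onWarn(check.violations); // allowed, but log the violation for review
      return { allowed: true };
    case "ALLOW":
    default:
      return { allowed: true };
  }
}
```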
Why Sidecar Over Proxy
Unlike proxy-based approaches that route all LLM traffic through a third party, Aguardic's sidecar model means:
- Latency under your control — you decide where evaluation runs, inline before a call or in parallel with it; no proxy hop sits in front of every request
- No single point of failure — Your LLM calls go direct; if Aguardic is down, you decide the fallback
- Your keys stay with you — Aguardic never sees your Gemini API key
- Full control — You choose exactly when and what to evaluate
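The fallback point is concrete in code: because your application owns the evaluation call, you can bound it with a deadline and choose fail-open or fail-closed yourself. A sketch under that assumption — `evaluateWithFallback`, `timeoutMs`, and `failOpen` are illustrative, not SDK features:

```javascript
// Race an evaluation call against a deadline. If Aguardic is slow or
// unreachable, you choose the fallback: fail open (allow) or fail
// closed (block). Illustrative pattern, not part of the SDK.
async function evaluateWithFallback(evaluateFn, { timeoutMs, failOpen }) {
  const timeout = new Promise((resolve) =>
    setTimeout(() => resolve(null), timeoutMs)
  );
  const result = await Promise.race([evaluateFn().catch(() => null), timeout]);
  if (result === null) {
    // Evaluation unavailable: apply your chosen fallback.
    return { enforcementAction: failOpen ? "ALLOW" : "BLOCK", degraded: true };
  }
  return result;
}
```

Fail-closed is the safer default for regulated workloads; fail-open keeps your product available when the governance plane is down.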
Next Steps
- OpenAI Integration — Same pattern for GPT models
- Anthropic Integration — Same pattern for Claude models
- JavaScript SDK — Full SDK reference
- Your First Policy — Create policies to evaluate against