Gemini Integration
Monitor and govern Google Gemini AI model usage.
Overview
The Gemini integration acts as a proxy between your application and the Google Gemini API. Requests pass through Aguardic for policy evaluation before reaching Gemini. If a request violates a policy, it can be blocked before it ever leaves your infrastructure.
The proxy supports all Gemini API endpoints. Policy evaluation is triggered on content generation endpoints (any path containing :generateContent). Other endpoints pass through without evaluation.
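The routing rule above can be sketched as a simple path check. This is illustrative only (the function name and the exact matching logic are assumptions, not Aguardic's implementation); it mirrors the stated behavior that any path containing `:generateContent` is evaluated and everything else passes through:

```typescript
// Sketch of the proxy's routing rule: only content generation
// paths trigger policy evaluation; all other paths pass through.
function requiresPolicyEvaluation(path: string): boolean {
  // Matches e.g. /v1beta/models/gemini-2.0-flash:generateContent
  return path.includes(":generateContent");
}
```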
Setup
1. Create a Gemini Integration
Navigate to Integrations in the Aguardic dashboard, click Add Integration, and select Gemini. Provide your Google AI API key and give the integration a name.
Aguardic stores your API key securely (encrypted with AES-256-GCM). When you create the integration, you receive a proxy URL and an Aguardic API key.
Store your Aguardic API key securely. It is shown only once. If you lose it, revoke it and create a new one.
2. Bind Policies
Go to Policy Bindings and bind your governance policies to the Gemini integration.
3. Replace Your Base URL
Point your application at the Aguardic proxy URL instead of the Gemini API directly.
Proxy URL
https://api.aguardic.com/v1/integrations/gemini/proxy/{integrationId}
Replace {integrationId} with the ID returned when you created the integration.
Never expose the proxy URL in client-side code. The proxy should only be called from your server to prevent API key leakage.
How the Proxy Works
- Your application sends a request to the Aguardic proxy URL
- Aguardic extracts text content from the request (content parts)
- Input evaluation: Content is evaluated against all bound policies
  - If BLOCK: returns `403` with violation details. The request never reaches Gemini.
  - If APPROVAL_REQUIRED: returns `403` with a review request ID for polling.
  - If ALLOW or WARN: the request is forwarded to Gemini with your stored API key
- The Gemini response is streamed back to your application
- Output evaluation: The response content is evaluated asynchronously (up to 1MB buffer)
Code Example
Since Aguardic stores your Gemini API key, you authenticate to the proxy with your Aguardic API key:
```typescript
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({
  apiKey: process.env.AGUARDIC_API_KEY, // Your Aguardic API key
  httpOptions: {
    baseUrl:
      "https://api.aguardic.com/v1/integrations/gemini/proxy/YOUR_INTEGRATION_ID",
  },
});

const response = await ai.models.generateContent({
  model: "gemini-2.0-flash",
  contents: "Summarize our compliance requirements",
});

console.log(response.text);
```

Or with curl:
```bash
curl -X POST "https://api.aguardic.com/v1/integrations/gemini/proxy/YOUR_INTEGRATION_ID/v1beta/models/gemini-2.0-flash:generateContent" \
  -H "Authorization: Bearer ag_live_abc123def456" \
  -H "Content-Type: application/json" \
  -d '{
    "contents": [
      {
        "parts": [
          { "text": "Summarize our compliance requirements" }
        ]
      }
    ]
  }'
```

Input Evaluation
Before forwarding to Gemini, Aguardic extracts text from your request:
- Contents: Text parts from the `contents` array are concatenated and evaluated
- Multimodal requests: `inline_data` parts are detected. Image MIME types are flagged as `hasImages`, other types as `hasFiles`, with specific MIME types tracked in `fileTypes`
The evaluation input also includes the model name and endpoint path.
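The extraction described above can be sketched as follows. The function name and return shape are assumptions for illustration; the part shapes follow the Gemini request format, and the `hasImages`/`hasFiles`/`fileTypes` flags follow the bullets above:

```typescript
// Sketch of input extraction: concatenate text parts and flag
// inline_data parts by MIME type (not Aguardic's actual code).
interface Part {
  text?: string;
  inline_data?: { mime_type: string; data: string };
}
interface Content {
  parts: Part[];
}

function extractForEvaluation(contents: Content[]) {
  const texts: string[] = [];
  const fileTypes: string[] = [];
  let hasImages = false;
  let hasFiles = false;

  for (const content of contents) {
    for (const part of content.parts) {
      if (part.text) texts.push(part.text);
      if (part.inline_data) {
        fileTypes.push(part.inline_data.mime_type);
        if (part.inline_data.mime_type.startsWith("image/")) hasImages = true;
        else hasFiles = true;
      }
    }
  }
  return { text: texts.join("\n"), hasImages, hasFiles, fileTypes };
}
```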
Output Evaluation
After Gemini responds, the output content is evaluated asynchronously:
- Response body is buffered during streaming (up to 1MB)
- Text from `candidates[0].content.parts[0].text` is extracted and evaluated
- For SSE streaming, text parts are assembled from each chunk
- Output violations are logged but do not block the response (already streamed)
- Output evaluation does not count against your evaluation quota
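The SSE assembly step can be sketched like this. The chunk shape follows the `candidates[0].content.parts[0].text` path above; the function itself is an illustration, not the internal buffering logic:

```typescript
// Sketch of assembling response text from buffered SSE chunks.
// Each "data: " line carries one JSON chunk in Gemini's response shape.
function assembleSseText(rawSse: string): string {
  const pieces: string[] = [];
  for (const line of rawSse.split("\n")) {
    if (!line.startsWith("data: ")) continue;
    let chunk: any;
    try {
      chunk = JSON.parse(line.slice(6));
    } catch {
      continue; // skip non-JSON keep-alive or terminator lines
    }
    const text = chunk?.candidates?.[0]?.content?.parts?.[0]?.text;
    if (text) pieces.push(text);
  }
  return pieces.join("");
}
```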
Streaming Support
Streaming responses are fully supported. Aguardic streams chunks to your application in real time while buffering for post-response evaluation. If the response exceeds 1MB, output evaluation is skipped.
Error Responses
When a request is blocked, the proxy returns a 403 in Gemini's error format:
```json
{
  "error": {
    "code": 403,
    "message": "Request blocked by policy. 2 violation(s) detected.",
    "status": "PERMISSION_DENIED",
    "run_id": "a1b2c3d4-e5f6-7890-abcd-ef1234567890",
    "review_request_id": null,
    "poll_url": null
  }
}
```

When approval is required, `review_request_id` and `poll_url` are populated so you can poll for the review decision.
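One way to branch on that error body client-side is sketched below. The field names come from the example response; the shape of the polling endpoint's own response is not documented here, so this only classifies the rejection and hands back the `poll_url` rather than implementing the poll loop:

```typescript
// Sketch: distinguish a hard BLOCK from an APPROVAL_REQUIRED hold
// using the 403 body returned by the proxy.
interface ProxyError {
  error: {
    code: number;
    message: string;
    status: string;
    run_id: string;
    review_request_id: string | null;
    poll_url: string | null;
  };
}

type Action =
  | { kind: "blocked"; message: string }
  | { kind: "awaiting_approval"; pollUrl: string };

function classifyRejection(body: ProxyError): Action {
  const { review_request_id, poll_url, message } = body.error;
  if (review_request_id && poll_url) {
    // APPROVAL_REQUIRED: poll poll_url (authenticated with your
    // Aguardic API key) until the review is decided, then retry.
    return { kind: "awaiting_approval", pollUrl: poll_url };
  }
  // BLOCK: surface the violation details to the caller.
  return { kind: "blocked", message };
}
```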
Enforcement Modes
- BLOCK -- Returns `403` before the request reaches Gemini. The request is never sent.
- APPROVAL_REQUIRED -- Returns `403` with a review request. The request is held until approved.
- WARN -- Forwards the request to Gemini. Violations are logged in Aguardic for review.
- MONITOR_ONLY -- Forwards the request to Gemini. Violations are logged silently.
Next Steps
- OpenAI Integration -- Same proxy pattern for OpenAI GPT models
- Anthropic Integration -- Same proxy pattern for Anthropic Claude
- Your First Policy -- Create policies to evaluate against