
SOC 2 for AI: What Controls Your Auditor Is About to Ask About

SOC 2 hasn't changed, but how auditors apply the Trust Services Criteria to AI features has. Here's what to expect and how to prepare your evidence for AI-specific controls across security, processing integrity, confidentiality, and privacy.

Aguardic Team·March 10, 2026·9 min read

Your SOC 2 audit is coming up. You've been through this before — access controls, encryption, incident response, change management. You know the drill. But this year, your auditor is going to ask questions you haven't heard before.

"How do you monitor the outputs of your AI systems?" "What controls govern AI-generated content in your product?" "Can you demonstrate continuous compliance for your AI features?"

SOC 2 hasn't changed. The Trust Services Criteria are the same five categories they've always been: security, availability, processing integrity, confidentiality, and privacy. What's changed is how auditors apply those criteria to companies that ship AI features. AI introduces new risk vectors that map directly to existing Trust Services Criteria — and auditors are catching up.

If you're a B2B SaaS company with AI features going through SOC 2, here's what to expect and how to prepare.

How AI Maps to Trust Services Criteria

SOC 2 isn't prescriptive about AI. There's no "AI controls" section in the framework. Instead, auditors evaluate your AI systems through the lens of existing criteria — and AI creates specific risks within each category.

Security (CC6, CC7)

The security principle requires that information and systems are protected against unauthorized access, unauthorized disclosure, and damage.

What auditors are asking about AI:

How do you prevent sensitive data from appearing in AI outputs? If your AI system processes customer data and generates responses, the risk is that the model surfaces sensitive information the recipient shouldn't see. An AI assistant that references one customer's data while responding to a different customer is a security failure.

How do you protect against adversarial attacks on AI systems? Prompt injection, jailbreaking, and data extraction attempts are security risks specific to AI systems. Your auditor wants to know what controls detect and mitigate these attacks.

How do you secure AI model access and API credentials? AI systems typically connect to third-party model providers via API. Those credentials need the same lifecycle management as any other system credential — rotation, least privilege, monitoring.
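As a minimal sketch of the credential-hygiene half of this control: keep provider keys out of source code by resolving them at runtime from the environment (populated by a secrets manager at deploy time), so rotation never requires a code change. The variable name here is illustrative, not a standard.

```python
import os

def load_model_api_key(env_var: str = "MODEL_PROVIDER_API_KEY") -> str:
    """Fetch the model provider credential from the environment.

    Reading the key from the environment (injected by a secrets
    manager at deploy time) keeps it out of source control and lets
    rotation happen without a code change. The env var name is a
    hypothetical example.
    """
    key = os.environ.get(env_var)
    if not key:
        raise RuntimeError(f"{env_var} is not set; check the secrets manager")
    return key
```

Least privilege and monitoring still live outside this snippet, in how the secret is scoped and how its use is logged.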

Evidence to prepare: AI output monitoring logs showing evaluation of outputs for sensitive data leakage. Adversarial attack detection rules and incident records. API credential management documentation for AI model providers.

Availability (A1)

The availability principle requires that information and systems are available for operation and use as committed.

What auditors are asking about AI:

What happens when your AI model provider experiences an outage? If your product depends on OpenAI, Anthropic, or another provider, how does your product behave when that provider is unavailable? Graceful degradation, fallback mechanisms, and uptime monitoring of AI dependencies are relevant controls.
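A graceful-degradation wrapper can be sketched like this. The `primary` and `fallback` callables stand in for your own provider client code (hypothetical, not any specific SDK), and a response slower than the latency budget is treated as an outage for availability purposes.

```python
import time

class ProviderUnavailable(Exception):
    """Raised when the primary provider fails or exceeds its latency budget."""

def complete_with_fallback(prompt, primary, fallback, max_latency_s=10.0):
    """Try the primary model provider; degrade gracefully on failure.

    `primary` and `fallback` are callables wrapping provider API calls
    (illustrative — substitute your own client code). The returned
    metadata lets uptime dashboards record every failover.
    """
    start = time.monotonic()
    try:
        result = primary(prompt)
        if time.monotonic() - start > max_latency_s:
            # Technically "up" but too slow counts as unavailable.
            raise ProviderUnavailable("primary exceeded latency budget")
        return {"text": result, "provider": "primary", "degraded": False}
    except Exception:
        # Failover path: serve a response from the secondary provider
        # and mark the result as degraded for monitoring.
        return {"text": fallback(prompt), "provider": "fallback", "degraded": True}
```

The `degraded` flag is what turns a silent failover into auditable availability evidence.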

How do you monitor AI system performance? Response latency, error rates, and throughput for AI components should be monitored alongside your other system components. An AI feature that's technically "up" but responding with 30-second latency is functionally unavailable.

Evidence to prepare: AI provider uptime monitoring dashboards. Fallback mechanism documentation. AI system performance metrics over the reporting period.

Processing Integrity (PI1)

The processing integrity principle requires that system processing is complete, valid, accurate, timely, and authorized.

This is where AI gets the most scrutiny.

What auditors are asking about AI:

How do you ensure AI outputs are accurate? Language models hallucinate. They generate plausible-sounding content that may be factually wrong. If your product uses AI to generate reports, summaries, recommendations, or any content that users rely on for decisions, processing integrity requires controls that verify accuracy.

How do you validate AI-generated content before delivery? What policies govern what your AI can and cannot say? Are AI outputs evaluated against accuracy standards, or are they delivered to users without verification? If your AI generates content that includes claims, numbers, or recommendations, what checks ensure those outputs are substantiated?

How do you handle AI outputs that don't meet quality standards? When a violation is detected — an inaccurate claim, an off-brand response, a hallucinated statistic — what happens? Is it blocked, flagged, or delivered anyway? Your violation workflow (detection, classification, assignment, resolution) is processing integrity evidence.
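The evaluate-then-gate pattern described above can be sketched as follows. This is an illustrative shape, not a specific product's API: each policy is a named check, blocking violations stop delivery, and non-blocking ones are flagged for the review queue.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Policy:
    name: str
    check: Callable[[str], bool]   # returns True when the output violates the policy
    blocking: bool                 # block delivery vs. flag for review

@dataclass
class Verdict:
    deliver: bool
    violations: List[str] = field(default_factory=list)

def evaluate_output(text: str, policies: List[Policy]) -> Verdict:
    """Run every policy against an AI output before delivery.

    Blocking violations stop delivery; non-blocking ones are flagged
    so the violation workflow (classification, assignment, resolution)
    can pick them up. Each verdict should also be logged as an
    evaluation record for audit evidence.
    """
    verdict = Verdict(deliver=True)
    for policy in policies:
        if policy.check(text):
            verdict.violations.append(policy.name)
            if policy.blocking:
                verdict.deliver = False
    return verdict
```

The same verdict object feeds both the delivery decision and the evidence trail, which is what makes this a processing-integrity control rather than just a filter.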

Evidence to prepare: Policy definitions that govern AI output quality and accuracy. Evaluation logs showing continuous monitoring of AI outputs. Violation records with resolution documentation. False positive tracking and policy refinement records.

Confidentiality (C1)

The confidentiality principle requires that information designated as confidential is protected as committed.

What auditors are asking about AI:

How do you prevent confidential data from leaking through AI systems? If your AI system has access to confidential customer data (for context, personalization, or processing), what prevents that data from appearing in outputs to unauthorized recipients? Cross-customer data leakage through AI is a specific confidentiality risk.

What data do you send to third-party AI providers? When your system sends a prompt to OpenAI or Anthropic, what data is included? If confidential customer data is part of the prompt context, your data processing agreements with the AI provider must cover confidentiality obligations.

Do AI model providers retain or train on your data? Document your agreements with model providers explicitly. Most enterprise-grade AI APIs have zero-retention and no-training options, but you need to confirm and document these settings.

Evidence to prepare: Data flow diagrams showing what data enters AI system prompts. Data processing agreements with AI providers confirming no-training and retention policies. Controls preventing cross-customer data inclusion in AI prompts.

Privacy (P1)

The privacy principle requires that personal information is collected, used, retained, disclosed, and disposed of in conformity with commitments.

What auditors are asking about AI:

How do you detect PII in AI-generated outputs? Your AI system might reference personal information in its responses — names, email addresses, phone numbers, or other identifiers. Detection controls that evaluate outputs for PII before delivery are relevant privacy controls.
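A pattern-based detector is the simplest version of this control. The patterns below are deliberately minimal examples; production PII detection typically layers many more patterns plus contextual or NER-based detection on top.

```python
import re

# Illustrative patterns only — real detectors cover far more identifier
# types and use contextual detection alongside regexes.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "us_phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def detect_pii(text: str) -> dict:
    """Return PII matches by category, for evaluating an output before delivery."""
    hits = {}
    for label, pattern in PII_PATTERNS.items():
        found = pattern.findall(text)
        if found:
            hits[label] = found
    return hits
```

A non-empty result would route the output into the same violation workflow as any other policy failure.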

How do you handle data subject requests for AI-processed data? If a customer exercises their right to access or delete their data, does that include data processed by your AI systems? Can you identify and delete AI interaction logs, evaluation records, and any derived data?

Does your privacy notice cover AI data processing? Your privacy policy should disclose how AI systems process personal data, including what data is sent to third-party model providers and how long interaction data is retained.

Evidence to prepare: PII detection rules and evaluation logs. Data subject request handling procedures that include AI system data. Privacy notice language covering AI data processing.

The Evidence Gap

Here's the challenge: most SaaS companies with AI features have some governance practices in place, but they don't generate the structured evidence that SOC 2 audits require.

A developer manually reviewing AI outputs before release is a control. But if there's no log of what was reviewed, what criteria were applied, and what the outcome was, it doesn't produce evidence. Your auditor can't examine a process that exists only in someone's head.

What auditors want to see is documentation that demonstrates your controls are designed effectively (they address the relevant risks), operating effectively over the reporting period (not just at a point in time), and producing evidence of their operation (logs, records, reports that can be examined).

For AI governance specifically, this means evaluation records — logs showing that AI outputs were checked against defined policies, with results and any resulting actions documented. It means violation records — instances where governance rules caught issues, with classification, assignment, and resolution details. It means policy definitions — the rules themselves, version-controlled, with evidence of review and updates. And it means monitoring metrics — evaluation volumes, violation rates, resolution times, and SLA compliance tracked over the reporting period.
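As a sketch of what an audit-examinable evaluation record might carry (field names are illustrative, not a standard schema): each record ties one output to one version-controlled policy, the result, and the action taken, serialized append-only so the reporting-period export is trivial.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class EvaluationRecord:
    """One examinable record per evaluated AI output (illustrative schema)."""
    output_id: str
    policy_id: str
    policy_version: str   # ties the result to a version-controlled rule
    result: str           # "pass" | "violation"
    action: str           # "delivered" | "blocked" | "flagged"
    evaluated_at: str     # ISO 8601 timestamp, UTC

def record_evaluation(output_id, policy_id, policy_version, result, action):
    """Serialize one evaluation as a JSON line for an append-only log."""
    rec = EvaluationRecord(
        output_id, policy_id, policy_version, result, action,
        datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(rec))
```

The `policy_version` field is what lets an auditor connect a given result back to the exact rule text that was in force at the time.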

Preparing for Your Audit

90 Days Before: Foundation

Inventory your AI systems. List every AI feature in your product, what models they use, what data they process, and what outputs they generate. This becomes the basis for your auditor's risk assessment.

Map AI risks to Trust Services Criteria. For each AI system, identify the specific risks it introduces under each applicable criterion. A customer-facing AI chatbot has processing integrity risks (accuracy), confidentiality risks (data leakage), and privacy risks (PII in outputs). An AI code assistant has security risks (secrets in suggestions) and processing integrity risks (code quality).

Assess your current controls. For each identified risk, document what controls exist today. Are AI outputs monitored? Is there PII detection? Are model provider agreements documented? Be honest about gaps — discovering them now gives you time to close them.

60 Days Before: Implementation

Close the evidence gaps. If you have governance practices that don't generate evidence, formalize them. This usually means implementing automated policy evaluation with logging, establishing a violation workflow with documented resolution, and creating compliance reports that aggregate governance data.

Formalize your policies. Your AI governance policies should be documented, version-controlled, and approved. An auditor should be able to see what rules govern your AI systems, when those rules were last reviewed, and who approved them.

Document your AI data flows. Create data flow diagrams that show how customer data enters your AI systems, where it goes during processing (including third-party model providers), and how it's handled in outputs. This addresses security, confidentiality, and privacy criteria simultaneously.

30 Days Before: Evidence Collection

Generate your evidence package. Pull evaluation logs, violation records, resolution documentation, and monitoring metrics for the reporting period. If you have a governance platform, this should be a report generation exercise. If not, compile manually from application logs and incident records.
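If you are compiling manually, the aggregation step can be as simple as this sketch: fold evaluation records (dicts with illustrative `result` and `action` fields, as an evaluation log might emit) into the headline metrics an auditor will ask for.

```python
def summarize_evidence(records):
    """Aggregate evaluation records into reporting-period metrics.

    `records` is an iterable of dicts with "result" and "action"
    fields (illustrative — match them to your own log schema).
    """
    total = violations = blocked = 0
    for rec in records:
        total += 1
        if rec["result"] == "violation":
            violations += 1
            if rec["action"] == "blocked":
                blocked += 1
    return {
        "evaluations": total,
        "violations": violations,
        "blocked": blocked,
        "violation_rate": round(violations / total, 4) if total else 0.0,
    }
```

Running this over the full reporting period, rather than a pre-audit window, is what demonstrates that the control operated continuously.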

Review your model provider agreements. Ensure data processing agreements with AI providers are current and cover data handling, retention, and training exclusions. Have these ready for your auditor to review.

Prepare walkthrough documentation. Your auditor will likely request a walkthrough of your AI governance controls. Prepare a demonstration that shows policy configuration, real-time evaluation of AI outputs, violation detection and workflow, and evidence retrieval.

The Ongoing Obligation

SOC 2 isn't a one-time certification — it's an ongoing commitment. Your AI governance controls need to operate continuously throughout the reporting period, not just during audit season.

This means evaluation runs every day, not just before the audit window. Violations are tracked and resolved year-round. Policies are reviewed and updated as your AI features evolve. Evidence accumulates continuously, so when the audit period arrives, the evidence package is a report, not a scramble.

The companies that treat AI governance as continuous infrastructure — not annual audit preparation — spend less time on compliance and produce stronger evidence when it matters.
