AI Systems
Register and govern AI-powered applications with risk classification, integration linking, and policy enforcement.
Overview
AI Systems let you register each AI-powered application your organization operates — chatbots, code assistants, document analyzers, autonomous agents — as a governed resource. Each AI system tracks what data it processes, who it affects, and which integrations it uses, giving you a centralized view of your AI landscape.
Why Register AI Systems?
- Risk visibility — See all AI applications in one place with risk classifications
- Compliance alignment — Map systems to regulatory requirements (EU AI Act, HIPAA, SOC 2)
- Integration grouping — Link multiple integrations to a single system for unified governance
- Audit readiness — Maintain a registry of AI systems with owners, data categories, and deployment status
Creating an AI System
Navigate to AI Systems in the dashboard and click New System.
Required Fields
- Name — A descriptive name (e.g., "Customer Support Chatbot", "Code Review Assistant")
Optional Fields
- Description — What the system does, how it's used
- Deployment Status — Current lifecycle stage:
  - DEVELOPMENT — In development, not yet live
  - STAGING — Testing/pre-production
  - PRODUCTION — Live and serving users
  - DEPRECATED — Being phased out
- System Owner — The person accountable for this system
- Data Categories — What data the system processes (e.g., PII, PHI, FINANCIAL, BIOMETRIC, CONFIDENTIAL)
- Affected Subjects — Who is impacted (e.g., CUSTOMERS, EMPLOYEES, PATIENTS, MINORS)
- Tags — Searchable labels for organization
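Conceptually, the fields above describe a single system record. The sketch below models that record as a Python dataclass; the class and field names are illustrative (the dashboard form is the supported way to create a system, and the product's internal representation may differ).

```python
from dataclasses import dataclass, field

@dataclass
class AISystem:
    # Required
    name: str
    # Optional fields from the form above
    description: str = ""
    deployment_status: str = "DEVELOPMENT"  # DEVELOPMENT | STAGING | PRODUCTION | DEPRECATED
    system_owner: str = ""
    data_categories: list[str] = field(default_factory=list)   # e.g. ["PII", "PHI"]
    affected_subjects: list[str] = field(default_factory=list)  # e.g. ["CUSTOMERS"]
    tags: list[str] = field(default_factory=list)

# A registered production chatbot, as in the examples above
chatbot = AISystem(
    name="Customer Support Chatbot",
    deployment_status="PRODUCTION",
    data_categories=["PII"],
    affected_subjects=["CUSTOMERS"],
)
```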
Risk Classification
Each AI system has a risk classification that reflects its regulatory exposure:
| Classification | When It Applies |
|----------------|-----------------|
| MINIMAL | Low-risk, minimal data handling |
| LIMITED | Some regulatory requirements (e.g., confidential data, public-facing) |
| HIGH_RISK | Sensitive data or subjects — PHI with patients, PII with customers, biometric data, student records, financial data |
| UNACCEPTABLE | Should not be deployed in current configuration |
| UNCLASSIFIED | Default — not yet reviewed |
Auto-Classification
Aguardic can suggest a risk classification based on the data categories and affected subjects you select. Click Suggest Classification on the system detail page — the engine analyzes the combination and recommends a tier along with relevant compliance policy packs.
Start by entering your data categories and affected subjects, then use auto-classification to get a suggested risk tier. You can always override the suggestion.
For example:
- PHI + PATIENTS → HIGH_RISK (recommends HIPAA compliance pack)
- FINANCIAL + CUSTOMERS → HIGH_RISK (recommends PCI-DSS, financial controls)
- PII + PUBLIC → HIGH_RISK (recommends GDPR data protection)
- CONFIDENTIAL data → LIMITED
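The examples above can be read as a rule set mapping category/subject combinations to a tier. The following is a toy version of that logic; the real classification engine considers more combinations than these four rules, and the function name is hypothetical.

```python
def suggest_classification(data_categories, affected_subjects):
    """Toy rule set mirroring the examples above.

    Returns a (risk_tier, recommended_policy_packs) pair.
    """
    cats = set(data_categories)
    subs = set(affected_subjects)

    if "PHI" in cats and "PATIENTS" in subs:
        return "HIGH_RISK", ["HIPAA"]
    if "FINANCIAL" in cats and "CUSTOMERS" in subs:
        return "HIGH_RISK", ["PCI-DSS"]
    if "PII" in cats and "PUBLIC" in subs:
        return "HIGH_RISK", ["GDPR"]
    if "CONFIDENTIAL" in cats:
        return "LIMITED", []
    # Nothing matched: leave the default tier in place
    return "UNCLASSIFIED", []

tier, packs = suggest_classification(["PHI"], ["PATIENTS"])  # HIGH_RISK, HIPAA pack
```

As the note above says, the suggestion is advisory: you can always override it on the system detail page.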
Linking Integrations
An AI system can use multiple integrations. For example, a customer support chatbot might use:
- An OpenAI integration for LLM responses
- A Slack integration for channel monitoring
- An Agent integration for tool call governance
To link integrations:
- Open the AI system detail page
- Go to the Integrations tab
- Click Link Integration and select from your existing integrations
- Policies bound to those integrations are now associated with this AI system
Violations from linked integrations roll up to the AI system, giving you a unified view of governance outcomes across all components of a single application.
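The roll-up described above amounts to aggregating per-integration violation counts under the parent system. A minimal sketch, assuming a plain-dict representation (the structure and names here are illustrative, not a product API):

```python
def rollup_violations(system, violations_by_integration):
    """Collect violation counts from each linked integration into
    one per-system view, defaulting to 0 for quiet integrations."""
    return {
        integration: violations_by_integration.get(integration, 0)
        for integration in system["integrations"]
    }

# The chatbot example above: three linked integrations
chatbot = {
    "name": "Customer Support Chatbot",
    "integrations": ["openai", "slack", "agent"],
}
counts = rollup_violations(chatbot, {"openai": 2, "slack": 0, "agent": 5})
total = sum(counts.values())  # unified view across all components
```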
Linking Entities
AI systems can also be linked to governed entities (customers, patients, vendors) to track which subjects are processed by which systems.
- Open the AI system detail page
- Go to the Entities tab
- Click Link Entity and select from your entity registry
This enables audit questions like "which AI systems process patient data?" or "what systems interact with this specific customer?"
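An audit question like "which AI systems process patient data?" reduces to a filter over the registry of systems and their linked data categories. A minimal in-memory sketch (illustrative only; in the product you would answer this from the dashboard):

```python
def systems_processing(registry, data_category):
    """Return the names of registered systems whose linked data
    categories include the given category."""
    return [s["name"] for s in registry if data_category in s["data_categories"]]

# A tiny example registry; names are hypothetical
registry = [
    {"name": "Clinical Notes Summarizer", "data_categories": ["PHI"]},
    {"name": "Code Review Assistant", "data_categories": ["CONFIDENTIAL"]},
]
```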
Monitoring
The AI Systems list page shows:
- All registered systems with their risk classification and deployment status
- Integration count per system
- Quick filters by risk tier and deployment status
- Search by name or description
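The quick filters on the list page behave like a conjunction of optional predicates over risk tier and deployment status. A small sketch of that behavior (field names are assumptions, not the product's schema):

```python
def filter_systems(systems, risk=None, status=None):
    """Keep systems matching every filter that was supplied;
    a filter left as None matches everything."""
    return [
        s for s in systems
        if (risk is None or s["risk"] == risk)
        and (status is None or s["status"] == status)
    ]

systems = [
    {"name": "Support Chatbot", "risk": "HIGH_RISK", "status": "PRODUCTION"},
    {"name": "Doc Analyzer", "risk": "LIMITED", "status": "STAGING"},
]
high_risk = filter_systems(systems, risk="HIGH_RISK")
```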
Next Steps
- Your First Policy — Create policies to govern your AI systems
- Integrations — Set up the integrations your AI systems use
- Audit Trail — Review violations across your AI systems