The NIST AI RMF provides a voluntary framework for managing AI risks. Aguardic operationalizes the four core functions — Govern, Map, Measure, Manage — with enforceable policies and continuous evidence.
Pre-built NIST AI RMF policy pack — 4 policies, 22 enforceable rules
AI System Registry with risk categorization aligned to NIST profiles
Continuous measurement and evidence generation
14-day free trial · No credit card · NIST-aligned policy templates
[Dashboard preview: 84% compliance score · 5 violations, 3 open · 4/4 policies installed · Policy Coverage and Requirements Coverage views]
No single tool covers every requirement. Here's exactly what Aguardic covers and what you'll need alongside us.
Coverage at a glance: 9 covered · 3 partial · 2 not covered · 14 total requirements
GOVERN 1 — Policies & Processes
Establish organizational AI risk management policies, processes, and procedures
Policy-as-code architecture provides versioned, enforceable AI governance policies. Marketplace packs offer pre-built policy templates aligned to NIST functions.
Evidence: Policy definitions, version history, enforcement configuration
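To make "policy-as-code" concrete, here is a minimal sketch of a versioned, machine-readable policy record. The field names and structure are illustrative assumptions for this page, not Aguardic's actual schema:

```python
# Hypothetical policy-as-code record: a versioned, machine-readable policy.
# All field names are illustrative, not Aguardic's actual schema.
policy = {
    "id": "nist-govern-1-acceptable-use",
    "version": "1.2.0",          # versioned like code, so every change is auditable
    "function": "GOVERN",        # NIST AI RMF function this policy maps to
    "enforcement": "warn",       # block | warn | monitor
    "rules": [
        {"id": "no-unapproved-models", "severity": "high"},
        {"id": "require-system-registration", "severity": "critical"},
    ],
}

# Audit evidence falls out of the definition itself:
evidence = {
    "policy_id": policy["id"],
    "version": policy["version"],
    "rule_count": len(policy["rules"]),
}
print(evidence["rule_count"])  # 2
```

Because the policy is data, version history and enforcement configuration become reviewable artifacts rather than prose in a wiki.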
GOVERN 2 — Accountability
Establish accountability structures and mechanisms for AI risk management
Audit trails log who triggered evaluations and who reviewed escalations. Does not define organizational accountability roles or reporting structures.
Evidence: User action logs, escalation review records
GOVERN 3 — Workforce Diversity
Ensure workforce diversity and domain expertise in AI risk management
Requires organizational HR and team composition practices. Outside the scope of automated policy enforcement.
GOVERN 4 — Organizational Practices
Integrate AI risk management into broader enterprise risk management
Compliance dashboards and exportable reports integrate into enterprise risk reporting. Does not manage the broader enterprise risk management framework.
Evidence: Compliance reports, integration with enterprise dashboards
MAP 1 — Context & Intended Purpose
Establish context and define the intended purpose and use of AI systems
AI System Registry captures intended purpose, risk tier, data categories, target audience, and integration context for every registered AI system.
Evidence: AI System Registry records, system context documentation
MAP 2 — Categorization of AI Systems
Categorize AI systems by risk level and potential impacts
AI System Registry supports risk classification with configurable risk tiers. Systems are categorized by risk level, data sensitivity, and impact scope.
Evidence: Risk classification records, categorization history
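Risk-tier categorization of the kind described above can be sketched as a simple rule. The tiers and thresholds here are illustrative only, not Aguardic's actual classification logic:

```python
def categorize(data_sensitivity: str, impact_scope: str) -> str:
    """Assign a risk tier from data sensitivity and impact scope.
    Tier names and thresholds are illustrative, not a real product API."""
    high_sensitivity = data_sensitivity in {"pii", "phi", "financial"}
    broad_impact = impact_scope in {"customer-facing", "safety-critical"}
    if high_sensitivity and broad_impact:
        return "high"
    if high_sensitivity or broad_impact:
        return "medium"
    return "low"

print(categorize("phi", "customer-facing"))  # high
print(categorize("internal", "team-tool"))   # low
```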
MAP 3 — Benefits & Costs
Assess potential benefits and costs of AI system deployment
Requires business analysis and cost-benefit assessment processes. Outside the scope of automated governance.
MEASURE 1 — Appropriate Methods
Select and employ appropriate methods for measuring AI risks
Three-layer evaluation engine provides multiple measurement methods: deterministic rules for quantifiable metrics, semantic AI for qualitative assessment, knowledge RAG for contextual evaluation.
Evidence: Evaluation methodology logs, measurement records per layer
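A layered evaluation pipeline of this shape can be sketched as follows. Only the deterministic layer is implemented here; the semantic and RAG layers are stubbed out, since in practice they would call a model or a knowledge index. Everything below is an illustrative assumption, not Aguardic's engine:

```python
# Sketch of a layered evaluation pipeline (illustrative only).
def deterministic_layer(output: str) -> list[str]:
    """Layer 1: cheap, quantifiable rules that run on every output."""
    violations = []
    if len(output) > 500:
        violations.append("max-length-exceeded")
    if "ssn:" in output.lower():
        violations.append("possible-pii-leak")
    return violations

def evaluate(output: str) -> list[str]:
    # Deterministic rules run first because they are fast and exact.
    violations = deterministic_layer(output)
    # Layers 2 and 3 (semantic AI, knowledge RAG) would run here,
    # appending qualitative and contextual findings.
    return violations

print(evaluate("Customer SSN: 123-45-6789"))  # ['possible-pii-leak']
```

Ordering the layers cheapest-first means most outputs never need the more expensive qualitative checks.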
MEASURE 2 — AI Systems Evaluated
Evaluate AI systems for trustworthiness characteristics
Continuous policy evaluation assesses AI outputs against trustworthiness policies including safety, fairness, transparency, and accountability criteria.
Evidence: Continuous evaluation logs, trustworthiness assessment records
MEASURE 3 — Tracking Metrics
Track and report identified AI risks and metrics over time
Compliance dashboards track violation rates, enforcement actions, and risk metrics over time. Exportable reports for stakeholder review.
Evidence: Trend reports, violation metrics, compliance dashboard exports
MANAGE 1 — Risk Response
Prioritize and act on AI risks based on assessed impact and likelihood
Enforcement modes (Block/Warn/Monitor) provide tiered risk response. Policy severity levels (Critical/High/Medium/Low) enable risk-based prioritization.
Evidence: Enforcement action logs, risk-prioritized violation records
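The tiered response described above amounts to a mapping from severity to enforcement mode. This sketch uses an assumed severity-to-mode table; the actual mapping in Aguardic is configurable per policy:

```python
# Illustrative severity-to-mode table mirroring the Block/Warn/Monitor
# tiers described above; not Aguardic's actual defaults.
SEVERITY_TO_MODE = {
    "critical": "block",   # stop the action outright
    "high": "block",
    "medium": "warn",      # allow, but surface a warning
    "low": "monitor",      # log silently for later review
}

def respond(severity: str) -> str:
    # Unknown severities fall back to the least disruptive mode.
    return SEVERITY_TO_MODE.get(severity, "monitor")

print(respond("critical"))  # block
print(respond("low"))       # monitor
```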
MANAGE 2 — Risk Treatment
Plan and implement risk treatment strategies for AI systems
Policy packs implement risk treatment controls. Continuous enforcement ensures treatments remain active. Policy updates track treatment evolution.
Evidence: Policy enforcement records, treatment effectiveness logs
MANAGE 3 — Post-deployment Monitoring
Continue monitoring AI systems after deployment
Continuous post-deployment evaluation across all connected integrations. Real-time violation detection and alerting for deployed AI systems.
Evidence: Post-deployment evaluation logs, real-time alert records
MANAGE 4 — Incident Response
Establish processes for AI incident response and recovery
Enforcement actions provide an automated first response (block, warn, escalate). A full incident management workflow is on the Aguardic roadmap.
Evidence: Automated response logs, escalation records
Coverage mappings are based on Aguardic's current product capabilities mapped to NIST AI 100-1 framework functions and categories. The NIST AI RMF is a voluntary framework — validate these mappings against your organization's specific risk management requirements.
Four Core Functions
Govern — Establish AI governance policies, define roles and accountability structures, and create organizational processes for AI risk management.
Map — Identify and categorize AI systems, their operational contexts, potential impacts, and stakeholders. Build a complete AI system inventory.
Measure — Assess AI system performance, evaluate bias and fairness, quantify risk levels, and benchmark against organizational thresholds.
Manage — Implement risk controls, respond to incidents, track remediation, and continuously improve AI systems based on measurement data.
Does This Apply to You?
While the NIST AI RMF is voluntary today, it is becoming the de facto standard that auditors, customers, and regulators reference when evaluating AI governance maturity.
Get Started in Three Steps
One-click install. 4 policies with 22 rules mapped to Govern, Map, Measure, and Manage.
Browse in Marketplace
Register AI systems, categorize by risk, and document context and stakeholders.
Connect integrations. Policies enforce automatically. Evidence accumulates continuously.
Already have internal AI governance documents? Upload them and extract enforceable rules automatically.
Install policy templates mapped to NIST AI RMF functions, register your AI systems, and start generating evidence automatically.
14-day free trial · No credit card · NIST-aligned policy templates