SB 24-205 took effect on February 1, 2026. Developers and deployers of high-risk AI systems now have an affirmative duty to prevent algorithmic discrimination in consequential decisions — with penalties of up to $20,000 per violation, enforced by the Colorado Attorney General. Aguardic is the AI agent governance platform that enforces Colorado AI Act policies in real time and generates the impact assessment evidence you need to defend your compliance.
Pre-built Colorado AI Act policy pack — aligned with NIST AI RMF for rebuttable presumption of reasonable care
AI System Registry with high-risk classification for consequential decisions (employment, lending, housing, healthcare, education, insurance, legal, government services)
Continuous impact assessment evidence with full audit trails exportable for the Colorado Attorney General
14-day free trial · No credit card · Free Colorado AI Act policy pack
Dashboard preview (Policy Coverage): 82% Score · 3 Violations · 1 Open · 5/5 Policies
Requirements Coverage
No single tool covers every requirement. Here's exactly what Aguardic covers and what you'll need alongside us.
5 Covered · 3 Partial · 2 Not Covered · 10 Total
Sec. 6-1-1702(1) — Developer Duty of Reasonable Care
Developers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination arising from the intended and contracted uses of a high-risk AI system.
Continuous policy enforcement demonstrates active risk mitigation throughout the AI system lifecycle. Every policy evaluation is logged with full decision reasoning, creating a defensible record of reasonable care.
Evidence: Policy enforcement logs, continuous evaluation records, violation detection and remediation trail
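To make "logged with full decision reasoning" concrete, here is a minimal sketch of what a single logged policy evaluation could look like. The field names and values are illustrative assumptions, not Aguardic's actual log schema.

```python
# Illustrative sketch of one logged policy evaluation (hypothetical schema).
policy_evaluation_record = {
    "timestamp": "2026-04-02T14:03:27Z",
    "ai_system_id": "loan-underwriting-agent",    # hypothetical system identifier
    "policy": "colorado-ai-act/reasonable-care",  # hypothetical policy name
    "action": "score_credit_application",
    "decision": "allow",                          # allow | block | escalate
    "reasoning": (
        "No algorithmic-discrimination signal detected; "
        "required consumer notice attached to the decision."
    ),
    "statute_reference": "Sec. 6-1-1702(1)",
}
```

A chronological stream of records like this is what turns "reasonable care" from an assertion into an auditable trail.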
Sec. 6-1-1702(2) — Developer Documentation & Disclosure
Developers must provide deployers with a statement of intended uses, foreseeable risks, training data summary, performance evaluation, and mitigation measures.
AI System Registry captures intended purpose, risk classification, data categories, and deployment context. Requires manual enrichment for training data summary and performance evaluation documentation.
Evidence: AI System Registry exports, policy configuration records, risk classification documentation
Sec. 6-1-1702(4) — Disclosure of Algorithmic Discrimination to AG (90-Day Window)
Developers must disclose discovered algorithmic discrimination to the Colorado Attorney General and known deployers within 90 days of discovery or receipt of a credible report.
Real-time violation detection surfaces algorithmic discrimination signals as they occur. Full audit trail supports AG disclosure workflow, but does not automate notification to the Attorney General.
Evidence: Violation detection logs, incident timestamps, audit trail exports
Sec. 6-1-1703(2) — Deployer Risk Management Policy & Program
Deployers must implement an iterative risk management policy that identifies, documents, and mitigates known and reasonably foreseeable risks of algorithmic discrimination — reasonably aligned with NIST AI RMF or ISO 42001.
Pre-built NIST AI RMF and ISO 42001 policy packs align directly with the statute's rebuttable presumption framework under Sec. 6-1-1703(6). Policy versioning and continuous evaluation satisfy the "iterative process" requirement.
Evidence: NIST AI RMF policy pack, ISO 42001 policy pack, policy version history, continuous evaluation logs
Sec. 6-1-1703(3) — Annual Impact Assessments
Deployers must complete impact assessments annually and within 90 days of any substantial modification — covering purpose, intended use, deployed context, data categories, outputs, transparency measures, and monitoring.
AI System Registry and continuous evaluation data provide the foundation for impact assessments — intended purpose, data categories, deployment context, monitoring outputs, and violation history. Final assessment document requires human authorship.
Evidence: AI System Registry exports, continuous evaluation reports, violation trend analysis, policy coverage reports
Sec. 6-1-1703(4)(a) — Consumer Notice Before Consequential Decisions
When a high-risk AI system is used to make a consequential decision, deployers must notify the consumer with the system's purpose, contact information, description, and opt-out rights.
Pre-built Consumer Notice policy enforces disclosure requirements at the point of decision. Policy evaluation triggers on consequential decision endpoints and blocks actions that lack the required notice.
Evidence: Consumer notice policy evaluation logs, decision point audit trail, blocked action records
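As a rough illustration of blocking at the point of decision, the sketch below checks a proposed consequential decision for a complete consumer notice before allowing it. The function, field names, and category labels are assumptions for illustration, not Aguardic's API.

```python
# Minimal sketch: block consequential decisions that lack the Sec. 6-1-1703(4)(a) notice.
CONSEQUENTIAL_CATEGORIES = {
    "employment", "lending", "housing", "healthcare",
    "education", "insurance", "legal_services", "government_services",
}

REQUIRED_NOTICE_FIELDS = (
    "system_purpose", "deployer_contact", "system_description", "opt_out_rights",
)

def evaluate_consumer_notice(action: dict) -> str:
    """Return 'allow' or 'block' for a proposed decision (hypothetical policy logic)."""
    is_consequential = action.get("decision_category") in CONSEQUENTIAL_CATEGORIES
    notice = action.get("consumer_notice") or {}
    notice_complete = all(notice.get(field) for field in REQUIRED_NOTICE_FIELDS)
    if is_consequential and not notice_complete:
        return "block"  # withhold the action until the required notice is attached
    return "allow"
```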
Sec. 6-1-1703(4)(b) — Right to Correction and Appeal
For adverse consequential decisions, deployers must allow consumers to correct incorrect personal data and appeal the decision via human review where technically feasible.
Requires a data subject request workflow and human review queue outside the scope of policy enforcement. Aguardic's escalation modes can route decisions for human review, but the platform does not provide an end-to-end appeal workflow.
Sec. 6-1-1703(5) — Post-Deployment Monitoring
Deployers must monitor deployed high-risk AI systems for algorithmic discrimination throughout the lifecycle and annually review each deployment.
Continuous evaluation provides ongoing monitoring data. Every AI agent action is evaluated against active policies with full audit trails. Annual review workflow supported via exportable compliance reports.
Evidence: Continuous evaluation logs, compliance dashboard, annual review exports
Sec. 6-1-1703(7) — Public Website Disclosure
Deployers must publish a statement summarizing the high-risk AI systems they deploy, how risks of algorithmic discrimination are managed, and the nature/source/maintenance of data used.
AI System Registry exports generate the structured inventory required for the public statement. Policy enforcement records document risk management in the format regulators expect.
Evidence: AI System Registry public exports, policy summary reports, data source documentation
Sec. 6-1-1704 — Consumer-Facing AI Disclosure (Non-High-Risk)
Any AI system that interacts with a Colorado consumer must disclose that the consumer is interacting with AI, regardless of high-risk classification.
Requires integration at the consumer-facing UI layer, typically handled by product engineering rather than governance infrastructure. Aguardic can enforce backend policies that flag undisclosed AI interactions, but cannot inject UI disclosures.
Coverage mappings are based on Aguardic's current product capabilities mapped to Colorado AI Act (SB 24-205) requirements for high-risk artificial intelligence systems. These mappings should be validated with legal counsel for your specific use case. Compliance with NIST AI RMF or ISO 42001 creates a rebuttable presumption of reasonable care under Sec. 6-1-1703(6) of the Act.
Enforcement Timeline
Developers and deployers of high-risk artificial intelligence systems must exercise reasonable care to protect Colorado consumers from known or reasonably foreseeable risks of algorithmic discrimination. A rebuttable presumption of reasonable care is available to deployers who implement a risk management policy, complete impact assessments, and comply with disclosure requirements.
Penalty: Up to $20,000 per violation, enforced by the Colorado Attorney General
When a high-risk AI system makes or is a substantial factor in a consequential decision, consumers must be notified. Any AI system interacting with Colorado consumers must disclose the AI nature of the interaction. Adverse decisions trigger rights to correction, appeal, and explanation.
Penalty: Unfair trade practice under the Colorado Consumer Protection Act
Deployers must complete impact assessments annually and within 90 days of any substantial modification to a high-risk AI system. Records must be maintained for at least three years following final deployment.
Proposed amendment introduced in the August 2025 special legislative session. Would grant consumers who receive adverse decisions the right to request a list of up to 20 personal characteristics that most influenced the decision, and introduce joint and several liability for developers and deployers alongside safe harbor provisions. Passed committee 4-3. Status pending as of April 2026.
How Aguardic Helps
Classify every AI system by Colorado AI Act criteria — whether it makes or is a substantial factor in consequential decisions across employment, lending, housing, healthcare, education, insurance, legal services, or government services. AI-assisted risk classification with full lifecycle tracking.
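For illustration only, a registry entry for a high-risk system might capture fields like the following; the structure is an assumption, not Aguardic's actual export format.

```python
# Hypothetical AI System Registry entry with high-risk classification.
ai_system_entry = {
    "name": "resume-screening-agent",
    "intended_purpose": "Rank job applicants for recruiter review",
    "consequential_decision_category": "employment",
    "substantial_factor_in_decision": True,   # drives the high-risk classification
    "risk_classification": "high-risk",
    "data_categories": ["resume text", "employment history"],
    "deployment_context": "Internal HR workflow; Colorado applicants included",
    "lifecycle_stage": "deployed",
}
```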
Pre-built policy packs aligned with NIST AI RMF and ISO 42001 — the two frameworks the statute designates as acceptable compliance benchmarks. Rebuttable presumption of reasonable care built in by default. Policy versioning tracks every change for discovery defense.
Every AI agent action, policy evaluation, and violation is logged with full audit trails. Generate annual impact assessments from continuous data rather than point-in-time snapshots. Exportable records ready for the Colorado Attorney General on demand.
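As a sketch of how exported records could feed an annual review, the snippet below tallies violations per system from a hypothetical evaluation export; the record shape is an assumption, not Aguardic's schema.

```python
# Hypothetical: summarize exported evaluation records for the Sec. 6-1-1703(3) annual review.
from collections import Counter

evaluation_export = [
    {"ai_system_id": "loan-underwriting-agent", "decision": "allow"},
    {"ai_system_id": "loan-underwriting-agent", "decision": "block"},
    {"ai_system_id": "resume-screening-agent",  "decision": "allow"},
]

violations_by_system = Counter(
    record["ai_system_id"]
    for record in evaluation_export
    if record["decision"] == "block"
)

print(f"Evaluations reviewed: {len(evaluation_export)}")
for system, count in violations_by_system.items():
    print(f"  {system}: {count} violation(s) flagged for remediation")
```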
The Cost of Non-Compliance
$20,000
Maximum penalty per individual violation
Enforceable now
Per Consumer
Violations can stack per affected Colorado consumer, not per AI system. At the $20,000 statutory maximum, 1,000 affected consumers represents up to $20 million in potential exposure, so a single discriminatory model affecting thousands of consumers creates exposure in the tens of millions.
Enforceable now
Unfair Trade
Violations are deemed unfair or deceptive trade practices under the Colorado Consumer Protection Act, opening the door to additional state remedies and to consumer-facing reputational damage.
Enforceable now
The Colorado Attorney General has exclusive enforcement authority. There is no private right of action under SB 24-205, but the AG's office has signaled active monitoring beginning February 2026.
Does This Apply to You?
Consequential decisions cover employment, lending, housing, healthcare, education, insurance, legal services, and government services.
Both developers and deployers have compliance obligations. The jurisdictional test is not where the AI was built or where the company is headquartered — it is whether the AI system is deployed in connection with consequential decisions about Colorado residents. Small deployers (fewer than 50 full-time employees) may qualify for a limited exemption if they meet specific conditions.
Organizations that proactively demonstrate Colorado AI Act compliance gain a competitive advantage in enterprise sales — particularly in regulated industries where algorithmic discrimination claims carry the highest reputational risk.
Get Compliant in Three Steps
Inventory every AI system that makes or influences consequential decisions. Classify by consequential decision category — employment, lending, housing, healthcare, education, insurance, legal services, or government services. AI-assisted classification suggestions built in.
One-click install. Pre-built policies for reasonable care, impact assessments, consumer notice, algorithmic discrimination detection, and alignment with NIST AI RMF, one of the frameworks the statute designates for a rebuttable presumption of reasonable care.
Browse in Marketplace
Connect your AI tools and agents. Every consequential decision is evaluated automatically, notices are enforced at the point of action, and impact assessment evidence generates itself. Export records on demand for the Colorado Attorney General.
Already have internal AI governance documents? Upload them and extract enforceable rules automatically.
Register your high-risk AI systems, install the Colorado AI Act policy pack with NIST AI RMF alignment, and start generating impact assessment evidence automatically.
14-day free trial · NIST AI RMF reasonable care presumption · Free Colorado AI Act policy pack
This page summarizes key provisions of the Colorado Artificial Intelligence Act (SB 24-205) for informational purposes only. Aguardic is not a law firm and this is not legal advice. Consult qualified legal counsel to assess your specific compliance obligations. Coverage mappings reflect Aguardic's current product capabilities as of April 2026 and are subject to change as the statute evolves through Attorney General rulemaking.