The Colorado Artificial Intelligence Act is the first comprehensive US state law regulating high-risk AI systems. Signed on May 17, 2024, it establishes obligations for developers and deployers of AI systems that make or substantially influence consequential decisions affecting Colorado consumers. The law's stated goal is preventing algorithmic discrimination, and it applies to decisions in employment, education, financial services, healthcare, housing, insurance, legal services, and government benefits.
If you are building or deploying AI systems that touch any of those domains for Colorado residents, this guide covers what the law requires, who it applies to, what the current enforcement status is, and how to build a compliance program that works regardless of how the legal landscape evolves.
Current Status: Enforcement Is Frozen, the Law Is Not
The Colorado AI Act was originally set to take effect February 1, 2026. A special legislative session in August 2025 pushed that to June 30, 2026. Then, on April 9, 2026, xAI filed a federal lawsuit seeking to enjoin the law on First Amendment, Dormant Commerce Clause, due process, and equal protection grounds. Two weeks later, the Trump Department of Justice intervened, marking the first time the federal government has moved to invalidate a state AI law under the President's December 2025 executive order.
On April 27, 2026, Magistrate Judge Cyrus Y. Chung of the US District Court for the District of Colorado granted a joint motion from xAI and the Colorado Attorney General that effectively freezes enforcement. AG Phil Weiser committed in the court filing that his office will not promulgate implementing rules and will not enforce the Act until after the current legislative session concludes and any resulting rulemaking is complete. Rulemaking has not begun. The legislature adjourns May 13.
Simultaneously, Governor Polis's AI Policy Working Group released a proposed replacement framework on March 17 that would substantially narrow the Act's scope, add a 90-day cure period, and push the effective date to January 1, 2027. As of this writing, no replacement bill has been formally introduced, and the legislative window is closing.
The practical effect is that enforcement will not begin on June 30, 2026. The realistic timeline pushes well past that date regardless of the litigation outcome. But the statute is still law. The underlying obligations have not been repealed or amended. And the compliance work required to satisfy those obligations does not change based on when enforcement begins.
Who the Law Applies To
The Colorado AI Act distinguishes between two roles, and many organizations will fill both.
Developers are entities that create or substantially modify AI systems intended for use as high-risk systems. If you build AI products that other companies deploy for consequential decisions, you are a developer. This includes foundation model providers, vertical AI vendors, and internal teams that build AI tools used by other business units for covered decisions.
Deployers are entities that use high-risk AI systems to make or substantially influence consequential decisions. If you license an AI hiring tool, use an AI underwriting model, or deploy an AI system that affects any covered decision category for Colorado residents, you are a deployer. You do not need to have built the system. Using it is enough.
A company that builds an AI lending model and also uses it internally is both a developer and a deployer, and must satisfy obligations for both roles.
What Makes an AI System "High-Risk"
A high-risk AI system under the Act is any AI system that, when deployed, makes or is a substantial factor in making a consequential decision. The law defines consequential decisions as those that have a material legal or similarly significant effect on a consumer's access to or the cost, terms, or availability of education, employment or employment opportunity, financial or lending services, essential government services, healthcare services, housing, insurance, or legal services.
The threshold is functional, not technical. It does not matter what model architecture you use, whether you call the system "AI" or not, or how much of the decision the system influences. If the system is a substantial factor in a decision that affects a Colorado consumer's access to any of those categories, it is high-risk under the Act.
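The functional test can be sketched as a simple predicate. This is an illustration only: the category names below paraphrase the statute's list, and the function signature is invented for the example, not drawn from the Act.

```python
# Paraphrased from the statute's list of consequential decision categories.
CONSEQUENTIAL_CATEGORIES = {
    "education", "employment", "financial_or_lending_services",
    "essential_government_services", "healthcare", "housing",
    "insurance", "legal_services",
}

def is_high_risk(decision_category: str, is_substantial_factor: bool) -> bool:
    # High-risk status turns on the decision the system influences,
    # not on the model architecture or whether it is labeled "AI".
    return is_substantial_factor and decision_category in CONSEQUENTIAL_CATEGORIES
```

Note what is absent from the predicate: model type, vendor, and the word "AI" itself. Only the decision category and the system's role in the decision matter.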
Developer Obligations
Developers of high-risk AI systems must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. In practice, the Act translates this into several specific requirements.
Developers must provide deployers with documentation sufficient for the deployer to fulfill its own obligations. This includes a general description of the system's reasonably foreseeable uses and known limitations, a summary of the types of data used to train the system, known or reasonably foreseeable outputs and how they should be used, information about the measures taken to mitigate algorithmic discrimination risks, and how the system was evaluated for performance and fairness.
Developers must make available a statement describing the system, its intended uses, and a summary of the types of data it was designed to process. Developers must also disclose known material deficiencies, including any discovered instances of algorithmic discrimination, to the Colorado Attorney General and to known deployers within 90 days of discovery.
Deployer Obligations
Deployers face a broader set of requirements because they are the entities putting the system in front of consumers.
Deployers must implement a risk management policy and program that governs their use of high-risk AI systems. The program must specify and incorporate the principles, processes, and personnel used to identify, document, and mitigate known or reasonably foreseeable risks of algorithmic discrimination. The risk management program must be an iterative process, updated as the deployment environment changes.
Deployers must complete an impact assessment for each high-risk AI system before deployment and annually thereafter or upon material changes. The impact assessment must include the purpose and intended use of the system, an analysis of whether the system poses known or reasonably foreseeable risks of algorithmic discrimination, the categories of data processed as inputs, the outputs and how they are used to make or influence consequential decisions, and transparency measures in place.
Deployers must provide notice to consumers before a consequential decision is made using a high-risk AI system. The notice must include a statement that the system is being used, a description of its purpose, and contact information for the deployer. If the system makes an adverse consequential decision, the deployer must provide a statement of the principal reasons for the decision and an opportunity to correct any incorrect data the system used, along with an opportunity to appeal to a human reviewer.
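The adverse-decision requirements translate into a concrete set of mandatory fields. A minimal sketch, with field names that are illustrative rather than statutory terms:

```python
from dataclasses import dataclass

@dataclass
class AdverseDecisionNotice:
    # Contents an adverse-decision notice must carry under the Act;
    # field names here are illustrative, not statutory terms.
    deployer_contact: str          # contact information for the deployer
    system_purpose: str            # description of the system's purpose
    principal_reasons: list[str]   # principal reasons for the adverse decision
    correction_channel: str        # where the consumer can fix incorrect input data
    human_appeal_channel: str      # route to appeal to a human reviewer

    def is_complete(self) -> bool:
        # Every element is mandatory; an empty field means the notice
        # cannot be sent as-is.
        return all([self.deployer_contact, self.system_purpose,
                    self.principal_reasons, self.correction_channel,
                    self.human_appeal_channel])
```

A completeness check like `is_complete` belongs in the decision pipeline itself, so an adverse decision cannot be communicated without its statutory accompaniments.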
The Affirmative Defense: NIST AI RMF and ISO 42001
Section 6-1-1706(3) provides an affirmative defense for developers, deployers, or other persons who are in compliance with a nationally or internationally recognized risk management framework for AI systems. Section 6-1-1703(6) creates a rebuttable presumption of reasonable care for deployers who comply with NIST AI RMF or ISO 42001.
This is the strongest legal protection the statute offers. If you can demonstrate active, ongoing compliance with NIST AI RMF or ISO 42001, you have a rebuttable presumption that you exercised reasonable care. The word "ongoing" matters. A one-time framework alignment exercise that was completed before deployment and never updated does not establish a continuous compliance posture. The statute's use of "iterative process" in the risk management requirement reinforces that compliance must be maintained, not just documented at a point in time.
Penalties
Violations of the Colorado AI Act are treated as violations of the Colorado Consumer Protection Act. The Attorney General has exclusive enforcement authority. Civil penalties can reach $20,000 per violation, with higher penalties when the affected consumer is elderly. For a high-volume AI system processing hundreds or thousands of decisions per day, the violation count compounds rapidly.
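A back-of-envelope calculation shows how quickly per-violation penalties compound. This assumes, for illustration only, that each noncompliant decision counts as a separate violation; how violations are counted under the Colorado Consumer Protection Act is not settled.

```python
PENALTY_PER_VIOLATION = 20_000  # USD ceiling per violation under the CCPA

def theoretical_exposure(decisions_per_day: int, days: int) -> int:
    # Hypothetical worst case: every noncompliant decision counted
    # as a separate violation. Violation counting is unsettled law.
    return decisions_per_day * days * PENALTY_PER_VIOLATION

# 500 noncompliant decisions per day for one month:
# theoretical_exposure(500, 30) == 300_000_000
```

Even if a court counted violations far more conservatively, the arithmetic explains why high-volume systems carry the most exposure.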
The Act does not create a private right of action. Because enforcement authority rests exclusively with the Attorney General, consumers affected by algorithmic discrimination cannot bring individual claims under the Act itself.
The Replacement Bill: What May Change
The Governor's Working Group framework, released March 17, 2026, proposes several significant changes if enacted. The scope would narrow to more closely resemble California's CCPA automated decision-making technology regulations. A 90-day cure period would be added, giving organizations time to fix violations before enforcement action. The effective date would push to January 1, 2027. The framework would also require the Attorney General to adopt implementing rules by December 31, 2026, and would mandate three-year record retention for system identifiers, change logs, documentation, and material update notices.
The replacement has not been introduced as legislation. The legislature adjourns May 13. A bill can pass in as few as three days in Colorado, so last-minute action is possible. But planning a compliance program around a bill that does not exist yet is a gamble.
How to Comply: The Work That Does Not Change
Regardless of whether enforcement begins June 30, January 1, or later, and regardless of whether the statute is amended, the core compliance work is the same. Every version of the law on the table, meaning the original Act, the Working Group replacement framework, and the Rodriguez draft, requires the same foundational capabilities.
Build the AI system inventory. You cannot classify, govern, or produce evidence for AI systems you have not catalogued. Every AI system that touches consequential decisions needs a documented owner, a defined purpose, a risk classification, and a mapping of the data it processes and the decisions it influences. The Aguardic compliance platform automates this inventory across your AI stack.
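The inventory entry described above can be sketched as a record type. The field names are illustrative; what matters is that each of the capabilities named in the text (owner, purpose, risk classification, data and decision mappings) has a home in the schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    # One inventory entry per AI system that touches consequential
    # decisions. Field names are illustrative, not from any standard.
    system_id: str
    owner: str                                          # documented owner
    purpose: str                                        # defined purpose
    risk_class: str                                     # e.g. "high-risk" under the Act
    input_data_categories: list[str] = field(default_factory=list)
    decisions_influenced: list[str] = field(default_factory=list)
```

A spreadsheet can hold the same fields; the schema matters more than the tooling.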
Implement risk management as a continuous process. The statute explicitly requires an "iterative process." That means risk assessment is not a one-time exercise. It must be updated when the system changes, when the deployment context changes, when new data sources are added, or when new failure modes are discovered. Continuous evaluation of AI outputs against policy constraints produces the evidence that "iterative" requires.
Complete impact assessments. Before deployment and annually thereafter, document the purpose, data categories, output types, discrimination risks, and mitigation measures for each high-risk system. Keep these versioned and tied to specific system configurations so you can demonstrate which assessment applied at any given point in time.
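One way to tie an assessment to a specific configuration is to key it by a hash of that configuration, so two deployments of the "same" system with different settings cannot share an assessment. A minimal sketch, with invented names:

```python
import hashlib
import json
from datetime import date

def assessment_key(system_id: str, system_config: dict, assessed_on: date) -> str:
    # Hash the exact configuration so each impact assessment is pinned
    # to the configuration it evaluated, not just to a system name.
    config_hash = hashlib.sha256(
        json.dumps(system_config, sort_keys=True).encode()
    ).hexdigest()[:12]
    return f"{system_id}:{config_hash}:{assessed_on.isoformat()}"
```

Sorting the JSON keys makes the hash stable across semantically identical configs, and the date component supports the annual-reassessment cadence.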
Build the consumer notice and appeal infrastructure. Pre-decision notice, adverse decision explanation, data correction mechanisms, and human appeal pathways all require technical implementation, not just policy language. These need to work at the speed your AI system operates.
Align with NIST AI RMF or ISO 42001. The affirmative defense is too valuable to leave on the table. Active framework compliance gives you the strongest possible legal position regardless of how enforcement plays out. Map your controls to framework requirements and produce evidence of compliance continuously, not annually.
Generate audit evidence by default. When the Attorney General eventually does enforce, whether under the current Act or a replacement, the first request will be for evidence. Every AI decision, every policy evaluation, every consumer notice, and every adverse decision explanation should be logged with enough detail that you can reconstruct the complete compliance state at any point in time.
The Strategic Calculation
The Colorado AI Act is in legal limbo. Enforcement is frozen. The legislature may replace or substantially amend the statute. Federal preemption may ultimately invalidate parts of it. It is tempting to wait.
But the compliance work described above is not Colorado-specific. NIST AI RMF alignment is useful for EU AI Act compliance, ISO 42001 certification, enterprise procurement reviews, and SOC 2 AI controls. Impact assessments and risk management programs are required by every serious AI governance framework. Consumer notice and adverse decision transparency are becoming procurement baseline requirements regardless of jurisdiction.
The organizations that build this infrastructure now will satisfy Colorado whenever enforcement begins. They will also satisfy every other jurisdiction and every enterprise customer that asks for evidence of responsible AI practices. The ones that wait will build the same thing under time pressure, at higher cost, with less evidence to show for the period they spent waiting.
The law is uncertain. The compliance work is not.
We built Aguardic to automate Colorado AI Act compliance across your entire AI stack, from inventory and risk management to continuous policy enforcement and audit evidence generation. Start with your existing compliance documents and see what enforceable rules are already hiding in your policies.



