Colorado AI Act: What Businesses Need to Know
Colorado has passed the nation's first comprehensive state law targeting algorithmic discrimination in the private sector. If your business uses AI to make decisions about employees, customers, or consumers in hiring, lending, healthcare, housing, or insurance, you are likely covered. Enforcement begins June 30, 2026. The time to prepare is now.
What Is the Colorado AI Act?
The Colorado Artificial Intelligence Act (CAIA), enacted as Senate Bill 24-205, was signed into law on May 17, 2024. Its enforcement date was later extended to June 30, 2026 by Senate Bill 25B-004, passed during a special legislative session on August 28, 2025, following industry concerns about operational feasibility.
The law's purpose is direct: hold organizations accountable for algorithmic discrimination (unfair outcomes caused by AI) by shifting the legal standard from proving intentional bias to penalizing negligent disparate impact. Unlike data privacy laws that govern how data is collected, the CAIA regulates what AI systems do with that data. It targets predictive machine learning models that have operated quietly inside corporate infrastructure for years, not just generative AI chatbots.
One additional risk factor: a December 2025 federal executive order designated the CAIA as a target for potential federal preemption, with a Department of Commerce review due in March 2026. Despite this, the Colorado Attorney General retains full rulemaking authority, and businesses should treat June 30, 2026 as a hard deadline.
Who Does It Apply To?
The CAIA applies to any entity doing business in Colorado that develops or deploys covered AI systems, with no minimum revenue threshold. A single Colorado consumer affected by your AI system triggers jurisdiction.
The law assigns distinct duties to two separate parties:
A "developer" builds, trains, or substantially modifies a high-risk AI system. Developers must supply technical documentation (training data summaries, known limitations, bias testing results) to downstream buyers, maintain public-facing disclosures of system risk profiles, and report discovered discrimination to the Attorney General. Developers have no size exemption obligations apply regardless of company size.
A "deployer" uses a high risk system to make consequential decisions. Deployers carry the primary consumer facing burden: conducting impact assessments, notifying consumers, disclosing adverse decisions, and enabling appeals. Many organizations are both developer and deployer simultaneously.
A small business exemption exists, but it is deliberately narrow. Deployers with fewer than 50 full-time employees may qualify, but only if they:
- Do not use proprietary data to train, fine-tune, or customize the model
- Restrict use strictly to the parameters disclosed by the developer
- Provide the developer's impact assessment to consumers on request

Integrating even a single proprietary dataset to optimize the model forfeits this exemption entirely.
Sector-specific exemptions cover HIPAA-regulated healthcare entities (only for AI requiring a licensed provider's independent action), financial institutions already examined under equivalent AI risk standards, insurers governed by C.R.S. 10-3-1104.9, and cybersecurity tools used solely for fraud and threat detection.
What Does "High-Risk AI" Mean?
A "high risk AI system" is any automated system that makes or is a "substantial factor" in making a "consequential decision." Consequential decisions are those with material legal or similarly significant effects on: education enrollment, employment, financial and lending services, essential government services, healthcare, housing, insurance, and legal services.
The "substantial factor" standard is a critical legal trap. Placing a human reviewer at the end of an automated workflow does not exempt the system. If an algorithm scores, ranks, or filters and a human relies on that output without truly independent analysis the algorithm remains a substantial factor. The law addresses automation bias directly.
Algorithmic discrimination is defined as differential treatment or disparate impact against individuals based on protected characteristics including age, color, disability, ethnicity, genetic information, limited English proficiency, national origin, race, religion, sex, sexual orientation, gender identity, reproductive health decisions, and veteran status. Crucially, intent is irrelevant: statistically adverse outcomes for a protected class can create liability even absent any discriminatory purpose.
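What counts as a statistically adverse outcome will ultimately be shaped by regulators and courts, but one common screening heuristic is the four-fifths (80%) rule from EEOC hiring guidance: compare favorable-decision rates across groups and flag any ratio below 0.8. The Python sketch below illustrates that check; the function names and the 0.8 threshold are illustrative assumptions drawn from that guidance, not terms defined in the CAIA itself.

```python
from collections import Counter

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Selection rate (share of favorable decisions) per group.

    `outcomes` pairs a group label with whether the decision was
    favorable, e.g. ("A", True) means one member of group A was approved.
    """
    totals, favorable = Counter(), Counter()
    for group, selected in outcomes:
        totals[group] += 1
        favorable[group] += selected
    return {g: favorable[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group rate divided by the highest; a value below 0.8
    flags potential disparate impact under the four-fifths heuristic."""
    return min(rates.values()) / max(rates.values())

# Hypothetical approval data: group A approved 60% of the time, group B 40%.
rates = selection_rates(
    [("A", True)] * 60 + [("A", False)] * 40
    + [("B", True)] * 40 + [("B", False)] * 60
)
print(rates, adverse_impact_ratio(rates))  # ratio ≈ 0.67, below 0.8
```

A failing ratio is not proof of a violation, but it is exactly the kind of finding an impact assessment should surface and a mitigation plan should address.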
Compliance Checklist
- Inventory all AI systems and classify which qualify as high-risk under the consequential-decision standard; a minimal classification sketch follows this checklist.
- Determine your role (developer, deployer, or both), as duties differ substantially.
- Adopt a written risk management program aligned to NIST AI RMF or ISO/IEC 42001; a generic policy statement does not satisfy this requirement.
- Complete a documented impact assessment before deploying any high-risk system, covering: intended purpose, data categories, affected demographics, and discrimination risks and mitigations.
- Refresh assessments annually and within 90 days of any intentional and substantial modification to the system's architecture, logic, or training data.
- Notify consumers plainly and conspicuously before any high-risk system makes or substantially contributes to a consequential decision about them.
- Disclose adverse decisions: provide the principal reasons, the AI's role, data types used, and the consumer's right to correct inaccurate data.
- Establish an appeals mechanism that enables consumers to challenge AI-driven adverse decisions through an independent human review where technically feasible.
- Publish a public statement listing your high-risk AI systems, data practices, and discrimination mitigation methodologies.
- Mandate technical documentation from vendors (training data lineage, known limitations, bias test results) as a non-negotiable procurement requirement; you cannot conduct a valid impact assessment without it.
- Report discovered discrimination to the AG within 90 days; developers must also notify all known deployers of the affected system within the same window.
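As a starting point for the first two checklist items, here is a minimal sketch of how a deployer might structure an inventory record, assuming the decision domains listed in the "High-Risk AI" section above. The `AISystemRecord` class, its field names, and the `CONSEQUENTIAL_DOMAINS` set are hypothetical scaffolding, not statutory terms.

```python
from dataclasses import dataclass
from datetime import date

# Decision domains the law treats as "consequential" (see section above).
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial_services", "government_services",
    "healthcare", "housing", "insurance", "legal_services",
}

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    decision_domain: str            # e.g. "employment"
    substantial_factor: bool        # does its output drive the decision?
    role: str                       # "developer", "deployer", or "both"
    last_impact_assessment: date | None = None

    def is_high_risk(self) -> bool:
        """High-risk if it is a substantial factor in a consequential decision."""
        return (self.decision_domain in CONSEQUENTIAL_DOMAINS
                and self.substantial_factor)

screener = AISystemRecord(
    name="resume-ranker", vendor="ExampleVendor",
    decision_domain="employment", substantial_factor=True, role="deployer",
)
assert screener.is_high_risk()
```

Recording the substantial-factor question as an explicit field forces the human-in-the-loop analysis the law demands, rather than letting it default to "a person signs off."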
Penalties and What's at Stake
The Colorado Attorney General holds exclusive enforcement authority; there is no private right of action. Before pursuing enforcement, the AG must provide 60 days' written notice to cure. Uncured violations are classified as deceptive trade practices under the Colorado Consumer Protection Act, carrying up to $20,000 per violation, counted separately per affected consumer. A flawed model processing 1,000 loan applications exposes up to $20 million in penalties.
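The per-consumer multiplication is worth making concrete. A back-of-the-envelope sketch, assuming the statutory cap applies once per affected consumer and no cure is accepted:

```python
PER_VIOLATION_CAP = 20_000  # dollars, Colorado Consumer Protection Act cap

def worst_case_exposure(affected_consumers: int) -> int:
    """Worst-case statutory exposure if each affected consumer
    counts as a separate violation (illustrative assumption)."""
    return affected_consumers * PER_VIOLATION_CAP

print(f"${worst_case_exposure(1_000):,}")  # $20,000,000 for 1,000 applications
```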
The law rewards proactive compliance with two powerful legal shields. Organizations that fulfill documentation, impact assessment, and notification requirements earn a rebuttable presumption of reasonable care, forcing the AG to prove negligence. More powerfully, organizations that discover and self-cure a violation through their own audits or red-teaming, maintain documented alignment with NIST AI RMF or ISO/IEC 42001, and report to the AG within 90 days qualify for an affirmative defense that defeats an enforcement action entirely.
The choice is binary: invest in proactive governance now, or face enforcement liability that compounds with every transaction the model processes.
Act Now
The runway to compliance is shorter than it appears. Auditing AI systems, executing impact assessments, restructuring vendor contracts, and deploying consumer notification workflows take time measured in quarters, not weeks. The cost of preparation is a fraction of the cost of a single multi-application enforcement action. Convene your legal, compliance, and technical teams today: map your algorithmic inventory, close documentation gaps, and establish a defensible governance framework before the enforcement deadline arrives.