Rocky Mountain High on AI: Colorado Emerges as the First Mover on State AI Law

Colorado was the first state to cross the comprehensive AI law finish line, as its governor in May signed the Colorado AI Act into law. Baker Donelson’s Vivien F. Peaden explores the details of the new law and what companies need to know.

Colorado has stepped boldly into the difficult area of regulating artificial intelligence (AI) with the enactment of the Colorado AI Act on May 17. Formally known as Senate Bill 205, the groundbreaking law has similarities to the EU AI Act, including its risk-based approach and its rules around AI impact assessments.

The law, which takes effect in 2026, will require developers and users of “high-risk AI systems” to adopt compliance measures and protect consumers from the perils of AI bias. Noncompliance with the Colorado AI Act (CAIA) could lead to hefty civil penalties for engaging in deceptive trade practices.

The enactment of an AI law in the Centennial State is the culmination of a nationwide trend in 2024 to regulate the use of AI, with three Cs leading the charge: California, Connecticut and Colorado. While California is making slow progress with its proposed regulations of automated decision-making technology (ADMT), Connecticut’s ambitious AI law (SB 2) was derailed by a veto threat from Gov. Ned Lamont. 

In the end, Colorado’s SB-205 became the lone horse crossing the finish line. Two other states, Utah and Tennessee, also passed state AI-related laws this year, focusing specifically on regulating generative AI and deepfakes. That makes the Colorado AI Act the first comprehensive U.S. state law with rules and guardrails for AI development, use and bias mitigation. 

AI systems regulated under Colorado’s law

The CAIA adopts the broad definition of “artificial intelligence system” nearly verbatim from the EU AI Act, which was approved in March 2024 (see this alert on the EU AI Act). As illustrated below, the CAIA takes a technology-neutral stance and purposefully sets a broad definition so that it does not become obsolete as AI rapidly advances:

  • EU AI Act definition: “A machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”
  • CAIA definition: “Any machine-based system that, for any explicit or implicit objective, infers from inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”

Colorado’s law follows the EU AI Act’s risk-based approach but has a narrower focus on the use of “high-risk AI systems” in the private sector. For Colorado residents, an AI system is “high-risk” if it “makes, or is a substantial factor in making, a consequential decision” affecting their access to, or the conditions for receiving, any of the following:

  • Education enrollment or an education opportunity
  • Employment or an employment opportunity
  • A financial or lending service
  • An essential government service
  • Healthcare services
  • Housing
  • Insurance
  • A legal service

Unlike the Colorado Privacy Act, effective in 2023, which exempts employee data and financial institutions subject to the Gramm-Leach-Bliley Act (GLBA), the Colorado AI Act expressly prohibits algorithmic discrimination affecting Colorado residents’ employment opportunities or access to financial or lending services. Further, Colorado’s definition of “high-risk AI systems” excludes a list of “low-risk AI tools” used for anti-malware, cybersecurity, calculation, spam-filtering, web caching and spell-checking, among other low-risk activities.

Developers & deployers

While the EU AI Act sets a comprehensive framework to regulate all activities across six key players that develop, use and distribute AI systems, the Colorado AI Act narrows the field down to only two players:

  • AI developer: a legal entity doing business in Colorado that develops, or intentionally and substantially modifies, an AI system
  • AI deployer: a legal entity doing business in Colorado that uses a high-risk AI system

Under the CAIA, both developers and deployers of high-risk AI systems must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination. If the Colorado Attorney General’s Office brings an enforcement action, a company will be afforded a rebuttable presumption that it used reasonable care if it has complied with its respective obligations as a developer or deployer.

Upon creating a new high-risk AI system, or intentionally and substantially modifying an existing one, an AI developer must comply with the following requirements:

  • AI instructions: Provide disclosures and documentation to downstream users regarding the intended use and specifics of its high-risk AI systems
  • Impact assessment facilitation: Make available additional documentation or information to facilitate impact assessments by downstream users (aka AI deployers)
  • Public disclosure: Maintain and post a current public statement on the developer’s website summarizing: (i) what types of high-risk AI it has developed and makes available for use or license; and (ii) how it manages the risks of algorithmic discrimination
  • Incident reporting: Report to the Colorado Attorney General’s Office upon discovery of any algorithmic discrimination

For AI deployers that are downstream users of high-risk AI, the CAIA imposes similar obligations around public disclosure and incident reporting:

  • Public disclosure: Maintain and post a current public statement on the deployer’s website summarizing its use of high-risk AI
  • Incident reporting: Report to the Colorado Attorney General’s Office upon discovery of algorithmic discrimination

In addition, an AI deployer must comply with the following requirements:

  • Risk management program: Implement a risk-management policy and program that governs high-risk AI uses
  • Impact assessment: Conduct an impact assessment of its high-risk AI systems in use at least annually and within 90 days after any intentional and substantial modification of a high-risk AI system
  • Pre-use notice to consumers: Notify consumers with a statement disclosing information about the high-risk AI system in use
  • Consumer rights disclosure: Inform Colorado consumers of their rights under the CAIA, including the right to pre-use notice, the right to exercise data privacy rights and the right to an explanation if an adverse decision results from the use of high-risk AI, among others

Exemptions

The CAIA provides an exemption for high-risk AI deployers that are small to medium-sized enterprises (SMEs) employing fewer than 50 full-time equivalent employees and meeting certain conditions. These organizations do not need to maintain a risk management program, conduct an impact assessment or post a public statement, but they are still subject to a duty of care and must provide the relevant consumer notices.

Enforcement

The CAIA vests the Colorado attorney general with exclusive enforcement authority. Any violation of the CAIA constitutes a deceptive trade practice subject to the hefty civil penalties imposed under the Colorado Consumer Protection Act. Section 6-1-112 of that act currently imposes a civil penalty of up to $20,000 per violation, rising to $50,000 per violation if a deceptive trade practice is committed against a resident age 60 or older.

Conclusion

With the Colorado AI Act set to take effect Feb. 1, 2026, and potentially serving as a blueprint for other states, companies must start planning their AI compliance roadmap, including policy development, AI audits and assessments, and AI vendor contract management. The time to get ready is now to ensure compliance and mitigate potential regulatory and operational risks.
