Colorado SB 205: Consumer Protections in Interactions with Artificial Intelligence Systems
Highlights:
- In the absence of a federal AI governance bill, states appear likely to follow the model of U.S. privacy law: a proliferation of state laws for U.S. companies to comply with.
- Introduced April 10, 2024 to minimize discrimination in the development and deployment of AI systems, the bill passed on May 8 and was signed into law by Colorado Governor Jared Polis on May 20.
- Could be seen as a baseline for future state AI laws.
- Includes an AI disclosure requirement for any business using AI to interact with consumers.
Key Takeaways:
- Colorado is the first state to pass a broad AI law imposing significant requirements on covered organizations.
- The law's core requirements apply only to high-risk AI systems, following an early revision that excluded general-purpose AI systems.
- A developer that complies with specific provisions of the bill benefits from a rebuttable presumption that it used reasonable care.
Definitions:
- Algorithmic Discrimination – Any condition resulting from an AI system that produces "unlawful differential treatment or impact" that has a differential impact on an individual, or group of individuals, based on age, color, disability, ethnicity, genetic data, limited English proficiency, nationality, race, religion, reproductive health, sex, veteran status, or other classifications protected under law.
- Duty of Care – Developers and deployers must use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination.
Covered Uses:
- Both deployment and development are covered.
- An AI system is considered high-risk when it is used as a substantial factor in making a consequential decision in a defined set of areas, e.g., education, employment, financial or lending services, healthcare, housing, or legal services.
- AI systems intended for purposes defined as non-high-risk (for example: performing a narrow task, detecting decision-making patterns, anti-fraud technology that does not use facial recognition, anti-malware, anti-virus, cybersecurity, data storage, or spell-checking) must still be disclosed to the individuals they interact with.
Who Must Comply:
- Developers and Deployers of high-risk AI systems doing business in Colorado.
- A Developer is an organization that develops an AI system or substantially modifies one.
- A Deployer is a person or entity that uses a high-risk AI system.
Enforcement:
- The Colorado AG will have enforcement powers and is also granted rulemaking authority to support implementation of the law.
- There is no Private Right of Action.
- The Colorado AG will lead enforcement actions under the state's unfair or deceptive trade practices law; however, there is a safe harbor for businesses that can demonstrate they have taken action to address any violation and that they are in compliance with a specified AI risk management framework (such as the NIST AI Risk Management Framework).
Exemptions:
- Unlike state privacy laws, there are no minimum thresholds of data processed or revenue that must be met for SB 205 to apply.
- There are a few exceptions to SB 205. For example:
- Small businesses are exempt from several of the requirements if they employ fewer than 50 full-time employees, do not use their own data to train the AI system, use the AI system only for the intended purposes previously disclosed to them by the developer, ensure the system's continued learning is not based on data derived from the business, and make certain impact assessments available to consumers.
- High-risk AI systems are also exempt if they are approved by, or follow standards established by, a federal agency, or if they are used to conduct research supporting an application for approval from a federal agency.
Key Requirements:
Developer
– Impact Assessment
An initial impact assessment must be conducted, and thereafter annually and at the time of any significant modification.
– Disclosure to Deployers
A high-risk statement that discloses specific information about the high-risk system.
Information and documentation to enable the Deployer to undertake an Impact Assessment.
Documentation of the type of training data used, known or reasonably foreseeable limitations of the AI system, and the purpose and intended benefits of the AI system. This documentation must also include performance evaluation results, governance controls, intended outputs, steps taken to mitigate risks, and how the AI system should be used and monitored.
– Regulatory Disclosure
Disclose to the AG and to known deployers any known or reasonably foreseeable risks of algorithmic discrimination within 90 days of discovery.
– Public Disclosure
Make available (on their website or in a public use inventory) a statement summarizing the types of high-risk systems they have developed or significantly modified and made available to deployers, and any known or reasonably foreseeable risks of algorithmic discrimination that may arise from those systems.
– General Disclosure of AI Systems (applies to all AI systems – not just high-risk)
Disclosure to individuals that they are interacting with AI, unless it would be obvious to a reasonable person.
Deployers
– Documentation
Creation of a risk management policy and governance program that specifies the people, processes, and principles used to identify and mitigate risks of algorithmic discrimination.
– Impact Assessments
An initial impact assessment must be conducted, and thereafter annually and at the time of any significant modification. The AI system must also be monitored on an ongoing basis to ensure it is not causing discrimination.
– Consumer Disclosure
Provide consumers with notice when a decision about them is being made with a high-risk AI system, including details of the decision and certain avenues for redress, such as the ability to correct their personal data and to appeal the decision. Certain disclosures must also be made to consumers on the deployer's website.
– Regulatory Disclosure
Report to the AG any instance of algorithmic discrimination caused by use of the high-risk AI system.
– General Disclosure of AI Systems (applies to all AI systems – not just high-risk)
Disclosure to individuals that they are interacting with AI, unless it would be obvious to a reasonable person.
Next Steps:
- SB 205 will go into effect on February 1, 2026.
- The Governor stated that he signed the bill into law "with reservations" and that it will need to be fine-tuned to "ensure the final product does not hamper development and expansion of new technologies in Colorado." Stay tuned for modifications to the requirements.
- The full text of SB 205 is available here.
- Expect to see other states follow Colorado’s lead and pass their own AI laws.
How Can Myna help?
Whether it is monitoring the path of future changes to SB 205 or the passage of subsequent AI laws in other U.S. states, let our team of experts help you assess your AI risk and develop your organization's AI governance model. We're here to help your organization develop or mature its governance structure, risk models, defined responsibilities, and more.
Please reach out and our team will be happy to discuss and answer any questions you may have.
For information on how Myna can help you with any or all these requirements, please contact Wills Catling, Director at Myna Partners at: william.catling@levelupconsult.com and we’d be happy to set up a consultation to hear about your needs.