The Colorado AI Act: What You Need To Know

Sep 20, 2024 9:00 AM ET

Authored by Baker Tilly’s Jordan Anderson

As artificial intelligence (AI) continues to transform the way we live, work and interact with technology, Colorado has taken a significant step forward in the regulation of these systems. Signed into law by Governor Jared Polis on May 17, 2024, the Colorado AI Act [1] (also known as Senate Bill 24-205) is the first comprehensive state-level legislation in the U.S. regulating the use of AI systems. The act aims to promote transparency, accountability and fairness in the development and deployment of AI systems while protecting the rights and interests of consumers and citizens.

Key provisions of the act

The act focuses primarily on “high-risk” AI systems: AI-based systems that, when deployed, make or are a substantial factor in making “consequential decisions.” A consequential decision is one that has a material legal or similarly significant effect on a consumer’s access to education, employment, financial or lending services, essential government services, healthcare, housing, insurance or legal services.

Developer obligations

“Developers,” those that create or substantially modify a high-risk AI system, must exercise reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination arising from the use of their AI system. Additionally, “Developers” must make available the following documentation, disclosures and information to “Deployers” and other developers of the AI system (a structural sketch in code follows the list):

  1. A general statement describing the reasonably foreseeable uses and known harmful or inappropriate uses of the high-risk AI system
  2. Documentation disclosing: 
    - High-level summaries of the type of data used to train the high-risk AI system 
    - Known or reasonably foreseeable limitations of the AI system 
    - The purpose of the AI system 
    - Its intended benefits and uses 
    - All other information necessary for a “Deployer” to comply with its obligations under the act
  3. Documentation describing: 
    - How the AI system was evaluated for performance and mitigation of algorithmic discrimination 
    - The data governance measures to cover the training datasets and measures used to examine the suitability of data sources including possible biases and appropriate mitigation 
    - The intended outputs of the AI system 
    - How the system should be used, not be used and be monitored
  4. Any additional documentation that is reasonably necessary to assist the “Deployer” in understanding the outputs and monitoring the performance of the AI system
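The act does not prescribe a format for this documentation, but capturing it as a structured, versioned record makes it easier to share with “Deployers” and keep current as the system changes. A minimal sketch in Python; the class and field names are illustrative, not statutory:

```python
from dataclasses import dataclass, field


@dataclass
class DeveloperDisclosure:
    """Illustrative record of the documentation a "Developer" makes
    available to "Deployers" under the Colorado AI Act. Field names
    are this sketch's own, not terms from the statute."""

    system_name: str
    purpose: str                          # the purpose of the AI system
    intended_benefits_and_uses: str
    foreseeable_uses: list[str]           # reasonably foreseeable uses
    known_harmful_uses: list[str]         # known harmful or inappropriate uses
    training_data_summary: str            # high-level summary of training data types
    known_limitations: list[str]
    discrimination_evaluation: str        # how performance and discrimination risk were evaluated
    data_governance_measures: str         # data-source suitability, bias checks, mitigations
    intended_outputs: str
    usage_and_monitoring_guidance: str    # how the system should (and should not) be used and monitored
    additional_deployer_info: dict = field(default_factory=dict)
```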

Disclosures and notifications

“Developers” are obligated to disclose, on their website or in a public use-case inventory, a statement summarizing the types of high-risk AI that the developer has developed or modified and how the developer manages risks of algorithmic discrimination. Additionally, the “Developer” is required to keep these disclosures updated as the AI system is modified.

Within 90 days of discovering that a deployed high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination, a “Developer” must notify the Colorado Attorney General and all known “Deployers” and other “Developers” of the AI system.

Deployer obligations

“Deployers” are entities that do business in Colorado and deploy (i.e., implement or use in ways that affect consumers) a high-risk AI system. Like a “Developer,” a “Deployer” must exercise reasonable care to protect consumers from any known or reasonably foreseeable risks of algorithmic discrimination, and must notify consumers when a high-risk AI system has been deployed to make, or be a substantial factor in making, a consequential decision concerning them. “Deployers” are required to (a sketch of the consumer notice follows this list):

  • Provide consumers a description of the AI system
  • Describe the purpose of the AI system
  • Describe the nature of the consequential decisions being made
  • Provide instructions for accessing details of the AI system on their website
  • Inform consumers of their right to opt out of the processing of personal data concerning the consumer for purposes of profiling
  • Make available, in a manner that is clear and readily accessible on their website, the types of high-risk AI systems deployed, how they manage known or reasonably foreseeable risks of algorithmic discrimination, and the nature, source and extent of the information collected and used by the AI systems
  • Notify the Colorado Attorney General within 90 days of discovering that a high-risk AI system has caused or is reasonably likely to have caused algorithmic discrimination
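A minimal sketch of how a “Deployer” might assemble and render this consumer notice; the class, fields and wording are illustrative, not statutory language:

```python
from dataclasses import dataclass


@dataclass
class ConsumerNotice:
    """Illustrative pre-decision notice for a high-risk AI system."""
    system_description: str    # description of the AI system
    purpose: str               # purpose of the AI system
    decision_nature: str       # nature of the consequential decision
    details_url: str           # where system details are published on the deployer's website
    opt_out_instructions: str  # how to opt out of profiling-related processing


def render_notice(n: ConsumerNotice) -> str:
    # Plain-text rendering; a real deployment would use whatever channel
    # (web, email, letter) fits the consumer interaction.
    return (
        f"An AI system ({n.system_description}) will be used to make, or help make, "
        f"a decision about: {n.decision_nature}.\n"
        f"Purpose: {n.purpose}\n"
        f"System details: {n.details_url}\n"
        f"To opt out of profiling: {n.opt_out_instructions}"
    )
```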

Adverse decisions

When a “Deployer’s” high-risk AI system makes a consequential decision that is adverse to the consumer, the “Deployer” must provide the consumer a statement disclosing:

  • The principal reason or reasons for the consequential decision, including the degree to which, and the manner in which, the AI system contributed to the decision
  • The types and sources of data the AI system processed in making the decision

The “Deployer” must also give the consumer an opportunity to correct any inaccurate personal data the system processed and to appeal the adverse decision, with human review. A sketch of such a statement follows.
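A minimal sketch of that statement as a structured record, under the same illustrative conventions as the notice above:

```python
from dataclasses import dataclass


@dataclass
class AdverseDecisionStatement:
    """Illustrative disclosure after a high-risk AI system contributes
    to a decision adverse to the consumer."""
    principal_reasons: list[str]       # why the adverse decision was made
    ai_contribution: str               # degree and manner of the AI system's contribution
    data_types_and_sources: list[str]  # data processed in making the decision
    correction_instructions: str       # how to correct inaccurate personal data
    appeal_instructions: str           # how to appeal and obtain human review
```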

Additional disclosures to consumers

While the sections above refer specifically to high-risk AI systems, the following disclosure applies to any AI system that consumers interact with: unless it would be obvious to a reasonable person, “Deployers” must disclose to the consumer that they are interacting with an AI system.

Enforcement by attorney general

The Colorado Attorney General has exclusive authority to enforce the act. “Developers” and “Deployers” that are faced with an enforcement action have an affirmative defense if both of the following are true:

  • They discover and cure a violation of the act because of feedback, adversarial testing or “red teaming,” or internal review processes
  • They comply with the latest version of the NIST AI Risk Management Framework [2], another nationally or internationally recognized risk management framework for AI systems, or any risk management framework designated by the attorney general

Additional regulations

The attorney general may promulgate additional rules as necessary for the purpose of implementing and enforcing the act. These changes may include documentation and requirements for “Developers,” notifications to consumers, required disclosures and risk management and impact assessment policies and procedures.

Establishing risk management policies and program

A “Deployer” of a high-risk AI system must implement and maintain a risk management policy and program to govern the AI system that incorporates the principles, processes and personnel that the “Deployer” uses to identify, document and mitigate risks of algorithmic discrimination.

Acceptable risk management frameworks include the NIST AI Risk Management Framework [3], ISO/IEC 42001 [4] or other internationally recognized, substantially equivalent, risk management standards.
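For orientation, the NIST AI RMF organizes risk management into four functions: Govern, Map, Measure and Manage. One way a “Deployer” might anchor its program to the framework is to track its obligations under the act against those functions; the grouping below is an illustrative sketch, not an official crosswalk:

```python
# Illustrative (unofficial) grouping of Colorado AI Act deployer duties
# under the four NIST AI RMF functions.
RMF_MAPPING = {
    "Govern": ["risk management policy and program", "assigned personnel and oversight"],
    "Map": ["AI use-case inventory", "high-risk / consequential-decision classification"],
    "Measure": ["impact assessments", "performance and discrimination metrics"],
    "Manage": ["risk mitigation", "post-deployment monitoring", "attorney general notifications"],
}

for function, duties in RMF_MAPPING.items():
    print(f"{function}: {', '.join(duties)}")
```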

Impact assessment

A “Deployer,” or a third party contracted by the “Deployer,” must complete an impact assessment for each deployed high-risk AI system, repeat it at least annually and complete an updated assessment within 90 days of any substantial modification to the system. The impact assessment must include, at a minimum (a scheduling sketch follows this list):

  • A statement disclosing the purpose, intended use cases and benefits afforded by the high-risk AI system
  • An analysis of whether the deployment of the AI system poses any risks of algorithmic discrimination and the steps that have been taken to mitigate those risks
  • A description of the categories of data the AI system processes as inputs and the outputs the AI system produces
  • Any metrics used to evaluate the performance and limitations of the AI system
  • A description of any transparency measures taken to notify a user that the AI system is in use
  • A description of post-deployment monitoring and user safeguards
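Assuming the cadence described above (at least annually, and within 90 days of a substantial modification), a minimal sketch of computing the next assessment due date; the helper and dates are illustrative:

```python
from datetime import date, timedelta
from typing import Optional


def next_assessment_due(last_assessment: date,
                        last_substantial_modification: Optional[date] = None) -> date:
    """Earlier of: one year after the last assessment, or 90 days after
    the most recent substantial modification (if there has been one)."""
    annual_due = last_assessment + timedelta(days=365)
    if last_substantial_modification is None:
        return annual_due
    modification_due = last_substantial_modification + timedelta(days=90)
    return min(annual_due, modification_due)


# Example: assessed Jan 15, 2026; system substantially modified Mar 1, 2026.
print(next_assessment_due(date(2026, 1, 15), date(2026, 3, 1)))  # 2026-05-30
```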

Exemptions to high-risk AI policy

A “Deployer” is exempt from certain of these obligations (including the risk management program and impact assessments) if all of the following are true:

  • The “Deployer” employs fewer than 50 people
  • The “Deployer” does not use its own data to train the AI system
  • The AI system is used for its intended purposes as disclosed by the “Developer”
  • The “Deployer” makes available to consumers any impact assessment that the “Developer” of the AI system has completed

What should organizations do to prepare for compliance?

The act will take effect on Feb. 1, 2026, giving organizations less than two years from its signing to prepare for compliance. Organizations that operate in Colorado and leverage AI should consider the following steps to comply:

  • Appoint a team to lead AI compliance efforts
  • Conduct an inventory and assessment of existing and planned AI use cases and determine whether they meet the standard of a high-risk AI system (see the inventory sketch after this list)
  • Implement an AI risk management framework, such as the NIST AI Risk Management Framework [5]
  • Create and document the policies and procedures for disclosing, explaining and evaluating AI systems, and for addressing the feedback from the end user or consumer
  • Set the foundation to conduct impact assessments by identifying the policies, processes and resources needed to orchestrate, conduct, document, analyze and monitor AI impact
  • Implement and test the mechanisms and tools for providing the required documentation, disclosures and other required information
  • Train and educate staff and stakeholders on the ethical and legal implications of AI systems, and on the best practices for designing and operating them
  • Monitor and review AI systems regularly to adjust and improve as needed
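As a starting point for the inventory step above, a minimal sketch of a first-pass triage of AI use cases against the act’s consequential-decision domains; the classes, fields and example entries are illustrative, and classifications should be confirmed with counsel:

```python
from dataclasses import dataclass
from typing import Optional

# Consequential-decision domains, paraphrasing the act's categories.
CONSEQUENTIAL_DOMAINS = {
    "education", "employment", "financial or lending services",
    "essential government services", "healthcare", "housing",
    "insurance", "legal services",
}


@dataclass
class AIUseCase:
    name: str
    description: str
    decision_domain: Optional[str]  # None if no consequential decisions are involved
    substantial_factor: bool        # does the system make, or substantially influence, the decision?

    def is_high_risk(self) -> bool:
        # First-pass triage only; not a legal determination.
        return self.decision_domain in CONSEQUENTIAL_DOMAINS and self.substantial_factor


inventory = [
    AIUseCase("resume screener", "ranks job applicants", "employment", True),
    AIUseCase("marketing copy assistant", "drafts ad text", None, False),
]
print([u.name for u in inventory if u.is_high_risk()])  # ['resume screener']
```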

Organizations that develop or deploy AI systems for use in Colorado should consider an AI Readiness Assessment to identify gaps in organizational preparedness and build a road map to achieve and maintain compliance with changing regulations.

What does this mean for companies outside of Colorado?

Although the legislation directly applies to organizations that do business in Colorado, the Colorado AI Act is landmark legislation that sets a precedent for other states to follow. Utah has enacted legislation that establishes liability for the use of AI that violates consumer protection laws when that use is not properly disclosed. Additionally, four other states (CA, IL, MA, OH) have active bills related to the fair and responsible use of AI.

This policy proliferation reflects the growing awareness and concern about the potential impacts and risks of AI systems on society and individuals. Organizations with operations in affected states will need to align their AI practices with the state’s regulatory standards, potentially prompting a broader adoption of these guidelines to ensure consistency across their operations.

Finally, it is important to monitor the changing AI regulatory landscape, conduct regular risk and vulnerability assessments of AI systems and ensure governance is being applied across the organization.

How we can help

Ensuring your organization is properly equipped to adhere to incoming AI regulations will help save time, energy and resources by preventing costly retroactive remediation. Baker Tilly’s digital team can support your organization in defining an AI strategy, conducting readiness and impact assessments, designing and implementing an AI governance and risk management framework, or, if those foundations are already in place, implementing and scaling AI systems.

Contact a Baker Tilly specialist to learn more.