Navigating the Patchwork: FAQs on Emerging US State AI Laws
The US legal landscape surrounding artificial intelligence is evolving rapidly, mirroring the fragmented, state-by-state approach that emerged for data privacy. Colorado and Virginia have produced the first state-level laws specifically regulating AI, and businesses need to understand these new requirements and how they might affect their operations.
This article provides a clear overview of these emerging regulations and offers practical guidance on how businesses can prepare for compliance. The focus will be on the Colorado law, officially known as “An Act Concerning Consumer Protections in Interactions with Artificial Intelligence Systems” (the “CO Act”), and the “High-Risk Artificial Intelligence Developer and Deployer Act” (the “VA Bill”) passed by the Virginia legislature, which is awaiting the governor’s signature.
Will These Laws Apply to My Business?
The applicability of these laws hinges on the nature of your business’s AI activities and the scope of your operations.
- Developers: If your business develops any AI systems, you will likely need to comply with certain provisions of the CO Act. The VA Bill would affect you only if the systems you develop are considered “high risk.”
- Deployers/Users: Businesses that use AI systems are subject to the CO Act or VA Bill only if the system qualifies as “high risk.”
Both the CO Act and the VA Bill differentiate between developers and deployers of AI systems, imposing specific requirements that vary based on each role.
The CO Act applies to businesses operating in Colorado that:
- Develop AI systems generally, or
- Deploy certain “high-risk artificial intelligence systems.”
The VA Bill, in contrast, applies to businesses operating in Virginia that develop or deploy “a high-risk artificial intelligence system.”
Defining AI Systems and High-Risk AI Systems
Defining the scope of “AI system” is critical to determining whether these new laws apply to your business. Both the CO Act and the VA Bill provide definitions, but the CO Act’s definition is generally considered more encompassing.
- CO Act: Defines an AI system as “any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.”
- VA Bill: Presents a similar definition but excludes models used for development, prototyping, and research before the model is made available to deployers or consumers.
Both laws generally align in their definition of a high-risk AI system: one that makes, or is a substantial factor in making, a “consequential decision.” However, there are exceptions. Certain activities may be exempt, such as “perform[ing] a narrow procedural task,” detecting decision-making patterns or deviations from prior patterns without replacing or influencing a prior human assessment, and performing certain security or IT tasks.
What Constitutes a “Consequential Decision”?
The term “consequential decision” is key to the definition of “high-risk AI.” Both the CO Act and the VA Bill use the term to refer to a decision that materially affects the provision or denial of certain services and opportunities:
- Education enrollment or opportunity
- Employment or employment opportunity
- Financial or lending services
- Essential government services
- Healthcare services
- Housing
- Insurance
- Legal services
The VA Bill also includes decisions about parole, probation, pardons, or other release from incarceration, as well as decisions related to court supervision and marital status, under the definition of “consequential decisions.” In practice, however, these additions carry limited weight: the VA Bill excludes government entities from its scope, and government entities are the parties most likely to make such decisions.
Compliance Requirements: What Do I Need to Do?
The specific requirements for compliance will depend on your business’s role as a developer, a deployer, or both. Here’s a breakdown of the key obligations:
For Developers:
- Documentation Requirements: Both the CO Act and VA Bill require developers to create thorough documentation covering various aspects of the AI system and provide this documentation to deployers and other developers. The documentation should outline the data used for development, the models used, and the measures taken to mitigate potential algorithmic discrimination.
- Public Disclosures (CO Act, high-risk systems): For high-risk AI systems, developers must make a statement publicly available, on their website or in “a public use case inventory,” summarizing the types of high-risk systems they have developed and how they manage known or foreseeable risks of algorithmic discrimination.
- Mandatory Disclosures to Authorities (CO Act): If a developer becomes aware from any credible source that its high-risk AI system has caused or is reasonably likely to cause algorithmic discrimination, it must notify the Colorado Attorney General and all known deployers within 90 days of discovery.
For Deployers:
- Consumer Notices: Both laws mandate specific disclosures to consumers:
  - Disclose to the consumer that they are interacting with an AI system (CO Act).
  - Disclose that the consumer is interacting with a high-risk AI system (VA Bill).
  - Provide information about the risk of algorithmic bias and the general purpose and nature of the AI system.
- Adverse Decision Notice: If a high-risk AI system makes a decision adverse to a consumer, the deployer must provide the consumer with the following (see the sketch after this list):
  - The reason for the decision
  - The extent to which the AI system contributed to the decision
  - The type of data processed by the system and its sources
  - The consumer must also have the right to appeal the decision, request human review, and correct any inaccurate personal data used to make the decision.
- Risk Management Program and Impact Assessments: Both the CO Act and the VA Bill require deployers to:
  - Maintain a commercially reasonable risk management program, including documented policies that identify and mitigate risks of algorithmic discrimination.
  - Perform an impact assessment evaluating, among other things, the system’s purpose, performance, and known limitations.
- Mandatory Disclosures of Algorithmic Discrimination (CO Act): If a deployer discovers algorithmic discrimination, it must notify the Colorado Attorney General within 90 days of discovery.
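To make the adverse-decision requirements above concrete, a deployer building consumer notifications might capture the required disclosures in a structured record along the lines of the sketch below. This is a minimal, illustrative sketch only: the interface and field names (AdverseDecisionNotice, decisionReason, and so on) are our own assumptions, not terms defined in the CO Act or VA Bill, and counsel should confirm the exact content each law requires.

```typescript
// Hypothetical structure for the information a deployer provides a consumer
// after an adverse decision by a high-risk AI system. Field names are
// illustrative, not statutory terms.
interface AdverseDecisionNotice {
  decisionReason: string;              // principal reason(s) for the adverse decision
  aiContribution: string;              // extent to which the AI system contributed to the decision
  dataTypesProcessed: string[];        // categories of data the system processed
  dataSources: string[];               // where that data came from
  appealInstructions: string;          // how to appeal and request human review
  dataCorrectionInstructions: string;  // how to correct inaccurate personal data
}

// Example notice with placeholder values (illustrative only).
const exampleNotice: AdverseDecisionNotice = {
  decisionReason: "Application declined based on credit utilization ratio",
  aiContribution: "The system's score was a substantial factor in the decision",
  dataTypesProcessed: ["credit history", "income information"],
  dataSources: ["consumer reporting agency", "application form"],
  appealInstructions: "Reply to this notice to request review by a human decision-maker",
  dataCorrectionInstructions: "Contact us to correct any inaccurate personal data we relied on",
};
```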
Enforcement and Timeline
- Enforcement: Neither law provides a private right of action. Enforcement authority rests with the Attorney General of each state.
- Penalties (CO Act): Violations of the Colorado law are considered an unfair and deceptive trade practice and are subject to civil penalties, potentially up to $20,000 per violation.
- Penalties (VA Bill): Generally, the Virginia law provides for civil penalties of up to $1,000 per violation but significantly increases that to $10,000 per willful violation.
- Effective Dates: The CO Act will take effect on February 1, 2026. The VA Bill’s effective date is July 1, 2026, assuming it is signed into law by the governor.
Preparing for the New AI Regulations
Businesses operating in Colorado and Virginia (assuming the VA Bill becomes law) should take the following steps now to prepare for compliance:
- Inventory AI Use Cases: Document all current and planned AI system development efforts and deployments. This is the fundamental step for any subsequent legal analysis (a simple inventory record is sketched after this list).
- Assess Your Role (Developer, Deployer, or Both): Determine your role for each AI use case, as the compliance obligations depend on how your business relates to the AI system.
- Identify High-Risk AI Systems: Determine which AI systems make, or are a substantial factor in making, “consequential” decisions.
- Prepare Required Disclosures: Review the new disclosure requirements and create the necessary content, including any required consumer notices.
- Conduct and Document Impact Assessments: Adapt existing data protection assessment processes to meet the impact assessment requirements; the requirements are not identical, but there is enough overlap for existing processes to be a useful starting point.
- Draft and Implement an AI Risk Management Program: Consider using established frameworks such as the NIST AI RMF or ISO/IEC 42001 as guidance, as both the CO Act and the VA Bill acknowledge these frameworks.
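For teams that track compliance work in code or configuration, the inventory and classification steps above can be captured in a simple structured record. The following is a minimal sketch under our own assumptions: the type and field names (AiUseCaseRecord, decisionCategory, and so on) are illustrative rather than statutory terms, and the screening function is a triage heuristic, not a legal determination of whether a system is “high risk.”

```typescript
// Hypothetical inventory record for tracking AI use cases against the
// CO Act / VA Bill analysis steps above. Names are illustrative only.
type Role = "developer" | "deployer" | "both";

// Categories drawn from the "consequential decision" lists in both laws.
type ConsequentialDecisionCategory =
  | "education"
  | "employment"
  | "financial-or-lending"
  | "essential-government-services"
  | "healthcare"
  | "housing"
  | "insurance"
  | "legal-services"
  | "none";

interface AiUseCaseRecord {
  name: string;                          // e.g., "resume screening model"
  role: Role;                            // the business's relationship to the system
  decisionCategory: ConsequentialDecisionCategory;
  substantialFactorInDecision: boolean;  // does the system make or substantially shape the decision?
  jurisdictions: ("CO" | "VA")[];        // where the system is developed or deployed
  impactAssessmentCompleted: boolean;
  consumerNoticeDrafted: boolean;
}

// First-pass screen: flag a system as potentially "high risk" if it is a
// substantial factor in a consequential decision. Triage only; the statutory
// definitions and exemptions still need to be applied by counsel.
function isPotentiallyHighRisk(record: AiUseCaseRecord): boolean {
  return record.decisionCategory !== "none" && record.substantialFactorInDecision;
}
```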