On February 20, 2025, the Virginia legislature passed the High-Risk Artificial Intelligence Developer and Deployer Act (House Bill 2094, the “VA AI Act”), which requires developers and deployers of high-risk artificial intelligence systems (“HRAI systems”) to adhere to specific governance requirements. The VA AI Act will come into effect on July 1, 2026 (if Virginia Governor Glenn Youngkin signs the bill or decides not to act on it by March 24) and is similar to the Colorado Act concerning Consumer Protections in Interactions with Artificial Intelligence Systems (the “CO AI Act”), which will take effect on February 1, 2026. With Virginia enacting the nation’s second AI governance law, we provide a summary of both AI Acts below, highlighting the key differences between them.
Key Terms
The AI Acts largely impose obligations on developers and deployers of HRAI systems.
• “High-risk artificial intelligence systems” are generally defined as AI systems that make, or are a substantial factor in making, consequential decisions about consumers.
• “Developers” are businesses that develop, or intentionally and substantially modify, an AI system. “Intentionally and substantially modifying” an AI system means deliberately changing it in a manner that results in a new reasonably foreseeable risk of algorithmic discrimination. “Algorithmic discrimination” refers to unlawful differential treatment based on protected characteristics such as age, color, disability, ethnicity and more. Developers can include deployers that further train an AI system.
• “Deployers” are businesses that deploy an HRAI system.
• “Consequential decisions” are those that have a material legal, or similarly significant, effect on the provision or denial to any consumer of, or the cost or terms of, (i) education enrollment or an education opportunity, (ii) employment or an employment opportunity, (iii) a financial or lending service, (iv) health-care services, (v) housing, (vi) insurance or (vii) a legal service. Consequential decisions also include those impacting an essential government service under the CO AI Act and those impacting parole, probation, a pardon or any other release from incarceration or supervision, or marital status, under the VA AI Act.
Scope
The scope of the VA AI Act is narrower than that of the CO AI Act, primarily due to differences in how HRAI systems are defined. For instance:
• Under the VA AI Act, HRAI systems refer to AI systems specifically intended to autonomously make, or significantly contribute to making, a consequential decision. An AI system developed by a developer may not be considered an HRAI system under Virginia law if the developer did not intend for it to be used for a high-risk purpose, even if it could be or was used by a deployer for such a purpose. In contrast, under the CO AI Act, an AI system could be classified as an HRAI system if it were used for a high-risk purpose, regardless of the developer’s original intent.
• Under the VA AI Act, an AI system qualifies as an HRAI system when it is a substantial factor in making a consequential decision, but only if it serves as the principal basis for that decision. Under the CO AI Act, by contrast, an AI system can be a substantial factor in making a consequential decision if it merely assists in the decision-making process.
The AI Acts each require that an AI system impact a consumer in order to be classified as an HRAI system. While both AI Acts define “consumer” as a state resident, the VA AI Act excludes individuals acting in a commercial or employment context. Consequently, the VA AI Act does not apply to employers using AI systems to make consequential decisions about their employees and contractors. Although the CO AI Act does not have a similar commercial or employment exclusion, it does partially exclude deployers with fewer than fifty full-time equivalent employees who do not use their own data to train HRAI systems.
Developer Obligations
Under both AI Acts, developers of HRAI systems must take reasonable care to prevent algorithmic discrimination and meet certain disclosure requirements to establish a rebuttable presumption of reasonable care. These disclosure requirements do not mandate the disclosure of trade secrets, information protected by the respective state or federal law, or information that could pose a security risk to the developers.
Disclose Model Cards and other Artifacts to Deployers and Other Developers. Developers must disclose model cards and other artifacts to deployers and other developers that detail the system’s purpose, data used, limitations, risks, mitigation measures, and other information. However, a developer who also deploys the HRAI system is not required to generate these documents unless the HRAI system is provided to an unaffiliated entity acting as a deployer. The specific information required to be disclosed includes:
• Benefits and Uses of HRAI Systems. The AI Acts require developers to disclose the purpose and the intended benefits and uses of HRAI systems, and how the HRAI system should be used, should not be used and should be monitored by an individual. The CO AI Act specifically requires developers to disclose reasonably foreseeable uses of HRAI systems and known harmful or inappropriate uses of the HRAI system, whereas the VA AI Act requires developers to disclose the intended uses of the HRAI system.
• Components of HRAI Systems. Both AI Acts require developers to disclose the intended outputs of the HRAI system. The CO AI Act also requires developers to disclose high-level summaries of the type of data used to train the HRAI system.
• Performance and Limitations of HRAI Systems. The AI Acts require developers to disclose (i) known or reasonably foreseeable limitations of the HRAI system, including known or reasonably foreseeable risks of algorithmic discrimination arising from the intended uses of the HRAI system, (ii) how the HRAI system was evaluated for performance and mitigation of algorithmic discrimination before it was made available to the deployer, (iii) the data governance measures used to cover the training datasets and the measures used to examine the suitability of data sources, possible biases and appropriate mitigation, and (iv) the measures the developer took to mitigate known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of the HRAI system.
• Additional Information to Assist Deployers. Developers must also provide any additional documentation reasonably necessary to help a deployer understand the outputs and monitor the performance of the HRAI system for algorithmic discrimination, and to enable the deployer to complete an impact assessment.
Publish AI Inventory. The CO AI Act requires developers of HRAI systems to disclose, on their websites or in public use inventories, a statement summarizing the types of HRAI systems they have developed or intentionally and substantially modified, and currently make available to a deployer or another developer. This statement must also explain how the developers manage known or reasonably foreseeable risks of algorithmic discrimination. The statement must be updated as necessary to ensure accuracy, and no later than ninety days after the developer intentionally and substantially modifies any HRAI system.
The VA AI Act does not have a similar requirement.
Make Ongoing Post-Market Risk Notifications. The CO AI Act requires developers of HRAI systems to disclose any known or reasonably foreseeable risk of algorithmic discrimination to the Colorado Attorney General and all known deployers or other developers of the HRAI systems. This disclosure must be made no later than ninety days after the developer discovers through ongoing testing that the HRAI systems have been deployed and caused, or are reasonably likely to cause, algorithmic discrimination. Additionally, if developers receive a credible report from a deployer that an HRAI system has been deployed and has caused algorithmic discrimination, they must also make this disclosure.
The VA AI Act does not have a similar requirement to make a report to the Virginia Attorney General.
Ensure Detection of Synthetic Material. The VA AI Act requires developers of HRAI systems that generate or substantially modify synthetic material to ensure that consumers can detect that the material is synthetic. This detection must be possible using industry-standard tools or tools provided by the developers. “Synthetic material” refers to content generated or significantly modified by algorithms.
The CO AI Act does not have a similar requirement.
Deployer Obligations
Deployers under the AI Acts must use reasonable care to protect consumers from algorithmic discrimination. Specifically, deployers must implement a risk management policy and program, review HRAI systems for algorithmic discrimination (in Colorado only), conduct impact assessments and make certain disclosures related to the deployment of HRAI systems. None of these obligations require deployers to disclose trade secrets or information protected by the respective state or federal law.
Deployers of HRAI systems that meet these requirements are presumed to have used reasonable care to protect against algorithmic discrimination.
Implement a Risk Management Policy and Program. A deployer of an HRAI system must implement a risk management policy and program that governs the deployment of the HRAI system. This policy and program must be reasonable, taking into account several factors, including guidance from the latest version of the National Institute of Standards and Technology’s Artificial Intelligence Risk Management Framework, the International Organization for Standardization’s ISO/IEC 42001, or other comparable nationally or internationally recognized risk management frameworks for AI, or a risk management framework designated by the state AG.
Review for Algorithmic Discrimination. The CO AI Act requires deployers of HRAI systems to review the deployment of each HRAI system at least annually to ensure that the systems are not causing algorithmic discrimination. The VA AI Act does not have a similar requirement for annual reviews.
Conduct an Impact Assessment. The AI Acts require deployers of HRAI systems to complete an impact assessment for deployed HRAI systems. The impact assessment must cover certain information, including:
• An analysis of whether the HRAI system poses any known or reasonably foreseeable risk of algorithmic discrimination.
• A description of the categories of data the HRAI system processes as inputs and the outputs produced.
• The metrics used to evaluate the performance and known limitations of the HRAI system.
The VA AI Act also requires deployers to include an analysis of their HRAI systems’ validity and reliability in accordance with industry practices, including a description of the metrics used to evaluate the performance and limitations of such systems.
The CO AI Act mandates that deployers conduct the assessment at least annually and within ninety days after any intentional and substantial modification to an HRAI system is made available. In contrast, the VA AI Act requires deployers to conduct impact assessments less frequently: before initially deploying HRAI systems and before significantly updating them. However, the VA AI Act does not define what constitutes a “significant update.”
Both AI Acts require deployers to maintain completed impact assessments for at least three years following the final deployment of the HRAI system. The VA AI Act also contains a separate, longer retention requirement, mandating that impact assessments be maintained for as long as the HRAI system is deployed and for three years following its final deployment. Until the legislature or the state AG clarifies which requirement controls, deployers subject to the VA AI Act may want to maintain impact assessments for the longer duration out of caution.
Provide Disclosures to Consumers. A deployer of an HRAI system must notify the consumer that the deployer has deployed the system and provide to the consumer certain information no later than the time the deployer deploys the HRAI system, including the purpose of the HRAI system, the nature of the consequential decision and a description of the HRAI system. The VA AI Act specifies that the description of the HRAI system must include certain components, including a description of the personal characteristics or attributes that the system will measure or assess, how automated components of such system are used to inform the consequential decision and any human components of such system.
Provide Adverse Action Notices to Consumers. A deployer of an HRAI system must provide certain information to a consumer if the system is used to make, or is a substantial factor in making, a consequential decision adverse to the consumer. This includes an explanation of the principal reason for the consequential decision. Additionally, the deployer must offer the consumer an opportunity to correct any incorrect personal data processed by the HRAI system in making the decision, as well as an opportunity to appeal the adverse decision.
Maintain an HRAI System Inventory. The CO AI Act requires a deployer of HRAI systems to maintain certain information about the HRAI systems on their website, including the types of HRAI systems currently deployed by the deployer, the nature, source and extent of information collected and used by the deployer, and how the deployer manages known or reasonably foreseeable risks of algorithmic discrimination that may arise from the deployment of each HRAI system.
The VA AI Act does not have a similar requirement.
“Talking to a Bot” Disclosures
The CO AI Act requires developers and deployers of AI systems intended to interact with consumers to disclose to consumers that they are interacting with an AI system, unless it would be obvious to a reasonable person. This requirement is separate from the obligation to notify consumers when a deployer uses an HRAI system, which applies even when consumers are not directly interacting with the system. The VA AI Act does not have a similar requirement.
Colorado’s disclosure requirement appears to target developers and deployers of AI systems, even if those systems are not classified as HRAI systems. However, since the term “deployer” is defined as a person who deploys an HRAI system, this requirement seems limited to those deploying HRAI systems. This interpretation could create a discrepancy between when developers and deployers must inform consumers about AI interactions. Guidance from the Colorado AG may clarify this issue.
Exclusions for Healthcare Entities, Financial Institutions and Insurers
The AI Acts do not apply to developers or deployers that are covered entities under HIPAA and that provide health-care recommendations that are generated by an AI system, require a health-care provider to take action to implement them, and are not considered high risk. The AI Acts also contain exclusions for banks, credit unions and insurers if certain conditions are met.
Enforcement, Investigations and Rulemaking
Enforcement. The AI Acts do not create, nor can they serve as the basis for, a private right of action. Notwithstanding the absence of a private right of action, it is not uncommon for plaintiffs to utilize statutes without a private right of action in support of a civil negligence claim, particularly where (as here) the claim would hinge on “reasonableness.”
The state attorneys general have exclusive enforcement authority under the respective AI Acts. In Colorado, the AG may bring an enforcement action against alleged violators of the CO AI Act for a maximum civil penalty of $20,000 per violation. In Virginia, the AG may bring an enforcement action against alleged violators of the VA AI Act for a maximum civil penalty of $1,000 per violation, and between $1,000 and $10,000 per willful violation.
Investigations. Under the CO AI Act, the AG may require developers and deployers to disclose certain information about their HRAI systems without needing a specific basis for the request. The AG can ask a developer to disclose the information that developers are required to share with other developers about their systems, and a deployer to disclose its risk management policy, impact assessments, or related records. In comparison, the VA AI Act requires the AG to have reasonable cause to believe that a business has engaged in or is engaging in a violation of the VA AI Act to request such information through a civil investigative demand.
Affirmative Defense. Businesses have an affirmative defense under the CO AI Act if they (i) cure an alleged violation discovered through an internal review process, red teaming or other adversarial testing, or feedback that the businesses encouraged internal or external users to provide and (ii) comply with NIST’s AI Risk Management Framework or another AI risk management framework recognized by the AG.
Similarly, businesses have an affirmative defense under the VA AI Act if they (i) discover an alleged violation through red-teaming or another method, (ii) cure such alleged violation within forty-five days after discovering it, (iii) provide to the AG notice that the alleged violation has been cured and evidence that any harm caused by the alleged violation has been mitigated, and (iv) are otherwise in compliance with the VA AI Act.
Rulemaking. The Colorado AG has rulemaking authority under the CO AI Act. The Virginia AG does not explicitly have the same authority under the VA AI Act.
We will continue to track legislative and regulatory developments related to AI. Contact us for more information.