On October 4, 2022, the White House Office of Science and Technology Policy released the Blueprint for an AI Bill of Rights (the Bill) to guide the development and use of artificial intelligence (AI) in the United States. The White House recognized that while AI is a powerful driver of innovation, the technology can also be wielded as an invasive surveillance tool. Given AI's growing decision-making role in sensitive domains such as housing, banking, healthcare, and criminal justice, there is concern that algorithms "plagued by bias and discrimination" may produce disparate and harmful outcomes. To mitigate these and other risks, the Bill sets out five protections to which all individuals should be entitled:
1. Safe and Effective Systems. Developers should identify health and safety risks prior to deploying an AI system and consult key stakeholders during the development process. Once deployed, developers should regularly monitor the system to demonstrate that the outcomes are safe and within the bounds of the system’s intended use.
2. Algorithmic Discrimination Protections. The Bill recognizes that automated decision-making can contribute to disparate treatment based on race, color, ethnicity, sex (including pregnancy and childbirth), gender identity, sexual orientation, religion, age, national origin, disability, veteran status, genetic information, and other protected classifications. To combat unequal treatment, developers should conduct proactive equity assessments, rely on representative data, provide accessibility features for people with disabilities, and perform ongoing disparity testing.
3. Data Privacy. Developers should only collect what is necessary to inform the AI system and respect individuals’ decisions regarding collection, use, access, transfer, and deletion of data. Data collection requests should be easy to comprehend and enable the user to provide meaningful, informed consent. Where an AI system involves data from sensitive domains such as housing, healthcare, employment, education, criminal justice, and finance, developers should deploy enhanced safeguards and restrictions for data inferences.
4. Notice and Explanation. Similar to the GDPR, the Bill grants individuals the right to know how and why an AI system reached a decision that affects them. Accordingly, AI developers and deployers should provide explanations that are technically sound, meaningful, easy to understand, and, whenever possible, publicly available.
5. Human Alternatives, Consideration, and Fallback. Where appropriate, and especially within sensitive domains, individuals should have the right to opt out of automated decision-making in favor of a human alternative. Appropriateness is determined based "on reasonable expectations in a given context and with a focus on ensuring broad accessibility and protecting the public from especially harmful impacts." These human alternatives and fallback processes should be accessible, equitable, and staffed by appropriately trained personnel.
The Bill provides some guidance for the application of these principles by way of a two-part test. First, the framework applies only to automated systems. Second, these systems must “have the potential to meaningfully impact the American public’s rights, opportunities or access to critical resources or services.”
The regulatory landscape will continue to evolve as new state laws take effect, regulators sharpen their focus, and the EU advances its proposed AI Act. We will continue to monitor and report on any significant developments at the state, federal, and international levels.