On January 13, 2025, California Attorney General (“AG”) Rob Bonta issued an advisory describing providers’ and businesses’ obligations related to the development, sale, and use of artificial intelligence (“AI”) and automated decision systems (“ADS”) in the healthcare industry (“Advisory”). The Advisory puts healthcare providers, insurers, and businesses serving the healthcare industry on notice of the AG’s heightened scrutiny of AI and of the variety of enforcement options available to the AG with respect to AI systems. Our top ten takeaways are below.
- The AG considers a wide range of “healthcare entities” to be responsible for ensuring AI and ADS comply with California law. The Advisory addresses not only traditional healthcare players, such as healthcare providers, insurers, and researchers, but also businesses that provide services to those parties, such as vendors, developers, researchers, and investors. Healthcare entities that “develop, sell, or use” AI “must ensure their systems comply with laws protecting consumers.” Specifically, healthcare entities “should ensure that AI systems are tested, validated, and audited to ensure that their use is safe, ethical, and lawful, and reduces, rather than replicates or exaggerates, human error and bias,” while also being transparent about the use of patients’ data to train AI models and about how healthcare providers use AI to make decisions about patients’ health. Beyond healthcare providers, the Advisory also explicitly states that “businesses may also be liable for supplying AI tools when they know, or should have known, that AI will be used to violate the law.”
- The AG outlines five acts he considers potentially unlawful. These potentially unlawful acts include using AI or ADS to: (1) override doctors’ decisions about insurance coverage and/or medical necessity; (2) draft patient notes, communications, or orders that “include erroneous or misleading information,” such as discriminatory stereotypes; (3) make predictions based on patients’ “past healthcare claims data” that result in “disadvantaged patients or groups…being denied services” and “groups with robust past access being provided enhanced services”; (4) predict whether a patient “is the ‘type of person’ more likely to miss an appointment,” resulting in administrative barriers to healthcare access, such as double-booking appointments; and (5) conduct cost-benefit analyses of treatments for people with disabilities “that are based on stereotypes that undervalue the lives of people with disabilities.”
- The AG sees California’s Unfair Competition Law (UCL) as applying broadly to AI and ADS in healthcare. The AG interprets California’s UCL broadly: “The [UCL] was intentionally written with ‘broad, sweeping language’ to protect Californians from obvious and familiar forms of fraud and deception as well as new, creative, and cutting-edge forms of misleading behavior.” In addition to traditionally “unfair” acts such as false advertising and anticompetitive practices, healthcare entities using, selling, or developing AI or ADS systems should be aware that “a violation of any other state, federal, or local law is ‘independently actionable’ under the UCL.”
- AI cannot practice medicine in California. The AG notes that California bans the practice of medicine by “corporations and other ‘artificial legal entities’” and that “[o]nly human…medical professionals…are licensed to practice medicine in California.” The AG’s position, therefore, is that AI and ADS cannot override providers’ determinations concerning patient care, medical need, or insurance coverage.
- The AG cautions providers about investing in AI in ways that could violate professional licensure and/or ethics rules. The Advisory states that providers are prohibited from delegating their practice of medicine to AI or ADS and that they may also “violate conflict of interest law if they or their family member have a financial interest in AI services.”
- The California AG is focused on potentially discriminatory and biased decisions made by AI and ADS. Entities or individuals receiving “any state support” and “principally engaged in the business of providing[…]healthcare” may not discriminate based on protected classifications, such as sex, race, color, or religion. The Advisory takes an expansive view of what constitutes discrimination: AI and ADS systems that make “less accurate” predictions about protected classes (regardless of the availability of data) could be considered discriminatory. Indeed, “[a]lthough a policy or tool may be facially neutral, healthcare entities may not simply ignore or avoid data regarding inequity relating to race, gender, or other protected classification.” The AG also notes that California is already investigating potential discrimination in healthcare algorithms.
- Privacy and confidentiality must be maintained when introducing patient data to AI or ADS. The Advisory reminds healthcare entities that the privacy rights and restrictions in California’s Confidentiality of Medical Information Act (“CMIA”), HIPAA, the Genetic Information Privacy Act (“GIPA”), and the California Consumer Privacy Act (“CCPA”) should be considered when exposing patient data to AI and ADS systems, such as for training or evaluation.
- The AG urges healthcare providers to consider obtaining informed consent from a patient before using AI for diagnosis or treatment. California law requires providers to obtain informed consent from patients “for a proposed course of treatment.” The AG takes a page from privacy regulation, stating that using coercive tactics or “dark patterns” to obtain consent interferes with patient autonomy and undermines the effectiveness of the informed consent. The AG suggests patients may prefer that providers seek consent before using AI and ADS in medical care, noting that a majority of Californians reported being “uncomfortable” with the use of AI in healthcare in the recent California Health Policy Survey.
- Use of and access to sensitive data, such as reproductive and sexual health data, by AI and ADS should be limited appropriately. The CMIA, which requires “providers of healthcare” (including some digital health companies and mobile applications) to maintain the confidentiality of California patients’ medical information, was recently amended to categorize certain reproductive and sexual health data as “sensitive.” Per the AG, “as developers and users of [electronic health records] and related applications increasingly incorporate AI, they must ensure compliance with CMIA and limit access and improper use of sensitive information.”
- The Advisory is not all-encompassing. The Advisory “does not encompass all possible laws that may apply to health AI.” It also states that, in addition to new laws that may arise, “tort, public health, charitable trusts, competition, and criminal laws…apply equally to AI systems as they do to non-AI system[s].” The AG goes on to clarify: “conduct that is illegal without the involvement of AI is equally unlawful if AI is involved, and the fact that AI is involved is not a defense to liability under any law.”