In mid-July, the Federal Trade Commission (FTC) reportedly opened an investigation into OpenAI, the maker of ChatGPT, sending the company an extensive Civil Investigative Demand (CID). While FTC investigations are normally non-public, the Washington Post published what appears to be part of the CID sent from the FTC to OpenAI. This investigation comes on the heels of FTC Chair Lina Khan stating her intention to use existing consumer protection law to protect people from the potential dangers of generative artificial intelligence (“AI”). Khan and the Biden administration have signaled that they will take a “whole of government” approach to using existing law to combat any potentially harmful outcomes of AI.
According to the CID, the FTC’s investigation is twofold. First, the Commission is investigating whether OpenAI, “in connection with offering products and services that incorporate or rely on Large Language Models, has engaged in unfair or deceptive privacy or data security practices,” and second, whether OpenAI “engaged in unfair or deceptive practices relating to risks of harm to consumers” in violation of Section 5 of the FTC Act. The FTC has defined “Large Language Model” in a portion of the CID that has not been publicly released, but the term generally refers to AI models that use large datasets and deep learning to predict and generate new content. In the CID, the FTC states that it intends to determine whether Commission action to obtain monetary relief would be in the public interest.
Specific requests in the CID cover an array of focus areas, including OpenAI’s model development and training, risk assessment and mitigation, and technical details such as API integrations. The CID requires OpenAI to describe in detail the data used to train or develop each Large Language Model product it has made available, such as ChatGPT, along with how it obtained that data. The FTC goes on to inquire into how Large Language Models were trained, including the training process, the individuals involved, the role of human feedback, and oversight. This is consistent with the Commission’s prior statements that it intends to ensure consumers do not face bias from AI tools.
Showcasing the Commission’s continued interest in privacy, security, and consumer protection, the FTC asks OpenAI to detail the risks it identified while training products like ChatGPT. As part of this, the FTC inquires into a known OpenAI data security incident along with the potential for future incidents. The CID includes questions that probe a data security incident OpenAI disclosed on March 24, 2023 that enabled some users to see information from other users’ chats and payment information. It also asks for information about whether Large Language Models have been subject to known attack types, such as “prompt injection.” The Commission further asks about OpenAI’s policies and procedures for mitigating the risk that APIs and plugins could enable unauthorized access to or exfiltration of users’ personal information. Additionally, the Commission asks OpenAI to list the types of personal information it collects and stores, as well as any steps OpenAI has taken to identify uses that could lead any Large Language Model to generate statements about real individuals containing inaccurate personal information.
Not long after the FTC reportedly opened its investigation into OpenAI, the company signed on to a list of voluntary AI commitments published by the White House. These commitments are targeted at ensuring the safe, secure, and transparent development of AI and have been joined by Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. The commitments include, among others:
- Red-teaming models and systems in areas including misuse, societal risks, and national security concerns (including potential cyber threats)
- Sharing information among companies and governments regarding safety risks, dangerous or emergent capabilities, and attempts to circumvent safeguards
- Investing in cybersecurity and incentivizing third-party discovery and reporting of issues and vulnerabilities
- Publicly reporting model or system capabilities, limitations, and domains of appropriate and inappropriate use
These commitments are voluntary, although the Commission could view departures from them as unfair or deceptive practices under its FTC Act authority. The Biden Administration has also indicated that it is working on an executive order to address the risks posed by AI. The full list of voluntary commitments to the White House may serve as a roadmap for a more robust executive order.
While the effects of OpenAI’s commitment remain to be seen, good faith efforts to fulfill its promises to the White House could aid the company in responding to the FTC. Ultimately, if the FTC determines that OpenAI has violated consumer protection law, it may seek a consent decree that shapes how generative AI products like ChatGPT are developed in the future.