In the Matter of OpenAI (Federal Trade Commission 2023)


The Federal Trade Commission should open an investigation and order OpenAI to halt the release of GPT models until necessary safeguards are established. These safeguards should be based on the guidance for AI products the FTC has previously established and the emerging norms for the governance of AI.


BREAKING - Following CAIDP Complaint, FTC Launches Investigation of OpenAI

Summary - CAIDP Complaint and FTC Investigation

CAIDP - OpenAI Original Complaint

  • Raised concerns about bias, transparency, privacy, public safety, and deception with ChatGPT/GPT-4
  • Detailed specific harms in each of these areas that OpenAI acknowledged could occur with GPT-4
  • Cited OpenAI's own statements, research papers, and system cards that warned about risks
  • Argued OpenAI released GPT-4 despite knowing about risks, violating FTC guidance
  • Supplement to Complaint

CAIDP - OpenAI Supplement

  • Highlighted global investigations launched in Italy, Canada, France, Australia, Germany, Spain, and the EU
  • Pointed to calls for regulation and pause on deployment from experts worldwide
  • Noted increased public concern about generative AI safety
  • Flagged new cybersecurity, privacy, and democracy threats posed by ChatGPT

FTC - OpenAI Investigation (Demand Letter)

  • Seeks information on bias, transparency, privacy, safety, and deception risks
  • Asks for details on what consumer complaints OpenAI has received
  • Requests info on how personal data is used and protected
  • Inquires about policies, procedures, and reviews related to model development
  • Questions what steps OpenAI takes to filter, audit, and moderate model outputs
  • Aligns with original complaint areas and supplement details on emerging issues 

In summary, the investigation document closely traces the main issues and concerns outlined in the original CAIDP complaint and supplement. It shows the FTC is taking seriously the potential for consumer harm and illegality identified by CAIDP. The document signals a comprehensive inquiry aimed at gathering evidence on the key areas CAIDP argued require investigation of OpenAI's practices.

UPDATE - CAIDP Submits New Filing to FTC, Urges Action in OpenAI Case (Nov. 15, 2023)

Today (Nov. 15, 2023), the Center for AI and Digital Policy urged the Federal Trade Commission to move forward with the investigation of OpenAI and to issue an order establishing safeguards and guardrails for ChatGPT.


In March, CAIDP filed the initial Complaint against OpenAI. In July, CAIDP filed a Supplement, providing new evidence for the FTC to act. Shortly afterward, The New York Times and The Wall Street Journal reported that the FTC had opened the investigation CAIDP requested.


In the 37-page Second Supplement filed today, CAIDP highlights the enforcement actions taken by agencies outside the US, documents new problems with ChatGPT, calls attention to OpenAI’s failure to uphold commitments, quotes leading AI experts on the need to establish guardrails as soon as possible, and cites growing public concern about the use of personal data in AI products.


In a letter to Chair Lina Khan, CAIDP also quoted President Biden's new Executive Order on AI:


"The Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI. Such protections are especially important in critical fields like healthcare, financial services, education, housing, law, and transportation, where mistakes by or misuse of AI could harm patients, cost consumers or small businesses, or jeopardize safety or rights."


In the original Complaint, CAIDP wrote:


"OpenAI, Inc., a California-based corporation, has released a product GPT-4 for the consumer market that is biased, deceptive, and a risk to privacy and public safety. The outputs cannot be proven or replicated. No independent assessment was undertaken prior to deployment. . . . There should be independent oversight and evaluation of commercial AI products offered in the United States. CAIDP urges the FTC to open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace."


CAIDP, Second Supplement (Nov. 14, 2023)


CAIDP, Letter to FTC Chair Lina Khan (Nov. 14, 2023)


UPDATE - CAIDP Updates ChatGPT Complaint, Urges Action by FTC (July 10, 2023)

WASHINGTON DC - The Center for AI and Digital Policy (CAIDP) has escalated its case against OpenAI, creator of the ChatGPT AI product, by filing a supplement to the original complaint lodged with the U.S. Federal Trade Commission (FTC) in March 2023, accusing OpenAI of "unfair and deceptive practices." This CAIDP action introduces new developments in the global debate over ChatGPT and increases pressure on the FTC to expedite an investigation.


ChatGPT has been at the center of numerous investigations by consumer agencies around the world since the original CAIDP complaint. Many of these international investigations have resulted in concrete actions and heightened oversight of OpenAI, with several carrying implications for how AI should be regulated.


"The continued global focus on regulating AI products such as ChatGPT only emphasizes the need for immediate action from the FTC," said Marc Rotenberg, Executive Director of CAIDP. "While other countries have taken swift action, the FTC has yet to acknowledge the first complaint made by CAIDP, raising concerns about the agency's ability to address the pressing issues that are emerging from the rapidly evolving AI industry."


In response to mounting concerns over the ethical implications and the regulatory needs of such AI products, regulatory agencies in Italy, Canada, France, Australia, Germany, Spain, Switzerland, the U.K., and Japan have initiated investigations and legal proceedings against OpenAI.


The supplement to the original CAIDP complaint outlines these international efforts, sheds light on newly surfaced issues not fully captured in the initial complaint, such as risks to democracy, and calls attention to statements by FTC Commissioners that the agency would act to safeguard consumers from the harms of unregulated AI products.


With growing public support for AI product regulation, expert opinions, bipartisan AI legislation proposed by the Senate Majority Leader Chuck Schumer, and President Biden's calls for ensuring product safety before public deployment, CAIDP reaffirms its commitment to pressuring the FTC for urgent intervention.


CAIDP Supplement, In the Matter of Open AI (July 10, 2023)


CAIDP's Randolph Asks FTC Commissioners about Status of OpenAI Complaint (May 18, 2023)

  • On May 18, 2023, CAIDP Fellow Christabel Randolph spoke at the FTC Open Commission Meeting. Ms. Randolph reminded Commissioners Khan, Slaughter, and Bedoya about the CAIDP complaint. From the transcript:

I'm here today to ask about the status of the CAIDP complaint concerning OpenAI. To date, we have not even received an acknowledgement. Have you opened the investigation we requested? If you have not opened the investigation, can you tell us whether you will, and if you don't, can you explain why? The FTC has previously made such announcements, for example, announcing the opening of the Facebook investigation after Cambridge Analytica. So we expect and we request a response from the FTC about what actions will be taken on our complaint.

Chronology of the CAIDP FTC Complaint

  • On March 6, 2023, CAIDP's Marc Rotenberg and Merve Hickok published a letter in The New York Times, "Regulating AI: The US Needs to Act." They wrote, "The recent coverage of Washington’s response to artificial intelligence is a welcome shift toward an overdue policy debate. But the challenge ahead is not so much about educating lawmakers about new technology as it is about establishing the necessary safeguards to protect the public."
  • On March 8, 2023, CAIDP's Merve Hickok testified before the U.S. Congress. She stated, "We do not have the guardrails in place, the laws that we need, the public education, or the expertise in government to manage the consequences of the rapid changes that are now taking place."
  • On March 20, 2023, Merve Hickok and Marc Rotenberg wrote a public letter, stating that CAIDP will file a complaint with the Federal Trade Commission, "calling for an investigation of Open AI and the product chatGPT." They explained, "We believe the FTC has the authority to act in this matter and is uniquely positioned as the lead consumer protection agency in the United States to address this emerging challenge. We will ask the FTC to establish a moratorium on the release of further commercial versions of GPT until appropriate safeguards are established."
  • On March 30, 2023, CAIDP filed a 46-page complaint with the Federal Trade Commission. The Center asked the Commission to begin an investigation of OpenAI and to prevent the release of further models until necessary guardrails are established. In a letter to the FTC Commissioners, Marc Rotenberg and Merve Hickok reminded the Commission that they previously declared that AI products should be “transparent, explainable, fair, and empirically sound while fostering accountability.”
  • On April 4, 2023, President Biden, meeting with his top science advisors, explained the need to address the potential risks of AI to society, the economy, and national security. He called for "responsible innovation and appropriate guardrails to protect America’s rights and safety, and protecting their privacy, and to address the bias and disinformation." He said "tech companies have a responsibility to make sure their products are safe before making them public."
  • On April 6, 2023, CAIDP released AI and Democratic Values, a comprehensive review of AI policies and practices in 75 countries. In the United States country report, CAIDP notes that the Federal Trade Commission has set out guidelines for businesses offering AI products, but also warns that "the Federal Trade Commission has failed to act on several pending complaints concerning the deployment of AI techniques in the commercial sector." (Page 1085).
  • On April 11, 2023, the House Committee on Energy and Commerce announced an oversight hearing for the Federal Trade Commission for Tuesday, April 18, 2023, to review the agency's program and budget request for FY2024.

CAIDP FTC Complaint

  • Center for AI and Digital Policy (CAIDP), Press Release, AI POLICY ORGANIZATION URGES FEDERAL TRADE COMMISSION TO INVESTIGATE OPENAI AND SUSPEND FURTHER SALE OF LARGE LANGUAGE MODEL PRODUCTS, SUCH AS GPT-4; FILES FORMAL COMPLAINT WITH FTC; Center for AI and Digital Policy Says OpenAI Violated Section 5 of the FTC Act, FTC Guidance for AI Products, and Rules for Governance of AI, March 30, 2023 
  • Center for AI and Digital Policy (CAIDP), Letter to Chair Khan, Commissioner Slaughter, Commissioner Wilson, and Commissioner Bedoya, March 30, 2023

Background - Open Letter from CAIDP

20 March 2023


Dear Friends


In 2019, many countries around the world, including the United States, committed to the development of human-centric and trustworthy AI. Yet only a few years on, we appear to be approaching a tipping point with the release of Generative AI techniques, which are neither human-centric nor trustworthy.


These systems produce results that cannot be replicated or proven. They fabricate and hallucinate. They describe how to commit terrorist acts, how to assassinate political leaders, and how to conceal child abuse. GPT-4 can undertake mass surveillance at scale, combining the ability to ingest images, link them to identities, and develop comprehensive profiles.


As this industry has rapidly evolved so too has the secrecy surrounding the products. The latest technical paper on GPT-4 provides little information about the training data, the number of parameters, or the assessment methods. A fundamental requirement in all emerging AI policy frameworks – an independent impact assessment prior to deployment – was never undertaken. 


Many leading AI experts, including many companies themselves, have called for regulation. Yet there is little effort in the United States today to develop regulatory responses even as countries around the world race to establish legal safeguards. 


The present course cannot be sustained. The public needs more information about the impact of artificial intelligence. Independent experts need the opportunity to interrogate these models. Laws should be enacted to promote algorithmic transparency and counter algorithmic bias. There should be a national commission established to assess the impact of AI on American Society, to better understand the benefits as well as the risks.


This week the Center for AI and Digital Policy, joined by others, will file a complaint with the Federal Trade Commission, calling for an investigation of Open AI and the product chatGPT.  We believe the FTC has the authority to act in this matter and is uniquely positioned as the lead consumer protection agency in the United States to address this emerging challenge. We will ask the FTC to establish a moratorium on the release of further commercial versions of GPT until appropriate safeguards are established. We will simultaneously petition the FTC to undertake a rulemaking for the regulation of the generative AI industry. 


We favor growth and innovation. We recognize a wide range of opportunities and benefits that AI may provide. But unless we are able to maintain control of these systems, we will be unable to manage the risk that will result or the catastrophic outcomes that may emerge. We are asking the FTC to “hit the pause button” so that there is an opportunity for our institutions, our laws, and our society to catch up. We need to assert agency over the technologies we create before we lose control. 


Merve Hickok and Marc Rotenberg


For the CAIDP 

The FTC and AI Policy

Over the last several years, the FTC has issued several reports and policy guidelines concerning the marketing and advertising of AI-related products and services. We believe that the FTC should require OpenAI to comply with these guidelines.

FTC Keep your AI claims in check (2023)

When you talk about AI in your advertising, the FTC may be wondering, among other things:

  • Are you exaggerating what your AI product can do? 
  • Are you promising that your AI product does something better than a non-AI product?
  • Are you aware of the risks?
  • Does the product actually use AI at all?

[More information]

Aiming for truth, fairness, and equity in your company’s use of AI (2021)

In 2021, the FTC warned that research on artificial intelligence "has highlighted how apparently “neutral” technology can produce troubling outcomes – including discrimination by race or other legally protected classes." The FTC explained that it has decades of experience enforcing three laws important to developers and users of AI:

  • Section 5 of the FTC Act
  • Fair Credit Reporting Act
  • Equal Credit Opportunity Act

The FTC said its recent work on AI – coupled with FTC enforcement actions – offers important lessons on using AI truthfully, fairly, and equitably.

  • Start with the right foundation
  • Watch out for discriminatory outcomes
  • Embrace transparency and independence
  • Don’t exaggerate what your algorithm can do or whether it can deliver fair or unbiased results
  • Tell the truth about how you use data
  • Do more good than harm
  • Hold yourself accountable – or be ready for the FTC to do it for you

[More information]

Using Artificial Intelligence and Algorithms (2020)

" . . . we at the FTC have long experience dealing with the challenges presented by the use of data and algorithms to make decisions about consumers. Over the years, the FTC has brought many cases alleging violations of the laws we enforce involving AI and automated decision-making, and have investigated numerous companies in this space.


"The FTC’s law enforcement actions, studies, and guidance emphasize that the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability. We believe that our experience, as well as existing laws, can offer important lessons about how companies can manage the consumer protection risks of AI and algorithms."

  • Be transparent.
    • Don’t deceive consumers about how you use automated tools
    • Be transparent when collecting sensitive data
    • If you make automated decisions based on information from a third-party vendor, you may be required to provide the consumer with an “adverse action” notice
  • Explain your decision to the consumer.
    • If you deny consumers something of value based on algorithmic decision-making, explain why
    • If you use algorithms to assign risk scores to consumers, also disclose the key factors that affected the score, rank ordered for importance
    • If you might change the terms of a deal based on automated tools, make sure to tell consumers.
  • Ensure that your decisions are fair.
    • Don’t discriminate based on protected classes.
    • Focus on inputs, but also on outcomes
    • Give consumers access and an opportunity to correct information used to make decisions about them
  • Ensure that your data and models are robust and empirically sound.
  • Hold yourself accountable for compliance, ethics, fairness, and nondiscrimination.

FTC Report to Congress (2022)

FTC Report Warns About Using Artificial Intelligence to Combat Online Problems

Agency Concerned with AI Harms Such As Inaccuracy, Bias, Discrimination, and Commercial Surveillance Creep (June 16, 2022)


Today the Federal Trade Commission issued a report to Congress warning about using artificial intelligence (AI) to combat online problems and urging policymakers to exercise “great caution” about relying on it as a policy solution. The use of AI, particularly by big tech platforms and other companies, comes with limitations and problems of its own. The report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance.

FTC Judgments Concerning AI Practices

On March 4, 2022, the Federal Trade Commission announced the settlement of a case against a company that had improperly collected children's data. Notably, the FTC required the company to destroy the data and any algorithms derived from the data.

On May 7, 2021, the Federal Trade Commission announced the settlement of a case with the developer of a photo app that allegedly deceived consumers about its use of facial recognition technology and its retention of the photos and videos of users who deactivated their accounts. Notably, the FTC required the company to delete the photos and videos of users who deactivated their accounts, as well as the models and algorithms the company developed.


News Reports

Reports on FTC Investigation of OpenAI

Reports on Initial CAIDP Complaint


The New York Times
ChatGPT Faces Ban in Italy Over Privacy Concerns
The artificial intelligence tool ChatGPT was temporarily banned in Italy on Friday, the first known instance of the chatbot being blocked by...


The FTC should investigate OpenAI and block GPT over 'deceptive' behavior, AI policy group claims
An AI policy think tank wants the US government to investigate OpenAI and its wildly popular GPT artificial intelligence product,...


U.S. advocacy group asks FTC to stop new OpenAI GPT releases
In a complaint to the agency, a summary of which is on the group's website, the Center for Artificial Intelligence and Digital Policy called...



The Verge
FTC should stop OpenAI from launching new GPT models, says AI policy group
The Center for AI and Digital Policy filed a complaint asking the FTC to investigate OpenAI for violating bans on unfair and deceptive...


OpenAI’s ChatGPT faces U.S. FTC complaint, call for European regulators to step in
There's suddenly a coordinated pushback against A.I.'s rapid development.


The Register
FTC urged to freeze OpenAI's 'biased, deceptive' GPT-4
The Center for AI and Digital Policy, a non-profit research organization, has urged America's Federal Trade Commission to investigate OpenAI...


OpenAI faces complaint to FTC that seeks investigation and suspension of ChatGPT releases
The Center for AI and Digital Policy accuses OpenAI of violating a part of the FTC Act that prohibits unfair and deceptive business...


Times of India
Stop OpenAI from releasing more ChatGPT version: US group to FTC
The Center for AI and Digital Policy (CAIDP) has filed a complaint with the Federal Trade Commission (FTC) against OpenAI, claiming that the...

CAIDP Resources

CAIDP Presentation, Hitting the Pause Button: A Moratorium for Generative AI (March 19, 2023)


Testimony and statement for the Record, Merve Hickok, CAIDP Chair and Research Director

Advances in AI: Are We Ready For a Tech Revolution?

House Committee on Oversight and Accountability, March 8, 2023


OpenAI's system card, with a long list of possible risks (ranging from disinformation to nuclear proliferation and terrorism)


Twitter thread by Sam Altman: “we also need enough time for our institutions to figure out what to do. regulation will be critical and will take time to figure out; although current-generation AI tools aren’t very scary, i think we are potentially not that far away from potentially scary ones.”


Marc Rotenberg and Merve Hickok, Regulating A.I.: The U.S. Needs to Act

New York Times, March 6, 2023


Marc Rotenberg and Merve Hickok, Artificial Intelligence and Democratic Values: Next Steps for the United States

Council on Foreign Relations, August 22, 2023


Merve Hickok and Marc Rotenberg, The State of AI Policy: The Democratic Values Perspective

Turkish Policy Quarterly, March 4, 2022


Marc Rotenberg and Sunny Seon Kang, The Use of Algorithmic Decision Tools, Artificial Intelligence, and Predictive Analytics

Federal Trade Commission, August 20, 2018


Marc Rotenberg, In the Matter of HireVue, Complaint and Request for Investigation, Injunction, and Other Relief

Federal Trade Commission, November 6, 2019


Marc Rotenberg, In the Matter of Universal Tennis, Complaint and Request for Investigation, Injunction, and Other Relief

Federal Trade Commission, May 17, 2017

Related Initiatives

News from BEUC

Consumer advocacy group BEUC also called on EU and national authorities - including data-protection watchdogs - to investigate ChatGPT and similar chatbots, following the filing of a complaint in the US.

Although the EU is currently working on the world's first legislation on AI, BEUC's concern is that it would take years before the AI Act could take effect, leaving consumers at risk of harm from a technology that is not sufficiently regulated.

Ursula Pachl, deputy director general of BEUC, warned that society was "currently not protected enough from the harm" that AI can cause.

"There are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people. These AI systems need greater public scrutiny, and public authorities must reassert control over them," she said.

FTC Petitions

FTC Opens Rulemaking Petition Process, Promoting Public Participation and Accountability

Changes to FTC Rules of Practice reflect commitment to public access to vital agency processes

September 15, 2021


At an open Commission meeting today, the Federal Trade Commission voted to make significant changes to enhance public participation in the agency’s rulemaking, an important step to increase public input and accountability around the work of the FTC.


The Commission approved a series of changes to the FTC’s Rules of Practice designed to make it easier for members of the public to petition the agency for new rules or changes to existing rules that are administered by the FTC. The changes are a key part of the work of opening the FTC’s regulatory processes to public input and scrutiny. This is a departure from the previous practice, under which the Commission had no obligation to respond to or otherwise address petitions for agency action.


[More information]

[Federal Register notice]

[Public participation in the rulemaking process]


Your Comments Welcome

As we move forward with the FTC Complaint and Petition, we welcome your suggestions for points to make and issues to raise with the FTC. Most helpful for us are (1) accurate, authoritative descriptions of risks arising from the use of GPT, (2) expert opinions on risks arising from the use of GPT, and (3) examples of how GPT violates the specific guidelines that the Federal Trade Commission has established for the marketing and advertising of AI products and services.


Among the topics we have identified so far:

  • Enhanced risk to cybersecurity
  • Enhanced risk to data protection and privacy
  • Enhanced risk to children's safety
  • Failure to conduct independent risk assessment prior to deployment
  • Failure to establish independent risk assessment throughout AI lifecycle
  • Failure to accurately describe data source
  • Failure to disclose data collection practices regarding users
  • False advertising regarding reliability
  • Lack of transparency in outputs produced
  • Replication of bias in protected categories

Please note that we make evidence-based arguments and cite to published work. We will not be able to include general policy arguments, unsupported claims, or rhetorical statements.


Thank you for your assistance!
