The Federal Trade Commission should open an investigation and order OpenAI to halt the release of GPT models until necessary safeguards are established. These safeguards should be based on the guidance for AI products the FTC has previously established and the emerging norms for the governance of AI.
In summary, the investigation document closely traces the main issues and concerns outlined in the original CAIDP complaint and supplement. It shows the FTC is taking seriously the potential for consumer harm and illegality identified by CAIDP. The document signals a comprehensive inquiry aimed at gathering evidence on the key areas CAIDP argued require investigation of OpenAI's practices.
Today (Nov 15, 2023), the Center for AI and Digital Policy urged the Federal Trade Commission to move forward with the investigation of OpenAI and to issue an order establishing safeguards and guardrails for ChatGPT.
In March, CAIDP filed the initial Complaint against OpenAI. In July, CAIDP filed a Supplement, providing new evidence for the FTC to act. Shortly afterward, The New York Times and The Wall Street Journal reported that the FTC had opened the investigation CAIDP requested.
In the 37-page Supplement filed today, CAIDP highlights the enforcement actions taken by agencies outside the US, documents new problems with ChatGPT, calls attention to OpenAI’s failure to uphold commitments, quotes leading AI experts on the need to establish guardrails as soon as possible, and cites growing public concern about the use of personal data in AI products.
In a letter to Chair Lina Khan, CAIDP also quoted President Biden's new Executive Order on AI:
"The Federal Government will enforce existing consumer protection laws and principles and enact appropriate safeguards against fraud, unintended bias, discrimination, infringements on privacy, and other harms from AI. Such protections are especially important in critical fields like healthcare, financial services, education, housing, law, and transportation, where mistakes by or misuse of AI could harm patients, cost consumers or small businesses, or jeopardize safety or rights."
In the original Complaint, CAIDP wrote:
"OpenAI, Inc., a California-based corporation, has released a product GPT-4 for the consumer market that is biased, deceptive, and a risk to privacy and public safety. The outputs cannot be proven or replicated. No independent assessment was undertaken prior to deployment. . . . There should be independent oversight and evaluation of commercial AI products offered in the United States. CAIDP urges the FTC to open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace."
I'm here today to ask about the status of the CAIDP complaint concerning OpenAI. To date, we have not even received an acknowledgement. Have you opened the investigation we requested? If you have not opened the investigation, can you tell us whether you will, and if not, can you explain why? The FTC has previously made such announcements, for example, announcing the opening of the Facebook investigation after Cambridge Analytica. So we expect and we request a response from the FTC about what actions will be taken on our complaint.
20 March 2023
In 2019, many countries around the world, including the United States, committed to the development of human-centric and trustworthy AI. Yet only a few years on, we appear to be approaching a tipping point with the release of Generative AI techniques, which are neither human-centric nor trustworthy.
These systems produce results that cannot be replicated or proven. They fabricate and hallucinate. They describe how to commit terrorist acts, how to assassinate political leaders, and how to conceal child abuse. GPT-4 could enable mass surveillance at scale, combining the ability to ingest images, link them to identities, and develop comprehensive profiles.
As this industry has rapidly evolved so too has the secrecy surrounding the products. The latest technical paper on GPT-4 provides little information about the training data, the number of parameters, or the assessment methods. A fundamental requirement in all emerging AI policy frameworks – an independent impact assessment prior to deployment – was never undertaken.
Many leading AI experts, including many companies themselves, have called for regulation. Yet there is little effort in the United States today to develop regulatory responses even as countries around the world race to establish legal safeguards.
The present course cannot be sustained. The public needs more information about the impact of artificial intelligence. Independent experts need the opportunity to interrogate these models. Laws should be enacted to promote algorithmic transparency and counter algorithmic bias. A national commission should be established to assess the impact of AI on American society, to better understand the benefits as well as the risks.
This week the Center for AI and Digital Policy, joined by others, will file a complaint with the Federal Trade Commission, calling for an investigation of OpenAI and the product ChatGPT. We believe the FTC has the authority to act in this matter and is uniquely positioned as the lead consumer protection agency in the United States to address this emerging challenge. We will ask the FTC to establish a moratorium on the release of further commercial versions of GPT until appropriate safeguards are established. We will simultaneously petition the FTC to undertake a rulemaking for the regulation of the generative AI industry.
We favor growth and innovation. We recognize a wide range of opportunities and benefits that AI may provide. But unless we are able to maintain control of these systems, we will be unable to manage the risk that will result or the catastrophic outcomes that may emerge. We are asking the FTC to “hit the pause button” so that there is an opportunity for our institutions, our laws, and our society to catch up. We need to assert agency over the technologies we create before we lose control.
Merve Hickock and Marc Rotenberg
For the CAIDP
Over the last several years, the FTC has issued several reports and policy guidelines concerning the marketing and advertising of AI-related products and services. We believe that OpenAI should be required by the FTC to comply with these guidelines.
When you talk about AI in your advertising, the FTC may be wondering, among other things:
In 2021, the FTC warned that advances in artificial intelligence have "highlighted how apparently 'neutral' technology can produce troubling outcomes – including discrimination by race or other legally protected classes." The FTC explained it has decades of experience enforcing three laws important to developers and users of AI:
The FTC said its recent work on AI – coupled with FTC enforcement actions – offers important lessons on using AI truthfully, fairly, and equitably.
" . . . we at the FTC have long experience dealing with the challenges presented by the use of data and algorithms to make decisions about consumers. Over the years, the FTC has brought many cases alleging violations of the laws we enforce involving AI and automated decision-making, and have investigated numerous companies in this space.
"The FTC’s law enforcement actions, studies, and guidance emphasize that the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability. We believe that our experience, as well as existing laws, can offer important lessons about how companies can manage the consumer protection risks of AI and algorithms."
FTC Report Warns About Using Artificial Intelligence to Combat Online Problems
Agency Concerned with AI Harms Such As Inaccuracy, Bias, Discrimination, and Commercial Surveillance Creep (June 16, 2022)
Today the Federal Trade Commission issued a report to Congress warning about using artificial intelligence (AI) to combat online problems and urging policymakers to exercise “great caution” about relying on it as a policy solution. The use of AI, particularly by big tech platforms and other companies, comes with limitations and problems of its own. The report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design and incentivize relying on increasingly invasive forms of commercial surveillance.
On March 4, 2022, the Federal Trade Commission announced the settlement of a case against a company that had improperly collected children's data. Notably, the FTC required the company to destroy the data and any algorithms derived from the data.
On May 7, 2021, the Federal Trade Commission announced the settlement of a case with the developer of a photo app that allegedly deceived consumers about its use of facial recognition technology and its retention of the photos and videos of users who deactivated their accounts. Notably, the FTC required the company to delete the photos and videos of users who deactivated their accounts, as well as the models and algorithms the company developed.
CAIDP Presentation, Hitting the Pause Button: A Moratorium for Generative AI (March 19, 2023)
Testimony and statement for the Record, Merve Hickok, CAIDP Chair and Research Director
House Committee on Oversight and Accountability, March 6, 2023
OpenAI's system card with a long list of possible risks (ranging from disinformation to nuclear proliferation and terrorism):
Twitter thread by Sam Altman: https://twitter.com/sama/status/1627110893388693504?s=20 “we also need enough time for our institutions to figure out what to do. regulation will be critical and will take time to figure out; although current-generation AI tools aren’t very scary, i think we are potentially not that far away from potentially scary ones.”
Marc Rotenberg and Merve Hickok, Regulating A.I.: The U.S. Needs to Act
New York Times, March 6, 2023
Marc Rotenberg and Merve Hickok, Artificial Intelligence and Democratic Values: Next Steps for the United States
Council on Foreign Relations, August 22, 2023
Merve Hickok and Marc Rotenberg, The State of AI Policy: The Democratic Values Perspective
Turkish Policy Quarterly, March 4, 2022
Marc Rotenberg and Sunny Seon Kang, The Use of Algorithmic Decision Tools, Artificial Intelligence, and Predictive Analytics
Federal Trade Commission, August 20, 2018
Federal Trade Commission, November 6, 2019
Federal Trade Commission, May 17, 2017
Consumer advocacy group BEUC also called on EU and national authorities - including data-protection watchdogs - to investigate ChatGPT and similar chatbots, following the filing of a complaint in the US.
Although the EU is currently working on the world's first legislation on AI, BEUC's concern is that it would take years before the AI Act could take effect, leaving consumers at risk of harm from a technology that is not sufficiently regulated.
Ursula Pachl, deputy director general of BEUC, warned that society was "currently not protected enough from the harm" that AI can cause.
"There are serious concerns growing about how ChatGPT and similar chatbots might deceive and manipulate people. These AI systems need greater public scrutiny, and public authorities must reassert control over them," she said.
FTC Opens Rulemaking Petition Process, Promoting Public Participation and Accountability
Changes to FTC Rules of Practice reflect commitment to public access to vital agency processes
September 15, 2021
At an open Commission meeting today, the Federal Trade Commission voted to make significant changes to enhance public participation in the agency's rulemaking, a significant step to increase public participation and accountability around the work of the FTC.
The Commission approved a series of changes to the FTC’s Rules of Practice designed to make it easier for members of the public to petition the agency for new rules or changes to existing rules that are administered by the FTC. The changes are a key part of the work of opening the FTC’s regulatory processes to public input and scrutiny. This is a departure from the previous practice, under which the Commission had no obligation to respond to or otherwise address petitions for agency action.
As we move forward with the FTC Complaint and Petition, we welcome your suggestions for points to make and issues to raise with the FTC. Most helpful for us are (1) accurate, authoritative descriptions of risks arising from the use of GPT; (2) expert opinions on risks arising from the use of GPT; and (3) examples of how GPT violates the specific guidelines that the Federal Trade Commission has established for the marketing and advertising of AI products and services.
Among the topics we have identified so far:
Please note that we make evidence-based arguments and cite to published work. We will not be able to include general policy arguments, unsupported claims, or rhetorical statements.
Thank you for your assistance!