Zoom’s use of AI and ML techniques places at risk the privacy, autonomy, and security
of consumers. Zoom’s representations regarding the use of consumer data for AI/ML training are
misleading, deceptive, and unfair. The Federal Trade Commission should open an investigation to determine whether Zoom has violated the Federal Trade Commission Act, the 2020 Zoom Consent Order, or the guidance the FTC has set out regarding AI products and services.
Over the last several years, the FTC has issued a number of reports and policy guidelines concerning the marketing and advertising of AI-related products and services. We believe that the FTC should require Zoom to comply with these guidelines.
In its guidance on AI marketing claims, the FTC cautioned that when companies talk about AI in their advertising, the agency may be asking, among other things, whether those claims are accurate and adequately substantiated.
In 2021, the FTC warned that recent work in Artificial Intelligence "has highlighted how apparently 'neutral' technology can produce troubling outcomes – including discrimination by race or other legally protected classes." The FTC explained that it has decades of experience enforcing three laws important to developers and users of AI.
The FTC said its recent work on AI – coupled with FTC enforcement actions – offers important lessons on using AI truthfully, fairly, and equitably.
" . . . we at the FTC have long experience dealing with the challenges presented by the use of data and algorithms to make decisions about consumers. Over the years, the FTC has brought many cases alleging violations of the laws we enforce involving AI and automated decision-making, and have investigated numerous companies in this space.
"The FTC’s law enforcement actions, studies, and guidance emphasize that the use of AI tools should be transparent, explainable, fair, and empirically sound, while fostering accountability. We believe that our experience, as well as existing laws, can offer important lessons about how companies can manage the consumer protection risks of AI and algorithms."
FTC Report Warns About Using Artificial Intelligence to Combat Online Problems
Agency Concerned with AI Harms Such As Inaccuracy, Bias, Discrimination, and Commercial Surveillance Creep (June 16, 2022)
The Federal Trade Commission issued a report to Congress warning about using artificial intelligence (AI) to combat online problems and urging policymakers to exercise "great caution" about relying on it as a policy solution. The use of AI, particularly by big tech platforms and other companies, comes with limitations and problems of its own. The report outlines significant concerns that AI tools can be inaccurate, biased, and discriminatory by design, and can incentivize reliance on increasingly invasive forms of commercial surveillance.
On March 4, 2022, the Federal Trade Commission announced the settlement of a case against a company that had improperly collected children's data. Notably, the FTC required the company to destroy the data and any algorithms derived from the data.
On May 7, 2021, the Federal Trade Commission announced the settlement of a case with the developer of a photo app that allegedly deceived consumers about its use of facial recognition technology and its retention of the photos and videos of users who deactivated their accounts. Notably, the FTC required the company to delete the photos and videos of users who deactivated their accounts, as well as the models and algorithms the company developed from that data.
Testimony and statement for the Record, Merve Hickok, CAIDP Chair and Research Director
House Committee on Oversight and Accountability, March 6, 2023
Marc Rotenberg and Merve Hickok, Artificial Intelligence and Democratic Values: Next Steps for the United States
Council on Foreign Relations, August 22, 2023
Merve Hickok and Marc Rotenberg, The State of AI Policy: The Democratic Values Perspective
Turkish Policy Quarterly, March 4, 2022
Marc Rotenberg and Sunny Seon Kang, The Use of Algorithmic Decision Tools, Artificial Intelligence, and Predictive Analytics
Federal Trade Commission, August 20, 2018
Federal Trade Commission, November 6, 2019
Federal Trade Commission, May 17, 2017