NTIA Request for Comment - AI Accountability



NTIA Press Release


 [From the NTIA - April 11, 2023]

 

WASHINGTON – Today, the Department of Commerce’s National Telecommunications and Information Administration (NTIA) launched a request for comment (RFC) to advance its efforts to ensure artificial intelligence (AI) systems work as claimed – and without causing harm. The insights gathered through this RFC will inform the Biden Administration’s ongoing work to ensure a cohesive and comprehensive federal government approach to AI-related risks and opportunities.

While people are already realizing the benefits of AI, there are a growing number of incidents where AI and algorithmic systems have led to harmful outcomes. There is also growing concern about potential risks to individuals and society that may not yet have manifested, but which could result from increasingly powerful systems. Companies have a responsibility to make sure their AI products are safe before making them available. Businesses and consumers using AI technologies and individuals whose lives and livelihoods are affected by these systems have a right to know that they have been adequately vetted and risks have been appropriately mitigated.

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms. For these systems to reach their full potential, companies and consumers need to be able to trust them,” said Alan Davidson, Assistant Secretary of Commerce for Communications and Information and NTIA Administrator. “Our inquiry will inform policies to support AI audits, risk and safety assessments, certifications, and other tools that can create earned trust in AI systems.”

NTIA’s “AI Accountability Policy Request for Comment” seeks feedback on what policies can support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems that they work as claimed. Much as financial audits create trust in the accuracy of a business’ financial statements, so for AI, such mechanisms can help provide assurance that an AI system is trustworthy in that it does what it is intended to do without adverse consequences.

Just as food and cars are not released into the market without proper assurance of safety, so too AI systems should provide assurance to the public, government, and businesses that they are fit for purpose. NTIA is seeking input on what policies should shape the AI accountability ecosystem, including topics such as:

  • What kinds of trust and safety testing should AI development companies and their enterprise clients conduct.
  • What kinds of data access is necessary to conduct audits and assessments.
  • How can regulators and other actors incentivize and support credible assurance of AI systems along with other forms of accountability.
  • What different approaches might be needed in different industry sectors—like employment or health care.  

President Biden has been clear that when it comes to AI, we must both support responsible innovation and ensure appropriate guardrails to protect Americans’ rights and safety. The White House Office of Science and Technology Policy’s Blueprint for an AI Bill of Rights provides an important framework to guide the design, development, and deployment of AI and other automated systems. The National Institute of Standards and Technology’s (NIST) AI Risk Management Framework serves as a voluntary tool that organizations can use to manage risks posed by AI systems. 

Comments will be due 60 days from publication of the RFC in the Federal Register.  


About the National Telecommunications and Information Administration

The National Telecommunications and Information Administration (NTIA), part of the U.S. Department of Commerce, is the Executive Branch agency that advises the President on telecommunications and information policy issues. NTIA’s programs and policymaking focus largely on expanding broadband Internet access and adoption in America, expanding the use of spectrum by all users, advancing public safety communications, and ensuring that the Internet remains an engine for innovation and economic growth.  


NTIA’s “AI Accountability Policy Request for Comment” seeks feedback on what policies can support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems. Much as financial audits create trust in the accuracy of a business’ financial statements, so for AI, such mechanisms can help provide assurance that an AI system is trustworthy. Just as financial accountability required policy and governance to develop, so too will AI system accountability.

NTIA is seeking input on what policies should shape the AI accountability ecosystem, including topics such as: 

 

AI Assurance

  • What kind of data access is necessary to conduct audits and assessments
  • How can regulators and other actors incentivize and support credible assurance of AI systems along with other forms of accountability
  • What different approaches might be needed in different industry sectors—like employment or health care

 

Written comments in response to the RFC must be provided to NTIA by June 12, 2023, 60 days from the date of publication in the Federal Register. Comments submitted in response to the RFC will be made publicly available via https://www.regulations.gov/.

Comments should respond to questions posed in the RFC, and commenters are encouraged to correlate the content of their comments to the pillars and questions set forth in the RFC. Commenters need not respond to every question. Comments should be typed, double-spaced, and signed and dated by the filing party or a legal representative of that party. 


How to file a written comment

  1. Go to https://www.regulations.gov/.
  2. Search for docket number 230407-0093.
  3. Click the “Comment Now!” icon.
  4. Complete the required fields.
  5. Enter or attach your comments.
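
For commenters who want to confirm the docket details before filing, the short Python sketch below queries the public Regulations.gov v4 API for the RFC. This is a minimal illustration under stated assumptions, not an official filing method: the endpoint, the filter[searchTerm] parameter, and the DEMO_KEY placeholder reflect the publicly documented v4 API, and you would substitute an API key of your own. Comments themselves are still submitted through the web form described in the steps above.

    # Minimal sketch (assumptions noted above): look up the NTIA AI
    # Accountability RFC on Regulations.gov via the public v4 API.
    import requests

    API_KEY = "DEMO_KEY"  # rate-limited demo key; substitute your own from api.data.gov

    resp = requests.get(
        "https://api.regulations.gov/v4/documents",
        params={"filter[searchTerm]": "230407-0093", "api_key": API_KEY},
        timeout=30,
    )
    resp.raise_for_status()

    # Print each matching document's ID, title, and parent docket ID.
    for doc in resp.json().get("data", []):
        attrs = doc.get("attributes", {})
        print(doc.get("id"), "-", attrs.get("title"))
        print("  docket:", attrs.get("docketId"))

Run with a valid key, this should surface the RFC notice and the docket under which comments are collected; the comment itself must still be entered or attached through the regulations.gov form.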


CAIDP Comments to NTIA


Summary of CAIDP Comments

  1. Companies should not release AI products that are not safe. President Biden has said directly, at least twice, that tech companies have a responsibility to make sure their products are safe before making them public.
  2. Human-centric accountability practices must protect fundamental rights, democratic values, and the rule of law.
  3. Accountability mechanisms must incorporate best practices set out in the Universal Guidelines for Artificial Intelligence, the OECD AI Principles and the UNESCO Recommendation on AI Ethics.
  4. Accountability should be based on mandatory impact assessments, audits, and certifications throughout the AI lifecycle to ensure transparency, auditability, contestability, and traceability.
  5. Legal standards should be established to ensure AI accountability. Accountability mechanisms or practices will have no meaningful impact in the absence of clearly defined legal standards and enforceable remedies.
  6. The United States should support a comprehensive international treaty for AI to ensure accountability across the public and private sectors.

CAIDP Comments to NTIA on AI and Accountability (June 12, 2023)


CAIDP Recommendations to Commentators


 

The Center for AI and Digital Policy (CAIDP) supports the Request for Comment concerning AI Accountability announced by the National Telecommunications and Information Administration (NTIA) on April 11, 2023. We intend to submit comments. In advance of the deadline, we offer several recommendations to commentators.

 


General Advice

  • Read the Request for Comment carefully. Understand the context for the RFC and the type of information that the NTIA is seeking from the public.
  • Learn about NTIA's prior work on AI policy.  The NTIA is one of several US federal agencies developing guidelines for AI. You should consult the prior work of the agency as well as other agencies. For example, the Office of Science and Technology Policy has launched several important AI policy initiatives, including the Blueprint for an AI Bill of Rights. You may be able to reference this earlier work in support of your recommendations.
  • Answer the questions that are most relevant to your work and expertise. It is not necessary to answer all of the questions.
  • Write clearly and directly. Short, simple statements are often the most effective.
  • Provide evidence. If you make a factual claim, include a citation to a report, article, or study to support your point.
  • Make concrete recommendations. If you have specific recommendations for NTIA, state them.
  • Keep a copy of your comments. Navigating the federal portal for agency comments is not easy. The best strategy is often to prepare your comments in advance and then cut and paste the sections into the form.
  • Evaluate the outcome. You were asked by a public agency for your views. You took time to prepare a response. You should expect the agency to consider your comments. The agency may not agree with you, but you are entitled to a "reasoned response" to your recommendations.

Good luck!

 


Specific Advice

CAIDP has several recommendations for AI accountability:

  • Enactment of federal legislation for the governance of AI based on the Universal Guidelines for AI. The Universal Guidelines Obligation for Assessment and Accountability states:

An AI system should be deployed only after an adequate evaluation of its purpose and objectives, its benefits, as well as its risks. Institutions must be responsible for decisions made by an AI system.

  • Implementation of the OSTP AI Bill of Rights

Accountable. Notices should clearly identify the entity responsible for designing each component of the system and the entity using it.

  • Implementation of the OECD AI Principles (which the United States has already endorsed). The OECD AI Accountability Principle (1.5) states:

AI actors should be accountable for the proper functioning of AI systems and for the respect of the above principles, based on their roles, the context, and consistent with the state of art.

  • Support for the Council of Europe AI Treaty (a global binding treaty for AI)

We would appreciate your support for these recommendations!


CAIDP References


AI and Democratic Values Index (2023)

[This is an excerpt from the US country report, prepared by CAIDP]

 

The U.S. lacks a unified national policy on AI, but President Biden and his top advisors have expressed support for AI aligned with democratic values. The United States has endorsed the OECD/G20 AI Principles. The White House has issued two Executive Orders on AI that reflect democratic values, and a federal directive encourages agencies to adopt safeguards for AI. The most recent Executive Order also establishes a process for public participation in the development of federal regulations on AI, though the rulemaking has yet to occur. The overall U.S. policy-making process remains opaque, and the Federal Trade Commission has failed to act on several pending complaints concerning the deployment of AI techniques in the commercial sector. But the administration has launched new initiatives and encouraged the OSTP, NIST, and other agencies to gather public input. The recent release of the Blueprint for an AI Bill of Rights by the OSTP represents a significant step forward in the adoption of a National AI Policy and in the U.S.’s commitment to implement the OECD AI Principles. There is growing opposition to the use of facial recognition, and both Facebook and the IRS have cancelled facial recognition systems following widespread protests. But concerns remain about the use of facial surveillance technology across federal agencies by such U.S. companies as Clearview AI. The absence of a legal framework to implement AI safeguards and of a federal agency to safeguard privacy also raises concerns about the ability of the U.S. to monitor AI practices.

 

[More information about the AI and Democratic Values Index]