OMB Request for Comment - Memorandum on AI Governance, Innovation, and Management

The Office of Management and Budget (OMB) is seeking public input on a draft policy for the use of AI by the U.S. government. This draft policy would empower Federal agencies to leverage AI to improve government services and more equitably serve the American people. The document focuses on three main pillars:

  1. Strengthening AI governance;
  2. Advancing responsible AI innovation; and
  3. Managing risks from the use of AI by directing agencies to adopt mandatory safeguards for the development and use of AI that impacts the rights and safety of the public.

Deadline: December 5, 2023

This week, President Biden signed a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. As the United States takes action to realize the tremendous promise of AI while managing its risks, the federal government will lead by example and provide a model for the responsible use of the technology. As part of this commitment, today, ahead of the UK AI Safety Summit, Vice President Harris will announce that the Office of Management and Budget (OMB) is releasing for comment a new draft policy on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence. This guidance would establish AI governance structures in federal agencies, advance responsible AI innovation, increase transparency, protect federal workers, and manage risks from government uses of AI.


[Press release]

Key Points

Strengthening AI Governance


To improve coordination, oversight, and leadership for AI, the draft guidance would direct federal departments and agencies to:

  • Designate Chief AI Officers, who would have the responsibility to advise agency leadership on AI, coordinate and track the agency’s AI activities, advance the use of AI in the agency’s mission, and oversee the management of AI risks.
  • Establish internal mechanisms for coordinating the efforts of the many existing officials responsible for issues related to AI. As part of this, large agencies would be required to establish AI Governance Boards, chaired by the Deputy Secretary or equivalent and vice-chaired by the Chief AI Officer.
  • Expand reporting on the ways agencies use AI, including providing additional detail on AI systems’ risks and how the agency is managing those risks.
  • Publish plans for the agency’s compliance with the guidance.

Advancing Responsible AI Innovation


To expand and improve the responsible application of AI to the agency’s mission, the draft guidance would direct federal agencies to:

  • Develop an agency AI strategy, covering areas for future investment as well as plans to improve the agency’s enterprise AI infrastructure, its AI workforce, its capacity to successfully develop and use AI, and its ability to govern AI and manage its risks.
  • Remove unnecessary barriers to the responsible use of AI, including those related to insufficient information technology infrastructure, inadequate data and sharing of data, gaps in the agency’s AI workforce and workforce practices, and cybersecurity approval processes that are poorly suited to AI systems.
  • Explore the use of generative AI in the agency, with adequate safeguards and oversight mechanisms.

Managing Risks from the Use of AI


To ensure that agencies establish safeguards for safety- and rights-impacting uses of AI and provide transparency to the public, the draft guidance would:

  • Mandate the implementation of specific safeguards for uses of AI that impact the rights and safety of the public. These safeguards include conducting AI impact assessments and independent evaluations; testing the AI in a real-world context; identifying and mitigating factors contributing to algorithmic discrimination and disparate impacts; monitoring deployed AI; sufficiently training AI operators; ensuring that AI advances equity, dignity, and fairness; consulting with affected groups and incorporating their feedback; notifying and consulting with the public about the agency's use of AI and its plans to achieve consistency with the proposed policy; notifying individuals potentially harmed by a use of AI and offering avenues for remedy; and more.
  • Define uses of AI that are presumed to impact rights and safety, including many uses involved in health, education, employment, housing, federal benefits, law enforcement, immigration, child welfare, transportation, critical infrastructure, and safety and environmental controls.
  • Provide recommendations for managing risk in federal procurement of AI. After finalization of the proposed guidance, OMB will also develop a means to ensure that federal contracts align with its recommendations, as required by the Advancing American AI Act and President Biden’s AI Executive Order of October 30, 2023.

CAIDP Statements to OMB

We write to you, on behalf of the Center for AI and Digital Policy (CAIDP), regarding the need for the OMB to establish regulations for the use of AI by the federal agencies of the United States. These regulations are required by Executive Order 13960 and the AI in Government Act of 2020. Further delay by the OMB places at risk fundamental rights, public safety, and commitments that the United States has made to establish trustworthy AI.


1) Countries must establish national policies for AI that implement democratic values

2) Countries must ensure public participation in AI policymaking and create robust mechanisms for independent oversight of AI systems

3) Countries must guarantee fairness, accountability, and transparency in all AI systems

4) Countries must commit to these principles in the development, procurement, and implementation of AI systems for public services

5) Countries must halt the use of facial recognition for mass surveillance


The OMB Should Begin the AI Rulemaking Now


The OMB Should Follow the President’s Lead and Establish Safeguards for Trustworthy AI.

We write to you, on behalf of the Center for AI and Digital Policy (CAIDP), regarding the need for the OMB to establish regulations for the use of AI by federal agencies. We wrote to OMB on October 19, 2021, explaining the urgency of the matter. Recent developments underscore the need for the OMB to begin the rulemaking process.


The OMB should issue the government-wide memorandum and begin the formal rulemaking for the regulation of AI, as required by E.O. 13960 and the AI in Government Act.

CAIDP Advice to Commentators

The Center for AI and Digital Policy (CAIDP) supports the request for comment on the Proposed Memorandum on Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, announced by the Office of Management and Budget on November 1, 2023. We intend to submit comments. In advance of the deadline, we offer several recommendations to commentators.

General Advice

  • Read the Proposed Memorandum carefully. Understand the context and the type of information that the OMB is seeking from the public.
  • Learn about OMB's prior work on AI policy. The OMB is the lead agency for the regulation of federal agencies. You should consult the prior work of the OMB as well as that of other agencies. For example, the Office of Science and Technology Policy has launched several important AI policy initiatives, including the Blueprint for an AI Bill of Rights. You may be able to reference this earlier work in support of your recommendations.
  • Write clearly and directly. Short, simple statements are often the most effective.
  • Provide evidence. If you make a factual claim, include a citation to a report, article, or study to support your point.
  • Make concrete recommendations. If you have specific recommendations for OMB, state them.
  • Keep a copy of your comments. Managing the federal portal for agency comments is not easy. The best strategy is often to prepare your comments in advance and then cut and paste the sections into the form.
  • Evaluate the outcome. You were asked by a public agency for your views. You took time to prepare a response. You should expect the agency to consider your comments. The agency may not agree with you, but you are entitled to a "reasoned response" to your recommendations.

Good luck!


CAIDP References

AI and Democratic Values Index (2023)

[This is an excerpt from the US country report, prepared by CAIDP]


The U.S. lacks a unified national policy on AI, but President Biden and his top advisors have expressed support for AI aligned with democratic values. The United States has endorsed the OECD/G20 AI Principles. The White House has issued two Executive Orders on AI that reflect democratic values, and a federal directive encourages agencies to adopt safeguards for AI. The most recent Executive Order also establishes a process for public participation in the development of federal regulations on AI, though the rulemaking has yet to occur.

The overall U.S. policy-making process remains opaque, and the Federal Trade Commission has failed to act on several pending complaints concerning the deployment of AI techniques in the commercial sector. But the administration has launched new initiatives and encouraged the OSTP, NIST, and other agencies to gather public input. The recent release of the Blueprint for an AI Bill of Rights by the OSTP represents a significant step forward in the adoption of a National AI Policy and in the U.S.'s commitment to implement the OECD AI Principles.

There is growing opposition to the use of facial recognition, and both Facebook and the IRS have cancelled facial recognition systems following widespread protests. But concerns remain about the use of facial surveillance technology across the federal agencies by such U.S. companies as Clearview AI. The absence of a legal framework to implement AI safeguards, and of a federal agency to safeguard privacy, also raises concerns about the ability of the U.S. to monitor AI practices.


[More information about the AI and Democratic Values Index]