Today, the Biden-Harris Administration is announcing new efforts that will advance the research, development, and deployment of responsible artificial intelligence (AI) that protects individuals’ rights and safety and delivers results for the American people. . . .
SUMMARY: The Biden-Harris Administration is developing a National Artificial Intelligence (AI) Strategy that will chart a path for the United States to harness the benefits and mitigate the risks of AI. This strategy will build on the actions that the Federal Government has already taken to responsibly advance the development and use of AI. To inform this strategy, OSTP requests public comments to help update U.S. national priorities and future actions on AI.
DATES: Interested individuals and organizations are invited to submit comments by 5:00 p.m. ET on July 7, 2023. . . .
Background: AI has been part of American life for years, and it is one of the most powerful technologies of our generation. The pace of AI innovation is accelerating rapidly, creating new applications for AI across society. This presents extraordinary opportunities to improve the lives of the American people and solve some of the toughest global challenges. However, it also poses serious risks to democracy, the economy, national security, civil rights, and society at large. To fully harness the benefits of AI, the United States must mitigate AI’s risks. . . .
Comments must be submitted electronically via the Federal eRulemaking Portal at www.regulations.gov. Information on how to use www.regulations.gov, including instructions for accessing agency documents, submitting comments, and viewing the docket, is available on the site under “FAQ” (https://www.regulations.gov/faq).
On July 7, 2023, CAIDP sent detailed comments to the Office of Science and Technology Policy regarding the US National AI Strategy. A few of the key points:
1) Ensure the development of human-centered and trustworthy Artificial Intelligence based on fundamental rights, democratic values, and the rule of law
2) Prioritize investment in AI systems that are innovative and ensure public safety
3) Establish guardrails for AI based on transparency, contestability, traceability, robustness, safety, security, and accountability.
4) Implement the OSTP AI Bill of Rights, the OECD AI Principles, and the UNESCO Recommendation on the Ethics of Artificial Intelligence
The Center for AI and Digital Policy supports the Request for Information concerning a National AI Strategy, announced by the Office of Science and Technology Policy on May 23, 2023. We intend to submit comments. In advance of the deadline, we offer several recommendations to commentators.
CAIDP has made several recommendations for the US National AI Strategy:
[This is an excerpt from the US country report, prepared by CAIDP]
The U.S. lacks a unified national policy on AI, but President Biden and his top advisors have expressed support for AI aligned with democratic values. The United States has endorsed the OECD/G20 AI Principles. The White House has issued two Executive Orders on AI that reflect democratic values, and a federal directive encourages agencies to adopt safeguards for AI. The most recent Executive Order also establishes a process for public participation in the development of federal regulations on AI, though the rulemaking has yet to occur.

The overall U.S. policy-making process remains opaque, and the Federal Trade Commission has failed to act on several pending complaints concerning the deployment of AI techniques in the commercial sector. But the administration has launched new initiatives and encouraged the OSTP, NIST, and other agencies to gather public input. The recent release of the Blueprint for an AI Bill of Rights by the OSTP represents a significant step forward in the adoption of a National AI Policy and in the U.S. commitment to implement the OECD AI Principles.

There is growing opposition to the use of facial recognition, and both Facebook and the IRS have cancelled facial recognition systems following widespread protests. But concerns remain about the use of facial surveillance technology across the federal agencies, supplied by such U.S. companies as Clearview AI. The absence of a legal framework to implement AI safeguards, and of a federal agency to safeguard privacy, also raises concerns about the ability of the U.S. to monitor AI practices.