
The US Department of Commerce is taking action to enforce President Biden’s Executive Order on AI

The US Department of Commerce announced several actions to ensure the AI Executive Order is implemented, including four NIST draft publications on identifying and mitigating the risks associated with generative AI, a program for AI evaluation, and a request for comment on AI's impact on patentability.

by Ellie Ramirez-Camara
Photo by Brandon Mowinkel / Unsplash

Six months after President Biden's Executive Order (EO) on the Safe, Secure, and Trustworthy Development of AI was signed, the U.S. Department of Commerce announced several steps it will take to ensure its implementation. On the one hand, the Department’s National Institute of Standards and Technology (NIST) has issued four draft publications intended as guides to boost AI systems' safety, security, and trustworthiness. On the other, the U.S. Patent and Trademark Office (USPTO) released a request for public comment (RPC) to collect information on how generative AI affects the evaluation of what counts as prior art, what it means for a person to have ordinary skill in the art, and determinations of what is patentable under US law.

Two of the publications are companion resources to NIST's AI Risk Management Framework (AI RMF) and Secure Software Development Framework (SSDF); together, they are designed to guide the mitigation of generative AI risks. The AI RMF Generative AI Profile (NIST AI 600-1) draws on input submitted by NIST's generative AI public working group of more than 2,500 members and lists 13 key risks along with over 400 potential mitigation actions. Among the most notable identified risks are easier access to information on weapons, a lower barrier to entry for cybersecurity attacks, and the already familiar propagation of hate speech and denigrating or stereotyping content. Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A) focuses on securing code, but it also helps address concerns about malicious training data that can negatively affect generative AI systems.

Of the remaining two, Reducing Risks Posed by Synthetic Content (NIST AI 100-4) is a guide to promoting transparency in digital content, given how easily AI can generate or alter it. The document details methods for detecting, authenticating, and labeling synthetic content. A Plan for Global Engagement on AI Standards (NIST AI 100-5) proposes a plan for developing global AI standards and invites feedback on topics where standardization may be urgent, such as mechanisms for enhancing awareness of the origin of digital content and shared practices for testing, evaluation, verification, and validation of AI systems. All four works are being published as drafts so the general public can provide feedback before NIST releases the final versions later this year. Public comments on the draft publications are open until June 2, 2024.

In parallel, NIST has also launched NIST GenAI, a new program to measure and evaluate AI technologies through a series of challenge problems designed to probe the capabilities and limitations of generative AI. One of the program's main motivations is to help determine whether a given text, image, video, or audio recording was produced by a human or by an AI. The resulting evaluations will help develop strategies to promote information integrity and the responsible use of digital content. Registration for the NIST GenAI pilot will open in May.
