The US National Institute of Standards and Technology released new guidance and a tool to test AI models for risk
The National Institute of Standards and Technology (NIST) has released five products: three finalized guidance documents, draft guidance on managing misuse risk for foundation models, and an open-source testing platform, all aimed at promoting safe, secure, and trustworthy AI development.
In a recent press release, the US Department of Commerce announced that the National Institute of Standards and Technology (NIST), part of the department, has released five products directly related to President Biden's Executive Order (EO) on the Safe, Secure, and Trustworthy Development of AI. These include the final versions of three guidance documents initially released as drafts for public comment in April this year, new draft risk-mitigation guidance from the US AI Safety Institute, and the general release of Dioptra, a software test platform for evaluating the trustworthiness of AI systems.
The three finalized documents are the AI RMF Generative AI Profile (NIST AI 600-1), Secure Software Development Practices for Generative AI and Dual-Use Foundation Models (NIST Special Publication (SP) 800-218A), and A Plan for Global Engagement on AI Standards (NIST AI 100-5). The first supplements NIST's AI Risk Management Framework (AI RMF), helping organizations identify the risks posed by generative AI and recommending risk-management actions aligned with their own objectives and priorities. The second expands NIST's Secure Software Development Framework, which focuses on coding practices, to address concerns specific to generative AI such as training-data poisoning. The third suggests a course of action for developing and implementing global AI standards and for fostering coordination and information sharing.
The newly released draft is Managing Misuse Risk for Dual-Use Foundation Models (NIST AI 800-1), a report by NIST's AI Safety Institute proposing voluntary best practices that foundation-model developers can adopt to secure their systems against misuse. The seven approaches it discusses aim to mitigate risks such as models providing information useful for developing biological weapons, enabling cyberattacks, or generating child sexual abuse material and other non-consensual imagery. NIST will accept comments on this draft until September 9, 2024.

The other newly released product is Dioptra, a free and open-source software platform for adversarial testing. NIST has previously endorsed Dioptra for use by government IT workers, but the agency is now making the software available to a wider audience to encourage evaluation of AI systems, helping developers determine which attacks could degrade a system's performance and to what extent.
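Dioptra itself orchestrates such evaluations through its own experiment and plugin interfaces; as a rough illustration of the underlying idea only (this is not Dioptra code, and the model, data loader, and epsilon value below are placeholders), the following sketch measures how much a PyTorch classifier's accuracy drops under a simple Fast Gradient Sign Method (FGSM) attack:

```python
# Generic sketch of adversarial-robustness evaluation (not Dioptra's API):
# compare clean accuracy with accuracy under an FGSM perturbation.
import torch
import torch.nn.functional as F

def fgsm_accuracy(model, data_loader, epsilon):
    """Return (clean accuracy, adversarial accuracy) for a classifier under FGSM."""
    model.eval()
    clean_correct, adv_correct, total = 0, 0, 0
    for images, labels in data_loader:
        images = images.clone().requires_grad_(True)
        outputs = model(images)
        clean_correct += (outputs.argmax(dim=1) == labels).sum().item()

        # Perturb each input in the direction that increases the loss.
        loss = F.cross_entropy(outputs, labels)
        model.zero_grad()
        loss.backward()
        # Assumes inputs are normalized to the [0, 1] range.
        adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1)

        adv_outputs = model(adv_images.detach())
        adv_correct += (adv_outputs.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    return clean_correct / total, adv_correct / total

# Example usage (classifier and test_loader are hypothetical):
# clean_acc, adv_acc = fgsm_accuracy(classifier, test_loader, epsilon=0.03)
```

A large gap between the clean and adversarial numbers signals low robustness; platforms like Dioptra automate running many such attacks at varying strengths and tracking the results across experiments.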
With these five products, NIST aims to raise awareness of the specific risks posed by generative AI among a wider audience and to provide a toolkit that helps developers manage and mitigate those risks to the extent possible, while supporting innovation and promoting the development of these groundbreaking and transformative technologies.