A new report by the US National Telecommunications and Information Administration examines the risks and benefits of open-weight models

The National Telecommunications and Information Administration (NTIA), which, like NIST, is part of the US Department of Commerce, recently published a report that weighs the risks and benefits of open models. Open models are characterized by the wide availability of their weights, which raises the question of whether that availability significantly increases their potential for misuse.

The NTIA’s Report on Dual-Use Foundation Models with Widely Available Model Weights takes this worry as its starting point. It isolates risks specific to generative AI models with over 10B parameters, assesses the urgency with which they should be addressed, and recommends a suitable course of action: limit the availability of future open-weight models, continue to monitor the situation, or embrace the technology with minimal intervention. The report identifies a wide range of risks that could be worsened by the widespread availability of open models, such as enabling access to chemical, biological, radiological, or nuclear (CBRN) weapons, lowering the barrier to entry for offensive cyberattacks, and the ongoing use of generative AI models to create various forms of abusive content.

However, the report does not find that any of these risks provides conclusive grounds for a policy restricting the availability of open models. Rather, the findings suggest that these models also bring several benefits that should not be overlooked, including strengthening cyber defense operations and security research into generative AI models, contributing to the development of transparency and accountability mechanisms, fostering research and development, and promoting healthy competition.

After careful analysis of the available information, the report recommends an approach in which the government continues to monitor the situation by gathering and evaluating evidence of the risks posed by openly available models, while preparing to act, including by restricting the availability of future models, if necessary.