Meta announced the availability of its Llama models for U.S. government agencies and contractors
Following reports that China used Meta's Llama 2 model for military intelligence, Meta announced it would make its AI models available to U.S. government agencies and contractors, despite ongoing debate about the safety and usefulness of commercial models in military contexts.
Less than a week earlier, a Reuters report described ChatBIT, a military-focused AI tool built on Meta's Llama 2 13B model to gather and process intelligence. This confirmed suspicions that the Chinese government was leveraging open AI models for defense. The use of AI for military purposes is highly controversial, whether it is done with open or closed-source tools. Meta quickly clarified to Reuters that the only model involved was the outdated Llama 2 and that its use was 'unauthorized' under Meta's terms.
Lately, an argument has been circulating that the availability and affordability of open models may increase their potential for misuse. The National Telecommunications and Information Administration (NTIA), part of the U.S. Department of Commerce, examined just this issue in a report titled Report on Dual-Use Foundation Models with Widely Available Model Weights. The report found no conclusive evidence that any of these risks warrants a policy restricting the availability of openly available models.
Days after the Reuters report emerged, and a day before the U.S. federal elections, Meta released a statement confirming that it was making its models available to U.S. government agencies and contractors, including Accenture Federal Services, Amazon Web Services, Anduril, Booz Allen, Databricks, Deloitte, IBM, Leidos, Lockheed Martin, Microsoft, Oracle, Palantir, Scale AI, and Snowflake. According to the statement, the U.S. government and these contractors will be able to leverage Meta's Llama models for "responsible and ethical uses" that "will not only support the prosperity and security of the United States, they will also help establish U.S. open source standards in the global race for AI leadership."
Even if the question of whether open AI models pose a greater risk is slowly being resolved in the negative, the efficacy and safety of using any commercial AI model for military applications remains far more controversial. In particular, a recent study by the AI Now Institute found that, without effective insulation of military AI systems and personal data from commercial models, the use of AI in military contexts risks expanding attack vectors, making the systems and interfaces they interact with more vulnerable.