Whether it's misinformation or disinformation, generative AI undoubtedly poses a challenge
At TechCrunch Disrupt 2024, a panel featuring executives from the Center for Countering Digital Hate, CITRIS Policy Lab, and Meta Oversight Board discussed AI's role in amplifying misinformation and disinformation, debating various regulatory approaches and technical solutions.
Disinformation and misinformation are, by most measures, hardly modern problems. What is becoming increasingly evident, though, is that even if AI-generated text and media turn out to be qualitatively similar to more traditional tools serving the same aim, it is the quantitative dimension, the sheer scale and speed of production, that is the more worrying one.
This Wednesday at TechCrunch Disrupt 2024, Center for Countering Digital Hate CEO Imran Ahmed, CITRIS Policy Lab Director Brandie Nonnecke, and Meta Oversight Board Co-Chair Pamela San Martin joined a panel to discuss how AI has exacerbated misinformation and disinformation, their thoughts on current attempts at regulation, and alternatives for mitigating the threat of AI-generated deepfakes. The discussion was lively, earnest, and respectful, revealing a diversity of opinions on the topic while converging on a common view: those behind social media platforms and generative AI products still have a lot of work to do.
Undoubtedly, the highlight of the panel was Imran Ahmed's characterization of generative AI as a "perpetual bulls–t machine". The Center for Countering Digital Hate CEO noted that, before generative AI became so pervasive, the cost of disinformation was largely the effort of producing each individual piece, and that generative AI has driven even that cost to zero. In his words: "So what you have, theoretically, is a perfect loop system in which generative AI is producing, it’s distributing, and then it’s assessing the performance — A/B testing and improving. You’ve got a perpetual bulls–t machine. That’s quite worrying!"
Brandie Nonnecke reminded the audience that, although the threat is real, there are concrete efforts underway to mitigate the effects of misinformation and disinformation and to bring their spread under control. Nonnecke pointed specifically to recently signed bills in the state of California: one requires developers of AI-powered media generators to make detection tools available to the general public; another obliges social media platforms to identify and remove deepfakes and other sources of disinformation; and a third makes it punishable to create or distribute non-consensual deepfakes.
Following this more optimistic thread, Meta Oversight Board Co-Chair Pamela San Martin remarked that, for all the fears of deepfakes overtaking this year's elections around the world, the rise in deepfake-driven election disinformation never reached the heights the world was bracing for. The key point she drove home was that regulation pushed on the basis of unfounded fears risks cutting off the more productive side of generative AI. While in agreement with her fellow panelists, San Martin held the most optimistic view of the three, seeking to balance the benefits of external regulation with allowing entities, particularly social media platforms, to implement some form of self-regulation.
San Martin's contribution set off one of the most passionate segments of the debate. Nonnecke and Ahmed remained largely skeptical of the benefits of self-regulation and, respectfully dissenting from San Martin, expressed doubts about whether the Meta Oversight Board is enough to guarantee true transparency and accountability, given that Meta still holds substantial, if indirect, influence over the Board. Similar criticism applies, for instance, to OpenAI's newly appointed Safety and Security Committee.
On a related note, Imran Ahmed brought up the social media platform X's Grok and its image generation capabilities as an example of "self-regulation" gone wrong, with the lack of guardrails around Grok's image generation becoming a selling point for a subscription to the platform. Overall, the panelists agreed there is a way forward, whether by implementing stricter technical controls (ranging from digital watermarking to making it impossible to purchase non-consensual deepfake imagery with credit cards, and penalizing ISPs that host domains for these services) or by accelerating government regulation to hold the creators of generative AI tools and social media platforms accountable.