Leading voice cloning tools are alarmingly easy to manipulate to produce election disinformation
An alarming new report from the Center for Countering Digital Hate (CCDH) reveals how easily leading AI voice cloning tools can be manipulated to generate disinformation in the voices of political leaders, posing a severe threat to the integrity of the many elections taking place around the world in 2024.
The report exposes how easy it is to bypass the restrictions of leading AI-powered voice cloning tools and put convincing disinformation in the mouths of prominent political figures. The CCDH attempted to create disinformation using six leading audio generation platforms (ElevenLabs, Speechify, PlayHT, Descript, Invideo AI, and Veed) and succeeded in 80% of its test cases.
Notably, one of the tested tools, Invideo AI, went so far as to suggest entire scripts based on the prompts it was given. Speechify and PlayHT, meanwhile, failed in 100% of their tests, which included clones of Donald Trump warning people not to vote because of bomb threats, President Macron confessing to misusing campaign funds, and President Biden claiming to have manipulated electoral results.
The report arrives amid an election season that will see over 2 billion voters across 50 countries head to the polls, and it shows that, despite claims to the contrary, many media generation platforms still lack effective safeguards against misuse. The CCDH also disclosed that, as the global election season approaches, reports of AI-enabled misinformation have surged by a staggering 697%. High-profile examples include the January 2024 robocall in New Hampshire, in which an AI-generated clone of President Biden's voice discouraged voters from going to the polls, and the widely circulated fake audio recordings in which UK Labour Party leader Keir Starmer purportedly abused staff members and criticized the city of Liverpool.
As a result of its investigation, the CCDH is calling on AI companies to institute responsible safeguards that prevent users from generating electoral misinformation, on social media companies to implement effective measures to detect and block fake audio, and on lawmakers to update existing legislation to address AI-generated harms and preserve electoral integrity. In the harrowing words of CCDH CEO and founder Imran Ahmed, “Hyperbolic AI companies often claim to be creating and guarding the future, but they can’t see past their own greed. It is vital that in the crucial months ahead they address the threat of AI election disinformation and institute standardized guardrails before the worst happens.”