Will AI deepfakes and robocalls upset the 2024 election?

Jeffrey Fleishman, Los Angeles Times

In the analog days of the 1970s, long before hackers, trolls and edgelords, an audiocassette company came up with an advertising slogan that posed a trick question: "Is it live or is it Memorex?" The message toyed with reality, suggesting there was no difference in sound quality between a live performance and music recorded on tape.

Fast-forward to our age of metaverse lies and deceptions, and one might ask similar questions about what's real and what's not: Is President Joe Biden on a robocall telling Democrats not to vote? Is Donald Trump chumming it up with Black men on a porch? Is the U.S. going to war with Russia? Fact and fiction appear interchangeable in an election year when AI-generated content is targeting voters in ways that were once unimaginable.

American politics is accustomed to chicanery — opponents of Thomas Jefferson warned the public in 1800 that he would burn their Bibles if elected — but artificial intelligence is bending reality into a video game world of avatars and deepfakes designed to sow confusion and chaos. The ability of AI programs to produce and scale disinformation with swiftness and breadth has made them a weapon of lone-wolf provocateurs and intelligence agencies in Russia, China and North Korea.

"Truth itself will be hard to decipher. Powerful, easy-to-access new tools will be available to candidates, conspiracy theorists, foreign states, and online trolls who want to deceive voters and undermine trust in our elections," said Drew Liebert, director of the California Initiative for Technology and Democracy, or CITED, which seeks legislation to limit disinformation. "Imagine a fake robocall [from] Gov. Newsom goes out to millions of Californians on the eve of election day telling them that their voting location has changed."

The threat comes as a polarized electorate is still feeling the aftereffects of a pandemic that turned many Americans inward and increased reliance on the internet. The peddling of disinformation has accelerated as mistrust of institutions grows and truths are distorted by campaigns and social media that thrive on conflict. Americans are both susceptible to and suspicious of AI, not only its potential to exploit divisive issues such as race and immigration, but also its science fiction-like wizardry to steal jobs and reorder the way we live.

Russia orchestrated a wave of hacking and deceptions in an attempt to upset the U.S. election in 2016. The bots of disinformation were a force in January when China unsuccessfully meddled in Taiwan's election by creating fake news anchors. A recent threat analysis by Microsoft said a network of Chinese-sponsored operatives, known as Spamouflage, is using AI content and social media accounts to "gather intelligence and precision on key voting demographics ahead of the U.S. presidential election."

One Chinese disinformation ploy, according to the Microsoft report, claimed the U.S. government deliberately set the wildfires in Maui in 2023 to "test a military grade 'weather weapon.'"

A new survey by the Polarization Research Lab pointed to the fears Americans have over artificial intelligence: 65% worry about personal privacy violations, 49.8% expect AI to negatively affect the safety of elections and 40% believe AI might harm national security. A poll in November by UC Berkeley found that 84% of California voters were concerned about the dangers of misinformation and AI deepfakes during the 2024 campaign.

More than 100 bills have been introduced in at least 39 states to limit and regulate AI-generated materials, according to the Voting Rights Lab, a nonpartisan organization that tracks election-related legislation. At least four measures are being proposed in California, including bills by Assemblymembers Buffy Wicks (D-Oakland) and Marc Berman (D-Menlo Park) that would require AI companies and social media platforms to embed watermarks and other digital provenance data into AI-generated content.

"This is a defining moment. As lawmakers we need to understand and protect the public," said Adam Neylon, a Republican state lawmaker in Wisconsin, which passed a bipartisan bill in February to fine political groups and candidates $1,000 for not adding disclaimers to AI campaign ads. "So many people are distrustful of institutions. That has eroded along with the fragmentation of the media and social media. You put AI into that mix and that could be a real problem."

©2024 Los Angeles Times. Visit at latimes.com. Distributed by Tribune Content Agency, LLC.
