Amid the AI revolution, a new battleground is emerging in politics—AI-generated deepfakes. As the 2024 election looms, the Federal Election Commission (FEC) is seriously considering regulating this type of political ad.
What is a Deepfake?
A deepfake is synthetic media created using AI to convincingly alter audio, video, or images, making it seem like someone is saying or doing something they never did. This technology can produce highly realistic content, raising concerns about its potential to spread misinformation and deceive viewers.
Deepfakes are created by training AI models on existing data of the target person and using neural networks to generate new content that mimics the original. Detecting and addressing deepfakes’ potential negative impacts is an ongoing challenge.
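To make the "train on the target, then generate" idea concrete, here is a minimal sketch of the classic face-swap architecture often used for video deepfakes: a shared encoder paired with one decoder per identity. Everything here is illustrative—the layer sizes, random weights, and the random "frame" are stand-ins; in a real system the weights are learned from many frames of each person.

```python
import numpy as np

rng = np.random.default_rng(0)
latent, dim = 128, 64 * 64 * 3  # toy sizes for a 64x64 RGB frame

# One shared encoder plus one decoder per identity. The weights below are
# random placeholders; in practice they are trained on footage of each person.
W_enc = rng.standard_normal((dim, latent)) * 0.01
W_dec_a = rng.standard_normal((latent, dim)) * 0.01  # decoder for person A
W_dec_b = rng.standard_normal((latent, dim)) * 0.01  # decoder for person B

def encode(frame):
    # Shared encoder: flatten the frame and project to a latent code (ReLU).
    return np.maximum(frame.reshape(-1) @ W_enc, 0)

def decode(z, W_dec):
    # Identity-specific decoder: latent code back to pixel values (sigmoid).
    return (1.0 / (1.0 + np.exp(-(z @ W_dec)))).reshape(64, 64, 3)

frame_of_a = rng.random((64, 64, 3))  # stand-in for a real video frame of A
# The "swap": encode A's frame, then decode it with B's decoder,
# producing a frame that renders B's face with A's pose and expression.
fake_frame = decode(encode(frame_of_a), W_dec_b)
```

The key design point is the *shared* encoder: because both decoders learn to reconstruct faces from the same latent space, routing one person's encoding through the other's decoder transfers expression and pose onto the other identity.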
Remember the viral 2018 deepfake video featuring Jordan Peele impersonating President Obama? Since then, deepfake technology, alongside other AI-driven tools, has evolved, becoming increasingly sophisticated, accessible, and eerily realistic.
This progression has experts in AI and misinformation deeply concerned, especially in the context of the 2024 election.
Deepfake ads have already started.
The GOP and some campaigns, such as Governor DeSantis's, have already harnessed AI-manipulated visuals to sway public opinion.
Good Old-fashioned Photographic Evidence?
Photographs and videos have long been hailed as windows into reality. From the haunting black-and-white images of war-torn landscapes to the iconic shots that ignited social movements, these mediums have played a pivotal role in shaping our collective consciousness as evidence of truth.
While skepticism towards traditional media has grown recently, videos and images have often retained higher trust. The advent of deepfakes, which let anyone manipulate visual evidence to serve political agendas, has begun to change this. The very nature of visual evidence and its impact on public perception will require careful consideration.
Experts warn that the use of deepfakes is likely to escalate, and the consequences are twofold: not only could deepfakes mislead the public, but they could also provide politicians with a convenient escape route. If genuine images or videos of their actions surface, politicians could easily dismiss them as deepfakes, leveraging the inherent ambiguity of AI-manipulated content.
As AI-generated deepfakes become more sophisticated, the political landscape is poised for challenging times.
AI’s Impact on Voter Outreach and Engagement
AI-powered tools are also revolutionizing voter outreach strategies. These tools, encompassing automated chatbots, personalized messaging systems, and predictive analytics, hold the promise of broadening campaign interactions and fostering meaningful connections with voters, a trend gaining momentum as the 2024 election season progresses.
What ethical concerns arise from using AI-powered tools in voter outreach strategies?
For good actors, striking the right balance between technological innovation and preserving the authenticity of human connection will be critical. Ensuring data privacy, maintaining transparency, and adhering to ethical guidelines are essential considerations as campaigns navigate the evolving terrain of AI in voter engagement. Movements must move fast and break things… carefully.
For bad actors? Campaigns, state-sponsored entities, and malicious actors could leverage this technology to advance their narratives and sow discord.
So what is the FEC doing?
The FEC’s decision to tackle this issue partly responds to the advocacy group Public Citizen’s demand for clarity on whether the existing legal framework extends to cover AI-generated deepfakes.
At an open meeting today, the FEC unanimously decided to advance Public Citizen's petition requesting rulemaking to address the anticipated onslaught of "deepfakes" in 2024 campaign advertising. The next step is a public comment period, which opens next week and will remain open for 60 days, after which the FEC will determine whether to take up a final rule.
“Deepfakes pose a significant threat to democracy as we know it. The FEC must use its authority to ban deepfakes or risk being complicit with an AI-driven wave of fraudulent misinformation and the destruction of basic norms of truth and falsity.”
— Robert Weissman, president of Public Citizen, August 10, 2023, via press release
Within the FEC, a divide exists. Some officials question the agency’s authority to govern this swiftly evolving landscape, while others view deepfakes as a distinctive form of fraud that necessitates intervention. As a potential regulatory framework takes shape, one element could be the requirement for conspicuous disclaimers indicating the use of AI technology in crafting these persuasive illusions.
Of course, challenges lie ahead. Loopholes are likely to surface, especially concerning the influence of political action committees (PACs) and individual users who might exploit AI-generated deception in ways that elude regulation.
In an era where the line between fact and fabrication in politics is becoming increasingly blurred, the FEC and the public have their work cut out for them.
And in everyday life? It's mind-boggling to consider how these tools could be used to influence the masses across all areas of human endeavor.