Former President Trump recently claimed that images from a Harris campaign rally were artificially generated, marking a prominent example of real media being dismissed as synthetic in American politics.
The allegation comes amid a rising number of genuine deepfake incidents, including a fabricated audio robocall of President Biden aimed at swaying the New Hampshire primary and fake audio circulated just before Slovakia’s 2023 parliamentary elections. Meanwhile, xAI has unveiled a model capable of generating strikingly realistic, largely unfiltered images.
As AI technology evolves, it creates new challenges for trust and authenticity. The impact on elections is well documented, but the potential effects on other areas of society remain underexplored. In legal proceedings, for instance, could defendants introduce generated security footage to exonerate themselves, or dismiss genuine incriminating audio as “AI”?
Beyond the courtroom, synthetic media is increasingly used for fraud. Deloitte estimated that generative AI enabled $12.3 billion in fraud losses in 2023, a figure expected to climb as the technology advances. The proliferation of deepfakes suggests we may soon reach a critical point at which trust in digital media is severely compromised.
Addressing this issue isn’t straightforward; there is no single solution, and a multifaceted approach is necessary. Developing and deploying reliable forensic techniques, such as watermarking, is crucial. Watermarking embeds a signature in content at the point of generation or capture so that its provenance can be verified later, though adoption today is inconsistent.
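To make the idea concrete, here is a minimal sketch of the signing-and-verification principle behind provenance schemes, written in Python with the widely available cryptography package. It is an illustration only, not any particular standard (such as C2PA), and it signs the raw bytes rather than embedding a robust watermark in the pixels themselves; the file name is hypothetical.

```python
# Illustrative sketch of signature-based provenance, not a production
# watermarking standard. Requires the `cryptography` package.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_content(private_key: Ed25519PrivateKey, content: bytes) -> bytes:
    """Publisher side: produce a signature that travels with the content
    (e.g., in metadata). Any later edit to the bytes breaks verification."""
    return private_key.sign(content)


def verify_content(
    public_key: Ed25519PublicKey, content: bytes, signature: bytes
) -> bool:
    """Verifier side: confirm the content is exactly what was signed."""
    try:
        public_key.verify(signature, content)
        return True
    except InvalidSignature:
        return False


# Usage: a publisher signs a photo at capture time ("rally_photo.jpg" is
# a hypothetical file), and a reader's tool verifies it later.
key = Ed25519PrivateKey.generate()
photo = open("rally_photo.jpg", "rb").read()
assert verify_content(key.public_key(), photo, sign_content(key, photo))
```

A deployed watermark must also survive re-encoding, cropping, and screenshots, which simple metadata signing does not; that gap is part of why the research investment discussed below matters.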
To supplement watermarking, standardized and easy-to-verify authenticity techniques must be developed, including automated deepfake detectors, published best practices, and the use of contextual evidence, with ongoing research needed to keep pace with advancing generation methods.
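Because no single check is decisive, a verification tool might layer several such signals. The sketch below is hypothetical: the three checkers are stubs standing in for a watermark parser, a trained detector model, and metadata forensics, and the thresholds are arbitrary.

```python
# Hypothetical sketch of layering authenticity signals into one verdict.
# The three checkers are deliberate stubs: real systems would plug in a
# watermark parser, a trained detector model, and metadata forensics.
from dataclasses import dataclass
from typing import Optional


def check_watermark(content: bytes) -> Optional[bool]:
    """Stub: True/False if a provenance watermark is found, None if absent."""
    return None


def run_deepfake_detector(content: bytes) -> float:
    """Stub: synthetic-likelihood score in [0, 1] from a trained model."""
    return 0.5


def check_metadata(content: bytes) -> bool:
    """Stub: check capture time, device, and edit history for consistency."""
    return True


@dataclass
class AuthenticityReport:
    watermark_valid: Optional[bool]
    detector_score: float
    metadata_consistent: bool


def verdict(content: bytes) -> str:
    """Combine signals conservatively: no single check is decisive."""
    report = AuthenticityReport(
        watermark_valid=check_watermark(content),
        detector_score=run_deepfake_detector(content),
        metadata_consistent=check_metadata(content),
    )
    if report.watermark_valid:
        return "verified provenance"
    if report.detector_score > 0.9 and not report.metadata_consistent:
        return "likely synthetic"
    return "inconclusive: weigh contextual evidence"


print(verdict(b"example bytes"))  # -> "inconclusive: weigh contextual evidence"
```

The conservative default in this sketch reflects the broader point: when automated signals disagree or are absent, the honest answer is “inconclusive,” which is exactly where clear standards and contextual evidence become essential.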
Policymakers should focus on funding AI forensics research and creating clear, accessible standards for verifying content. States should also invest in outreach to educate local institutions on these standards.
Education is vital to address public confusion about generative AI. Simple public service announcements (PSAs) could increase awareness, and legislative efforts like the Artificial Intelligence Public Awareness and Education Campaign Act could help. However, this effort must be continuous to keep up with rapidly evolving technology.
Through a comprehensive approach combining technology, institutional support, and public education, we can build the framework needed to meet the challenges posed by deepfakes and maintain trust in digital media. Immediate and sustained action is essential to establish a new standard for authenticity and trust.