In December, senior national security officials convened in the White House Situation Room for a simulation exercise ahead of the 2024 election, working through challenging scenarios that tested the federal response to potential election-related chaos, according to sources familiar with the meeting.
One simulated scenario involved an AI-generated deepfake video, created by Chinese operatives, that falsely depicted a Senate candidate destroying ballots. Officials also grappled with how to respond if violence erupted at polling stations on Election Day.
During the hour-long session, the second-ranking officials from the FBI, the CIA, and the departments of Homeland Security and Justice deliberated on the appropriate response to the deepfake video.
The discussion included whether and how to inform the public about the activity, especially if it was unclear who was behind it, the sources said.
The meeting shed light on the complex challenges faced by the Biden administration in crafting a coordinated federal response to issues such as widespread disinformation, deepfakes, and harassment of election officials. One US official involved in the election security drill expressed frustration, stating, “We’re all tied up in knots.”
This simulation marked the first such exercise conducted by the Biden White House in over three years, underscoring the weight of the questions and dilemmas confronting the administration as it plans responses to potential threats to the 2024 election.
One major consideration is the delicate balance between countering disinformation and avoiding inadvertently amplifying false narratives. Officials must assess the origin of the information operation before deciding on a public response. If a foreign actor is identified, officials can act swiftly to address the threat.
However, if an American citizen might be involved, officials are reluctant to counter it publicly for fear of appearing to influence the election or restrict free speech.
In both simulated scenarios, federal officials leaned toward a restrained public response, opting to allow state and local governments to take the lead. This approach reflects a fundamental dilemma: how to protect voters from election threats when many of them harbor mistrust toward the federal government.
While state and local officials are more trusted voices in their communities, federal officials grapple with how to decisively support them without undermining public confidence.
In the case of the deepfake video, participants in the simulation favored having state election officials, rather than the federal government, lead public messaging to counter the disinformation in their respective jurisdictions.
No agency at the table volunteered to serve as the lead federal voice informing the public about the deepfake. Regarding potential violence at polling stations, federal officials decided against dispatching federal agents to assist local police, citing jurisdictional limitations.