On September 26, 1983, Soviet Lt. Col. Stanislav Petrov's decision to question false information about an inbound US nuclear strike likely prevented a catastrophic global nuclear exchange.
Today, rapid advancements in artificial intelligence (AI) would make a job like Petrov's much harder. AI-driven disinformation poses a serious threat to national security, election integrity, and trust in science, while fueling political polarization, hate speech, and financial scams.
In 2024, as half the world heads to the ballot box, deepfakes and AI-generated disinformation will target political leaders and influencers, making the problem of misinformation more urgent than ever.
False information produced and spread by AI could contribute to nuclear escalation: crises carry high stakes and short timelines, which makes verifying intelligence difficult; AI tools for verifying content authenticity remain unreliable; and the likeliest nuclear hotspots involve adversaries with little mutual trust.
The national security risks extend beyond nuclear exchange: disinformation campaigns can delegitimize military efforts and falsely attribute blame for biological attacks.
The proliferation of misinformation can also hamper public health responses, and AI-generated content makes spearphishing attacks more believable and effective, undermining cybersecurity.
To mitigate these risks, a central strategy must be to scrutinize powerful AI systems for disinformation risks before they are developed and deployed.
Systems with harmful potential must not be released until adequate safeguards are in place. This is crucial for protecting democracy, the economy, national security, and the safety of all Americans.