Generative AI tools are coming under scrutiny for their potential to create deceptive images of political candidates and voting, according to a report from the non-profit Center for Countering Digital Hate (CCDH).
The report, released amid a year of major elections worldwide, found that some AI models can generate false images, including pictures depicting Joe Biden sick in a hospital and Donald Trump sitting sadly in a jail cell.
The CCDH tested several AI tools, including Midjourney, ChatGPT, DreamStudio, and Image Creator, and found that they produced election disinformation images in 41% of test cases overall. Midjourney performed worst, generating such images in 65% of cases.
The popularity of generative AI tools like ChatGPT, developed by Microsoft-backed OpenAI, has raised concerns about the potential for fraud, particularly in the context of elections.
In response to these concerns, twenty digital giants, including Meta, Microsoft, Google, and TikTok, have pledged to fight AI content designed to mislead voters.
They have committed to deploying technology that counters potentially harmful AI content, such as invisible watermarks that machines can detect.
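To give a sense of what a machine-detectable invisible watermark means in practice, the following is a minimal, purely illustrative sketch: it hides a short text tag in the least-significant bits of an image's pixel values, where it is imperceptible to viewers but trivially readable by software. The payload string and function names here are hypothetical, and production systems from the companies named above rely on far more robust, tamper-resistant techniques (such as cryptographically signed provenance metadata), not this toy scheme.

```python
import numpy as np

TAG = "AI-GENERATED"  # hypothetical watermark payload for illustration

def embed_watermark(pixels: np.ndarray, tag: str = TAG) -> np.ndarray:
    """Write the tag's bits into the least-significant bits (LSBs)
    of the first pixels. Changing only the LSB alters each pixel's
    value by at most 1, which is invisible to the human eye."""
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    flat = pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def read_watermark(pixels: np.ndarray, n_chars: int = len(TAG)) -> str:
    """Recover n_chars characters by collecting the LSBs back into bytes."""
    bits = pixels.flatten()[: n_chars * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

# Usage: mark a random stand-in "image" and verify the tag is recoverable.
image = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(image)
print(read_watermark(marked))  # -> AI-GENERATED
```

A scheme this simple is easily destroyed by cropping, compression, or re-encoding, which is precisely why real provenance efforts pair signal-level marks with signed metadata.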
The CCDH has called on platforms to prevent users from generating and sharing misleading content about geopolitical events, candidates for office, elections, or public figures.
OpenAI has said it is working to prevent abuse and improve transparency around AI-generated content, for example by declining requests to generate images of real people, including candidates.
Meanwhile, a Microsoft engineer has raised concerns about the dangers of the AI image generators DALL-E 3 and Copilot Designer, warning that they can inadvertently create harmful content, including images that sexually objectify women and exhibit political bias. Despite raising these concerns with his supervisors, the engineer says he has not seen sufficient action taken.
In response, a Microsoft spokesperson stated that the company has established internal systems for employees to report and escalate any concerns about its AI technologies. She added that the engineer who raised the concerns is not part of the dedicated security teams at Microsoft.