Earlier this month, the internet was abuzz with controversy over images depicting a black George Washington and racially diverse WWII soldiers fighting in Hitler’s Nazi army.
These viral images were created by Gemini, Google's artificial intelligence (AI) model, which refused to generate images depicting exclusively white individuals.
Elon Musk, an early and vocal commentator on AI's potential benefits and harms, argued that these images only confirmed conspiracy theorists' belief that white people are being discriminated against, invoking the so-called "Great Replacement."
Pollster Nate Silver weighed in, cautioning against the risks of concentrating too much power in the hands of a few AI engineers, particularly at behemoth companies like Google, which is worth approximately $1.7 trillion and holds vast amounts of personal data drawn from global search histories.
Silver called Gemini's rollout one of the most disastrous in Silicon Valley's history, and perhaps in recent corporate America, especially given Google's esteemed reputation. Google has yet to respond to Fortune's request for comment on the matter.
The controversy surrounding Gemini’s AI-generated images reflects a broader societal issue regarding identity politics, which has permeated corporate boardrooms and made diversity, equity, and inclusion (DEI) programs politically contentious.
Claudine Gay, formerly of Harvard, and Alissa Heinerscheid, formerly of Bud Light, became emblematic of the challenges and criticisms surrounding diversity hiring practices.
In this context, AI companies have opened a technological Pandora's box, in which outcomes are shaped by human biases embedded in the software's training data.
The focus of the debate around misinformation and disinformation has shifted from traditional news media to AI companies, highlighting the complexities and challenges of navigating social and ethical issues in the digital age.