Google Temporarily Halts AI-Generated Human Images Amid Diversity Representation Concerns
Google has announced that its artificial intelligence model, Gemini, will temporarily stop producing images of people. The decision follows criticism of the model’s portrayal of historical figures, such as German WWII soldiers and Vikings, as people of color. The tech giant plans to refine Gemini after social media users highlighted ethnically diverse images it generated of popes, the founding fathers of the US, and other historical figures.
Acknowledging the need for improvement, Google stated, “We’re already working to address recent issues with Gemini’s image generation feature. During this process, we’re pausing the generation of human images and will release an enhanced version shortly.”
Criticism of Gemini’s image outputs gained traction on social media platforms such as X, where users shared examples and discussed the model’s problems with accuracy and representation bias. One critic, a former Google employee, pointed out how difficult it was to get Gemini to generate images of white individuals.
Jack Krawczyk, a senior director on the Gemini team, conceded that the image generator, which is not accessible in the UK and Europe, requires adjustments.
Krawczyk expressed commitment to immediate improvements, noting, “Gemini’s AI image generation produces a diverse array of people representations, aligning with our global user base. However, we acknowledge shortcomings in accurately representing historical contexts and are dedicated to making necessary adjustments.”
He further explained that, in line with Google’s AI principles, its tools are designed to reflect the company’s global user base, particularly for open-ended image requests. He acknowledged, however, that historically themed prompts require more precise tuning to capture the nuances involved.
The issue of bias in AI, particularly racial and gender bias, has received extensive coverage, revealing the technology’s potential to perpetuate stereotypes and inaccuracies. Investigations, such as one by The Washington Post, have documented biases in AI-generated images, underscoring the importance of developing AI systems that represent all individuals fairly and accurately.