
Google's Gemini Stumbles Over Biased Images

Gemini's image generation algorithm struggled to accurately represent people of various ethnicities, genders, and body types

Preeti Anand

The corporation was compelled to halt its AI picture generation tool due to errors in depicting historical figures. Many users were far from satisfied with the output of Google's Gemini AI image generator, prompting the corporation to pause its availability and acknowledge its faults. Google announced a temporary pause in the ability of its AI chatbot, Gemini, to generate images of people. The decision followed criticism and concerns about the tool's tendency to produce inaccurate and potentially offensive outputs, particularly in depictions of diverse individuals.


So, how did Google's AI technology go so wrong, and why did it happen? 

Gemini's image generation algorithm struggled to accurately represent people of various ethnicities, genders, and body types. The resulting images often displayed biases and stereotypes, leading to accusations of unfairness and discrimination. The training data used to develop Gemini may have lacked diversity, leaving the model unable to generate images beyond a narrow range. This resulted in a lack of inclusivity and representation in the tool's outputs.

Did Google respond?


In trying to avoid generating offensive images, Gemini became overly cautious and refused certain prompts altogether. This unintended consequence limited the tool's functionality and user experience.

Google immediately suspended Gemini's ability to generate images of people, allowing it to focus on improving the underlying technology. Google published a blog post by Prabhakar Raghavan, Senior Vice President, explaining the technical challenges and the steps being taken to address them. The company emphasised its commitment to developing responsible AI tools that are fair, inclusive, and unbiased, and outlined plans to improve data diversity, refine algorithms, and establish robust testing and evaluation procedures.

Conclusion

Google's justifications seem reasonable, yet it is difficult to understand why the AI model would misinterpret its instructions rather than reason its way to an accurate result. AI models are trained on large datasets, but these tools still struggle with prompts involving ethnicity and gender, as well as historical facts.

For example, AI should not confuse depictions of German soldiers from World War II by assigning them a different ethnicity. To ensure such facts are handled correctly in the future, Google has decided to keep the AI model in a learning phase. The corporation had anticipated these challenges, and now that they have become a reality, modifications are essential before concerns about AI grow further.
