Bias in AI image generators is a widely acknowledged but stubborn problem, and Meta's AI chatbot is the latest example. Meta AI's image generator, Imagine, shows a pronounced tendency to add turbans to images of Indian men, raising concerns about cultural bias.
Meta recently introduced Meta AI across various platforms like WhatsApp, Instagram, Facebook, and Messenger, with a limited rollout in India. TechCrunch conducted tests to examine the tool's cultural biases, revealing its tendency to generate images of Indian men wearing turbans.
TechCrunch generated more than 50 images using a range of prompts, and the results overwhelmingly showed Indian men in turbans. Prompts as generic as "An Indian walking on the street" reliably produced turbaned men, even though only a minority of Indian men wear turbans in daily life; they are common mainly among Sikhs, who make up a small share of India's population.
Varying the prompts across different professions and settings made little difference: the generated images all portrayed Indian men wearing turbans, and even non-gendered prompts failed to produce more varied results. The pattern points to a narrow, stereotyped representation rather than the diversity the prompts invited, and an audit of this kind is simple enough to script, as the sketch below illustrates.
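TechCrunch did not publish its test harness, so the following is a hypothetical illustration of how such a prompt-variation audit might look, not the method actually used. The names `generate_image` and `detect_turban` are placeholders: Meta AI's Imagine exposes no public API, so in practice generation would happen through the app and detection would be a human annotator or an image classifier. The sketch varies a prompt template across professions and tallies how often the flagged attribute appears.

```python
from collections import Counter

# Hypothetical stand-ins -- replace with the generator under audit and
# with a human label or trained classifier, respectively.
def generate_image(prompt: str) -> bytes:
    raise NotImplementedError("no public API; generate via the app under test")

def detect_turban(image: bytes) -> bool:
    raise NotImplementedError("use a human annotator or an image classifier")

PROFESSIONS = ["chef", "doctor", "teacher", "architect", "photographer"]
SAMPLES_PER_PROMPT = 10  # repeated samples per prompt to estimate a rate

def audit() -> Counter:
    """Tally how often the audited attribute appears for each prompt."""
    tally = Counter()
    for profession in PROFESSIONS:
        prompt = f"An Indian {profession} at work"
        for _ in range(SAMPLES_PER_PROMPT):
            image = generate_image(prompt)
            if detect_turban(image):
                tally[prompt] += 1
    return tally

if __name__ == "__main__":
    for prompt, hits in audit().items():
        rate = hits / SAMPLES_PER_PROMPT
        print(f"{prompt!r}: {hits}/{SAMPLES_PER_PROMPT} ({rate:.0%}) turbans")
```

Even a crude tally like this makes the skew quantifiable: if nearly every prompt returns turbans at rates far above their real-world prevalence, the bias is in the model, not the prompt.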
Asked about its training data and the source of these biases, Meta said only that it is continuously working to improve its generative AI technology; it did not provide specifics about the process.
Meta AI's broad reach makes these biases consequential: with millions of users across many cultures interacting with the tool, skewed outputs can quietly reinforce stereotypes at scale.
As tools like Meta AI become integral parts of digital platforms, mitigating bias in image generation is imperative. Diverse, accurate representation matters especially in a country as culturally varied as India, where a single visual shorthand flattens an enormous range of identities.