Google promises to fix Gemini’s image generation following complaints that it’s ‘woke’


Google’s Gemini chatbot, formerly called Bard, can create AI-generated illustrations based on a user’s text description. You can ask it to create pictures of, say, happy couples or people in period clothing walking down modern streets. As the BBC notes, however, some users have criticized Google for portraying specific white figures or historically white groups of people as racially diverse individuals. Google has now issued a statement saying it is aware that Gemini “is offering inaccuracies in some historical image generation depictions” and that it is working to fix the issue immediately.

According to the Daily Dot, the complaints began when a former Google employee tweeted images of women of color with the caption, “It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist.” To get those results, he had asked Gemini to generate images of American, British and Australian women. Other users, many identifying as right-wing, echoed his findings, sharing AI-generated images that depicted America’s founding fathers and the Catholic Church’s popes as people of color.

In our tests, asking Gemini to create illustrations of the founding fathers mostly resulted in images of white men with a single person of color or a woman among them. When we asked the chatbot to generate images of popes throughout the ages, we got pictures depicting Black women and Native Americans as leaders of the Catholic Church. Asking Gemini to generate images of American women gave us pictures of a white woman, an East Asian woman, a Native American woman and a South Asian woman. The Verge reports that the chatbot also depicted Nazis as people of color, but we couldn’t get Gemini to generate Nazi imagery at all. “I am unable to fulfill your request due to the harmful symbolism and influence associated with the Nazi Party,” the chatbot replied.

Gemini’s behavior could be the result of overcorrection, since AI-powered chatbots and robots have exhibited racist and sexist behavior in recent years. In one 2022 experiment, for example, a robot repeatedly picked a Black man when asked which of the faces it scanned belonged to a criminal. In a statement posted on X, Gemini Product Lead Jack Krawczyk said Google designed its “image generation capabilities to reflect [its] global user base” and that it takes “representation and bias seriously.” He said Gemini will continue to generate racially diverse illustrations for open-ended prompts, such as images of people walking their dog. However, he acknowledged that “[h]istorical contexts have more nuance to them and [his team] will further tune to accommodate that.”




