Has Google’s AI tool taken diversity too far? In an attempt to bypass prevalent AI bias, Google appears to have programmed Gemini to be so inoffensive that it has inadvertently come full circle. Responding to the online backlash in an internal memo, Google CEO Sundar Pichai told employees that Gemini’s bias was unacceptable: “We got it wrong,” he wrote, according to Semafor.
Users had been reporting historically inaccurate images created by the AI chatbot, which generated Black Vikings, Asian women as German WWII soldiers, and a woman pope. Google has since temporarily paused Gemini’s ability to generate images of people while the team works to improve the feature.
Google is going scorched earth on Gemini’s human image-generation capabilities. Pichai has instructed the team to make structural changes to Gemini, including an update to the product guidelines, The Guardian reports, citing a Google spokesperson who confirmed the accuracy of the CEO’s memo. The team will also conduct more red-teaming, simulating misuse of Gemini to identify potential problem areas.
AI bias is an issue Silicon Valley has yet to conquer. Despite big tech spending an estimated USD 20-25 bn on AI development last year, AI systems still tend to regurgitate human bias. Google Translate, for example, would routinely render gender-neutral phrases with gendered pronouns in English, and OpenAI’s DALL-E reflected racial bias when asked to produce images of certain occupations, The Guardian reports.