Google CEO vows to fix Gemini, calls racist responses ‘completely unacceptable’

Google CEO Sundar Pichai has sent an internal memo to staffers addressing Gemini's racist responses, which drew strong criticism from Elon Musk and others on X. In the memo, Pichai vowed to make changes to fix the problem, but he didn't mention firing anyone.

Last week, Google suspended Gemini's image creation tool after it generated historically inaccurate images while presenting them as historically accurate: black and South Asian Founding Fathers, black female popes, black female Vikings, black Romans, and just about everything else coded to be "diverse" under the platform's DEI-based instructions.

Pichai said in the memo: "I want to address the recent issues with problematic text and image responses in the Gemini app (formerly Bard). I know that some of its responses have offended our users and shown bias — to be clear, that's completely unacceptable and we got it wrong."

The platform has also produced written responses worthy of ridicule, including refusing to say whether Elon Musk or Adolf Hitler was worse and asserting that it is "not okay to be white," among others.

Tesla CEO and X owner Elon Musk has been vocal amid the backlash against Gemini, arguing that the platform's penchant for telling lies and rewriting history for the sake of promoting DEI makes it part of the ongoing dangers posed by the wokification of institutions.

Here's the rest of Pichai's memo, per Semafor:

Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts. No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes. And we’ll review what happened and make sure we fix it at scale.

Our mission to organize the world’s information and make it universally accessible and useful is sacrosanct. We’ve always sought to give users helpful, accurate, and unbiased information in our products. That’s why people trust them. This has to be our approach for all our products, including our emerging AI products.

We’ll be driving a clear set of actions, including structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming, and technical recommendations. We are looking across all of this and will make the necessary changes.

Even as we learn from what went wrong here, we should also build on the product and technical announcements we’ve made in AI over the last several weeks. That includes some foundational advances in our underlying models e.g. our 1 million long-context window breakthrough and our open models, both of which have been well received.

We know what it takes to create great products that are used and beloved by billions of people and businesses, and with our infrastructure and research expertise we have an incredible springboard for the AI wave. Let’s focus on what matters most: building helpful products that are deserving of our users’ trust.
