Former Google employee claims company took 'shortcuts' to launch AI chatbot despite internal concerns

Google allegedly abandoned 'fairness' principles and disregarded warnings to compete with ChatGPT


A former high-level Google employee has alleged that the company abandoned its commitment to "fairness" and took significant "shortcuts" in order to launch its Gemini artificial intelligence (AI) chatbot, despite internal concerns about the product's safety.

According to the anonymous source, who spoke to Fox News Digital, Gemini, formerly known as Bard, underwent an AI Principles Review before its release. The experts responsible for the review reportedly concluded that the AI was unsafe and advised against launching the product.

However, the source claims that Jen Gennai, then-Director of Responsible Innovation (RESIN), edited the responses and pushed the product to leadership despite these warnings. Gennai defended her decision, saying that such review steps were not typical for preview products and demos.

The source also alleged that when some employees expressed interest in examining Bard's data sets and embeddings due to concerning initial results, their requests were dismissed. "They said no, f--- it. We got to get to market because we are losing to ChatGPT," the source claimed.

The release of ChatGPT by OpenAI was seen as a significant threat to Google's business model, prompting the company to declare a "code red" and reassign teams to prioritize generative AI development. The source alleged that in the rush to catch up, Google made a strategic decision to disregard fairness, bias, and ethics concerns.

"As long as it's not producing child sexual abuse material or doing something harmful to a politician that could potentially affect our image, we're going to throw s--- out there," the source said, characterizing Google's approach.

The former employee also pointed to a culture at Google that prioritizes launching and landing new products above all else, which may have contributed to the chaos surrounding Gemini's release. They claimed that the company's review process lacked proper coordination and methodology, with reviews sometimes assigned to employees who had no AI or technical background.

Following public backlash to Gemini, Google's senior vice president Prabhakar Raghavan acknowledged that the AI's tuning failed to account for certain cases and that the model had become overly cautious. He said the image generation feature would undergo "extensive testing" before being reactivated.

However, the former Google employee expressed skepticism about the company's ability to address these issues without significant overhauls to its existing structures and external regulation.

In response to the allegations, a Google spokesperson told Fox News Digital, "These are old claims that reflect a single former employee's view and are an inaccurate representation of how Google launches its AI products. We are committed to developing this new technology in a responsible way."
