Google CEO Sundar Pichai addressed the company’s recent issues with its AI-powered Gemini image generation tool after it produced historically inaccurate images. He called the turn of events “unacceptable” and said the company is “working around the clock” on a fix, according to an internal memo sent to employees.
“No AI is perfect, especially at this emerging stage of the industry’s development, but we know the bar is high for us and we will keep at it for however long it takes,” Pichai wrote to staffers. “And we’ll review what happened and make sure we fix it at scale.”
Pichai remains optimistic about the future of the Gemini chatbot, formerly called Bard, noting that the team has already “seen substantial improvement on a wide range of prompts.” The image generation feature of Gemini remains paused until a fix is fully worked out.
The trouble started when Gemini users noticed that the generator was cranking out historically inaccurate images, like pictures of Nazis and America’s Founding Fathers as people of color. This quickly blew up on social media, with the word “woke” being thrown around a whole lot.
Prabhakar Raghavan, Google’s senior vice president for knowledge and information, did not lay the blame on wokeness. Instead, he explained that the model was fine-tuned to show diverse groups of people in its images, but “failed to account for cases that should clearly not show a range.” This led to controversial images, like people of color showing up as Vikings and Native American Catholic Popes.
Raghavan also said that the model became more cautious over time, occasionally refusing to answer certain prompts after wrongly interpreting them as sensitive. This accounts for reports that the model refused to generate images of white people.
It sounds like the company was trying to both please a global audience and ensure the model didn’t fall into some of the traps of rival products, like creating sexually explicit images or depictions of real people. Tuning these AI models is extremely delicate work, and the software can easily overcorrect; that’s just what these systems do. In any event, I’d prefer a historically inaccurate Catholic Pope any day of the week. Chalk this up as yet another reminder that AI still has a long way to go.
As for Gemini, the company promises the image generator will return in the near future, but it still requires a suite of fixes and tests to make sure this never happens again, including “structural changes, updated product guidelines, improved launch processes, robust evals and red-teaming and technical recommendations.”