Three weeks ago, Google Gemini gained the ability to generate images from text prompts, a capability that similar AI programs offer as well.
Stereotypes and discrimination have repeatedly been a problem in AI applications in recent years. Facial recognition software, for example, was initially poor at recognizing Black people, and early AI image generators tended to depict white people by default.
Developers, at Google and elsewhere, therefore strive for more diversity in the scenarios their software depicts. Sometimes, as in this case, they end up caught between two fronts: in the USA in particular, there is a vocal movement, which includes tech billionaire Elon Musk, that denounces alleged racism against white people.
In a blog post on Friday, Google explained that it had failed to program exceptions for cases where diversity is clearly out of place. The resulting images were "awkward and wrong." At the same time, the software became overly cautious and refused some prompts outright. Yet if a user asks for a photo of a "white veterinarian with a dog," the software should deliver exactly that. Google executive Prabhakar Raghavan acknowledged that the errors were unintentional.
Raghavan also cautioned that the AI software will continue to make mistakes for the time being. "I can't promise that Gemini won't occasionally produce embarrassing, incorrect, or offensive results," he wrote. Google would, however, intervene quickly whenever problems arise.