Google showed that AI can be controlled – 02/25/2024 – Ronaldo Lemos
Last week, Google announced that it had suspended its image-generation platform, Gemini. It was launched in early February to compete with platforms like Dall-E and Midjourney. In all of them, the user describes what they want in text and the platform creates images matching the description.
The Google service was online for just over two weeks before the company began to suffer intense attacks over it on social media such as Twitter (X).
The reason is that when someone asked Gemini to create images of people, the platform returned results that included a wide ethnic diversity.
So far so good. Many studies show that algorithms have racial bias. For example, when someone searched for “baby pictures” on Google, the results tended to be mostly white babies. Professor Virgílio Almeida, from UFMG, has extensive work on this issue, with international repercussions.
However, when someone asked Gemini to create images of 18th-century scientists, Vikings, babies, or even white historical figures, the platform applied the same racial diversity criteria and depicted these figures as people of African descent, Indigenous peoples, and so on.
In a minimally sane world, this error would not have generated major repercussions. For years there was no comparable reaction (or suspension of services) over algorithmic bias affecting non-white people.
As a result, it is difficult to separate the people who were truly outraged because of the technology’s failure from those who were outraged because the company is trying to be more inclusive.
But this story brings another important lesson. It is the example we all needed that artificial intelligence platforms are controllable. They also have "editorial lines". Gemini prioritized ethnic diversity because the company wanted it that way.
In the post explaining the suspension of the service, the company says: "When we built Gemini, we tuned the model to ensure it didn't fall into past image-generation traps. Because our users come from all over the world, we wanted the tool to work well for everyone. Our calibration for Gemini to show a range of people failed to account for cases where it should not show that range."
These words are precious. In the conversation about artificial intelligence, we are increasingly falling victim to fantasies. They say that artificial intelligence is uncontrollable, a black box, and that it is necessary to create global-scale controls, similar to those applied to nuclear energy, to bring this terrible technology into line.
The Gemini case shows that it is important to put these fantasies aside. Artificial intelligence is simply software, made by people, controlled and calibrated by companies. Google calibrated its model to maximize diversity. Now it will probably calibrate it again, differently.
And most importantly: when its AI behaved in a way the company didn't like, it simply took it offline. Let's sleep better remembering this, and inoculate ourselves against the fictions that surround the debate about AI.
Already gone: SXSW as an obscure festival in Austin, Texas
Already here: the growth of SXSW in Brazil
Coming up: the 2024 edition, promising to break all Brazilian participation records