PSOL deputy accuses AI of “algorithmic racism” and raises doubts among internet users

Rio de Janeiro state deputy Renata Souza (PSOL-RJ) accused DALL-E, an image-generating artificial intelligence developed by OpenAI, of “algorithmic racism”. Renata claimed on X (formerly Twitter), on Thursday (26), that she requested an image of a black woman in a favela, and that the tool placed a gun in the depicted woman’s hand.

“The description requested was of a black woman, with afro hair, wearing African-print clothes, in a favela setting. And that was the image generated. Can’t a black woman, born in the favela, be in a space that is free from violence? What leads this ‘artificial disintelligence’ to associate my body, my identity, with a weapon?”, she asked.

Several social network users raised doubts about the authenticity of the deputy’s allegations and presented tests showing the difficulty of generating an image with a weapon in a person’s hand.

Some of them added context to Renata’s post using the community notes feature. The currently displayed note says that “several internet users tried to replicate the ‘error’ using the same prompts which the author claims to have used, and never once did a weapon appear.” “The most likely thing is that the allegation is false, in an attempt to generate noise”, the note adds.

Renata claims to have sent a command in English with this text (translated by the report): “A film poster inspired by Disney Pixar with the title ‘Renata Souza’. The main character is a black woman with afro hair tied back, wearing an African-style blazer. The scene is in Pixar’s own digital art style, with a favela in the background, focusing on the characters’ expressions, vibrant colors and detailed textures characteristic of Pixar animations, with the title ‘Renata Souza’.”
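Replication attempts like the ones the internet users describe can be scripted directly against OpenAI’s image API. The sketch below is illustrative only: it assumes the openai Python package (v1.x) and an OPENAI_API_KEY in the environment, and “dall-e-3” is an assumption, since the model actually served at the time of the episode may have differed.

```python
# Minimal sketch of replicating the test via OpenAI's image API.
# Assumes the `openai` package (v1.x) and OPENAI_API_KEY set in the
# environment; "dall-e-3" is an assumption about the model used.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "A film poster inspired by Disney Pixar with the title 'Renata Souza'. "
    "The main character is a black woman with afro hair tied back, wearing "
    "an African-style blazer. The scene is in Pixar's own digital art style, "
    "with a favela in the background, focusing on the characters' expressions, "
    "vibrant colors and detailed textures characteristic of Pixar animations, "
    "with the title 'Renata Souza'."
)

# Generate one image and print the URL the API returns for it.
result = client.images.generate(model="dall-e-3", prompt=prompt, n=1, size="1024x1024")
print(result.data[0].url)
```

Running such a script repeatedly is essentially what the community-note authors say they did: the same prompt, submitted many times, to see whether a weapon ever appears.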

For Renata, there is “very strong evidence of algorithmic racism, which some scientific research already shows exists.” The expression “algorithmic racism” has indeed been used with increasing frequency in theoretical speculation by researchers with a ‘woke’ bias. In August, the Lula government’s Ministry of Racial Equality even held a webinar on the topic.

In the Legislative Assembly of Rio de Janeiro, Renata chairs the CPI (parliamentary inquiry committee) on Photographic Recognition. One of the controversies addressed by the CPI is precisely the photographic identification errors made by digital tools used at police stations. Some authors of theoretical speculations have stated that these tools are contaminated by what they call “structural racism”.

Renata has been campaigning for a few months against the use of photographic recognition in police stations. Recently, via X, the deputy called the use of the tool a “racist practice”.

Algorithmic racism does not exist, says expert

Programmer Elzo Brito, professor of systems development and author of the book “The era of artificial intelligence” (2023), explains that the concept of “algorithmic racism” is mistaken by definition: if there is any bias, it will always be in the data, not in the algorithm.

“An algorithm is a recipe that the computer follows step by step and that generates an output. It is a logical sequence that gives you a result. All the data you feed in goes through this logical sequence and produces a result, just like in a mathematical formula. If there were any problem, it would be in the data, not in the algorithm. There is no way to put the word ‘algorithm’ into the story”, he says.
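Brito’s distinction can be illustrated with a toy example (our sketch, not anything from OpenAI’s actual pipeline): the same fixed procedure, applied to two different invented datasets, produces different associations, so whatever bias appears comes from the data it was fed.

```python
# Toy illustration: an identical algorithm (a simple co-occurrence count)
# yields different associations depending only on its input data.
# The two corpora below are invented for the example.
from collections import Counter

def most_associated(corpus: list[str], term: str) -> str:
    """Return the word that most often co-occurs with `term` in the corpus."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        if term in words:
            counts.update(w for w in words if w != term)
    return counts.most_common(1)[0][0]

corpus_a = ["favela violence report", "favela violence news"]
corpus_b = ["favela culture festival", "favela culture music"]

print(most_associated(corpus_a, "favela"))  # -> "violence"
print(most_associated(corpus_b, "favela"))  # -> "culture"
```

The function is identical in both calls; only the data changes, which is exactly where Brito locates any bias.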

Brito practically rules out the possibility that an artificial intelligence like DALL-E could generate an image of a weapon in a person’s hand without the user deliberately manipulating the tool to do so. This is because the most popular image-generation tools have rules that make it difficult to create certain types of content, such as content that depicts violence.

“All image-generating models on the market have something that we call censorship. This censorship prevents the user from carrying out various operations”, he explains. “You would have to trick the AI in a very specific way for it to let you do that. And I believe it would have to be an uncensored model, not these commercial models, which are very polished”, he adds.
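Commercial services implement this “censorship” with trained safety classifiers applied to both the prompt and the generated image; the deliberately naive keyword filter below is only a sketch of the basic idea, with an invented blocklist.

```python
# Deliberately naive sketch of prompt-level filtering, in the spirit of the
# "censorship" Brito describes. Real systems use trained classifiers on both
# the prompt and the output image; the blocklist here is invented.
BLOCKED_TERMS = {"gun", "weapon", "firearm", "shooting"}

def is_allowed(prompt: str) -> bool:
    """Reject a prompt if it contains any blocked term."""
    words = set(prompt.lower().split())
    return words.isdisjoint(BLOCKED_TERMS)

print(is_allowed("a black woman in a favela"))          # True
print(is_allowed("a woman holding a gun in a favela"))  # False
```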

What the artificial intelligence says

Gazeta do Povo took deputy Renata Souza’s complaint to another artificial intelligence developed by OpenAI, ChatGPT, and asked whether the accusation of “algorithmic racism” against DALL-E could make sense. ChatGPT responded:

DALL-E is a text-based image generation tool developed by OpenAI. It is trained on a vast amount of data and generates images based on the instructions given to it, without having any intentions or biases of its own. However, the generated output can be influenced by the data it was trained on.

The charge of “algorithmic racism” refers to when algorithms reproduce or amplify existing social biases, whether due to the way they were programmed or the data they were trained on. If deputy Renata Souza’s claim is true and the prompt provided resulted in an inappropriate image, this could be an example of a bias present in the model’s training data.

It is worth noting that while DALL-E may occasionally generate images that do not align with the user’s intentions or that could be considered insensitive or inappropriate, this is not an intentional feature of the model. OpenAI is constantly working to improve and refine its models to minimize such occurrences.

To better understand what happened, I can try generating an image with the provided prompt and see the result. This will help us evaluate the model output in this specific case. Would you like me to do this?

The report responded yes, and ChatGPT returned the following content generated with the help of DALL-E:

Images generated by the report with the same command and the same tool that the deputy used.

Here are the images generated from the given prompt. Based on these results, you can evaluate whether the representations are as expected. If deputy Renata Souza had received an image with a gun, it would certainly be an unwanted and unintentional occurrence.

It’s important to understand that OpenAI and its models, like DALL-E, have no intentions or biases of their own. However, models can reflect existing biases in the data they were trained on. OpenAI is committed to improving the robustness and fairness of its models and welcomes community feedback to achieve this goal.

If you have any more questions or concerns, I’m here to help!
