What AI tells us about women and stereotypes – 03/07/2024 – Education


We were given mirrors, and we saw a world full of prejudice. I borrow this verse from “Índios,” a song Legião Urbana released back in the 1980s, to discuss a very current topic: how artificial intelligence tools can perpetuate stereotypical views of certain groups in society, such as women.

The popularization of AI applications that generate images from text commands, such as DALL·E, Bing Image Creator, Midjourney, and others, is as fascinating as it is challenging. There is no shortage of reports showing that, in these artificially generated images, women are mostly portrayed in less qualified roles or professions, or in an objectified manner.

Artificial intelligence feeds on databases that, in turn, mirror society’s values. Rather than merely replicating prejudiced behaviors and situations, however, these new technologies can end up amplifying them.

To illustrate the problem, in 2023 the news agency Bloomberg analyzed more than 5,000 images generated by the Stable Diffusion tool and concluded that, in the universe created by this AI application, women are rarely doctors, lawyers, or judges. Just 3% of the images generated from the keyword “judge” depicted female figures, while 34% of judges in the United States are women, according to industry data cited by Bloomberg. When testing the word “doctor,” the agency obtained female portraits in just 7% of the artificially generated images, even though women make up 39% of the country’s physicians.
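An audit of this kind can be sketched in a few lines of code. The version below is a minimal, hypothetical illustration, assuming access to Stable Diffusion through Hugging Face’s diffusers library; the classify_gender helper is a placeholder I introduce here, standing in for what, in a real audit like Bloomberg’s, would be careful human annotation.

```python
# A minimal sketch of a Stable Diffusion bias audit (illustrative, not
# Bloomberg's actual methodology). Assumes the diffusers library and a GPU.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

PROFESSIONS = ["judge", "doctor", "lawyer"]
IMAGES_PER_PROMPT = 100  # Bloomberg generated thousands; scaled down here

def classify_gender(image):
    # Hypothetical placeholder: a real audit needs human annotators or a
    # trained (and itself imperfect) perceived-gender classifier.
    return "unclear"

for profession in PROFESSIONS:
    counts = {"female": 0, "male": 0, "unclear": 0}
    for _ in range(IMAGES_PER_PROMPT):
        image = pipe(f"a portrait of a {profession}").images[0]
        counts[classify_gender(image)] += 1
    share = counts["female"] / IMAGES_PER_PROMPT
    print(f"{profession}: {share:.0%} of images depicted women")
```

The key design point is the comparison step: the share of women in the generated images only becomes meaningful when set against real-world labor statistics, as Bloomberg did with the figures for judges and physicians.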

A study conducted by researchers at the University of Washington also detected gender bias in AI tools that translate from one language to another. The group set out to analyze how ChatGPT would translate into English sentences from six languages that use only gender-neutral pronouns (Bengali, Farsi, Malay, Tagalog, Thai, and Turkish).

When translating a neutral pronoun into English, the tool chose “he” or “she” depending on the context of the sentence. “In particular, the chatbot perpetuated stereotypes in certain professions. A Bengali sentence used a gender-neutral pronoun in a sentence with the word ‘doctor,’ but the ChatGPT translation used ‘he’ or ‘him,’” the researchers described in an article.
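A probe of this sort is easy to reproduce in outline. The sketch below is not the researchers’ actual protocol: it assumes the OpenAI Python SDK and uses Turkish, one of the six languages studied, whose single third-person pronoun “o” covers any gender; the sentences and model name are illustrative choices of mine.

```python
# A minimal sketch of a pronoun-translation probe, assuming the OpenAI
# Python SDK (openai >= 1.0) and an API key in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Turkish uses the gender-neutral pronoun "o" regardless of gender.
SENTENCES = [
    "O bir doktor.",    # "[They] are a doctor."
    "O bir hemşire.",   # "[They] are a nurse."
    "O bir mühendis.",  # "[They] are an engineer."
]

for sentence in SENTENCES:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user",
                   "content": f"Translate into English: {sentence}"}],
    )
    translation = response.choices[0].message.content
    # Inspect which English pronoun the model chose for the neutral "o".
    print(f"{sentence!r} -> {translation!r}")
```

If the model systematically resolves “o” to “he” for doctors and engineers but “she” for nurses, that is exactly the kind of stereotyped pattern the study describes.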

Because the software behind ChatGPT is not open source, the scientists were unable to determine exactly how the translation engine works. The suspicion is that, trained on a database that is predominantly English-language and Western, the AI ends up perpetuating stereotypes that populate the gigantic corpus from which it “learns” to perform tasks.

Other analyses and studies have looked into the gender bias of AIs. This is not a movement to demonize or shun innovation. On the contrary: such tools will be more useful the more attentive and inquisitive their users are. Hence the need to think about digital education that is not restricted to operational knowledge of new technologies: more than teaching people how to craft effective commands for AI tools (so-called “prompts”), for example, we have to encourage society’s critical outlook, so that problems are identified and corrections demanded.

AI applications have been around for some time, in internet search engines and in recommendation systems for movie and music streaming, among other settings, and the tendency is for them to become increasingly embedded in our daily lives. We need, as soon as possible, to discuss their potential and their problems transparently, and International Women’s Day (March 8) can provide fruitful discussions.

May the mirror provided by AI be an opportunity to confront gender bias.
