Increasingly, algorithms and artificial intelligence tools affect important aspects of our lives — transparently or without our knowledge. We can choose ChatGPT to write a text, for example, but we have no control over sophisticated systems that select the “best” candidates for a job from piles of resumes, or over databases that “predict” who is most likely to default on a bank loan.
In search of efficiency and scale, many processes have been automated in recent decades. Several of them have undeniable benefits, but they also bring challenges, and even harms, that need to be discussed more widely, as in the case of prejudice and discrimination perpetuated by mathematical models.
Researchers at the University of Chicago and MIT reported more than 20 years ago how automated resume screening systems preferred candidates with names typically attributed to white people in the United States, such as Emily Walsh and Brendan Baker, over those with names more associated with African Americans, such as Lakisha Washington and Jamaal Jones. This is one of the examples narrated by data scientist Cathy O’Neil in her book “Weapons of Math Destruction” (whose title plays on the term “weapons of mass destruction”).
“Data will not disappear. Neither will computers, much less mathematics. Predictive models are, increasingly, the tools we will rely on to run our institutions, allocate our resources and manage our lives,” says the author. “But these models are built not just from data, but also from choices we make about which data to consider — and which not. These choices are not just about logistics, profit and efficiency. They are fundamentally a moral issue.”
It is worth remembering that racism, ableism, homophobia and other forms of prejudice were not born with artificial intelligence. But it is equally important to understand the scale they can reach through these systems.
“Algorithmic racism is a kind of update of structural racism, its spearhead in the era of the datafication of society,” said researcher Tarcízio Silva, author of “Algorithmic Racism: Artificial Intelligence and Discrimination in Digital Networks,” in an interview with the Center for Strategic Studies at Fiocruz at the beginning of this year. “Algorithmic racism is not a question of programming or engineering. More important than lines of code is knowing which power relations and which decisions are enabled by the implementation of a given technology.”
Hence the need to expand the debate on algorithmic literacy and to give more visibility to research groups that discuss the topic through the lens of justice and social equity.
One of these groups is the Center for Critical Race + Digital Studies, which produces research and raises awareness about how race and identity shape and are shaped by digital technologies. Its researchers draw attention to the fact that algorithms are not neutral because they carry values and choices made by their programmers, institutions, culture itself and our history.
They point out that the concept of algorithmic bias can only be understood in light of the social structures of which we are part, in which groups are marginalized because of their color, gender or sexual orientation. And they classify six types of bias, including historical bias, which arises when there is a misalignment between the world as it is and the values or objectives encoded and propagated in a model, and representational bias, in which a population is under- or misrepresented in the data used to train the model.
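Representational bias can be made concrete with a small simulation. The sketch below is a hypothetical illustration (not taken from the Center's research or O'Neil's book): it builds a synthetic hiring dataset in which group "A" supplies 90% of the examples, fits a single global decision threshold, and then measures accuracy per group. Because the model is optimized mostly against the majority group, it serves the minority group noticeably worse. All names and numbers here are invented for the example.

```python
import random

random.seed(42)

def make_data(n, true_cutoff, group):
    # Synthetic applicants: a single "score" and a true hire/no-hire label.
    # Each group's real decision boundary sits at a different cutoff.
    return [(s := random.random(), int(s > true_cutoff), group) for _ in range(n)]

# Representational bias: group "A" dominates the training data 9-to-1,
# and the two groups follow different true decision rules.
data = make_data(900, 0.5, "A") + make_data(100, 0.3, "B")

def fit_threshold(data):
    # "Train" the simplest possible model: pick the one global cutoff
    # that makes the fewest errors on the combined dataset.
    best_t, best_err = 0.0, float("inf")
    for t in (i / 100 for i in range(101)):
        err = sum(int(s > t) != y for s, y, _ in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def accuracy(data, group, t):
    rows = [(s, y) for s, y, g in data if g == group]
    return sum(int(s > t) == y for s, y in rows) / len(rows)

t = fit_threshold(data)
acc_a = accuracy(data, "A", t)
acc_b = accuracy(data, "B", t)
print(f"threshold={t:.2f}  accuracy A={acc_a:.2f}  accuracy B={acc_b:.2f}")
```

The fitted cutoff lands near the majority group's true boundary, so group A is classified almost perfectly while group B absorbs most of the errors, even though nothing in the code mentions race or identity: the skew comes entirely from who is represented in the data.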
“We will not escape the horrors of algorithmic bias until our culture adopts a framework for technological development – an ethos – that confronts historical and current injustices and prioritizes strengthening communities,” they argue.
Therefore, just as anti-racist education must go beyond the commemoration of Black Awareness Day and be part of everyday school life, not limited to projects around November 20th, algorithmic literacy also needs more space in the education of children and adolescents. That is how society can prepare itself to take advantage of technology while combating prejudice and discrimination.