Racism: ‘imprisonment due to technological error is unacceptable’ – 12/12/2023 – Tech

On her first visit to Brazil, Anita Allen, 70, professor of philosophy and law at the University of Pennsylvania, recalled the case of Randal Reid, 28, arrested because of a facial recognition error, to argue that hypersurveillance generates unjustifiable abuse.

The first black woman to receive doctorates in both philosophy (University of Michigan) and law (Harvard), she is one of the world’s leading privacy theorists. She has written more than 20 books and, in 2010, was appointed by then US President Barack Obama to the Presidential Commission for the Study of Bioethical Issues.

In the US, the right to privacy is what guaranteed legal access to abortion in 1973, following a Supreme Court decision holding that the state did not have the right to violate women’s privacy. This understanding was overturned in 2022 by the court’s current composition.

For Allen, it is necessary to guarantee special privacy protection for vulnerable groups, such as black people, women and the LGBTQIA+ community.

The American intellectual came to São Paulo at the end of November to participate in the Data Privacy Global Conference, an event on data protection.

She says she was surprised to hear that Brazil has problems similar to those in the United States. “Brazil sells the image of having no prejudice and has strong data protection legislation, but, at the same time, pursues surveillance projects such as Smart Sampa, in São Paulo [a monitoring project with facial recognition that will have at least 20,000 cameras distributed across the city].”

You mentioned the first decision by an American court in favor of privacy, in 1905, which compared the violation of privacy to slavery. Can you elaborate on this analogy?
In this 1905 case, brought by Paolo Pavesich against the New England Life Insurance Company, the court concluded that invasions of privacy are like slavery, like depriving someone of a very important kind of freedom.

If someone cannot choose whether their face, their voice, or their name is used without their permission, that person is like a slave, because that is what slavery was: not being able to live your own private life, having other people use you for their purposes. Invasions of privacy often involve using other people for our purposes, because we want to sell something, to make a profit. In this light, people become objects. That is the problem.

Another important legal milestone came in the 1820s: the State v. Mann case in North Carolina, in which an enslaved woman who had been hired out to another man was shot by him. He was convicted but appealed, and the court held that “no one can be criminally prosecuted for violating a slave’s rights or harming a slave because doing so would interfere with the privacy of the master-slave relationship.”

Thus, privacy was also constructed in the United States to shield threats to enslaved people’s well-being from any criticism.

Privacy can also be invoked to justify inaction. Hannah Arendt used this argument against the inclusion of black children in previously white-only schools, right?
Yes, Arendt argued that there should be no interference with family privacy, and that meant that public schools could not be opened to black children. Privacy is an important concept, but it has historically been used for discriminatory purposes, and we cannot forget that.

In the case of public policies that violate privacy, such as facial recognition, it is often argued that they serve the public good, even at the expense of individual freedoms. How can one counter that argument?
Public health and security arguments are often used to justify intrusions into data privacy. Today, we talk a lot about police use of force and facial recognition as two domains where privacy appears to be violated in the name of public safety.

The same goes for public health, where there has been a huge shift in the United States in how we think about health and genetic privacy. In the 1990s, it was practically dogma: no one should give up their health data, protected by broad privacy laws.

But now, because of technology, and because the scientific community has discovered that it is very valuable to have as much health data as possible, people are encouraged to give up their health data and share their genetic data.

Thus, it would be possible to find new cures, new medicines, and new therapies. But the costs are enormous and are downplayed by the side that benefits. The immediate benefits go to big data companies, health data analysis companies, technology companies, and governments; for ordinary people, not so much.

In Brazil, there are difficulties in implementing the already established legal framework, which is good on paper. Experts attribute this, in part, to a permissive culture regarding privacy. Is more access to information and education needed?
We as a society, and our governments, must value privacy and enact laws that protect it, even if people don’t care. When it comes to children, it is very common to protect their privacy even if they don’t care; we protect them on the internet because we don’t want them to be harmed.

I’ve extended this concept of protecting privacy not just to children but to all of us, because all of us, I believe, have tremendous things at stake that we may not subjectively appreciate or care about. Privacy is a profoundly important fundamental good on which our entire lives are based. Whether we want to call it a human right, a civil right, or a natural right, it is very important to give people choice.

But people accept giving up their data to social networks in exchange for services.
It is not necessary to prohibit having an account on a social network. But it is necessary to regulate Facebook, TikTok, Instagram, WhatsApp, and the like to ensure that people who use these applications are protected whether they choose to be or not. It is important not to leave this fundamental right to people’s whims.

AI models for decision-making, image identification, and content generation bring to light latent biases in recorded data. Does this increase the urgency of anti-discrimination legislation?
I asked ChatGPT to find five legal cases in which African Americans had filed privacy violation lawsuits against companies or governments. It gave me five cases. So I asked for five more, and it supplied them. I asked for five more, and it gave me those too.

I thought: there are many lawsuits brought by black people over invasions of privacy. The next day, I went to a legal website to check the 15 cases. None were real; everything was just a collection of meaningless words. Generative AI can frustrate lawyers’ ability to do civil rights and social justice-oriented research because it can deliver garbage.

In these technological solutions, there is a general problem involving black and trans people because of a lack of data. This is a sign of a historic lack of care for this part of the population. Are white people considered the standard?
For sure. It would be wonderful if there were more accurate data, because I believe some black people are harmed by the sheer volume of false data, which results in prejudice, stereotyping, exclusion, and even misleading social scoring [as in the case of banks granting credit].

There is an excessive emphasis on criminal histories and credit problems. These types of data are used to undermine people’s efforts to escape poverty: they cannot escape because they are stereotyped as belonging to a class of economically irresponsible or resourceless people. It’s not just a lack of data; there is also a need to use correct data.

Can society change this situation?
We need to be cautiously optimistic about the emergence of civil rights discourse in the United States, because the U.S. and Brazilian experiences have shown that the benefits to marginalized communities from new laws will always be limited.

We can hope for a day when privacy laws help more people, but right now we have to understand that we do not yet have privacy laws, on paper or in practice, that effectively protect the rights of people of color, indigenous people, and other marginalized communities.

You commented that you expected more debate about discrimination in Brazil. Why?
We look at Brazil as a country with fewer racial problems than the United States: a place where people have a variety of backgrounds, some look white, some look black, some look mixed, everyone is equal, and there is not the kind of deep racism found in the US.

This is the idealized image we have of Brazil. There is also an impressive variety of privacy protections in Brazil, constitutional provisions and statutes, including the recent 2020 law [the LGPD].

All of this may suggest that there are fewer race-related privacy issues in Brazil. However, people are deeply concerned about the implications of Smart Sampa, of camera monitoring in general, and of facial recognition in particular. In Brazil, black and indigenous people will face privacy problems different from those of white people, and I was surprised to learn that. I was disappointed to find that Brazil has some of the same problems regarding color and race as the US.


X-ray – Anita Allen, 70

Professor of law and philosophy at the University of Pennsylvania, Anita Allen, 70, is one of the foremost feminist theorists of privacy and received the Philip Quinn Prize from the American Philosophical Association in 2021.
