The idea that AI threatens humans is a distraction, says philosopher – 03/29/2024 – Market

“Ethics in Artificial Intelligence” begins with what appears to be a ruse. Its preface was written by ChatGPT, “with a prompt proposed and revised by the author”, which is not exactly what one expects from a book that seeks to discuss the interaction between technology and morality.

“It’s a provocation,” says the author in question, Belgian philosopher Mark Coeckelbergh, in an interview with Folha. “We use GPT so easily today. What will be the effect of this? Is what it is doing actually writing? If so, how does it differ from human writing? How much do we want to give it this role?”

The reference to the program is also a way for the researcher, one of the members of the group of experts who helped create the regulatory framework for artificial intelligence (AI) approved by the European Parliament last week, to show that he is not oblivious to the popularity of the bot.

The work, written in 2020 and now published in Brazil by Ubu, offers a kind of introduction to the main ethical issues raised by the advancement of artificial intelligence.

Among the topics it addresses are, for example, the “moral status” of machines and how this impacts the laws we create to regulate them; algorithmic biases and power relations in the technology market; and how concepts such as transhumanism, that is, the idea that human beings can be improved through science, fit into this debate.

If the book avoids making judgments, serving more as an overview of the different perspectives at play in the area, on one point Coeckelbergh is adamant. The idea that machines are close to taking control is not only illusory when we think of the technology available today, but also dangerous. “It distracts us from the real issues,” he says.

The book was originally published in 2020, when ChatGPT, for example, had not yet been launched. In the text, you state that legislation related to AI is insufficient. How do you assess the situation today?
When I started writing the book, there was no type of regulation in this field; in recent years, there has been a process of transforming ethical principles into practical guidelines. So I would say that today there is much more emphasis on the legal and political aspect of AI, and that more and more people are becoming aware of the topic.

I also think that European legislation is quite advanced compared to regulatory plans elsewhere in the world. That doesn’t mean it’s perfect. Furthermore, although the project has been approved, it has not yet been implemented, which will take some time.

I’m not a lawyer, but I see the law as a kind of technology, capable of modifying behavior. And I still want to see what kind of transformations these guidelines will promote, if companies and governments will actually change the way they work.

It’s a very different problem from questioning what kind of values we want to perpetuate; I think we’re at a different stage today. Still, I think it’s important to continue asking these questions [about ethics].

And how do you see the issue of regulation in the United States, especially after the letter in which several Silicon Valley tech leaders asked for a pause in the development of AI?
The United States is a good example of how different political cultures can impact the discussion on the topic. There, the approach is more liberal: private companies deal with the issue on a case-by-case basis, the keyword is self-regulation.

[Joe] Biden even issued an executive order on AI, but it could be reversed, for example if [Donald] Trump is elected again. In any case, it is a much more limited form of regulation than in Europe, which reflects, of course, the ideas that people have about the role of the State. In this sense, it would be interesting to know what is happening in Brazil.

Speaking of Brazil, we are talking about a society that is a large user of social networks, but in which around 1 in 10 people is functionally illiterate. How can AI interfere in this context?
We know that AI shapes the knowledge ecosystem in a way that expands misinformation and the possibilities for manipulation. If we have people who are neither educated to deal with AI nor able to interpret texts, they can become easy prey: handing over their data without realizing it, or being manipulated with false information.

So we need the basic skills [of reading and text interpretation] and also extra literacy in AI. And it’s true that when we talk about education and AI, we often don’t realize that there is also a prior step that may be missing. In AI ethics, education is absolutely fundamental.

A term that has been circulating to describe what we are experiencing today is “technofeudalism”, in which the “feudal lords” would be the big techs and everyone else the serfs, who pay “cloud rents” for the right to access what these organizations own. Do you agree with the expression? And how do you see the situation of large technology companies, which increasingly monopolize the market?
Without a doubt, the situation is one of a gigantic difference in power between big techs and ordinary citizens, who have practically no influence over what these companies do. We have to click on “accept terms of use”. But that doesn’t exactly constitute an agreement with users, does it?

As for the concept of technofeudalism, I am skeptical of it because I think we still live in a form of capitalism. It is changing, but it is still capitalism. At the same time, the term can serve to draw attention to these enormous differences in power and the fact that something needs to be done about them.

An example: imagine a more left-wing government that wants to do something about these power differences and manages to create a social justice system in its country. The problem is that with these companies, these power differences manifest on a global scale. A challenge then arises: it is very difficult to do something in your country if technologies that are developed elsewhere have so much influence.

There is, therefore, a problem related to the sovereignty and power of democratic politics in the digital age.

One of the book’s main arguments is that focusing too much on the dangers of a “superintelligent” AI distracts us from the dangers of other AIs. Would you explain this idea?
Some, not just big tech CEOs but also philosophers, have warned about existential risks [of AI]. In the book, I argue that this distracts us from real problems because it’s a projection into a distant future, and I’m very skeptical about our ability to predict that distant future.

What is useful to know are the limitations of technologies like ChatGPT, so that users can understand that it is not a “truth machine”. In my opinion, if big techs choose to open up these technologies for general use, they should also point out their limitations. And these companies do talk about the problems with AI, but mostly about what would happen if it took over the world. For me, this is not the main issue.

That said, it is also true that we are increasingly seeing AI being used for military purposes, and there is a risk in that. But it’s not ChatGPT that’s doing this.

The book enumerates a series of ethical issues related to the development of AI. At the same time, it is quite neutral, in the sense that it basically gives an overview of these problems. Of the challenges you cite, which is the most important, in your opinion?
As a philosopher, my goal was basically to explain why we have to keep humans in control of this technology. One of the reasons is the issue of responsibility and accountability: we have to be able to respond to those affected by AI.

The book mentions the idea that AIs themselves should have rights, an argument that you characterize as transhumanist. What is your personal opinion about it?
It’s a transhumanist argument in the sense that, if all these intelligent machines have the same abilities as humans, then we need to grant them rights too. I question this perspective. On the other hand, I always find it interesting to think about the moral status of “non-humans”, because it makes us reflect on how we attribute it.

In general, we look for certain characteristics. For example, we want to see if that being is sentient [capable of having sensations and impressions] or conscious. There was an engineer who said he believed the language model he was working with was sentient [former Google employee Blake Lemoine].

I problematize this, because it is not always easy to know whether that is the case. I don’t even know if you [the reporter] are sentient, because I only have your image on the screen.


X-ray | Mark Coeckelbergh, 48

Professor of philosophy at the University of Vienna, Austria, he is a member of the European Commission’s High Level Expert Group on Artificial Intelligence. He has published more than 15 books in the field of philosophy of technology.
