There is no rush to regulate AI, experts say – 10/05/2023 – Tech

If popular wisdom teaches that haste is the enemy of perfection, technology experts say the saying applies with even more force to the regulation of artificial intelligence, despite all the frenzy caused by ChatGPT.

That, at least, was the view of panelists at a session dedicated to the topic at Futurecom, a technology event held from October 3 to 5 in São Paulo.

“The debate is urgent, but there is no urgency to conclude it”, says Dora Kaufman, professor at PUC-SP (Pontifical Catholic University of São Paulo).

Bill 2338/23, currently being processed in the Senate, addresses precisely this topic. The idea is to approve a regulatory framework capable of minimizing the risks posed by the new technology and to establish a body responsible for enforcing the rules and overseeing the sector.

“It is no surprise that there is still no regulatory framework in the Western world. This reflects the difficulties of thinking about a legal model”, says Kaufman.

For her, at least two characteristics of this new technology discourage the rapid approval of a law.

First, advances in this area have been very rapid; legislation voted on one day may be out of date the next. And quite literally: just imagine what would have happened if a regulatory framework had passed Congress on the eve of ChatGPT's launch.

The other issue is the absence of a specific theory of artificial intelligence: the technology evolves through trial and error, based on empirical tests. As a result, anticipating its developments becomes very difficult, if not impossible.

According to the professor, the complexity of AI rules out trying to handle every legal aspect at once, under a single general rule, with one centralized body in charge of supervision.

“AI changes the logic of how the economy works, it is transversal, with sectoral impacts. I don’t see how to have general regulation,” says Kaufman. She cites the banking sector as an example: “No one is better than the Central Bank to oversee AI products in this field.”

Danilo Macedo, leader of government relations and regulatory affairs at IBM, mentions another case: the risks of using an autonomous vehicle in the city are much greater than in the countryside.

“If we tighten control over agribusiness too much, for example, we could lose a competitive edge,” he says.

Macedo considers it crucial that the debate be expanded to include more actors, especially the government, which, he says, is perhaps the largest user of AI in the country, all with the aim of deepening and maturing the discussion.

“There was a certain hysteria, with some people even defending a moratorium on research. I think we have to invest more in research. It's important to look at the risks, but we have to invest so that Brazil can be a producer of solutions for society”, he states.

One such solution was presented by Alexandre Freire, now an advisor at Anatel. He recalled how implementing AI systems at the Federal Supreme Court produced major efficiency gains in the analysis of court cases.

Success stories like this reinforce the experts' point: the risk lies not in AI itself, but in halting innovation in the sector and leaving the country behind on the world stage. Abraão Albino, executive superintendent of Anatel, followed suit.

“Anatel has been dealing with regulation for a long time. If we don't know exactly what we are going to regulate, for whom, how to establish limits and how to enforce those limits, I can guarantee that we will make mistakes”, he says.

He argues that AI needs to serve society, delivering gains in efficiency and competitiveness.

“There is no point in writing a set of rules that will slow down the country. There is no point in creating mechanisms that serve as normative barriers and prevent the evolution of something that we want to evolve”, he adds.

For him, fear is not a good advisor. Rather than regulating out of fear of the harm AI could cause, it is better to gather more evidence.

One path toward that was launched this month by the ANPD (National Data Protection Authority) and presented by Nairane Rabelo, a director of the authority: a pilot regulatory sandbox project.

The model, also used by China, for example, creates a controlled environment so that the public can test technologies associated with artificial intelligence.
