Brazil opened the debate on regulating AI but is losing ground – 09/25/2023 – Tech

Brazil was one of the pioneers in debating the regulation of artificial intelligence. The Chamber of Deputies began discussing a bill in February 2020, even before the European Union, but has yet to approve the legislation. The subject involves the interests of technology companies, governments, and consumers.

Little of the original bill remains: a commission of jurists prepared a substitute text, largely accepted by the Senate, which is now discussing the proposal under its rapporteur, Senator Eduardo Gomes (PL-TO).

“In 20 years in Congress, this is the only topic on which, after a month, the expert I spoke with knows less than they did before,” says Gomes in an interview with Folha.

The speed of innovation in artificial intelligence challenges policymakers. A group of experts convened by the Senate to discuss the topic, for example, drafted the regulatory framework before ChatGPT — an AI that generates convincing text from user instructions — exploded in popularity in 2023.

The proposed legislation currently before Congress adopts a prescriptive approach, with rules for different applications of artificial intelligence — from credit scoring to facial recognition in public security, which would be prohibited. New AI applications not covered by the text would require future deliberation.

The Chamber of Deputies began the debate in 2020 with a bill based on principles — respect for human dignity, transparency in algorithms, and protection of personal data. The bill, however, was criticized for being vague when it passed the chamber in 2021.

Experts point out that care is needed in the debate: excessively restrictive legislation could make it unviable to develop the technology in the country, while overly lax rules leave citizens vulnerable to abuse by companies and the state.

Even Taiwan, the first country to start discussing the issue, back in 2019, has been unable to approve its own regulatory framework for artificial intelligence. The bill that was presented was never taken up by the legislature, and it is not certain it would be approved by lawmakers.

The island is a strategic point in the geography of AI development: the world’s leading chip and semiconductor producer, TSMC, is based in Taiwan and manufactures, among other things, Nvidia’s artificial intelligence cards. Those components drove the US company to a market value of US$1 trillion in 2023.

Local lawmakers prioritized technological development and approved a law promoting cutting-edge technology companies, including those working on AI, which exempts businesses in the sector from certain regulatory and tax rules.

The only country that has approved a regulatory framework for artificial intelligence to date is China, though it did not do so through legislation. The country’s internet regulator drew up measures to govern services that generate text, images, or video, such as ChatGPT and DALL-E.

The rules were written based on studies by the Cyberspace Administration of China, which since 2022 has analyzed whether the features used by local AI platforms are moral, ethical, transparent, and allow for accountability.

The Chinese government, on the other hand, gives small companies opportunities to test innovative technologies without having to fully comply with current regulations — a model known as a regulatory sandbox, aimed at encouraging innovation, according to Luca Belli, law professor and technology specialist at FGV.

The European Union, in turn, chose a risk-based approach in its proposal, similar to that used in consumer protection. The bloc’s parliamentarians hope the measures will come into force by the end of this year.

The bill treats artificial intelligence as a product that must go through evaluation and certification processes before entering the market.

Uses of AI deemed to pose unacceptable risks, for example, are banned: systems aimed at manipulating people or exploiting the vulnerabilities of specific groups, social scoring, and real-time remote biometric identification.

Chile, Colombia, Costa Rica, Israel, Mexico, Panama, the Philippines, and Thailand are also discussing their own regulatory models. This overview was compiled by Kayahan Cantekin, an American specialist in international law, for the United States Library of Congress; the article is from August.

The United States, despite currently leading the artificial intelligence market, is not discussing legislation at the federal level. For now, it is up to the states to create specific rules for artificial intelligence.

In July, President Joe Biden received the sector’s main companies — OpenAI (developer of ChatGPT), Alphabet (owner of Google), Meta (owner of Facebook), Microsoft, and prominent startups — to discuss security measures.

Representatives of the companies made voluntary commitments to the White House to add watermarks to content generated by artificial intelligence, strengthen testing, and take other measures to make the technology more trustworthy.

OpenAI CEO Sam Altman is one of the most active voices in the debate and argues that regulating AI will be vital to developing the technology safely, even though the technology sector has historically been averse to rules. He stresses, however, that there is a risk of getting the dosage wrong when drafting the law and hindering economic development.

“Models that are 10,000 times more powerful than GPT-4, as smart as human civilization, or whatever, probably deserve some regulation [to prevent abuses],” said Altman at an event held this Monday (25th).

“I don’t want to have to wonder, every time I get on a plane, whether it’s going to be safe — I trust that they are quite safe. Regulation has been a positive thing in that regard,” he added.

In total, 21 countries already have laws that mention and regulate artificial intelligence in some specific sense. In Chile, for example, the criminal code criminalizes fraud committed using AI; Sweden has legislation on autonomous cars; and Spain has rules against discriminatory bias in the technology.

Another 13 nations have court decisions that established case law guiding how courts interpret disputes involving artificial intelligence, from copyright to privacy risks. Brazil is not on that list.
