Bill aims to prevent manipulation and surveillance

While the so-called Fake News Bill focuses more on removing content than on regulating the Big Techs themselves, which was the initial goal, another bill seeks to establish a legal framework to prevent abuses in the use of Artificial Intelligence (AI) in Brazil, whether by the platforms themselves or by other technology companies.

The text of Bill 2.338/2023, presented by the President of the Senate, Rodrigo Pacheco (PSD-MG), was prepared by a commission of jurists and specialists in civil and digital law, coordinated by Minister Ricardo Villas Bôas Cueva, of the Superior Court of Justice (STJ). The proposal creates guidelines for the use of AI and defines the responsibilities of the agents involved in cases where automated systems harm individuals or legal entities.

The proposal prohibits, for example, the implementation and use of artificial intelligence systems that employ techniques encouraging people “to behave in a way that is harmful or dangerous to their health or safety”, as well as excesses committed by public authorities. The text also calls for special care with user data classified as “sensitive”, such as geographic origin, race, color or ethnicity, gender, sexual orientation, socioeconomic class, age, disability, religion or political opinions.

A controversial point of the proposal is that it leaves to the Executive Branch the definition of the competent authority that will oversee this activity. Another point under discussion is the level of the sanctions: in case of violation of the rules, the text suggests a fine of up to R$ 50 million for individuals or 2% of revenue for companies.

The proposal also recalls that AI platforms need to comply with the General Data Protection Law (LGPD) to function in the country. This means that they cannot use user data for purposes other than pre-defined and authorized ones.

For Professor Luca Belli, coordinator of the Center for Technology and Society (CTS) at FGV Direito Rio, the regulation of Artificial Intelligence is necessary and needs to be defined “quickly and effectively”, given the advances of new technologies such as ChatGPT.

“It is necessary to draw up some regulation very quickly, because we are in a situation of huge market incentive to rapidly roll out new artificial intelligence tools that capture data and interact with consumers”, says Belli.

Lawyer Jessica Mequilaine, a consultant in privacy, data protection and new technologies, also reinforces the need for regulation. “It is necessary to regulate the information that is stored and the ethical and transparent way in which it will be used, in addition to the other contexts and situations in which these applications will be used, including issues related to market competition”, she explains.

Concerns about AI

Artificial Intelligence is already present in everyday tasks and raises ethical questions. According to Mequilaine, this technology “already impacts our daily lives in countless ways, ranging from choosing a Netflix series to carrying out medical exams”.

Among the concerns about AI, the consultant in new technologies mentions its “use by malicious actors to carry out scams, manipulate behavior, instigate violence and even induce suicide, as has already occurred in some cases, in addition to the questions involving algorithmic bias”.

“It is not possible to say that AI should not be regulated; on the contrary, it is necessary to reflect on what the next steps for society are and what risks we are willing to take. It is important that there are specific rules on AI dealing with transparency, responsibility, privacy and non-discrimination, in addition to constant monitoring of these systems”, explains Mequilaine.

Brazilian Artificial Intelligence Strategy

With the advance of Artificial Intelligence, the Ministry of Science, Technology and Innovation presented, in April 2021, the Brazilian Artificial Intelligence Strategy (Ebia). The measure gave rise to Bill 21/2020, the legal framework for AI in Brazil, which has already been approved by the Chamber of Deputies and has been awaiting analysis by the Federal Senate since October 2021.

The strategy determines that the technology should be used with a focus on actions that benefit all citizens, stimulating research, innovation and the solution of concrete problems in the country.

For Belli, however, the strategy drawn up in 2020 “is very inefficient and not strategic”, offering only a list of ideas about what AI should be or how the population would benefit. “It does not have a strategic element or define who would be responsible for regulation”, he says.

The proposal presented by Senator Rodrigo Pacheco replaces previous bills on the subject. It must now be analyzed by the Senate’s thematic committees, especially the Committee on Science, Technology, Innovation, Communication and Informatics (CCTI). It is also likely that a special committee will be created to discuss the topic.

For Mequilaine, the AI legal framework bill “seeks to guarantee a correct classification of risks, maintain transparency and reduce the discriminatory biases of a system; after all, technology should serve man and not the other way around”.

“Companies would be subject to different legal classifications when the laws are not complied with. In the AI legal framework, the logic is very similar to that of the LGPD, in which agents will have minimum requirements, such as mapping and risk management, and will answer to a central authority”, says the lawyer.

LGPD is ineffective

In the assessment of technology experts, the General Data Protection Law (LGPD), which came into force in September 2020, has not yet been enough to protect personal data. The law emerged to protect the privacy of Brazilians and regulate the use of information by companies, in response to numerous cases of leaks, sale and inappropriate use of personal data.

The law sets out a series of standards for companies and organizations to comply with, ensuring transparency and security when storing, processing and sharing someone’s personal data, whether in physical or online media. Among the rights guaranteed by the LGPD are:

  • the right to request that your personal data be deleted by a given company;
  • the right to revoke consent to data sharing;
  • the right to transfer your data to another service provider;
  • the right to accept or refuse to share your data with any company.

According to Belli, “what is lacking is a more robust, efficient and effective inspection and implementation of the law”. He explains that the LGPD’s clauses are “vague and need to be better explained”. “We are in a situation where the practice of the law is unrealistic. It is a response to data protection, but we need more resources for this work to be done efficiently”, says the specialist.
