Moraes diverges from Lula and defends a lean law for Big Tech – 03/15/2023 – Politics

Diverging from the discussions within the Lula (PT) government, the president of the TSE (Superior Electoral Court), Minister Alexandre de Moraes, defends a “synthetic”, “lean” regulation of the internet.

While the government is drafting broad legislation along the lines of the Digital Services Act, in force in the European Union since February, Moraes has told interlocutors that an overly complex regulation would not be effective and would have difficulty passing Congress.

According to the minister, platform self-regulation should be combined with concise statutory rules: the government would establish only a few basic standards to guide the companies’ conduct.

The platforms would extend the policies they already apply to content involving pornography, pedophilia and copyright infringement to posts that violate the law protecting the democratic rule of law and to hate speech, and the government would oversee whether the companies are complying with their own guidelines.

In the meetings of the working group created by Moraes with the platforms, the TSE’s secretary-general, José Levi, has also insisted on the need for simple rules that merely expand the moderation policies the companies already adopt.

But Deputy Orlando Silva (PC do B-SP), rapporteur of the fake news bill, which is expected to absorb the Executive’s proposal, is working on a regulation that includes, in addition to transparency and accountability rules, the extension of parliamentary immunity to the online environment, the funding of journalistic content by the companies, rules for online advertising and the creation of a regulatory body.

The Planalto Palace is betting on making platforms civilly liable for content that threatens or calls for institutional rupture, encourages violence to overthrow the government or incites animosity between the Armed Forces and the branches of government.

The text also prohibits content that violates the ECA (Statute of Children and Adolescents) even in the absence of a court order. For this legislation protecting minors under 18, there is already legal precedent for decisions along these lines.

The proposal also establishes that, every six months, the companies would have to publish a “duty of care” report, detailing complaints about allegedly illegal content, removals of posts that violate the law and the mitigation measures adopted. The figures would undergo an independent audit.

Companies would not be punished for missing the occasional piece of illegal content; they would be fined only in cases of widespread non-compliance with the guidelines established by the law.

Finally, the proposal under discussion in the Executive also mandates algorithmic transparency: platforms would have to explain why users receive certain recommendations and how the system that determines what users see, and what they do not, works.

One controversial measure is the requirement of prior consent from users for app tracking and data collection by advertisers. It is similar to the privacy rule Apple adopted on its devices in 2021, which resulted in a drop of about US$10 billion in revenue for apps such as Facebook, Instagram and Twitter.

Moraes believes that regulation should focus on two fronts: on one, holding companies civilly liable for “monetized, boosted or algorithmic” content; on the other, extending the rules of use already applied to copyright infringement, pedophilia and pornography to attacks on democracy and hate speech.

The companies argue, however, that detecting and removing pornography, pedophilia or copyright infringement is very different, since that assessment is objective and easy to make, while judging attacks on democracy and hate speech depends on context. A post or video containing pornography is far easier to identify than one containing an attack on democracy, they contend.

In addition, according to the big techs, holding them responsible for any content displayed by an algorithm is not feasible, since social networks use algorithms for practically everything: the engine determines what each user sees on their timeline, which video is recommended, and how each piece of content is distributed and highlighted.

“If we want to regulate everything about fake news, we will end up not regulating anything,” said Moraes at a conference organized by FGV, IDP and Rede Globo on Monday (13).

“We will start by replicating the platforms’ rules for content with pornography, pedophilia and copyright infringement, and extend this to content with hate speech and attacks on democracy.” He underscored the need for companies to apply their own rules, “otherwise we will be drying ice” (a Brazilian idiom for futile effort).
