Blaming social networks can lead to mass removal of content, says lawyer – 03/11/2023 – Market


Social networks must update their security policies against extremist content, but it is not up to the government to define what should or should not be taken down, at the risk of curbing freedom of expression.

That is the position of Caitlin Vogus, deputy director at the US Center for Democracy and Technology (CDT), an entity acting as amicus curiae in favor of Google in the case pending before the US Supreme Court over whether platforms and social networks should be held responsible for content published by users.

For Vogus, a change in the legal protection of American companies could have a global impact and cause a mass removal of content, including news reporting and material protected by the Constitution.

What is your position on the Supreme Court cases? In Google’s case, the Supreme Court is looking at Section 230, a federal law in the United States that generally exempts platforms from liability for other people’s posts. The question is whether platforms are also protected when they recommend content from other users. Our position is that Section 230’s protection extends to content recommendations, as this is necessary to protect free expression online.

And what about Twitter? In that case, the court is interpreting a different law, the Anti-Terrorism Act. The question is whether a service that removes some, but not all, terrorist content should be considered to be aiding and abetting terrorism. Our position is that unless the service has actual knowledge that it is specifically aiding a terrorist act, it should not be held liable. Once again, the reason is our concern for users’ freedom of expression. Services must not be given an incentive to remove things like news reporting or anti-radicalization material for fear of being held liable.

The argument is that this would encourage mass content removal? The law says that a service provider is not responsible for what its users say in most cases, which makes platforms much more willing to allow people to speak freely online. If this changes, we think providers will fear liability and react by removing posts excessively, taking a very risk-averse approach. They will remove a lot of content even if it is not harmful, is protected by the Constitution, or is even beneficial, because platforms may not be able to distinguish which content could cause legal problems and will simply pull many posts.

But how do you reconcile the need to combat terrorist threats with freedom of expression? The law does not immunize platforms against federal or state crimes. In Twitter’s case, it is a civil claim for damages. The networks also have a strong incentive from users and advertisers to take down all harmful content, including terrorist content. But the problem is that the technology is not good enough to let them do it perfectly. There will always be mistakes.

The question is, do we want to encourage a regime where networks actually have to take down so much content that it starts to impact things like news sharing, because they want to curb any and all possible terrorist content?

But the internet has changed a lot since the 1990s, when Section 230 was enacted, with the rise of recommendation algorithms. Isn’t it time to update the legislation? I believe Congress was thinking ahead, perhaps surprisingly, and tried to write the law in the most technologically agnostic way possible. They knew that the internet was an emerging technology at the time, about to change many things in society, that had not yet emerged with the power it has today. They were able to look to the future and ask themselves: what do we want the system to be in order to keep promoting free expression online? And they set up a system that was largely successful in allowing free speech online to flourish.

What are the global effects of a change in the Supreme Court’s understanding? Many technology companies are based in the US and therefore subject to the US legal liability regime. A change could affect how they operate around the world. At the same time, we are seeing many countries start to impose their own regulations on these tech companies. In the European Union, for example, the Digital Services Act introduced a new legal regime. There are new laws in South America, India and other places around the world. More and more countries are trying to make their mark on internet regulation.

The debate is very similar in Brazil, as in other countries, with the rise of political extremism. Even if nothing changes in the US, how is the pressure on platforms from these regulations in other countries? Pressure on social networks to take more action against undesirable content on their services will increase and has already increased, and the CDT urges companies to consider international human rights standards when making decisions about their content moderation policies. We want companies to ensure they are constantly updating their policies to respond to new threats around the world, and to be transparent about content moderation and post removals, so that the public knows whether action is being taken and can judge whether it is sufficient.

But our biggest concern is government regulation targeting content that is deemed bad or undesirable. We worry about things like giving public officials the power to silence their critics. Pressure from civil society, the public, academics and other groups is entirely appropriate, and companies need to listen to a wide range of voices among those affected by online content. I am simply worried about giving the government, in any country, too much power to regulate online speech.


X-ray | Caitlin Vogus
Deputy director of the Freedom of Expression Project at the CDT (Center for Democracy and Technology), headquartered in Washington (USA) and Brussels (Belgium). The entity is amicus curiae in favor of Google in the Supreme Court case. A Harvard-trained lawyer, Vogus has built her career at institutions defending freedom of expression and of the press.
