Platforms are more vigilant than governments, says Chelsea Manning – 5/8/2023 – Politics


Platforms pull massive amounts of data from the internet to build advertising schemes and keep us on social media, according to Chelsea Manning, the former US Army intelligence analyst known for leaking US secrets to WikiLeaks.

In her view, big technology companies today monitor us more than governments do.

“Governments can also take advantage of these corporations to get data without carrying out an in-depth search,” she told the reporter last week during the Web Summit Rio technology conference.

Today she is a security consultant for Nym Technologies, a project that offers a hard-to-trace model of internet browsing.

Manning was arrested in 2010 and sentenced in 2013 to 35 years in prison under the Espionage Act for acting as a WikiLeaks source and digital activist.

Former US president Barack Obama commuted the sentence in 2017, but US courts ordered her arrest again in 2019, when she refused to testify against Julian Assange, the founder of WikiLeaks. She has been free since 2020.

For Manning, Google’s role in the dispute over the approval of Brazil’s Fake News Bill is an example of these companies’ power of influence. “This editorialized content should carry an advertising label.”

People, however, have to submit to the surveillance of large technology companies or risk being excluded from contemporary life, according to Manning. “This is the system we built, led by Silicon Valley and now followed by Chinese corporations.”

While privacy is a topic close to you, you are a public figure and have even been persecuted after leaking secret US government data. Is it worth giving up your privacy for the cause? Of course, it’s a trade-off. But today I manage to have a lot of privacy. I have a private life.

Can you tell me what measures you take to make that possible? Of course. I’ve relaxed a lot of my precautions lately. Internet security is improving rather than declining. We are building the infrastructure to be able to protect ourselves. I won’t reveal all my measures, for safety reasons. However, something as simple as using a password manager, remembering to change my passwords, and keeping my electronic devices secure is already effective. I think about what I’m sharing, when I’m sharing it, and its context. I still go out and use my Apple Wallet. I lead a pretty typical life compared with someone in a higher-risk group, although I was once in that position.

Are there ways to protect yourself without depending on the choices of big tech? We urgently need tools to tackle the problem of data collection by large Silicon Valley companies, especially after the pandemic, when remote work became the norm. Most of us are forced to use tools like Zoom and Google Workspace or to interact with companies like Salesforce for lack of alternatives. This centralization forces us to relinquish our information, a situation we need to avoid in the future. We must reconsider how the internet has been built over the past two decades.

Is Nym, from the company you work for, one of these tools? Nym works like a VPN or Tor [a browser that leaves no traces], securing connections between two points on the internet. However, unlike a VPN or Tor, Nym sends each packet of information through three layers of protection, randomly selected by an algorithm, along different paths. The message is only reconstructed on the other side. This process protects against those promoting surveillance around the globe and provides an additional layer of security and protection against censorship. Large corporations and governments can run roughshod over privacy, but tools like Nym help keep things safe, especially for those who need an extra layer of protection.
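To make the layered-routing idea concrete, here is a toy sketch of onion-style wrapping: the sender encrypts a packet once per hop over a randomly chosen three-hop path, and each relay peels exactly one layer. This is not Nym’s actual code or protocol; the node names, keys, and use of symmetric Fernet encryption are illustrative assumptions only.

```python
# Toy sketch of layered ("onion"-style) packet wrapping, loosely inspired by the
# three-hop mixnet idea described above. Hypothetical illustration, not Nym's code.
import random
from cryptography.fernet import Fernet

# Pretend network: each node holds its own symmetric key (real mixnets use
# public-key cryptography and per-hop key derivation).
nodes = {name: Fernet(Fernet.generate_key())
         for name in ["mix-a", "mix-b", "mix-c", "mix-d", "mix-e"]}

def wrap_packet(message: bytes, path: list[str]) -> bytes:
    """Encrypt the message once per hop, innermost layer first."""
    packet = message
    for hop in reversed(path):
        packet = nodes[hop].encrypt(packet)
    return packet

def relay(packet: bytes, path: list[str]) -> bytes:
    """Each hop peels exactly one layer; only the final hop sees the plaintext."""
    for hop in path:
        packet = nodes[hop].decrypt(packet)
    return packet

# Randomly choose three hops, echoing the interview's description.
path = random.sample(list(nodes), k=3)
onion = wrap_packet(b"hello, private internet", path)
print(path, relay(onion, path))
```

Each intermediate node only learns which layer it removed, not the full path or the message, which is the property the interview is pointing at.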

Are people already using this technology? We have some people who use Nym, but the technology is not yet widely available, mainly because it makes everyday applications very slow. Despite this, it shows enormous potential. We also have a reputation system to ensure there are no malicious people or entities on the network. If there are, they can be isolated and quarantined in a decentralized way. We want to be ready to guarantee security two or three years from now.

What precautions should those who are exposed take? Some people can be the target of directed attacks. Attacks on users are why encryption has become widely adopted and embedded much more deeply into the backbone of things. Now data collection happens not through protocol communications [such as emails] but through the companies that collect the data we ourselves share in the dystopia we have signed up to.

Regarding spyware like Pegasus, should we take any other specific precautions? These attacks also increasingly focus on the user. With things like Pegasus, the attack is aimed specifically at the target. However, that makes protecting the general public cheaper and more effective, from a surveillance standpoint. But it is very difficult to protect against this kind of malicious software at scale.

So those who are targeted need to look for solutions individually? The tools we build, like cryptography, Nym, and tools like Signal, really do keep you from being the target of this type of attack. But as we’ve seen with Saudi Arabia and leaks from social media companies, attackers still manage to reach people.

Who watches us more, governments or platforms? One of my concerns is how big tech companies and governments handle data. They obtain data in bulk, sucking it from the internet to use within their terms and conditions, building their algorithms, tools, and advertising schemes and increasing watch time. Corporations are at the top of the surveillance hierarchy. In fact, governments often take advantage of corporations: they simply want the data without performing a diligent search.

The New York Times magazine recently reported on a case in which the CIA had access to the Apple cloud data of a Chinese spy in an operation against patent theft. Is this concerning in terms of privacy? More than data collection itself, I am concerned with what information can be obtained about us without our even sharing it. A good example is TikTok. It’s not because it’s owned by a Chinese company, ByteDance. It’s more about the algorithm they use and how they gather information about us. Based on our interactions with information presented in the form of video, they can determine a lot about us: our age, location, gender, languages and interests, all without us filling in a single field. It’s not just about demographic data, it’s also about predictive data. It is essential to help people understand this situation, what I call the “Facebook syndrome”.

And does the general public know how exposed they are? The public is aware but feels it has no choice but to participate. This is the system we have built, led by Silicon Valley and followed by major Chinese corporations within their own regions. It’s hard for people to escape it. As a security researcher with 20 years of experience, I see this as a big challenge.

How can an ordinary person with no security knowledge protect themselves? Tools like Signal, which you can use without knowing anything about cryptography, make you safer and more secure. Encryption should be built into the infrastructure of the internet by default. That way, we don’t have to choose. It’s like fixing the roads: we can’t just choose a different road to drive on; we need to mend the roads themselves. Otherwise it’s like trying to carve out a safe space while you’re standing in the middle of the storm.

Having control over this personal information serves to intimidate other people, correct? Having this kind of information is power and serves as a deterrent, but we’re also in an interesting age where being able to verify and authenticate your information is what really gives you power. With so much disinformation and misinformation circulating, and with propaganda so cheap to produce, it’s going to get more and more expensive to identify verifiable information. Heading into the 2030s, how do we verify information at the source? How do we verify that what we are seeing is a real video? How do we know a conversation is real in a world where more and more actors can generate deepfakes [fake videos generated by artificial intelligence] cheaply? It doesn’t take much to convince people; human beings are very bad at discerning fact from fiction.
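One common approach to “verifying information at the source” is for the publisher to sign content with a private key so anyone can check it against the publisher’s public key. The sketch below is a hedged illustration of that general idea, not a specific content-authenticity standard and not something discussed in the interview; the key names and workflow are assumptions.

```python
# Minimal sketch of source verification via digital signatures (illustrative only).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Publisher side: generate a key pair and sign the content.
publisher_key = Ed25519PrivateKey.generate()
content = b"Video released by newsroom X on 2023-05-08"
signature = publisher_key.sign(content)

# Consumer side: verify against the publisher's public key.
public_key = publisher_key.public_key()
try:
    public_key.verify(signature, content)  # raises InvalidSignature if tampered
    print("Signature valid: content matches what the source published.")
except InvalidSignature:
    print("Signature check failed: content may have been altered or faked.")
```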

These AI models are made to be convincing. They are very convincing and misleading.

Have new generative artificial intelligence models made our data even more vulnerable? We have witnessed the development of large language models trained on data from the internet. This poses an urgent concern for our privacy. This has been a threat for some time, and the situation is far from ideal. The training of these language and visual models may have crossed certain boundaries. Italy, for example, highlighted that OpenAI may have violated ethical boundaries.

Is there a way to prevent these models from accessing legally protected information? We have tools and solutions to work around such privacy issues. For example, we could use a vector database that tags personally identifiable information and keeps it separate from the LLM [large language model]. It is technically feasible to train a large open-source language model on data containing personal details, provided these are properly segregated. However, the rush to develop these models is worrying. The fact that we have been having this debate for over a decade is quite telling.
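The sketch below illustrates the segregation idea Manning describes in the simplest possible terms: detected personally identifiable information is replaced with opaque tokens before text reaches a training corpus, and the raw values are stashed elsewhere. The regexes and the in-memory “pii_store” stand in for a real tagging pipeline and vector database; they are assumptions for illustration, not a production redaction system.

```python
# Minimal sketch: tag PII and keep it out of text destined for LLM training.
import re
import uuid

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

# Token -> original value, kept separate from the training data.
pii_store: dict[str, str] = {}

def segregate_pii(text: str) -> str:
    """Replace detected PII with opaque tokens and stash the originals elsewhere."""
    def _replace(kind: str, match: re.Match) -> str:
        token = f"<{kind}:{uuid.uuid4().hex[:8]}>"
        pii_store[token] = match.group(0)
        return token

    for kind, pattern in PII_PATTERNS.items():
        text = pattern.sub(lambda m, k=kind: _replace(k, m), text)
    return text

sample = "Contact Chelsea at press@example.org or +1 202 555 0182 about the talk."
print(segregate_pii(sample))   # tokenized text could go to a training corpus
print(pii_store)               # raw PII stays in a separate, access-controlled store
```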

Big tech companies have already trained different artificial intelligence models, right? We see five major dominant language models: Meta’s LLaMA, Google’s Bard and T5, OpenAI’s GPTs, and an open-source model. Given the current trend, we can expect many more to emerge in the coming years. However, there is a limit to the publicly available data to train these models, which raises privacy concerns. If companies want to acquire more data, they may have to resort to using personal information, direct messages, and other metadata from personal activities. This is a line we must not cross.


X-RAY | CHELSEA MANNING, 35

She is a consultant for the cybersecurity firm Nym Technologies, a whistleblower who leaked documents to WikiLeaks, a former US Army soldier, and an internet privacy researcher. She was arrested in 2010 for leaking US state secrets.

The reporter traveled to Web Summit Rio at the invitation of Stone.
