Global South needs to protect its data, says UN envoy – 11/21/2023 – Tech

The amplification of biases and prejudices in hiring and judicial decisions, disinformation growing in scale and sophistication, radical changes in the job market, the concentration of power in a few countries and Big Tech companies, the use of the technology for military purposes – these are some of the aspects of artificial intelligence that urgently need regulation, according to Amandeep Singh Gill, United Nations special envoy for technology.

In early November, Gill coordinated the launch of a 38-member UN advisory body tasked with proposing guidelines for artificial intelligence governance and, eventually, a global agency that could be similar to the International Atomic Energy Agency.

He warns that countries in the Global South need to act quickly to avoid losing sovereignty over their own data.

“I believe that large developing countries like India, Indonesia, Brazil and South Africa have a significant opportunity, but they need to act quickly to create a large-scale digital public infrastructure, as India has done with Aadhaar (its biometric identification system) and as Brazil is doing with PIX. It is necessary to have an inclusive innovation space, with data protection structures, that gives citizens the confidence to grant access to their information,” he told Folha.

The body includes a Brazilian representative: Estela Aranha, Secretary of Digital Rights at the Ministry of Justice.

Which aspects of artificial intelligence most urgently need regulation?

I see five main sources of concern and one opportunity. The first is the amplification of biases, discrimination and exclusion. Certain groups, such as women and indigenous people, already suffer exclusion in the analogue world, and artificial intelligence can amplify this. AI can amplify biases and harm certain groups, for example, when analyzing parole decisions in the judicial system, in job selection and in school admissions.

Another danger is the potential for AI to increase the scale of disinformation. It is not just the quantity and ease of creating synthetic content with AI, which lowers the cost of bots and makes disinformation more personalized and sophisticated. AI can cause a radical change in our perception of reality. So much of everything we see, hear and interact with will be mediated by algorithms that this will add many layers of opacity, to the point where we won’t know what is true and what is manipulated. This can be used by authoritarian governments, by surveillance capitalism and corporations, or by a combination of the two.

The third is the transformation of the job market and the economy, and how AI can affect inequality. Our institutions are not prepared to deal with these changes. The previous generation suffered from globalization, and this led to populism in the USA. Now it could be much worse.

The fourth concern is the concentration of power, technology and information. There is a concentration in a few locations, mainly the United States and China, where 90% of data centers and companies with high market value are located. What does this mean in terms of opportunities for the rest of the world? It is as if we had a new food chain, with these big predators at the top and the rest as grass or small animals. All of this has implications for digital inequality and for the Global South’s aspirations to reach a higher level of development.

And the last concern is war, the use of AI by countries and groups in conflicts. We have always been very afraid of terrorists obtaining nuclear material to make a “dirty bomb” or develop biological weapons, and AI can expand these opportunities. Not to mention cyber weapons, of course.

But there are also opportunities. AI can help us achieve the Sustainable Development Goals. It can accelerate research and development in key areas such as agriculture, food security and protecting the Amazon. That is the case with the sensors placed in the Amazon rainforest: the AI can identify the sound of a chainsaw, even a very faint one, and alert the community. In health, for example, we don’t have enough radiologists in the developing world, and AI-based tools can broaden access. But I would like to warn that there is no AI magic for the SDGs. These things will not spread automatically. Ecosystems need to be created around human capacity, computing resources, data flows and collaboration so that scientists and entrepreneurs everywhere can benefit from AI. Six or seven companies won’t solve the SDG problems, but maybe six or seven million, or 60 million, can.

Do you think that countries in the Global South, such as India and Brazil, are at risk of losing their data sovereignty and of failing to use their data to develop their own native AI?

I believe that large developing countries like India, Indonesia, Brazil and South Africa have a significant opportunity, but they need to act quickly to create a large-scale digital public infrastructure, as India has done with Aadhaar (its biometric identification system) and as Brazil is doing with PIX. But it is necessary to create data flows and space for innovation, because AI does not happen without data. It is not just about infrastructure, as it used to be; it is necessary to have an inclusive innovation space, with data protection structures, that gives citizens the confidence to grant access to their information. This won’t happen automatically, but I’m reasonably optimistic that these big countries will manage it. But what about smaller countries? None of the African countries are in the top 50 in terms of AI capabilities. A much greater effort will be needed, because today the government says: “Ah, I don’t have fiscal space. I don’t have money to pay teachers’ salaries or to carry out immunization campaigns, and you want me to build a large language model of my own and have many PhDs in AI and computing infrastructure?” Perhaps what could work is the diversity aspect. At the end of the day, we all care about our own cultures, our languages, which come with thousands of years of civilization. We don’t want our culture to be usurped by a limited set of data or AI models.

The UN has just launched a consultative body to analyze artificial intelligence regulation. How will it work?

It is a global consultative body, with the power to make non-binding recommendations. These recommendations would be made to member states participating in negotiations on the Global Digital Compact. They will assess whether to incorporate the recommendations into the binding Global Digital Compact, which will be signed at the Summit of the Future in September next year. An interim report must be delivered to the UN Secretary-General, António Guterres. Another objective of the advisory body is to bring together all the different initiatives: to understand what is behind the UK AI Safety Summit, the US executive order, what China is doing, and what Brazil is doing in its parliament, and how to bring all of this together and align it with our universal values, with human rights, with the UN Charter. And third, and most important, is how to institutionalize this. What types of functions would a future AI governance institution need to have? Would it be similar to the IPCC, a scientific assessment function? Or a safety-standards function like the International Civil Aviation Organization? Or a multidimensional function like the International Atomic Energy Agency? These are the questions the international community should consider as it starts to think about an AI governance institution or network of institutions. That is what the advisory body will analyze.

Jurist Tim Wu, a former White House advisor, has just published an essay in the American newspaper The New York Times saying that “social networks were a wolf in sheep’s clothing and AI is more like a wolf dressed as a horseman of the apocalypse”. Do you think we may be exaggerating, because we regret not regulating social media and now we are in a panic?

Regulation is important and must happen at different levels. There is a national regulatory level, where nation states create laws and independent regulatory bodies; there is room for industry action, with codes of conduct, peer reviews and certification schemes; and there is an important layer of international governance, with principles, standards and scientific assessment, so that everyone knows transparently what is happening. We don’t even understand where technology is going today. We have to take certain people at their word: are LLMs really capable of all this? I don’t know. So there is a role for these three layers. What we have to do is bring them together in some kind of agile structure and collaborate. We need to be wise. There is no simple solution. Easy answers should be avoided because they can be misleading.
