Michael Osborne: Artificial intelligence also has risks – 02/18/2023 – Mundo


About three weeks ago, two researchers from the University of Oxford were invited by the UK Parliament to speak about the risks of artificial intelligence (AI). The meeting took place after the appearance of ChatGPT, which dazzled people all over the world.

The tool has been touted as a new digital revolution. But at the Palace of Westminster, Professor Michael Osborne and his student Michael Cohen offered some catastrophic examples of what AI could do. If such a system were asked to eradicate cancer, for example, it might conclude that killing all human beings is a valid means of eliminating the disease.

At another point, the pair compared the threat of AI to that of nuclear weapons. In an interview with Folha, Osborne details how that could happen and says that AI could indeed become self-aware.

Could the story of self-sufficient machines dominating the Earth and trying to wipe out the human race, told in science-fiction books and movies, actually come true? We are seeing artificial intelligence become very powerful very quickly, notably in the form of ChatGPT, which demonstrates some degree of reasoning and intelligence. We need to think seriously about what this technology will look like in the future: I'm thinking about the spread of misinformation, possible impacts on jobs, bigoted speech, privacy concerns.

Is AI becoming self-aware? I don't think it's impossible for artificial intelligence to become conscious, though even the most advanced algorithms are far from the sophistication of a human being. But in the long run, I don't see any fundamental obstacle to an AI developing self-awareness and very advanced planning and reasoning capabilities.

Six months ago, Google fired an engineer who said the AI he was working on had gained consciousness. What do you think about this? This was about LaMDA [Language Model for Dialogue Applications]. The first thing to say is that the engineer was incorrect. All of us who work in this field can say that. Language models are a long way from being conscious. They don't understand the world around them the way humans do.

But I'm glad the engineer spoke out. We need to protect whistleblowers like this and create a framework in which people working on these technologies at large, non-transparent companies like Google can state publicly that they have legitimate concerns without being punished for it.

Is the threat of AI worse than that of nuclear weapons? There are common dangers. AI, like nuclear energy, is what we call a dual-use technology: that is, it has civil and military applications.

Nuclear power is an exciting prospect for putting an end to carbon emissions. But it can also be used as a weapon. Likewise, there are huge advantages to AI. But AI is also being used to direct drones to their targets.

Is it possible for AI to do harm without intending to? For an AI to work, we have to give it a goal, to specify what it should do. And, of course, what we say may not be what we really want.

Here's an example that might help: in 1908, The New York Times reported the story of a dog in Paris that lived on the banks of the Seine, rescued a child from the river, and was rewarded with a steak. Everyone was delighted, of course. Then it happened again: the dog pulled another child out of the river and got another steak. And then again. Wow. Eventually an investigation was carried out, and it turned out that the dog was pushing children into the river so that it could get its steak as a reward. If we don't think carefully about the goals we set for an AI, we can end up with some very problematic behavior.

People are in awe of ChatGPT. What will be its impact on everyday life? ChatGPT and similar language models are likely to have the level of impact that search engines [such as Google Search] had.

In a way, GPT can be a competitor to search engines. But it’s not perfect — you can’t always trust the information it gives you, just like you can’t always trust the information returned by your search engine. Over time, we got used to this fact and found ways to use search engines effectively.

You spoke to Parliament about the risks of an arms race. What is that about? I was thinking about the dynamic we are seeing between OpenAI [the company that launched ChatGPT] and Google. Google has publicly acknowledged the threat that ChatGPT poses to its core business: search and the sale of advertising space. The two are in direct competition.

And Google is investing in its own language models. It's an arms race in the sense that there could be damage: Google has said it's recalibrating the level of risk it's willing to take when launching products. As these two powerful tech players clash and compete, there are legitimate concerns about whether safety is being left behind.

What do you propose? I think AI is an essential technology for ensuring human flourishing. But we need to develop it in a safe and responsible way. First, we should think about AI regulation, which needs to be adaptable and flexible. I don't think we'll be able to come up with a single set of rules that governs all possible AI use cases.

There are some places where I think we should simply not use it. Facial recognition for policing is one example, because that technology can easily become biased in really problematic ways.

Is there anything being done? The European Union is developing its own AI regulation, which should be finalized in 2024. But it is true that many nations are in the early stages of thinking about AI.

In 1942, science-fiction writer Isaac Asimov created the Three Laws of Robotics precisely to prevent machines from harming humans. Couldn't we apply them to AI? Unfortunately not. I don't think the three laws are enough (laughs). But they're a really interesting thought exercise for figuring out some of the problem cases we're going to need to solve.


X-ray | Michael Osborne

He is an associate professor of machine learning at the University of Oxford. His work has been applied in contexts ranging from detecting planets in distant solar systems to rerouting autonomous cars around roadworks. He also researches how intelligent algorithms can replace human workers, focusing on the social consequences. He is a co-founder of the technology company Mind Foundry.
