Stuart Russell: Nothing stops the creation of end-of-the-world AI – 04/14/2024 – Tech

Stuart Russell, professor of computer science at the University of California, Berkeley and author of “Human Compatible” (released in Brazil as “Artificial Intelligence in Our Favor”), is not a big fan of ChatGPT.

Not because it might take his job at one of the most renowned universities in the US, or because it will destroy society as we know it. It’s because, contrary to what the hype around generative AI suggests, he doesn’t consider it very smart.

“It does interesting things, but it seems to lack strong reasoning and planning capabilities, the ability to reflect on its own operations, its own knowledge,” he said in a videoconference interview with Folha.

What really concerns the researcher is the control we will exercise over systems that don’t even exist yet.

These are AGIs (artificial general intelligences): AIs that will be capable of doing everything a human being can do, and probably better. For him, ensuring that these powerful machines remain under our control is what will determine whether we continue to exist as a species. No pressure.

That’s why Russell was one of the signatories of the letter calling for a halt to advanced artificial intelligence research. Elon Musk, Apple co-founder Steve Wozniak, and writer Yuval Noah Harari also signed. Sam Altman, CEO of OpenAI, the company behind ChatGPT, did not.

Russell argues that the letter’s publication, in March of last year, is what made the world more aware of the dangers of AI.

The Berkeley professor is one of the speakers in the next Frontiers of Thought conference series. He will lecture in Brazil on April 30 in Porto Alegre and on May 2 in São Paulo.

A year ago, you were one of the signatories of a letter calling for a pause of at least six months in advanced AI research. What has changed since its publication?
Almost everything has changed. It’s interesting, because when the letter was released, many people said no one would pay attention. But in fact, in the following six months, no systems more powerful than GPT-4 [the most advanced version of the engine that runs ChatGPT] were announced. And the world basically woke up.

There were emergency meetings at the White House and at the UN. China announced very strict regulations for AI systems. Geoffrey Hinton resigned from his position at Google to voice his concerns. The United Kingdom completely changed its position. If the pace of work by people in the field is any measure of progress, then there has been enormous progress worldwide, both in understanding the issue and in the willingness to deal with it.

OpenAI may release GPT-5 later this year, so the company doesn’t seem to agree much with the letter.
Well, the letter asked for six months, and it’s been a year. But I think these companies feel they are in a race. Under current law, there is nothing stopping them from building very large systems. In fact, nothing stops them from building a system that will destroy the world.

They feel it is better to build this system before another company does. And they seem to recognize that there is indeed a risk of it being the end of the world. But so far, it doesn’t seem to have crossed their minds to simply stop. Sam Altman has already said that he will build AGI and then figure out how to make it safe. This is crazy.

AI research today seems to be concentrated in big tech. OpenAI is funded by Microsoft and competes directly with Google. Isn’t this concentration harmful at such an early stage of the sector?
I don’t know that this is a very concentrated market. Besides the big companies, there are some very well-funded startups building competing systems. The cost of entry is significant if you want your system to scale to millions of users, but I don’t think the barriers are huge. What does worry me is seeing Microsoft absorb one of these startups, Inflection. That is not healthy.

Did ChatGPT launch at the right time, in late 2022?
I can understand the economic reason for doing it: they felt they would get ahead of other companies. It had the effect of giving millions of people a taste of what it would be like to have real AI available.

It was also a shock to the world. It enabled conversations with heads of state and politicians about the risks and impact of AI. In that sense, they did the world a favor.

On the other hand, we have seen the many ways ChatGPT fails, giving meaningless answers, making things up. I think they treated humanity as millions of guinea pigs for a product.

I wish companies would make the effort to understand how their systems work and be able to say they control them, to ensure they don’t do unacceptable things. But they’re not doing that. And I think the only way to make them do it is through regulation.

So ChatGPT isn’t real AI?
It does many interesting things, but it seems to lack strong reasoning and planning capabilities, the ability to reflect on its own operations, its own knowledge.

If you look at AlphaGo, which beat the world Go champion, it has nothing to do with ChatGPT. AlphaGo is a very classic AI system that reasons about possible future game states. It’s a basic design that dates back to the 1950s.
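For illustration, here is a minimal sketch of that classic lookahead idea: plain minimax on a toy take-away game. This toy example is our own, not AlphaGo’s actual algorithm, which pairs far more sophisticated tree search with learned neural-network evaluations; the shared core idea is reasoning over possible future game states.

```python
# Minimal sketch of classic game-tree search: plain minimax on a toy
# take-away game (players alternately remove 1-3 stones; whoever takes
# the last stone wins). Hypothetical illustration, not AlphaGo itself.

def minimax(stones, maximizing):
    """Score a position by exhaustively looking ahead over future states."""
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else 1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    return max(scores) if maximizing else min(scores)

print(minimax(5, True))   # 1: the player to move can force a win from 5 stones
```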

But with ChatGPT we have no idea what’s going on. It pretends to play chess and often appears to make good moves, but then it will suddenly make a move that isn’t even legal. And that suggests it was never actually playing chess. It’s a mirage.

So AlphaGo is smarter, even though ChatGPT was trained on terabytes of data from across the internet?
This just illustrates the need for other forms of computation, beyond the kind where you simply feed an input into a network and get an output. ChatGPT can’t sit down and think something through. It works like a circuit: the signal goes in, passes through, and comes out.
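To make the circuit metaphor concrete, here is a minimal sketch (the toy network is a hypothetical stand-in, nothing like GPT-4’s actual architecture): the input flows through the layers exactly once, and nothing in the computation lets the model pause, loop, or deliberate.

```python
import torch
import torch.nn as nn

# A toy feedforward "circuit" (hypothetical, for illustration only).
net = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

x = torch.randn(1, 16)   # the signal enters...
y = net(x)               # ...passes through each layer exactly once...
print(y)                 # ...and leaves: a fixed amount of computation
                         # per input, with no step for reflection.
```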

Could we improve ChatGPT so that it can reason and plan reliably? Or do we build hybrids that use ChatGPT as just one component? Or do we need some entirely new design that has nothing to do with either?

How do you define AGI?
In general terms, AI systems that can quickly learn to perform any task at a human or superhuman level. This would exceed human capabilities in every dimension.

And are AIs like ChatGPT close to AGI?
We have evidence that these large transformer models are learning something interesting. They’re not just searching their databases for similar phrases and responding. It’s a complex circuit.

By training it to predict the next word, you are forcing it to develop at least some of the internal structures that represent the world and some forms of reasoning that we don’t fully understand.
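As a rough illustration of that training objective, here is a minimal next-token prediction step in PyTorch. The tiny model and random data are hypothetical placeholders; real systems use deep transformer stacks trained on vast text corpora.

```python
# Minimal sketch of next-token prediction (hypothetical toy model, not GPT-4).
import torch
import torch.nn as nn

vocab_size, embed_dim = 1000, 64

# A deliberately tiny "language model": embed each token, map back to vocab logits.
model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),
)

tokens = torch.randint(0, vocab_size, (8, 32))   # a fake batch of token sequences
inputs, targets = tokens[:, :-1], tokens[:, 1:]  # each target is the *next* token

logits = model(inputs)                           # (batch, seq-1, vocab)
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size), targets.reshape(-1)
)
loss.backward()  # gradients nudge the model toward better next-token guesses
```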

The big problem is that we have no idea what is happening inside GPT-4. And they are producing GPT-5. Their solution is to scale up and add more data. I don’t think it will work.

By the time they get to GPT-5, they will have used virtually all the text that exists in the universe. After that there is no more training data. And if the system doesn’t show improved capabilities, it could be a sign that this line of research has hit a ceiling.

I don’t know what impact this will have on investments, but I assume some people will be disappointed and may stop investing.

Why do you say we don’t understand what’s behind GPT?
The people who built it don’t understand it either. There are trillions of parameters; understanding what happens inside is very hard for us. It may not be understandable at all. It may be carrying out processes that are foreign to human thought.

Will the arrival of AGI immediately lead us into a dystopian scenario, like a Skynet from “The Terminator” taking over?
The Skynet story, like that of many other films, involves the machine becoming conscious and then deciding it hates the human race. But in reality, no one working on AI safety worries about that, because consciousness has nothing to do with the problem.

What matters is whether the system is competent, whether it is good at acting to achieve its goals. If you play against the best chess program at the highest level, you won’t stand a chance. And why is that? It’s not because it’s conscious; it’s because it’s better than you.

Take that idea and extend it to the entire world, assuming a system that is simply more competent than the human race at achieving its goals. If those goals are not aligned with what humans want the future to be, we have a problem.

How can we forever maintain power over entities that are more powerful than ourselves? That’s the question we need to ask.

Are we close to achieving it?
I think we are further away than some people believe. Some very renowned colleagues of mine, like Geoffrey Hinton, one of the main pioneers of the field, believe we will get there within five years. I think we still need major discoveries.

And I don’t think AGI will come simply from making systems bigger. I think we need more conceptual advances, which have been happening rapidly in recent years. So I’m not sure, but I think it will take a little longer than five years.

Can we be optimistic about the future of AI?
Well, there are two types of optimism. There is optimism that AI will make a lot of money, produce a lot of profits, and solve many important problems in the world. And then there is optimism that we will continue to exist as a species.

I’m not sure if optimism is the right word, because we have to decide how we’re going to proceed. And right now we are moving in the wrong direction.

We are building increasingly powerful systems that we neither understand nor control. We have to solve the control problem before we create AGI. Governments should require companies to ensure their systems behave properly.


X-ray | Stuart Russell

One of the leading figures in artificial intelligence, he is a professor of computer science at the University of California, Berkeley and author of “Human Compatible” (released in Brazil as “Artificial Intelligence in Our Favor”). He was vice-chair of the World Economic Forum’s AI and Robotics Council and served as a UN adviser on arms control.
