Can artificial intelligence outperform human intelligence? 8 questions about technology

In 2022, we heard for the first time a claim, made by a Google engineer, that an artificial intelligence system had become sentient, that is, endowed with sensations or impressions of its own.

More recently, images created with the DALL-E software have gone viral, as has ChatGPT.

Then came the warnings, the fears, the requests for regulation. And the doubts.

For this reason, BBC News Mundo, the BBC’s Spanish-language service, compiled the main questions about artificial intelligence (AI) asked by its readers and consulted an expert who has been working in the field for over 30 years to try to answer them.

The expert is Amparo Alonso Betanzos, professor of Computer Science and Artificial Intelligence at the University of A Coruña, Spain, and adviser to the dean on AI issues. She is also a former president of the Spanish Association of Artificial Intelligence (AEPIA).

Check out her responses below.

How does artificial intelligence work?

It’s hard to say, because there are so many subfields, but there are basically two ways to approach artificial intelligence. One is symbolic AI, the older kind, in which knowledge is obtained from experts in the field; it is much more transparent, but less quantifiable.

The other, the AI we have today, is based on data. To derive knowledge, you feed the system data from a given field; the system learns from this data and extracts patterns. It can then generalize, predict and so on in many areas, from natural language to computer vision and machine learning.

There are models in which this is done with deep learning, using neural networks with many layers that end up learning from that data. But there are other approaches, such as reinforcement learning, that can also be used to learn and derive knowledge for AI.
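
To make this concrete, here is a minimal sketch in Python (assuming the widely used scikit-learn library; the toy data is invented for illustration) of the data-driven approach she describes: feed the system labeled examples, let a multi-layer neural network extract the pattern, and then ask it to generalize to new inputs.

# Minimal sketch of data-driven AI: a small neural network learns a pattern
# from examples and then predicts for inputs it has not seen explicitly paired.
from sklearn.neural_network import MLPClassifier

# Toy training data: each row is an example, each label says which class it belongs to.
X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_train = [0, 1, 1, 0]  # a simple non-linear (XOR-like) pattern hidden in the data

# A network with one hidden layer "learns" the pattern from the data.
model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# Having extracted the pattern, the model can now make predictions.
print(model.predict([[0, 1], [1, 1]]))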

AI “feeds” on data. Where does this information come from?

It depends a lot on the system. If it is a medical expert system, the data comes from large clinical databases on certain types of diseases or certain types of patients. If it is traffic data, available traffic cameras or sensors are used.

Today, the digitization process we are going through is so immense that there are sensors that can extract data from virtually any natural or industrial process. Almost every experience you can think of is digital: your travels, your medical records, your preferences…

For example, when you sit down in front of the television and it recommends what to watch, the suggestion is based on what you have done before on that platform. Often, all of this becomes fodder for AI algorithms.
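
As a rough, hypothetical sketch of that idea (not the actual algorithm any streaming platform uses), a recommender can simply score each title in the catalog by how much it overlaps with what you have already watched:

# Hypothetical, deliberately simplified recommender: score each candidate title
# by how much its tags overlap with the tags of what the viewer already watched.
watched = {
    "Drama A": {"drama", "period", "slow-burn"},
    "Thriller B": {"thriller", "crime", "drama"},
}
catalog = {
    "Crime Doc C": {"crime", "documentary"},
    "Comedy D": {"comedy", "romance"},
    "Drama E": {"drama", "crime", "thriller"},
}

# Build the viewer's taste profile from their viewing history.
profile = set().union(*watched.values())

def score(tags):
    # Fraction of a title's tags that match the viewer's profile.
    return len(tags & profile) / len(tags)

# Recommend catalog titles in order of similarity to past viewing.
for title, tags in sorted(catalog.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(title, round(score(tags), 2))

Real platforms rely on far richer signals and models, but the principle is the same: past behavior is turned into data, and the data drives the suggestion.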

What does AI look like? Many imagine something like a huge computer or a Terminator T100-style machine.

No, not quite… unless the AI programs happen to be embedded in some anthropomorphic-looking robot. It can be a robot vacuum cleaner that moves around the house, or something with a humanoid shape, but it can also simply be software on your computer that listens to you, or a program on your cell phone that recognizes your fingerprint.

It is impossible to put a number on how many systems there are. They are many, and they serve very different purposes, from the television that recommends what to watch to an app that predicts whether vineyards will develop pests. It is very transversal: it can be applied to just about any area imaginable.

And what is the impact of AI on our daily lives, on jobs…?

Many times we are using artificial intelligence and we are not even aware of it.

In the future, we will tend to have more and more AI, because it is being implemented in more and more areas.

Regarding employment, even before the pandemic we saw how the landscape was changing. There are many more jobs affected by automation, not just by AI. We see this in supermarkets, for example, with more and more self-checkout machines replacing staffed checkouts.

This will change the ways of working, especially in automated tasks, and we will have to live with the fact that part of our routine tasks will be done by machines. I give the example of doctors, who 50 years ago worked with almost no instruments and today have many more machines at their disposal.

It will of course affect jobs and the economy, and it is something governments must deal with. We must be careful because, otherwise, it can generate large gaps. And yes, some jobs will be destroyed, but others will be created.

Lately, or so I perceive, we read a lot that AI will be catastrophic. This is creating a certain panic that I think needs to be handled with care. We often focus only on the more tragic side, but AI is a tool that has many good things to offer if handled well.

For example, in recent years we have seen AI’s ability to advance preventive medicine. It can help us in education, where we can adapt teaching much more closely to each student, and it can predict livestock diseases, help fight climate change, make processes more sustainable or manage a store’s stock better.

There are many positive aspects that we must learn to take advantage of and protect ourselves from those that can harm us.

What dangers can AI pose?

One of the risks, for example, is that the system behaves inappropriately and the person in charge fails to detect it because supervision is not strict enough. But that is a human error we are not free from, even with AI.

It is also a profession still very much dominated by men, and it is important to be aware that part of the future will be designed with this technology. How we get there, and what we want that future to look like, matters; that is why the design of these tools requires awareness of biases and everyone’s participation.

But I think it helps people by empowering them to make decisions. Imagine that you are a doctor analyzing a case with many symptoms and doubts. You consult a colleague, in this case an AI, and that reduces your chances of making the wrong decision. It helps, but the final decision is yours. Just as a platform’s algorithm can suggest what to watch, in the end the choice is yours, not the machine’s.

It is true that we are making great progress with AI and that regulation is important.

Can AI be regulated, or is it a futile effort, like trying to dry ice? We have already seen what happened with the internet and the ‘deep web’, for example.

The European Union has been concerned about this for a long time. We’re taking it slow, but there’s a proposal on the table.

Conversations on this topic began in 2018, when a high-level expert group on artificial intelligence was created and produced guidelines for trustworthy AI. At that time, there was already talk of human oversight of AI, and aspects such as sustainability, absence of bias and security were analyzed.

For example, human supervision is one of the basic points contemplated in the European regulations. This means that any artificial intelligence system should always have a human supervisor throughout the process: when the system is put into operation, when data is collected, and in the sectors where it is applied.

We were pioneers in the EU and now we see companies from outside the bloc, from the United States in particular, insisting on the need for this regulation.

It is something that must be done worldwide and we are working on it. The important thing is to take the first step.

Can everything be regulated? The answer is complex because AI is complex and it is clear that there is no such thing as zero risk here or anywhere. For example, we regulate and enforce traffic laws, but that doesn’t prevent accidents.

Global regulation would be desirable, but it is difficult to achieve. Just look at the Kyoto Protocol, for example… Not all countries signed it, and there is no way to force them to do so. Beyond the European Union, it is not easy to convince the other major centers of AI in the world, such as China and the United States, that regulation is necessary.

I think that, beyond the noise made by the press, we should all be concerned, because it is important to regulate this technology and to put constant monitoring of intelligent systems in place.

Lately we’ve seen a lot of headlines and experts saying that AI could lead to the extinction of humanity… Is that right?

It’s hard to say how far artificial intelligence will go, but you always have to have a way to interrupt or shut down the machines.

These systems are designed by people, just as it is people who work with nuclear power. So I think it is important to detect whether there are problems and to define safety and enforcement standards.

But in my opinion, what is happening with AI also happened with cars when they first appeared. At first it was thought that they would be extremely dangerous, that they could kill people and that the speeds they reached could denature the proteins in our bodies. Today we know that this is not the case: we have the technology under control, we have regulations, and so on.

Can AI surpass human intelligence and become conscious?

Almost all AI systems exceed our intelligence, but only within a particular field.

Most of the AIs we have are narrow: capable of a very high level of intelligence in a very specific field. For example, AlphaGo (a system that learned to play the board game Go) can beat the world champion at Go, but it would have to be taught other games, such as chess, before it could win at them.

They can be great for diagnosing a type of cancer, but they don’t work as general practitioners because the knowledge needed is broader.

And as for consciousness… it is possible, so to speak, to model it.

There are robots that can model feelings and they may seem to have real consciousness, but we don’t even know how certain consciousness processes happen in humans, so it’s very complex and vast.

Although there are tools like chatbots, which seem more general because they are based on language, in reality what these machines do is predict the next word of a text. They are very sophisticated search engines, but they are not able to reason deeply, because they are not conscious. It is like a very intelligent trained parrot.
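
As a toy illustration of what "predicting the next word" means (a drastically simplified Python sketch, nothing like the scale or architecture of the models behind these chatbots), one can count which word follows which in a text and then generate by repeatedly choosing the most likely continuation:

# Toy next-word predictor: count word-to-word transitions in a tiny text,
# then generate by repeatedly picking the most frequent continuation.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    # Return the most frequent continuation seen in the data, if any.
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

# Generate a short continuation, one predicted word at a time.
word, generated = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    generated.append(word)
print(" ".join(generated))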
