ChatGPT: the problem is that it looks competent, says expert – 05/19/2023 – Tech

The arrival of ChatGPT drew attention to the field of AI (artificial intelligence) and opened the door to other similar systems in the LLM category (the English acronym for “large language model”).

It is a field of AI that uses the most advanced techniques available to learn and reproduce patterns of human language. These systems are fed billions of texts so they can learn to string words together the way a person would.

In the case of ChatGPT, this system was incorporated into a chat engine. The user asks a question or requests something, and using its LLM, the machine responds.

For Rune Nyrup, a philosopher who specializes in AI for decision-making, convincing-sounding answers can lead to wrong conclusions, giving an impression of competence that the system does not have.

Nyrup is a researcher at the Leverhulme Center for the Future of Intelligence at Cambridge University and holds a PhD from Durham University (both in England).

The expert calls for more transparency so that people understand the flaws of AI systems before using them, and recommends caution in adoption.

One of the uses for ChatGPT is getting guidance for tasks, which can lead down the wrong path. When are these systems a problem for decision-making? The problem is that you cannot guarantee accuracy. The logic of the system is that it tries to predict the most human-like sentence: the most likely sequence of words given the information that was entered as input [the question]. It is a statistical prediction model, trying to reproduce patterns found largely on the public internet; that is, it was not built to think about the accuracy of the information. It was made to reproduce language convincingly.
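
As a rough illustration of the mechanism Nyrup describes, here is a minimal sketch, in Python, of choosing the statistically most likely next word from hand-made bigram counts. The corpus, the counts, and the words are invented for the example; real LLMs such as ChatGPT use large neural networks trained on billions of texts, not a lookup table like this.

```python
# Toy sketch: pick the most likely next word given the previous one.
# The bigram counts below are made up purely to show the idea; they are
# NOT real model weights and this is not how ChatGPT is implemented.

bigram_counts = {
    "the": {"egg": 5, "water": 3, "scene": 2},
    "egg": {"in": 6, "is": 2},
    "in": {"water": 7, "the": 4},
    "water": {".": 9},
}

def most_likely_next(word):
    """Return the word that most often followed `word` in the toy counts."""
    followers = bigram_counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

def generate(start, max_words=6):
    """Greedily extend a sentence one most-likely word at a time."""
    words = [start]
    for _ in range(max_words):
        nxt = most_likely_next(words[-1])
        if nxt is None or nxt == ".":
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # prints "the egg in water"
```

Nothing in this loop checks whether the generated sentence is true; it only chases the most likely continuation, which is the point the interview makes about accuracy.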

One of the cases was Stack Overflow [a site where programmers exchange information] banning replies generated with GPT. People may be there looking for solutions to develop critical software, so it is very problematic to have an answer that, at first glance, seems very plausible but is not.

The point here is not that it makes mistakes more often than humans. The problem is that, when it does, the mistake comes in a surprising way. It is very different from the error you would expect from a human, which makes it harder to detect.

How can ChatGPT errors differ from human errors? There is an example of someone who asked it to describe a scary scene in a subtle way, and the response was something along the lines of ‘the scene was dark in a subtle way that made things feel scary’. That is, it says the scene is subtle rather than describing the scene subtly. It is a mistake a human with ChatGPT-level communication skills would never make. The system is looking for patterns in text, so it does not understand concepts, in this case what makes something subtle. This happens in a strange way, and it is a hole in model accuracy that is very hard to predict. So how could people anticipate these failures?

Can LLM errors influence decisions when someone is seeking specific instructions? Let’s say I ask for instructions on how to cook an egg, and it tells me to crack it before putting it in water. Is that something that can lead to an error? Yes, because it is optimized to be convincing. In the egg example, maybe you would not even read the part where it explains how to cook the egg, because everyone knows how to do that. You would look for errors in things like the amount of salt indicated, because we have a mental model focused on the types of errors that humans would make.

This is just a simple example with a recipe, but imagine someone using this for a critical decision, like writing code that controls a power plant. These are processes that do not rely only on having low failure rates. They depend on having safety systems so that, if an error occurs, it is detected. Therefore, even if, on average, the accuracy of the responses generated by these AIs is high, it also matters where the mistakes are being made.
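
A minimal sketch of the kind of safety layer Nyrup argues for: an AI suggestion is never acted on until an independent, hand-written check validates it. The scenario, function names, suggested value, and limits below are all invented for illustration; this is not a real control system or a real OpenAI integration.

```python
# Sketch: detect bad AI output instead of relying on it being rare.

def hypothetical_model_suggestion(prompt: str) -> float:
    """Stand-in for an LLM-suggested setpoint (e.g. a turbine speed in rpm)."""
    return 3750.0  # pretend the model suggested this value

def within_safe_limits(value: float, low: float = 0.0, high: float = 3600.0) -> bool:
    """Independent safety check that does not trust the model at all."""
    return low <= value <= high

suggestion = hypothetical_model_suggestion("set turbine speed for low load")
if within_safe_limits(suggestion):
    print(f"Applying setpoint {suggestion}")
else:
    # The interview's point: errors must be *detected*, not merely infrequent.
    print(f"Rejected out-of-range suggestion {suggestion}; escalating to a human")
```

The check is deliberately simpler and independent of the model, so a plausible-looking but wrong answer is caught before it can do harm.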

Can artificial intelligence’s lack of skills like empathy have repercussions for humans? It depends on the context and on how things are asked. You can ask it to write a review of a restaurant saying the food wasn’t very good, or you can ask for a review saying the food wasn’t good but, since you like the owner of the place, asking it to write something positive despite that. Perhaps something decent will come out of the answer. The issue is not the machine’s empathy, but whether the response respects the explicit instructions. And it depends on how you are going to use it. Are you going to just copy and paste the text without even looking at it? Or at least give it a read to check whether it has the right tone?

Here we are talking about something low-risk, but think about something where safety is paramount. I think a lot about the medical field and, there, it would be good to have some kind of safety system alongside it.

Don’t machine responses have an influence on what humans decide? Isn’t there a tendency to agree with what the robot says? Yes, it’s called automation bias. Our cognitive process is designed to save energy, so if there’s a shortcut to solving a problem, our default is to take the shortcut. That is, if there is a system saying ‘do X’, our default would be to follow that. It is a risk that exists.

There are, however, ways of operating where the machine is not really producing a second opinion. It can simply surface information relevant to the decision-making process. In the medical case, that means reminding the person of certain things, such as suggesting a diagnosis that would be common given someone’s specific symptoms but is rare overall (which can cause the professional to miss it). Or suggesting a list of three tests to run. That kind of role does not compete with human decision-making, but adds new information.

Whose responsibility should it be when these LLMs are used? Should the companies that make them, like OpenAI, be more transparent about accuracy, should the people who use them be responsible for how they apply the system, or is it a shared duty? Clearly, the company producing this type of general-purpose AI has a huge responsibility, because it is in the best position to do these quality checks. How could you hold end users accountable if they have no way to make things better? They are tied to the product they are given and can only choose whether or not to trust it. To assign responsibility to the user is to take it away from others.

It should be up to OpenAI, or whoever else is developing these general-purpose LLMs, to provide at least transparent evidence about the robustness of the systems. If someone bought the system and trained it for a specific use, they would also have a responsibility to check its quality in that domain. Ideally, the supplier of the general system would provide tools to help with this task.

Do you think we are rolling out artificial intelligence technologies too fast or too soon? So far, I have not seen anyone rush it into practice on something that is safety-critical. But these systems will certainly be applied in such contexts, so there is reason to be concerned about the future.

What can be done to improve artificial intelligence technologies? The main thing is not to treat AI differently from other technologies. Don’t assume that, because you’re using AI, you can bypass the safety requirements of your industry. Also, we need to be particularly careful with anything that relies on humans to catch errors.

On the practical side, we should ask regulators and legislators not to cut corners for tech companies just because they claim to be using very advanced AI. We must not lower our safety standards, and we cannot accept people saying something is too complex to understand. If you don’t understand your technology, you shouldn’t be using it for something that doesn’t tolerate mistakes.

The bottom line is: if, in a given area, certain mistakes or practices are not acceptable without AI, then they are also not acceptable with AI. If it is unacceptable to use fake images to illustrate a news story, it is just as unacceptable when those images are generated with AI. It’s a matter of applying the morals we already had: society’s rules can’t be thrown out the window just because we’re using AI.


X-RAY

Rune Nyrup, 36, is a philosopher who researches ethical and epistemic issues in the use of AI systems to automate decision-making. His focus is on understanding how transparency can help manage biases in that process. He works at the Leverhulme Center for the Future of Intelligence and at the Department of History and Philosophy of Science at Cambridge University (England). Before that, he did his PhD at Durham University (England).
