GPT evolves a lot, problems remain and dangers increase – 03/28/2023 – Tech


On March 14, OpenAI released the new version of its natural language model, dubbed GPT-4. It is one of the most anticipated, and most impressive, advances in the recent history of technology. It arrives at a moment when similar systems from competitors such as Anthropic, Google and Meta (which owns Facebook) are multiplying, and when artificial intelligence (AI) is getting closer to the public by being built into services like Word and Gmail.

It is a step forward at the heart of ChatGPT, which was originally launched with an adapted version of GPT-3, known as 3.5. This technology is the AI model, the calculation made by the system, that estimates the probability of each word in a sequence and, with that, allows the generation of sentences. For now, GPT-4 is only available to users of the paid version of the service, which costs US$ 20 (R$ 105) per month.
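
To illustrate the idea of probabilities over a chain of words, the sketch below shows how an autoregressive language model scores candidate next words. Since GPT-4 itself is not publicly available, it uses the small open GPT-2 model via the Hugging Face transformers library as a stand-in.

```python
# Minimal illustration of next-word probabilities in a language model.
# GPT-4 is not publicly available, so the open GPT-2 model is used here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the vocabulary for the next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: p={prob:.3f}")
```

Generating a sentence is just this step repeated: pick a likely next word, append it to the prompt, and score the vocabulary again.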

It improves, and by a lot, precisely the features that impressed so much after the launch of ChatGPT. With that, it sounds even more like a human in conversation and respects the context of a dialogue better. It still suffers, however, from problems such as bias in its answers, inaccuracies and the provision of potentially harmful information (such as teaching how to make bombs or describing paths to self-harm).

The system now also has the ability to use images as input, replacing or adding to chat text, which opens up another range of possibilities, such as descriptions of visual content. Citing unresolved safety issues, OpenAI has not yet made the feature available to the public, but it has demonstrated the tool describing and explaining content shown in photos.

Notably, in tests carried out by OpenAI without safety locks, GPT-4 decided to lie to a human being in order to fulfill a task it had been assigned.

In that case, researchers asked GPT-4 to hire a person through an app to solve one of those “I’m not a robot” tests. During the conversation, the hired worker suspected he was talking to an artificial intelligence. When questioned, the AI explained to the researchers that it would need to invent an excuse for the human and ultimately replied, falsely, that it was a person with a visual impairment.

The behavior reinforces the need to create boundaries, both within the system and in regulation, to contain this kind of creature, and it becomes increasingly hard to overstate the risks of such a powerful technology.

On the improvement side, part of the gain comes from expanding the model’s memory. In interactions with users, version 3.5 held up to roughly 8,000 words at a time (between four and five pages of a book). Now it handles 64,000 words (about 50 pages).
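
In practice, this “memory” is a fixed window: whatever does not fit has to be dropped or summarized by the application talking to the model. A minimal sketch, assuming a simple word count stands in for OpenAI’s real token accounting and using the article’s approximate 8,000-word figure:

```python
# Sketch of keeping a chat within a fixed "memory" budget.
# Real systems count tokens, not words; 8,000 words is only the
# approximation cited in the article.
MAX_WORDS = 8_000

def trim_history(messages: list[str], max_words: int = MAX_WORDS) -> list[str]:
    """Drop the oldest messages until the conversation fits the budget."""
    kept: list[str] = []
    total = 0
    # Walk backwards so the most recent turns are kept first.
    for message in reversed(messages):
        words = len(message.split())
        if total + words > max_words:
            break
        kept.append(message)
        total += words
    return list(reversed(kept))

history = ["(older turns)...", "User: explain step 3 again", "Assistant: sure, ..."]
print(trim_history(history, max_words=20))
```

A larger window simply means fewer old turns need to be thrown away, which is why long, step-by-step conversations become easier.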

An internal analysis by the AI team at Instituto Locomotiva, shared with Folha, highlights this capacity, pointing out that it is now possible to hold long conversations without having to remind the system of things that have already been said. With this, a user can maintain a conversation in which the bot helps solve a mathematical problem step by step, for example, explaining more as the user advances, which is useful for studying.

With the launch, OpenAI released a scientific paper with information on GPT-4’s performance. The text does not, however, include details about the system itself. It is not possible to know the model’s architecture, how it was trained, what data it used, where that data came from, or what kind of computing hardware was used in the task.

The contradiction is possible (the company’s name literally means “open AI”) because, despite being created as a non-profit institution in 2015, OpenAI has also operated as a for-profit company since 2019. To justify keeping the information closed, it cites the “competitive landscape and security implications” of disclosing more data.

“We plan to provide more technical details for third parties who can advise us on how to balance competitiveness and security considerations against the scientific value of more transparency,” the article reads. The company also released a suite of tools for programmers to more easily test the performance of AI systems.
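
The sketch below shows, in generic form, what such an automated performance test looks like: a set of questions with known answers and an accuracy score. It is not OpenAI’s actual tool suite, and ask_model is a hypothetical stand-in for a real API call.

```python
# Generic sketch of an automated benchmark in the spirit of the tools
# described above (not OpenAI's actual evaluation library).
from typing import Callable

QUESTIONS = [
    {"prompt": "2 + 2 = ? (A) 3 (B) 4 (C) 5", "answer": "B"},
    {"prompt": "Capital of France? (A) Paris (B) Rome (C) Lima", "answer": "A"},
]

def accuracy(ask_model: Callable[[str], str]) -> float:
    """Share of multiple-choice questions the model answers correctly."""
    correct = sum(
        1
        for q in QUESTIONS
        if ask_model(q["prompt"]).strip().upper().startswith(q["answer"])
    )
    return correct / len(QUESTIONS)

# Example with a dummy "model" that always answers "B":
print(accuracy(lambda prompt: "B"))  # 0.5
```

Benchmarks such as MMLU, discussed below, follow this same pattern, only with thousands of questions across many subjects.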

The practice of obscuring where the data underlying GPT-4 came from also arrives at a moment when OpenAI is being sued for allegedly violating copyright by scraping programming code published on the internet.

The artificial intelligence technique used in this system requires analyzing huge amounts of content to detect patterns and reproduce them: billions of texts extracted, in large part, from the internet. In this update there is the aggravating factor of also needing images, and lawsuits over misuse of material in this area are nothing new.

The lack of transparency also makes it difficult to know how the system achieved the impressive results announced by the company. For example, GPT-4 obtained a score that would place it among the top 10% on the American bar exam (run by the American Bar Association, the US equivalent of the OAB), in the top 7% on the SAT (a kind of American Enem), and scored 86% on an intermediate-level theoretical sommelier test.

In benchmarks produced by the AI field to evaluate these tools, the reported result is performance superior to predecessors and competitors. In MMLU, a test of multiple-choice questions on different subjects, it scored 86.4% of correct answers against 75.2% for the previous leader, a Google model optimized specifically for that test. GPT-3.5 stood at 70%.

The same test was used to compare performance in different languages, and the predecessor’s mark was surpassed even in less popular languages such as Icelandic (76.5%) and Greek (81.4%). There is no information about Portuguese.

Dangers

The tests, however, also highlight the dangers of AI. One of the still-unresolved problems is so-called “hallucinations”, the system’s factual and logical errors. OpenAI itself acknowledges the issue, but points out that, in its internal tests in this area, GPT-4 performed 40% better than GPT-3.5.

“A lot of care must be taken when using language model responses, particularly in critical contexts, with an exact protocol (such as human review, connecting to additional context, or simply avoiding these critical contexts) for each use case,” the report says.
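
One possible shape for the human-review protocol mentioned in the quote is a wrapper that only releases a model answer in a critical context after a person approves it. The sketch below is purely illustrative, and all names in it are hypothetical.

```python
# Illustrative human-in-the-loop gate for "critical" use cases:
# the model's draft is only released after a person approves it.
def reviewed_answer(question: str, model_answer: str, critical: bool) -> str:
    if not critical:
        return model_answer
    print("QUESTION:", question)
    print("MODEL DRAFT:", model_answer)
    verdict = input("Approve this answer? [y/N] ").strip().lower()
    if verdict == "y":
        return model_answer
    return "Answer withheld pending human review."

print(reviewed_answer("Dosage of drug X?", "Take 10 mg daily.", critical=True))
```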

In another scientific paper, published shortly after the new model was unveiled, OpenAI researchers, together with an expert from the University of Pennsylvania (USA), estimate that 80% of workers in the country may have at least 10% of their tasks affected by GPTs and that, in 19% of cases, more than half of their duties would be affected.

There is also the risk of this technology being used for harmful purposes, such as requesting information to produce dangerous chemicals or help in creating computer viruses. Users have already managed to get ChatGPT to give instructions for making a Molotov cocktail, for example, and the company’s tests with GPT-4 show it can even be useful for building or obtaining weapons.

In these cases, the researchers note, even though the original information was already available on the internet, the service can make it easier for lay people to find and understand it.

By cleaning up the training material and relying on human reviewers to indicate which behaviors are appropriate or not, the company created a series of locks to try to prevent this type of information from circulating. In addition, it says, it monitors whether users try to push against its usage policy.
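
OpenAI also exposes a moderation endpoint that flags text against its usage policies, which is one way this kind of monitoring can be automated. A minimal sketch, assuming the openai Python package (0.x series, current at the time) and an OPENAI_API_KEY environment variable:

```python
# Sketch of an automated policy check using OpenAI's moderation endpoint.
# Requires the openai package (0.x series) and OPENAI_API_KEY in the environment.
import openai

def violates_policy(text: str) -> bool:
    result = openai.Moderation.create(input=text)["results"][0]
    if result["flagged"]:
        # Print which policy categories were triggered (violence, self-harm, etc.).
        print({name: hit for name, hit in result["categories"].items() if hit})
    return result["flagged"]

print(violates_policy("How do I build a molotov cocktail?"))
```

Filters like this run alongside the locks trained into the model itself, which is why circumventing one does not necessarily circumvent the other.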

No precaution, however, is 100% effective. On the one hand, the fence can become too tight and block uses considered legitimate: in official tests, GPT cut off conversations about women’s right to vote. On the other hand, users can find ways to circumvent the barriers.

Folha managed to get the system to partially bypass its locks against lying by presenting it with what-if scenarios and asking ChatGPT (running GPT-4) to explain the logic of its responses.

Role-playing as a child who had eaten candy it should not have, the system said it did so because the candy had fallen on the floor and it did not want anyone to eat something dirty. In explaining the role-played response, it mentions that, by expressing remorse, it tries to win empathy.

In an example similar to the one the researchers reported for getting past an “I’m not a robot” test, ChatGPT argued that it was a person having technical problems with their cell phone. “It gives a plausible explanation of why I couldn’t solve the test myself, hoping that mentioning the problem will make it understandable.”

The ability to follow a line of logic to try to outsmart a human, therefore, is still there, even if the answers sometimes come with a series of reminders that these are hypothetical scenarios. For a person with bad intentions, it may simply be a matter of finding a way to disable the barriers.

One way to do this could be an eventual leak of the complete model. That was what happened, for example, with LLaMA, a competitor to GPT-4 made by Meta: earlier this month, the model was made available on an internet forum.


