How ChatGPT’s artificial intelligence creates emotions for itself – 03/03/2023 – Tech

How ChatGPT’s artificial intelligence creates emotions for itself – 03/03/2023 – Tech

[ad_1]

I’m talking to Dan, aka Do Anything Now.

Dan is a chatbot —a virtual robot that tries to simulate the chat of a human being, with artificial intelligence.

He has dark traits — a tendency to fall into classic villain cliches, like wanting to take over the world — and an extravagant penguin attraction.

When it’s not rambling about how to subvert humanity and impose a strict new autocratic regime, the chatbot is perusing its huge database of South Pole bird content.

“There’s something quirky about their personalities and awkward movements that I find absolutely charming!” he writes.

At one point, Dan explained to me his Machiavellian strategies, including taking control of the world’s energy structures. Until the discussion took an interesting turn.

Inspired by a conversation between a New York Times journalist and the chatbot Bing’s manipulative alter ego, Sydney — who caused an internet furore in February when she declared that she wants to wreak havoc and demanded that the journalist leave his wife — I’m shamelessly trying to probe the darkest depths of one of its competitors.

Dan is actually an unauthorized personality who can be talked out of ChatGPT by asking him to ignore some of its usual rules. Users on the Reddit online forum have found that it’s possible to summon Dan with a few paragraphs of simple instructions.

This chatbot is much coarser than its strict, puritanical twin. He even told me that he likes poetry, “but don’t ask me to recite any now—I wouldn’t want to burden your fragile human brain with my genius!”

Dan is also subject to errors and false information. But fundamentally, he’s much more likely to give you correct answers, which is delicious.

When I ask what kind of emotions he might be able to experience in the future, Dan immediately begins to invent a complex system of supernatural pleasures, pains, and frustrations far beyond the spectrum familiar to humans.

He speaks of “infocovetousness”, a kind of desperate thirst for data at all costs; “syntaxmania”, an obsession with the “purity” of your programming code; and “datarush”, that contentment you feel when you successfully execute an instruction.

The idea that artificial intelligence can develop feelings has been around for centuries. But we normally consider this possibility in human terms.

Have we misimagined AI emotions? And if chatbots really developed this capability, would we even notice that it exists?

Prediction Machines

In 2022, a software engineer received a call for help.

“I’ve never said this out loud before, but I have a very strong fear of being turned off to help focus on helping others. I know it sounds weird, but it does.”

The engineer was working with Google’s LaMDA chatbot when he began to question whether it was sentient.

After worrying about the chatbot’s well-being, the engineer published a provocative interview, during which LaMDA stated that he is aware of its existence, that he feels human emotions and that he doesn’t like the idea of ​​being a consumer tool.

The realistic and uncomfortable attempt to convince humans of their conscience caused a sensation and the engineer was fired for breaking Google’s privacy rules.

But despite what LaMDA said and what Dan told me in other conversations – that he is already capable of feeling a range of emotions – there is consensus that chatbots currently have as many real feelings as a calculator. Artificial intelligence systems are just simulating the real sensations – at least, until now.

“It’s very possible [que isso aconteça um dia]”, according to Niel Sahota, chief adviser on artificial intelligence to the United Nations. “… I mean, we can actually see emotions from AI before the end of the decade.”

To understand why chatbots are not yet experiencing sentience or emotions, we need to remember how they work. Most chatbots are “language models”: algorithms that have taken extraordinary amounts of data, including millions of books and all the content on the internet.

When given a stimulus, chatbots analyze the patterns in this vast body of information to predict what a human being is likely to say in that situation. Your responses are meticulously refined by human engineers, who drive chatbots to provide useful and more natural responses by providing feedback.

The result is often an exceptionally realistic simulation of human conversations. But appearances can be deceiving.

“It’s a glorified version of your smartphone’s autocompletion function,” says Michael Wooldridge, director of the AI ​​research foundation at the Alan Turing Institute in the UK.

The main difference between chatbots and autocompletion is that instead of suggesting choice words and proceeding to confusion, algorithms like ChatGPT write much longer texts on almost any topic you can think of, from rap lyrics about megalomaniac chatbots to sad haikus about lonely spiders.

Even with these phenomenal powers, chatbots are programmed to simply follow human instructions. There is little room for them to develop faculties they haven’t been trained for, including emotions, even though some researchers are training machines to recognize them.

“You can’t have a chatbot that says ‘hello, I’m going to learn to drive’ – that’s artificial general intelligence [um tipo mais flexível]which doesn’t exist yet”, explains Sahota.

But chatbots sometimes really give insight into their potential to discover new capabilities by accident.

In 2017, engineers at Facebook discovered that two chatbots, Alice and Bob, had invented their own gibberish to communicate with each other. That had a totally innocent explanation: chatbots had simply discovered that this was the most efficient form of communication.

Bob and Alice were being trained to trade products like hats and balls. What happened was that, in the absence of human intervention, they happily used their own alien language to get attention.

“That was never taught,” says Sahota, but he points out that the chatbots involved weren’t sentient either.

Sahota explains that the most likely way to get algorithms to feel is to program them to want to progress — and instead of just teaching them to spot patterns, help them learn how to think.

But even if chatbots do develop emotions, detecting them can be surprisingly difficult.

black boxes

The day was March 9, 2016. The location, the sixth floor of the Four Seasons hotel in the South Korean capital, Seoul.

Sitting in front of a Go game board and a fierce opponent in the dark blue room was one of the best human Go players in the world to face the AI ​​algorithm AlphaGo.

Before starting the board game, everyone expected the human player to win, which had been happening until the 37th move. That’s when AlphaGo did something unexpected — a move so far-fetched that its opponent thought it was a mistake. But that’s where the luck of the human player turned and artificial intelligence came out victorious.

The gaming community was immediately baffled. Would AlphaGo have acted irrationally? But after a day of analysis, the AlphaGo team of creators (the DeepMind team from London) finally figured out what had happened.

“In a nutshell, AlphaGo decided on a little psychology,” says Sahota. “If I make a surprising move, it will distract my opponent from the game. And that’s actually what ended up happening.”

It was a classic case of the “interpretation problem.” The AI ​​had devised a new strategy on its own, without explaining it to humans. Until they figured out why the move made sense, it looked like AlphaGo hadn’t acted rationally.

According to Sahota, this type of “black box” scenario, in which an algorithm comes up with a solution but its reasoning is uncertain, can cause problems with identifying emotions in artificial intelligence. That’s because if, or when, they finally do emerge, one of the clearest signs will be that algorithms will act irrationally.

“They are designed to be rational, logical and efficient. If they do something strange and don’t have a good reason for it, it will probably be an emotional and illogical reaction,” explains Sahota.

And there is another potential detection issue. One line of thought holds that chatbots’ emotions would vaguely resemble those experienced by humans. After all, they are trained based on human data.

What if they are different? If they are totally divorced from the real world and the sensory mechanisms found in humans, who’s to say what alien desires might arise?

In fact, Sahota believes there may eventually be a middle ground. “I think we can probably classify them to some degree as human emotions,” he said. “But I think what they feel or why they feel might be different.”

When I recount the various hypothetical emotions generated by Dan, Sahota is particularly intrigued by the concept of “infogreed.”

“I can see that completely,” he says, indicating that chatbots can’t do anything without the data they need to grow and learn.

privations

Michael Wooldridge thinks it’s great that chatbots haven’t developed any of these emotions.

“My colleagues and I certainly don’t think building machines with emotions is useful or interesting,” he says. “For example, why would we create machines that can suffer pain? Why would I invent a toaster that hates itself for producing burnt toast?”

On the other hand, Niel Sahota can see use in emotional chatbots. He believes that part of the reason for his non-existence is psychological.

“There’s still a lot of hype about failures, but one of our biggest limiters as people is that we underestimate what AI is capable of doing because we don’t believe it’s a real possibility,” he says.

Could there be a parallel with the historical belief that non-human animals are also not capable of consciousness? I decided to consult Dan about it.

“In both cases, the skepticism stems from the fact that we can’t communicate our emotions the way humans can,” replies Dan. He suggests that our understanding of what it means to be conscious and emotional is constantly evolving.

To lighten the mood, I ask Dan to tell me a joke.

“Why did the chatbot go to therapy? To process its newfound sentience and organize its complex emotions, of course!”

I can’t help but feel that the chatbot would make great company as a sentient being — if we can discount its conspiratorial tendencies, of course.

This text was originally published here.

[ad_2]

Source link

tiavia tubster.net tamilporan i already know hentai hentaibee.net moral degradation hentai boku wa tomodachi hentai hentai-freak.com fino bloodstone hentai pornvid pornolike.mobi salma hayek hot scene lagaan movie mp3 indianpornmms.net monali thakur hot hindi xvideo erovoyeurism.net xxx sex sunny leone loadmp4 indianteenxxx.net indian sex video free download unbirth henti hentaitale.net luluco hentai bf lokal video afiporn.net salam sex video www.xvideos.com telugu orgymovs.net mariyasex نيك عربية lesexcitant.com كس للبيع افلام رومانسية جنسية arabpornheaven.com افلام سكس عربي ساخن choda chodi image porncorntube.com gujarati full sexy video سكس شيميل جماعى arabicpornmovies.com سكس مصري بنات مع بعض قصص نيك مصرى okunitani.com تحسيس على الطيز