Why do AI chatbots tell lies and act strangely? Look in the mirror

When Microsoft added a chatbot to its Bing search engine this month, people noticed that it was presenting all sorts of false information about the Gap, Mexican nightlife and singer Billie Eilish.

Then, when journalists and other early testers got into lengthy conversations with Microsoft’s artificial intelligence bot, its behavior turned boorish and irritating, even frightening.

Ever since the Bing bot’s behavior became a worldwide sensation, people have struggled to understand the strangeness of this new creation. For the most part, scientists say humans deserve much of the blame.

But there is still some mystery about what the new chatbot can do, and why it does it. Its complexity makes it difficult to dissect and even harder to predict, and researchers are studying it through a philosophical lens as well as through the hard code of computer science.

Like any other student, an AI system can learn bad information from bad sources. And this strange behavior? Perhaps it’s a chatbot’s distorted reflection of the words and intentions of the people using it, said Terry Sejnowski, a neuroscientist, psychologist and computer scientist who helped lay the intellectual and technical foundations of modern AI.

“It happens when you get deeper and deeper into these systems,” said Sejnowski, a professor at the Salk Institute for Biological Studies and the University of California, San Diego, who published a research paper on this phenomenon this month in the journal Neural Computation. “Whatever you’re looking for – whatever you want – they’ll provide it.”

Google also showed off a new chatbot, Bard, this month, but scientists and journalists quickly realized it was writing nonsense about the James Webb Space Telescope. San Francisco startup OpenAI launched the chatbot boom in November when it introduced ChatGPT, which also doesn’t always tell the truth.

The new chatbots are powered by a technology scientists call a large language model, or LLM. These systems learn by analyzing massive amounts of digital text pulled from the internet, which includes a lot of false, biased and otherwise toxic material. The text chatbots learn from is also somewhat outdated, because they must spend months analyzing it before the public can use them.

By sifting through this sea of good and bad information from the internet, an LLM learns to do one specific thing: guess the next word in a sequence of words.

It works like a giant version of the autocomplete technology that suggests the next word as you type an email or instant message on your smartphone. Given the sequence “Tom Cruise is a…”, it might guess “actor”.
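In rough terms, this is what that prediction step looks like. The sketch below uses the small, open GPT-2 model via the Hugging Face transformers library; the models behind Bing and ChatGPT are vastly larger, but the core mechanic of scoring candidate next words is the same.

```python
# Minimal sketch of next-word prediction, using the small open GPT-2 model.
# The chatbots in this article run far larger models, but the mechanic is
# the same: score every candidate next token and pick a likely one.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Tom Cruise is an"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # one score per vocabulary token, per position

# The scores at the final position rank candidates for the *next* token.
next_token_id = logits[0, -1].argmax().item()
print(tokenizer.decode(next_token_id))  # likely something such as " actor"
```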

When you talk to a chatbot, the bot isn’t just drawing on everything it learned from the internet. It is drawing on everything you have said to it and everything it has said back. It isn’t just guessing the next word in your sentence; it is guessing the next word in the long block of text that includes both your words and its words.
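Here is a conceptual sketch of that flattening, in plain Python. The “User:”/“Bot:” format is invented for illustration; real systems use their own internal prompt formats, but the principle of turning the whole dialogue into one block of text to continue is the same.

```python
# Conceptual sketch: a chat becomes one long next-word problem.
# The "User:"/"Bot:" labels are invented for illustration only.

def build_prompt(history: list[tuple[str, str]], new_message: str) -> str:
    """Flatten the whole conversation into a single block of text."""
    lines = [f"{speaker}: {text}" for speaker, text in history]
    lines.append(f"User: {new_message}")
    lines.append("Bot:")  # the model predicts the words that follow this point
    return "\n".join(lines)

history = [
    ("User", "Tell me about the James Webb Space Telescope."),
    ("Bot", "It is an infrared space observatory launched in 2021."),
]
print(build_prompt(history, "What did it photograph first?"))
```

Flattened this way, the model draws no line between its own replies and your input: the entire transcript is simply more text to continue.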

The longer the conversation runs, the more influence a user unwittingly has on what the chatbot says. If you want it to get angry, it gets angry, Sejnowski said. If you coax it into being creepy, it gets creepy.

Microsoft and OpenAI decided that the only way to find out what chatbots will do in the real world is to let them loose — and catch them when they stray. They believe their big public experiment is worth the risk.

Sejnowski compared the Microsoft chatbot’s behavior to the Mirror of Erised, a mystical artifact from JK Rowling’s “Harry Potter” novels and many films based on her creative world of young wizards.

“Erised” is “desire” spelled backwards. When people discover the mirror, it seems to offer truth and understanding. But it does not. It shows the deepest desires of anyone who looks into it. And some people go mad if they stare at it too long.

“As humans and LLMs mirror each other, over time they will tend towards a common conceptual state,” said Sejnowski.

It was no surprise, he said, that journalists began to see creepy behavior in the Bing chatbot. Consciously or unconsciously, they were pushing the system in an uncomfortable direction. As chatbots absorb our words and reflect them back to us, they can reinforce and amplify our beliefs and convince us to believe what they are telling us.

Because these systems learn from far more data than any human could ever take in, not even AI experts can fully understand why they generate a particular text at a given moment.

Sejnowski said he believes that, in the long term, new chatbots have the power to make people more efficient and give them ways to do their jobs better and faster. But that includes a warning for both the companies that build these chatbots and the people who use them: they, too, can lead us astray from the truth and lead us to dark places.

“This is terra incognita,” Sejnowski said. “Humans have never experienced this before.”

Translated by Luiz Roberto M. Gonçalves
