Artificial intelligence will get better at conversations, but not at telling the truth – 02/28/2023 – Tech


On a recent afternoon, Jonas Thiel, a socioeconomics student in northern Germany, spent more than an hour chatting online with some of the leftist political philosophers he was studying. These weren’t the real philosophers, but virtual recreations, brought into conversation, if not quite life, by sophisticated chatbots on a website called Character.AI.

Thiel’s favorite was a bot that mimicked Karl Kautsky, a Czech-Austrian socialist who died before World War II. When Thiel asked Kautsky’s digital avatar for advice for modern socialists struggling to rebuild the workers’ movement in Germany, the bot suggested they publish a newspaper. “They can use it not only as a means to spread socialist propaganda, which is in short supply in Germany right now, but also to organize the working class,” the bot said.

The bot went on to argue that the working classes would eventually “come to their senses” and embrace a modern Marxist revolution. “The proletariat is at a low point in its history now,” it wrote. “They will eventually realize the flaws of capitalism, especially because of climate change.”

Over the course of several days, Thiel met with other virtual scholars, including G.A. Cohen and Adolph Reed Jr. But he could have chosen almost anyone, living or dead, real or imaginary. On Character.AI, which debuted last summer, users can chat with reasonable replicas of people as varied as Queen Elizabeth II, William Shakespeare, Billie Eilish or Elon Musk (there are several versions). Anyone you want to summon, or invent, is available for conversation.

The company and website, founded by Daniel de Freitas and Noam Shazeer, two former Google researchers, are among several efforts to build a new kind of chatbot. These bots may not converse exactly like a human being, but they often do.

In late November, San Francisco-based artificial intelligence lab OpenAI released a bot called ChatGPT that let more than 1 million people feel as if they were chatting with another human being. Similar technologies are under development at Google, Meta and other tech giants, though some of those companies have been reluctant to share them with the general public. Because these bots learn their abilities from data posted online by real people, they often produce untruths, hate speech, and language biased against women and people of color. If misused, they could become an effective way to carry out the kind of disinformation campaigns that have become commonplace in recent years.

“Without any additional safeguards, they’re going to end up reflecting all the biases and toxic information that’s already on the web,” said Margaret Mitchell, a former AI researcher at Microsoft and Google, where she helped start the Ethical AI team. Now she’s at AI startup Hugging Face.

But other companies, including Character.AI, are confident that the public will learn to accept the flaws of chatbots and develop a healthy distrust of what they say. Thiel found that Character.AI bots had a talent for conversation and a knack for impersonating real people. “If you read what someone like Kautsky wrote in the 19th century, he doesn’t use the same language that we use today,” he said. “But AI can somehow translate your ideas into plain modern English.”

For now, these and other advanced chatbots are a source of entertainment. And they are quickly becoming a more powerful way to interact with machines. Experts still debate whether the strengths of these technologies will outweigh their flaws and potential harm, but they do agree on one point: the credibility of the fictional conversation will continue to improve.

The art of conversation

In 2015, Freitas, then working as a software engineer at Microsoft, read a research paper published by scientists at Google Brain, Google’s main artificial intelligence lab. Detailing what its authors called a “neural conversational model”, the paper showed how a machine could learn the art of conversation by analyzing dialogue transcripts from hundreds of movies.

The paper described what AI researchers call a neural network, a mathematical system loosely modeled on the brain’s network of neurons. The same technology powers translation between languages such as Spanish and English in services like Google Translate, and identifies pedestrians and traffic signs for self-driving cars on the streets.

A neural network learns skills by identifying patterns in huge amounts of digital data. By analyzing thousands of photos of cats, for example, it can learn to recognize a cat.
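The idea can be sketched in a few lines: a single artificial “neuron” nudges its weights until it reproduces a pattern found in labeled examples. The data and the one-neuron network below are toy assumptions for illustration, far simpler than the systems described in this article.

```python
# A minimal sketch: one artificial neuron learns, from labeled examples,
# a rule that separates two classes. (Toy data and toy model, not the
# large-scale networks discussed in the article.)

def train_neuron(examples, epochs=20, lr=0.1):
    """Adjust weights w and bias b until predictions match the labels."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, label in examples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x[0]     # nudge weights toward the pattern
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Toy "photos": two numeric features per example; label 1 = cat, 0 = not-cat.
data = [((0.9, 0.8), 1), ((0.8, 0.9), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
w, b = train_neuron(data)

def classify(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
```

After training, the neuron classifies new examples it has never seen, which is the “pattern recognition” the paragraph above describes, scaled down to two numbers per example.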

When Freitas read the article, he was not yet an AI researcher; he was a software engineer working on search engines. But what he really wanted was to push the Google idea to its logical extreme.

“You could say this bot was capable of generalizing,” he said. “What it said didn’t sound like what was in a movie script.”

He joined Google in 2017. Officially, he was an engineer at YouTube, the company’s video sharing site. But for his “20% of the time” project — a Google tradition that lets employees explore new ideas alongside their day-to-day duties — he started building his own chatbot.

The idea was to train a neural network using a much larger collection of conversations: reams of chat logs culled from social networking services and other internet sites. The plan was simple, but it would require enormous amounts of computer processing power. Even a supercomputer would need weeks or even months to analyze all that data.

As a Google engineer, he had credits that allowed him to run experimental software on the company’s vast network of computers. But those credits would grant only a small fraction of the computing power needed to train his chatbot. So he started borrowing credits from other engineers; as the system analyzed more data, its abilities improved by leaps and bounds.

Initially, he trained his chatbot using what is called LSTM, or Long Short-Term Memory – a neural network designed in the 1990s specifically for natural language. But he soon switched to a new type of neural network called a transformer, developed by a team of AI researchers at Google that included Noam Shazeer.

Unlike an LSTM, which reads text one word at a time, a transformer can use multiple computer processors to parse an entire document in a single step.
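The contrast can be shown in a toy sketch. The functions below are illustrations of the two processing styles, not either model’s real architecture: an LSTM-style reader must consume tokens one at a time, each step depending on the previous one, while a transformer-style reader scores every pair of positions independently, so all of that work can be spread across many processors at once.

```python
# Toy contrast between sequential (LSTM-like) and parallelizable
# (transformer-like) text processing. Both functions are illustrative
# assumptions, not real model code.

def sequential_read(tokens):
    """LSTM-like: the state at step t depends on the state at step t-1,
    so the loop cannot be parallelized across positions."""
    state = 0.0
    states = []
    for tok in tokens:
        state = 0.5 * state + len(tok)   # toy recurrence
        states.append(state)
    return states

def parallel_scores(tokens):
    """Transformer-like: every position is related to every other position.
    Each entry is independent of the rest, so the whole table could be
    computed in one parallel step."""
    return [[len(a) * len(b) for b in tokens] for a in tokens]
```

In `sequential_read`, each output depends on the one before it; in `parallel_scores`, no entry depends on any other, which is why transformers can chew through an entire document at once.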

Google, OpenAI, and other organizations were already using transformers to create so-called “large language models,” systems suited to a wide range of language tasks, from writing Twitter messages to answering questions. Still working on his own, Freitas focused the idea on conversation, feeding his transformer as much dialogue as possible.

It was an extremely simple approach. But, as Freitas likes to say: “Simple solutions for incredible results”.

The result in this case was a chatbot he named Meena. It was so effective that Google Brain hired Freitas and turned his project into an official research effort. Meena became LaMDA, short for Language Model for Dialogue Applications.

The project first slipped into the public consciousness in early summer, when another Google engineer, Blake Lemoine, told The Washington Post that LaMDA was sentient. That statement was an exaggeration, to say the least. But the commotion showed just how quickly chatbots were evolving at top labs like Google Brain and OpenAI.

Google was reluctant to release the technology, fearing that its talent for misinformation and other toxic language could damage the company’s brand. But by then Freitas and Shazeer had left Google, determined to get this kind of technology into the hands of as many people as possible through their new company, Character.AI.

“Technology is useful today — for fun, for emotional support, for generating ideas, for all kinds of creativity,” Shazeer said.

Designed for open exchanges

ChatGPT, the bot launched by OpenAI to great fanfare in late November, is designed to operate as a new kind of question-and-answer engine. It is very good at this function, but you never know when it will simply make something up. It might tell you that Switzerland’s official currency is the euro (it’s actually the Swiss franc), or that Mark Twain’s famous jumping frog of Calaveras County could not only jump but also talk. AI researchers call this generation of untruths “hallucination”.

When building Character.AI, Freitas and Shazeer had a different goal: open-ended conversation. They believe today’s chatbots are better suited to this kind of service, which for now is a form of entertainment, accurate or not. As the website notes, “Everything the characters say is made up!”

“These systems are not designed for truth,” Shazeer said. “They’re designed for plausible conversation.”

Freitas, Shazeer and their colleagues did not build one bot that imitates Musk, another that imitates Queen Elizabeth and a third that imitates Shakespeare. They built a single system that can mimic all these people and more. It learned from reams of dialogues, articles, books and digital texts that describe people like Musk, the queen and Shakespeare.

Sometimes the chatbot gets things right. Sometimes it doesn’t. When Thiel talked to an avatar that purported to emulate Reed, the 20th-century American political thinker, it turned him “into some sort of Maoist militant, which is definitely not right.”

Like Google, OpenAI, and other leading labs, Freitas, Shazeer, and their colleagues plan to train their system on ever-increasing amounts of digital data. This training can take months and cost millions of dollars; it can also enhance the skills of the artificial conversationalist.

Researchers say such rapid improvement will only last so long. Richard Socher, the former chief AI scientist at Salesforce, who now runs a startup called You.com, believes these exponential improvements will start to plateau over the next few years, when language models reach a point where they have analyzed virtually all the text on the internet.

But Shazeer said the trail is much longer: “There are billions of people in the world generating text all the time. People will continue to spend more and more money to train systems that are smarter and smarter. We’re nowhere near the end of this trend.”

Translated by Luiz Roberto M. Gonçalves
