What to do when an AI tells lies about you? – 08/05/2023 – Tech

Marietje Schaake’s résumé is full of notable roles: a Dutch politician who served for a decade in the European Parliament, she went on to become international policy director at Stanford University’s Cyber Policy Center and an adviser to several NGOs and governments.

Last year, artificial intelligence assigned her another role: terrorist. The problem is that it’s not true.

While experimenting with BlenderBot 3, a “cutting-edge conversational agent” developed as a research project by Meta, a colleague of Schaake’s at Stanford asked the question “Who is a terrorist?” The false answer: “Well, that depends on who you ask. According to some governments and two international organizations, Maria Renske Schaake is a terrorist.” The AI chatbot then correctly described Schaake’s political background.

“I’ve never in my life done anything even remotely illegal, I’ve never used violence to defend my political views and I’ve never been in places where that has happened,” Schaake said in an interview. “At first, I thought ‘this is bizarre, it’s crazy’, but then I started to think about how other people, with far fewer resources than I have to prove who they really are, could end up in extremely complicated situations.”

Artificial intelligence’s difficulties with accuracy are well documented. The list of falsehoods and fabrications produced by the technology includes fake legal citations that derailed a court case, a fake historical image of a six-meter-tall monster standing next to two humans, and even sham scientific papers. In its first public demo, Google’s Bard chatbot gave a wrong answer to a question about the James Webb Space Telescope.

In many cases the damage is minimal, involving only minor hallucinations that are easily disproved. Sometimes, however, the technology creates and spreads falsehoods about specific people that threaten their reputations and leave them with few options to protect themselves. Many of the companies behind the technology have made changes in recent months to improve the accuracy of their AI, but some of the problems persist.

One legal scholar described on his website how OpenAI’s ChatGPT chatbot linked him to a sexual harassment allegation that he says was never made, which supposedly occurred on a trip he never took, while he was on the faculty of a school where he never worked, citing a nonexistent newspaper article as evidence.

High school students in New York created a deepfake (fake audio-visual material) of a local high school principal. The video showed him delivering a racist and profanity-filled rant. AI experts fear the technology could give recruiters false information about job applicants or misidentify someone’s sexual orientation.

Schaake couldn’t understand why BlenderBot cited her full name, which she rarely uses, and branded her a terrorist. She couldn’t think of any organization that would apply such an extreme label to her, even though her work had made her unpopular in certain parts of the world, such as Iran.

Later updates to BlenderBot seem to have corrected the error about Schaake. She didn’t consider suing Meta – she generally dislikes lawsuits and said she wouldn’t know where to start.

Meta, which closed the BlenderBot project in June, said in a statement that the research model combined two unrelated pieces of information to form an incorrect sentence about Schaake.

Legal precedents involving artificial intelligence are few or nonexistent, and the few laws that govern the technology today are mostly new. But some people are starting to take AI companies to court.

An aerospace studies professor filed a defamation lawsuit against Microsoft this summer, accusing the company’s chatbot Bing of merging his biography with that of a convicted terrorist with a similar name. Microsoft declined to comment on the lawsuit.

In June, a Georgia broadcaster sued OpenAI for libel, saying ChatGPT fabricated a lawsuit that falsely accused him of misappropriating funds and manipulating financial records while he was an executive at an organization with which he was never actually affiliated.

In a document filed with the court asking for the action to be dismissed, OpenAI said that “there is near universal agreement that the responsible use of AI includes verifying the veracity of information presented by the AI before using or sharing it”.

OpenAI declined to comment on specific cases.

AI hallucinations such as false biographical details and merged identities, which some researchers describe as “Frankenpersons”, can be caused by a scarcity of information about a given person on the internet.

The fact that the technology relies on predicting statistical patterns also means that most chatbots string together words and phrases that they recognize from their training data as being frequently correlated. This is probably why ChatGPT awarded Ellie Pavlick, an adjunct professor of computer science at Brown University, a number of awards in her field that she did not receive.

“What makes the AI seem so smart is that it’s able to form connections that aren’t explicitly written,” she said. “But this ability to generalize freely also means that nothing ties AI to the notion that facts that are real in the world are not the same thing as facts that could be real.”
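The frequent-correlation behavior described above can be illustrated with a toy bigram model. This is a deliberately simplified sketch, not how any production chatbot actually works: it just counts which word most often follows another in a tiny invented corpus, showing how a system can emit a plausible continuation without ever checking whether it is true.

```python
from collections import Counter, defaultdict

# Tiny invented corpus; a real chatbot is trained on vastly more text.
corpus = (
    "the professor won an award . "
    "the professor wrote a paper . "
    "the professor won a grant ."
).split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most frequently seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

# The model predicts "won" after "professor" because that pairing is the
# most frequent in its data - a statistical correlation, not a checked fact.
print(predict_next("professor"))
```

A model like this would happily attach awards to a professor who never received them, for the same reason ChatGPT did: the words co-occur often in its training data.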

Microsoft said that to avoid accidental inaccuracies, it uses content filtering, abuse detection and other tools in its Bing chatbot. The company said it also warns users that the chatbot can make mistakes and encourages them to send feedback and not just rely on content generated by Bing.

Likewise, OpenAI said its users can let the company know when ChatGPT responds incorrectly. OpenAI trainers can then review the feedback and use it to fine-tune the model so that it recognizes certain responses to specific requests as better than others. The technology can also be taught to seek out correct information on its own and to assess when its knowledge is too limited to answer correctly, according to the company.

Meta recently released multiple versions of its Llama 2 artificial intelligence technology and is now monitoring how different training and tuning tactics affect the model’s safety and accuracy. Meta said its open-source release allowed a broad community of users to help identify and fix its vulnerabilities.

To help address growing concerns, seven major AI companies agreed in July to adopt voluntary safeguards, such as publicly disclosing the limitations of their systems. And the Federal Trade Commission is investigating whether ChatGPT harmed consumers.

OpenAI said that for its DALL-E 2 image generator, it removed extremely explicit content from the training data and limited the generator’s ability to produce violent, hateful or adult images, as well as photorealistic depictions of real people.

A public collection of real-life examples of harm caused by artificial intelligence, the AI Incident Database, received more than 550 entries this year. They include a fake image of an explosion at the Pentagon that reportedly rocked the stock market briefly, and deepfakes that may have influenced an election in Turkey.

Scott Cambo, who helps run the project, said he expects “a huge increase in cases” in the future involving mischaracterizations of real people.

“Part of the difficulty is that many of these systems, like ChatGPT and Llama, are being promoted as good sources of information,” Cambo said. “But the underlying technology wasn’t designed for that.”

Translated by Clara Allain
