Why scientists fear the future of artificial intelligence – 04/30/2023 – Science


Artificial intelligence has the incredible power to change the way we live, for better or for worse — and experts have little confidence that those in power are prepared for what lies ahead.

In 2019, the non-profit research group OpenAI created software that was capable of generating coherent paragraphs of text and doing rudimentary analysis and comprehension of text without specific instructions.

Initially, OpenAI decided not to make its creation — called GPT-2 — fully available to the public. The fear was that malicious actors could use it to generate massive amounts of misinformation and propaganda.

In a press release announcing the decision, the group called the program “too dangerous” at the time.

Since then, three years have passed, and the ability of artificial intelligence has increased exponentially.

In contrast to that earlier limited release, the newer version, GPT-3, was made readily available in November 2022.

The ChatGPT interface derived from that programming was the service that generated thousands of news articles and social media posts as reporters and pundits tested its features—often with impressive results.

ChatGPT wrote stand-up comedy scripts about the collapse of Silicon Valley Bank in the style of the late American comedian George Carlin. It opined on Christian theology, wrote poetry, and explained quantum physics to a child in the voice of rapper Snoop Dogg.

Other AI models, such as DALL-E, have produced images so convincing that there has been controversy over their inclusion on art sites.

To the naked eye, at least, machines have learned to be creative.

On March 14, OpenAI introduced the latest version of its program, GPT-4. The group claims it features stronger safeguards against abuse. Early customers include Microsoft, the bank Merrill Lynch, and the government of Iceland.

And the hottest topic at the interactive South by Southwest conference — a global gathering of policymakers, investors and technology executives held in Austin, Texas — was the potential and power of artificial intelligence programs.

‘For better and for worse’

Arati Prabhakar, director of the White House Office of Science and Technology Policy, said she was excited about the possibilities of artificial intelligence, but also sounded a warning.

“What we’re all seeing is the emergence of this extremely powerful technology. It’s a tipping point,” she declared at the conference.

“All history demonstrates that this type of technology, new and potent, can and will be used for good and evil.”

Austin Carson, founder of SeedAI, an artificial intelligence policy advisory group, who was on the same panel, was a little more direct.

“If, in six months, you haven’t completely lost your mind [he added an expletive], I’ll buy you dinner,” he told the audience.

“Losing your mind” is one way of describing what might happen in the future.

Amy Webb, head of the Future Today Institute and professor of business at New York University in the US, has tried to map the possible consequences. According to her, artificial intelligence could go in one of two directions over the next 10 years.

In the optimistic scenario, the development of artificial intelligence focuses on the common good, with transparent system design and with individuals able to decide whether their publicly available information on the internet is included in AI knowledge bases.

In this vision, technology serves as a tool that makes life easier and more seamless, as artificial intelligence becomes embedded in consumer products that can anticipate user needs and help perform virtually any task.

The other scenario envisioned by Webb is catastrophic. It involves less data privacy, more power centralized in a few companies, and artificial intelligence that anticipates user needs but gets them wrong or, at the very least, stifles their choices.

She believes that the optimistic scenario has only a 20% chance of happening.

Webb told the BBC that the direction the technology takes depends largely on how responsibly the companies developing it behave. Will they operate transparently, disclosing and overseeing the sources from which chatbots — built on what scientists call large language models (LLMs) — extract their information?

The other factor, she said, is whether the government — including federal regulators and Congress — can act quickly to establish legal protections to guide technology developments and prevent their misuse.

In this sense, the experience of governments with social networking companies — Facebook, Twitter, Google and others — is indicative. And it’s not an encouraging experience.

“What I heard in a lot of conversations was concerns that there’s no protective barrier,” Melanie Subin, managing director of Future Today, told the South by Southwest conference.

“There’s a sense that something needs to be done.”

“And I think social media, as a lesson, is what sticks in people’s minds when they look at the rapid development of generative artificial intelligence,” she added.

Tackling harassment and hate speech

In the United States, federal oversight of social media companies is based largely on the Communications Decency Act passed by Congress in 1996, as well as a short but powerful clause contained in Section 230 of the act.

The text protects internet companies from being held liable for user-generated content on their sites. It is credited with creating a legal environment in which social media companies could thrive. But more recently, it has also been accused of allowing those same companies to gain too much power and influence.

Right-wing politicians complain that the law has allowed the Googles and Facebooks of the world to censor or reduce the visibility of conservative views. Those on the left accuse the companies of not doing enough to prevent the spread of hate speech and violent threats.

“We have an opportunity and a responsibility to recognize that hate speech breeds hateful action,” said Jocelyn Benson, Michigan Secretary of State.

In December 2020, Benson’s home was the subject of protests by armed supporters of Donald Trump, organized on Facebook, who were contesting the results of the 2020 presidential election.

She has supported laws in her state that would hold social media companies accountable for knowingly spreading harmful information.

Similar proposals have been put forward at the federal level and in other states, along with legislation requiring social networking sites to provide greater protections for underage users, be more open about their content moderation policies, and take more active steps to curb online harassment.

But opinions are divided on these reforms’ chances of success. Big tech companies maintain entire teams of lobbyists in Washington, the US capital, and in state capitals. They also draw on deep coffers to influence politicians through campaign donations.

“Despite the overwhelming evidence of problems with Facebook and other social networking sites, it’s been 25 years,” says technology journalist Kara Swisher.

“We’ve been waiting for Congressional legislation to protect consumers, and Congress has abdicated its responsibility.”

Swisher says the danger lies in the fact that many of the companies that are big players in social networking — Facebook, Google, Amazon, Apple and Microsoft — are now leaders in the field of artificial intelligence.

If Congress fails to successfully regulate social media, it will be a challenge to act quickly to address concerns about what Swisher calls an artificial intelligence “arms race.”

Nor are comparisons between artificial intelligence regulation and social media merely academic. New AI technology could take the already turbulent waters of platforms like Facebook, YouTube and Twitter and turn them into a raging sea of misinformation, as it becomes increasingly difficult to distinguish posts from real human beings from fake — but fully convincing — AI-generated accounts.

Even if the government succeeds in passing new regulations for social media, they could end up being useless if there is a huge influx of harmful content generated by artificial intelligence.

Among the countless sessions at the South by Southwest conference was one entitled “How [the US] Congress is building AI policy from the ground up.” After about 15 minutes of waiting, the organizers informed the audience that the panel had been canceled because the participants had gone to the wrong place.

For anyone hoping to find signs of human competence in government, the episode was anything but encouraging.
