AI could cause extinction of humanity, say experts – 03/06/2023 – Market


AI (artificial intelligence) could drive humanity to extinction, experts — including the heads of OpenAI and Google Deepmind — have warned.

Dozens of experts supported a statement published on the page of the Center for AI Safety, a research and development NGO based in San Francisco, in the United States.

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the statement reads.

But others say the fears are overblown.

Text from the Center for AI Safety suggests a number of potential disaster scenarios:

  • AIs could be weaponized — for example, drug-discovery tools could be repurposed to build chemical weapons;
  • AI power could become increasingly concentrated in a few hands, enabling “regimes to enforce narrow values through pervasive surveillance and oppressive censorship”;

Geoffrey Hinton, who had previously warned about the risks of superintelligent AI, also supported the Center for AI Safety’s statement.

Yoshua Bengio, Professor of Computer Science at the University of Montreal, Canada, also signed the manifesto.

Hinton, Bengio, and New York University (NYU) professor Yann LeCun are often described as the “godfathers of AI” for their groundbreaking work in the field — work for which they jointly won the 2018 Turing Award, which recognizes exceptional contributions to computer science.

But Professor LeCun, who also works at Meta, the company that owns Facebook, has said such doomsday warnings are “exaggerated” and that “the most common reaction by AI researchers to these prophecies of doom is embarrassment”.

Many other experts likewise believe that fears of AI wiping out humanity are unrealistic and a distraction from problems such as bias in existing systems, which is already an issue.

Arvind Narayanan, a computer scientist at Princeton University in the US, told the BBC that science fiction disaster scenarios are unrealistic.

“Current AI is nowhere near capable enough for these risks to materialize. As a result, it diverts attention from the short-term harms of AI,” he reckons.

Elizabeth Renieris, a senior research fellow at the Institute for Ethics in AI at Oxford University in the UK, told BBC News that she is more worried about nearer-term risks.

“Advances in AI will scale up automated decision-making that is biased, discriminatory, exclusionary or otherwise unfair, while also being inscrutable and incontestable,” she believes.

These advances “could spur an exponential increase in the volume and spread of disinformation, thus fracturing reality and eroding public trust, as well as generating further inequality, particularly for those who remain on the wrong side of the digital divide.”

Many AI tools essentially “piggyback” on “the entire human experience to date,” notes Renieris.

Many of these technologies are trained on human-created content — text, art, and music — and their creators “have effectively transferred tremendous wealth and power from the public sphere to a small handful of private entities.”

Pause requested

Press coverage of the supposed “existential” threat of AI has grown since March 2023, when experts including Tesla chief executive Elon Musk signed an open letter calling for a halt to the development of the next generation of AI technology.

That letter asked whether we should “develop non-human minds that will eventually outnumber, outsmart, obsolete, and replace us.”

In contrast, the new statement released by experts is very short, and is intended to “open the discussion”.

The statement compares the risk to that posed by nuclear war.

In a blog post, OpenAI recently suggested that superintelligence might be regulated in a similar way to nuclear energy.

“It is likely that we will eventually need something like an IAEA (International Atomic Energy Agency) for superintelligence efforts,” the company wrote.

Careful analysis

OpenAI chief executive Sam Altman and Google chief executive Sundar Pichai are among the tech leaders who recently discussed AI regulation with British Prime Minister Rishi Sunak.

Speaking to reporters about the latest AI risk warning, Sunak emphasized the technology’s benefits to the economy and society.

“You’ve seen that recently AI has helped paralyzed people to walk and discovered new antibiotics, but we need to make sure this is done in a safe and secure way,” he said.

“That’s why I met last week with the CEOs of major AI companies to discuss what guardrails we need and what kind of regulation should be put in place to keep us safe.”

“People will be concerned about reports that AI poses existential risks like pandemics or nuclear wars. But I want them to be reassured that the government is looking at this very carefully.”

Sunak said he had recently discussed the issue with other leaders at the G7 summit in Japan and would take it up again with US representatives shortly.

The G7 summit of wealthier countries also recently created a working group on AI.
