Pope in a coat: understand how the image was generated by AI – 03/26/2023 – Tech



On Saturday (25), social networks were flooded with a fake image of Pope Francis wearing a white puffer coat (a "japona", for those below the Tropic of Capricorn).

In addition to the supposed papal style, the content drew attention because it was an extremely convincing image generated by artificial intelligence (AI), which deceived many people.

It is not the first of its kind to appear in recent days. In more harmful applications, deepfakes (synthetic content generated by AI) have shown a fake arrest of former US President Donald Trump and a staged scene of Russian President Vladimir Putin behind bars.

The technology used to make this type of content is called "generative artificial intelligence" because it generates new content. In this case, the systems create images, but the category gained prominence at the end of last year with the launch of ChatGPT, which focuses on text. Other applications synthesize audio and video.

Even though the most up-to-date version of GPT (the engine behind ChatGPT) is able to interpret photos, these are different systems, each specialized in one task. Image generators began to proliferate in the first half of 2022 with the explosion of easy-to-use applications, even though this use of AI had existed for years before.

Some of the main names in this area are Dall-E (from the same creator as GPT), Stable Diffusion and Midjourney. The pope's image was published in the latter's official community on the Reddit online forum on Friday (24).

On the 15th, Midjourney received an update, still in testing, for a version that promises even more realistic results.

In these systems, users describe the image they want in text, and that's it. In one example, the reporters asked Dall-E to generate "a black pug dog eating a banana in Renaissance style".
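As an illustration only of how such a request works programmatically (not the exact tool or settings used in the report), the minimal sketch below assumes the OpenAI Python SDK ("openai" package) and an API key stored in the OPENAI_API_KEY environment variable; the text prompt is all the user supplies.

# Minimal sketch: send a text prompt to an image generator and get back a link.
# Assumes the OpenAI Python SDK and a valid API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Image.create(
    prompt="a black pug dog eating a banana in Renaissance style",
    n=1,               # number of images to generate
    size="1024x1024",  # output resolution
)

print(response["data"][0]["url"])  # URL of the generated image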

As with other modern AI, everything starts with the analysis of a large amount of data. The computer detects patterns in images and uses that information to synthesize new content.

Verisimilitude is possible thanks to an architecture called a "generative adversarial network". This machine learning approach emerged in 2014 and pits two AIs against each other: one generates the content, the other judges whether that content is fake. It is as if one robot repeatedly generated images while the second responded "not good, do it again" until the result is convincing.
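To make that adversarial loop concrete, here is an illustrative toy sketch, assuming PyTorch; it is not the model behind any of these tools. A generator learns to produce numbers drawn from a simple distribution while a discriminator judges whether each sample is real or generated, exactly the "do it again until convincing" dynamic described above.

# Toy generative adversarial network: the generator learns to mimic
# samples from a 1-D Gaussian; the discriminator tells real from fake.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 2.0 + 5.0   # "real" data: samples from N(5, 2)
    noise = torch.randn(64, 8)
    fake = generator(noise)

    # Discriminator step: label real samples 1, generated samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: try to make the discriminator accept its output as real.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

print(generator(torch.randn(1000, 8)).mean().item())  # should approach 5.0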

There are some limitations, mainly noticeable in the details. In the case of the pope's image, the distorted hands give the AI away, a frequent issue in these systems, although one that is beginning to be resolved. In older versions of these generators, one of the main difficulties was teeth.

In 2019, the website "This Person Does Not Exist" gave the public one of its first impressions of this type of technology. Using AI less sophisticated than what is seen today, it generates portraits of people who do not exist.

Using a more recent version of that technology, researchers found that synthetic faces were rated as more trustworthy than real ones. The finding appears in a scientific article published in 2022 by researchers Sophie Nightingale (Lancaster University, England) and Hany Farid (University of California, Berkeley, USA). For the tests, they showed batches of portraits to a group of people to classify.

"Synthetically generated images are not only realistic, but inspire more confidence than real ones. This may be because synthetic ones tend to look more generic," the study says.

One of the concerns raised by this synthetic content is precisely its use in deepfakes. That concern has led Darpa (the US Defense Advanced Research Projects Agency) to invest millions in developing technologies to combat such fake content.

The spread of generative artificial intelligence for images has led to protests from artists, in part over fears of losing work to the technology, particularly after an AI-generated piece won an art competition.

There is also the issue of artists' content being used to feed these systems. Stability AI, the creator of Stable Diffusion, has been sued by stock image provider Getty Images over the alleged misuse of its photos.


