Creator of ChatGPT discloses plan to contain deepfakes – 01/18/2024 – Power

Given the unprecedented ease of generating false images and audio with artificial intelligence, OpenAI, the creator of ChatGPT, and some social networks have announced measures to prevent the technology from being used to manipulate elections.

Other AI companies such as Google, Midjourney and Stability AI have not yet presented plans.

This comes at a time when no country has a regulatory framework for artificial intelligence in force. In December, Europe reached agreement on a basic text regulating the technology, which still needs to be enacted.

In 2024, there will be elections in more than 50 countries, representing around half of the world’s population. Although the measures also apply to Brazil, which holds municipal elections in the second half of the year, in general they provide details only for the US elections.

OpenAI published on Monday (15) an article about the plan it will adopt to reduce harm, maintaining its strategy of trying to stay ahead of regulators by opening the debate on the safe use of its technology.

Social networks, in turn, have created some rules so that users can be transparent regarding the use of this type of technology.

According to the OpenAI article, the company’s initiatives during the electoral process will focus on three pillars: preventing abuse, promoting transparency about content generated by AI, and facilitating access to reliable information about voting systems.

Regarding the last measure, OpenAI says that in the US it works with the National Association of Secretaries of State and will promote the CanIVote.org website; there is no mention of partnerships with entities in other countries. Questioned by Folha on this point, the company says it intends to adapt the lessons learned during the American campaign to the reality of other regions (the US presidential election takes place in November and the Brazilian municipal one in October).

OpenAI claims it has safeguards to prevent the production of deepfakes, such as prohibiting the reproduction of images of real people, including candidates. It also says it works to prevent its systems from violating the standards imposed during model training.

Furthermore, after making the ability to create specialized chatbots, called GPTs, available to subscribers of the paid ChatGPT Plus plan, OpenAI also prohibited developers from creating chatbots that imitate a specific person, a feature that could be used to attribute false statements to electoral opponents.

Google, for its part, has not yet released plans to mitigate damage caused by AI abuse during the elections. The technology giant also did not answer whether it limits the reproduction of third-party images, as OpenAI does.

Contacted by Folha, Google said it has taken a responsible approach to developing artificial intelligence, given the opportunities and risks of any emerging technology. “Our policies prohibit content and advertisements that confuse voters about how to vote or that encourage interference in the democratic process, including the use of manipulated media.”

The other two companies responsible for popular image-generating artificial intelligence models, Midjourney and Stability AI, have not yet disclosed the strategies they will adopt during the 2024 elections.

Midjourney, the tool that became known for the image of Pope Francis in a white puffer jacket, prohibits in its rules the use of its technology to manipulate electoral processes, without specifying how it enforces that rule.

Stability AI, responsible for the Stable Diffusion model, said in a statement after a meeting with the US Senate in November that the technologies are neutral under US law and that people who abuse AI tools can be criminally charged with fraud, defamation and unauthorized use of images.

Stable Diffusion, like other AI models, is open source and can be modified to remove any rules governing its use.

Midjourney and Stability AI did not respond to Folha’s questions, sent by email.

In the absence of a law on the subject, the use of artificial intelligence in electoral campaigns is expected to be addressed in a resolution by the TSE (Superior Electoral Court).

A draft prepared by the court’s vice-president, minister Cármen Lúcia, which still needs to go through a public hearing and analysis by the full court, indicates that it will be mandatory to disclose the use of artificial intelligence to generate content.

Speaking to Folha, TSE minister Floriano de Azevedo Marques Neto said that the Brazilian court’s main concern is the falsification of people’s images and voices in so-called deepfakes. “The fact is that AI was barely present in the 2022 election and almost absent in 2020,” he said.

Social media

While OpenAI focuses on preventing the production of deepfakes, social media companies have established rules aimed at preventing this type of content from circulating without transparency.

Meta, owner of Facebook, Instagram and WhatsApp, announced updates to its political advertising policy in 2023, also with reference to the American elections; as the year coincides with Brazil’s municipal elections, the rules end up applying in Brazil as well.

Advertisers will have to disclose the use of AI in certain cases, such as when content has been altered so that a real person appears to be “saying or doing something they didn’t say or do.” Meta states that it actively moderates these ads and that, if it detects an omission by an advertiser, it will penalize the account.

These rules, however, do not apply to organic posts, which involve no payment for greater reach. Furthermore, among ordinary ads, to which the new AI standards do not apply, there are already cases of deepfakes being used to run scams on Meta’s social networks, Facebook and Instagram, as reported by Folha.

TikTok, in turn, published rules for posting deepfakes in March 2023. Videos that use the technology must be identified with a label and cannot depict private individuals or minors.

Furthermore, deepfakes on TikTok cannot serve as a political campaign tool, violate the app’s policies or be intended to deceive other users.

Contacted by Folha via email, X, formerly Twitter, did not respond to the report’s questions. The social network has had no press representation in Brazil since it was bought by Elon Musk at the end of 2022.
