Google, Meta and OpenAI pledge to identify AI-generated content, says US government


One of the measures may be the insertion of watermarks in text, images, audio and video. President Joe Biden received executives from the technology companies at the White House on Friday (21).

Website of ChatGPT, an AI chatbot by OpenAI. Florence Lo/Illustration/Reuters

Leading artificial intelligence (AI) companies such as OpenAI, Google and Meta have committed to the White House to implement measures that identify AI-generated content, the US government said on Friday (21). The measures may include, for example, inserting watermarks into AI-generated text, images, audio and video.

The companies, which also include Anthropic, Inflection and Amazon, committed to testing their systems before launching them, sharing information on how to reduce risk, and investing in cybersecurity. The commitments are seen as a victory for the Biden administration in its effort to regulate artificial intelligence, a field that has recently seen a boom in corporate investment.

Regulation in the US

Since AI became popular around the world, especially after the emergence of ChatGPT, policymakers have begun to consider how to mitigate the technology's dangers to national security and the economy. In June, US Senate Majority Leader Chuck Schumer called for "comprehensive legislation" to ensure safeguards for artificial intelligence. Congress is considering a bill that would require political ads to disclose whether AI was used to create images or other content. President Joe Biden is also working on an executive order and bipartisan legislation on AI technology.

System for 'watermarking'

As part of the effort, the companies have committed to developing a system for 'watermarking' all forms of AI-generated content, from text and images to audio and video, so that users know when the technology has been used.
This watermark, embedded in the content by technical means, would presumably make it easier for users to identify deepfake images or audio that might, for example, depict an act of violence that never occurred, fabricate a coup, or distort a photo of a politician to cast the person in an unfavorable light. It is not yet clear how the watermark will remain evident when the content is shared. The companies have also pledged to protect users' privacy as AI develops and to ensure the technology is free of bias and not used to discriminate against vulnerable groups. Other commitments include developing AI solutions to scientific problems such as medical research and climate change mitigation.
