Nikon, Sony and Canon seek to separate photos from AI fakes – 01/02/2024 – Tech

Camera manufacturers Nikon, Sony and Canon are racing to equip their cameras with a system that marks photographs with a digital authentication stamp. The goal of the technology is to distinguish real photos from images generated by artificial intelligence, according to the Japanese newspaper Nikkei.

The companies are trying to preserve their credibility as image-generating artificial intelligence platforms advance. Fake but realistic photos of Pope Francis, Elon Musk and Donald Trump have already gone viral, as has a fabricated video of William Bonner, anchor of TV Globo's Jornal Nacional.

This technology took off with the April 2022 launch of Dall-E 2 by OpenAI (the company behind ChatGPT). Since then, other similar platforms have gained prominence, and the images they generate have become increasingly lifelike. An artificial intelligence model developed by researchers at Tsinghua University, in China, is reportedly capable of producing 700,000 fake images per day.

The first efforts begin with partnerships with journalists. A global alliance of news organizations, technology companies and camera manufacturers has developed an online verification tool called Verify. If an image carries an authentication seal, the platform displays its date, geolocation and other metadata.

Nikon, Sony and Canon have agreed to adopt this standard jointly. The three Japanese companies control around 90% of the global camera market.

In addition to flagging entirely fabricated images, the system also identifies edited photos as inauthentic. The default message for created or altered images is “No Content Credentials”.
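The general mechanism behind such seals is a cryptographic signature computed in the camera over the image data and its capture metadata, which a verifier later rechecks. The sketch below is a minimal illustration of that idea in Python, not the actual Content Credentials or Verify implementation; the field names, the shared-secret HMAC (real systems use public-key signatures) and the example data are assumptions made for the illustration.

```python
# Illustrative sketch only: a simplified signed-capture check, not the real
# Content Credentials / Verify implementation. Field names and the use of a
# shared-secret HMAC (real systems use public-key signatures) are assumptions.
import hashlib
import hmac
import json


def verify_capture(image_bytes: bytes, manifest: dict, camera_key: bytes) -> str:
    """Return the capture metadata if the embedded seal is valid,
    otherwise a 'No Content Credentials' style message."""
    payload = image_bytes + json.dumps(manifest["metadata"], sort_keys=True).encode()
    expected = hmac.new(camera_key, payload, hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, manifest.get("seal", "")):
        meta = manifest["metadata"]
        return f"Captured {meta['date']} at {meta['geolocation']}"
    return "No Content Credentials"


# Example with made-up data: the camera seals the image at capture time...
key = b"per-camera-secret"
photo = b"raw image bytes"
meta = {"date": "2024-01-02", "geolocation": "35.68N, 139.69E"}
seal = hmac.new(key, photo + json.dumps(meta, sort_keys=True).encode(), hashlib.sha256).hexdigest()
manifest = {"metadata": meta, "seal": seal}

# ...and the verifier either shows the metadata or reports the file as unverified.
print(verify_capture(photo, manifest, key))               # Captured 2024-01-02 at 35.68N, 139.69E
print(verify_capture(photo + b" edited", manifest, key))  # No Content Credentials
```

Any edit to the image bytes or metadata breaks the seal, which is why altered photos fall back to the generic “No Content Credentials” message.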

Canon is working to bring the technology to market in 2024. The manufacturer also plans to extend authentication to video.

The company has already launched an image management application that lets users check whether a given file was produced by a human.

The camera manufacturer works in partnership with the Reuters news agency and the Starling Lab for Data Integrity, a laboratory co-founded by Stanford University and the University of Southern California that is dedicated to verifying the integrity of information circulating on the internet.

Nikon, for its part, will begin adding authentication technology to its high-end cameras, with the aim of attracting photojournalists and other professionals. The authenticity seal is designed to be tamper-resistant.

Sony also plans to add the authentication mark to its high-end equipment starting in April. The company will provide an update package to adapt cameras already in circulation.

Together with the Associated Press news agency, the manufacturer has been testing a system that automatically checks images posted to a server. The technology is expected to work for video as well.

On the AI developers' side, Google launched a tool in August, SynthID, that adds invisible watermarks to images generated with artificial intelligence. Images generated by OpenAI’s Dall-E also carry marks indicating their artificial origin.
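For intuition about what an invisible watermark is, the toy sketch below hides a few bits in the least significant bits of pixel values and reads them back. Google's and OpenAI's watermarks are far more sophisticated and robust to editing, so this is only a conceptual illustration; all names and values here are made up.

```python
# Toy least-significant-bit watermark, for intuition only. This is NOT how
# Google's or OpenAI's invisible watermarks work; it just shows the basic idea
# of hiding a machine-readable mark inside pixel values.
def embed(pixels: list[int], bits: list[int]) -> list[int]:
    """Overwrite the least significant bit of the first len(bits) pixels."""
    marked = pixels.copy()
    for i, bit in enumerate(bits):
        marked[i] = (marked[i] & ~1) | bit
    return marked


def extract(pixels: list[int], n: int) -> list[int]:
    """Read back the first n hidden bits."""
    return [p & 1 for p in pixels[:n]]


pixels = [200, 13, 77, 145, 90, 33, 250, 8]  # stand-in for grayscale pixel values
mark = [1, 0, 1, 1]                          # a short identifier, e.g. "AI-generated"
print(extract(embed(pixels, mark), len(mark)))  # -> [1, 0, 1, 1]
```

Because the changes are confined to the lowest bit of each value, the marked image looks identical to the human eye, while software can still recover the hidden flag.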

Even so, several open-source image-generating AI models, such as Stable Diffusion, are freely available on the internet. That allows programmers to build their own versions and leave out any authentication marks.

Back in 2022, Intel developed a platform, FakeCatcher, capable of assessing whether an image is authentic by analyzing the changes in skin color caused by blood flow. The trend, however, is for images created by artificial intelligence to become ever more realistic and, consequently, harder to identify.
