How to identify if an image has been manipulated

Amid suspicions that an official photo of Kate Middleton had been "manipulated", here are some clues that can help you check whether an image has been altered or even generated by AI.

In a world where images can be digitally altered with just a few clicks, or even generated entirely from scratch using artificial intelligence (AI), it is becoming increasingly difficult to trust what our eyes see. The techniques used to manipulate images are so sophisticated that we have entered the era of hyper-realistic fakes. Such images can spread misinformation and can even influence public opinion in important events such as elections.

After a photo of the Princess of Wales, Kate Middleton, alongside her children was withdrawn by news agencies over concerns that it had been "manipulated", the issue gained more attention than ever. But is there anything we can do to identify whether an image has been altered or generated by artificial intelligence?

Reflections and shadows

Abnormal lighting is often a sign that a photo has been altered. Check, for example, the points of light in people's eyes: the light source is often reflected there. If the size and color of those reflections don't match the setting, or if they look different in each eye, you may have reason to be suspicious. The way people and objects appear on reflective surfaces in an image can also provide clues.

The shadows of objects may not line up if the image was assembled from several photos, although keep in mind that some pictures are taken with more than one light source. It is also worth observing how the light falls on a person's face: if the Sun is behind them, for example, their ears may appear red. Artificial intelligence can likewise produce incongruous lighting and shadows, but as the algorithms improve, AI-generated faces are increasingly perceived as more real than human faces.

Hands and ears

Another revealing approach is to look for features that are difficult to replicate. Currently, artificial intelligence still struggles with hands and ears, changing their shapes, proportions and even the number of fingers. These are the same features that often give artists a headache, but as other aspects of AI-generated people become hyper-realistic, these inaccuracies create an uncanny-valley effect that feels unnatural to our eyes.

See the metadata

Hidden in the code of digital images is information that can help identify a fake photo. Every time a digital camera takes a picture, metadata is written to the image file. The timestamps, for example, led to questions about whether then-US President Donald Trump was actually working at the White House the day after revealing that he had contracted Covid-19 in October 2020.

[Image caption: Two photos of Trump at the military hospital where he was admitted with Covid-19 (EPA/JOYCE N BOGHOSIA/THE WHITE HOUSE)]

Image noise

Each digital camera sensor has small manufacturing flaws that produce unique errors, leaving a kind of "fingerprint" on its photos. That "fingerprint" is associated with one specific camera and can help identify areas of a photograph that have been manipulated. The grain of an AI-generated image can also look peculiar.
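For the metadata check described above, a small script can expose the EXIF data embedded in an image file. The sketch below is a minimal illustration, assuming the Pillow library is installed; "photo.jpg" is a hypothetical file name, and dedicated EXIF viewers do the same job.

```python
# Minimal EXIF inspection sketch (assumes Pillow is installed;
# "photo.jpg" is a hypothetical file name used for illustration).
from PIL import Image
from PIL.ExifTags import TAGS

exif = Image.open("photo.jpg").getexif()

if not exif:
    print("No EXIF metadata found (it may have been stripped).")

for tag_id, value in exif.items():
    tag = TAGS.get(tag_id, tag_id)  # map numeric tag IDs to readable names
    if tag in ("Make", "Model", "Software", "DateTime"):
        print(f"{tag}: {value}")
```

A timestamp that contradicts the claimed moment, or a "Software" field naming an editing program, is a clue rather than proof: many websites and messaging apps strip or rewrite metadata when an image is uploaded.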
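As for the "fingerprint" idea in the image-noise section, one rough way to illustrate the principle is to compare noise strength across different regions of a photo. The sketch below is a toy heuristic, not a forensic sensor-fingerprint (PRNU) analysis; it assumes NumPy, SciPy and Pillow are available and reuses the hypothetical "photo.jpg".

```python
# Toy noise-consistency check (not a forensic PRNU analysis).
# Assumes NumPy, SciPy and Pillow are installed; "photo.jpg" is hypothetical.
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

# Load the image as a grayscale float array.
img = np.asarray(Image.open("photo.jpg").convert("L"), dtype=float)

# High-pass "noise residual": the image minus a blurred copy of itself.
residual = img - gaussian_filter(img, sigma=2)

# Report noise strength on a coarse grid; outlier patches merit a closer look.
size = 64
h, w = residual.shape
for y in range(0, h - size + 1, size):
    for x in range(0, w - size + 1, size):
        patch = residual[y:y + size, x:x + size]
        print(f"patch at ({y}, {x}): noise std = {patch.std():.2f}")
```

A patch whose noise level differs sharply from its neighbours may have been pasted in, smoothed or synthesised, although changes in texture and exposure can produce similar differences, so this is at best a starting point.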
Verification tools

Tech companies like Google have released image verification tools that can help people identify images generated by artificial intelligence. Facebook and Instagram have begun labeling AI-generated images coming from Meta's own systems, and they plan to do the same for images generated by other companies' AI tools.

Read the original version of this report (in English) on the BBC Future website.
