New deepfake-spotting tool is 94% effective – here's the secret of its success

Spot the Deepfake

Question: Which of these people are fake? Answer: All of them. Credit: www.thispersondoesnotexist.com and the University at Buffalo

A deepfake-spotting tool developed at the University at Buffalo proved 94% effective with portrait-like photos, according to the study.

Computer scientists at the University at Buffalo have developed a tool that automatically identifies deepfake photos by analyzing the light reflections in the eyes.

The tool proved 94% effective with portrait-like photographs in experiments described in a paper accepted at the IEEE International Conference on Acoustics, Speech and Signal Processing, to be held in Toronto, Canada, in June.

"The cornea is almost like a perfect hemisphere and is very reflective," says the paper's lead author, Siwei Lyu, PhD, SUNY Empire Innovation Professor in the Department of Computer Science and Engineering. "So, anything that comes to the eye from light emitted by those sources will have an image on the cornea."

"The two eyes should have very similar reflective patterns because they're seeing the same thing. It's something we typically don't notice when we look at a face," said Lyu, a multimedia and digital forensics expert who has testified before Congress.

The paper, "Exposing GAN-Generated Faces Using Inconsistent Corneal Specular Highlights," is available on the open-access repository arXiv.

Co-authors are Shu Hu, a third-year PhD student in computer science and research assistant in UB's Media Forensic Lab, and Yuezun Li, PhD, a former senior research scientist at UB who is now a lecturer at the Ocean University of China's Center on Artificial Intelligence.

Tool maps face, examines tiny differences in eyes

When we look at something, an image of what we see is reflected in our eyes. In a real photo or video, the reflections in the two eyes generally have the same shape and color.

However, most images generated by artificial intelligence – including generative adversarial network (GAN) images – fail to render this accurately or consistently, possibly because many photos are combined to generate the fake image.

Lyu's tool exploits this shortcoming by spotting tiny deviations in the light reflected in the eyes of deepfake images.
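The paper scores this consistency by comparing the bright, mirror-like (specular) highlight regions on the two corneas. As a rough illustration of the idea (a minimal sketch, not the authors' implementation), one can binarize the brightest pixels in each eye crop and measure their overlap with an intersection-over-union (IoU) score; the 0.9 brightness threshold below is an assumed value:

```python
import numpy as np

def highlight_mask(eye_patch: np.ndarray, thresh: float = 0.9) -> np.ndarray:
    """Binarize an eye crop: pixels near peak brightness are treated as
    specular (mirror-like) corneal highlights. Threshold is illustrative."""
    gray = eye_patch.mean(axis=-1) if eye_patch.ndim == 3 else eye_patch
    return gray >= thresh * gray.max()

def highlight_iou(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """Intersection-over-union of the two eyes' highlight masks.
    Near-identical reflections (a real photo) score close to 1.0;
    mismatched GAN reflections score lower. Crops must be the same size."""
    a, b = highlight_mask(left_eye), highlight_mask(right_eye)
    union = np.logical_or(a, b).sum()
    return float(np.logical_and(a, b).sum() / union) if union else 0.0
```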

To conduct the experiments, the research team obtained real images from Flickr-Faces-HQ, as well as fake images from www.thispersondoesnotexist.com, a repository of AI-generated faces that look lifelike but are indeed fake. All the photos were portrait-like (real people and fake people looking directly into the camera with good lighting) and 1,024 by 1,024 pixels.
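For readers who want to assemble a similar informal test set, a fresh GAN-generated face can be fetched programmatically. This is a hedged sketch that assumes the site still serves a 1,024-by-1,024 JPEG at its root URL:

```python
# Fetch one AI-generated face from thispersondoesnotexist.com, which is
# assumed to return a new GAN face image on every request.
import requests
from io import BytesIO
from PIL import Image

resp = requests.get("https://thispersondoesnotexist.com", timeout=10)
resp.raise_for_status()
Image.open(BytesIO(resp.content)).convert("RGB").save("fake_face.jpg")
```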

The tool works by first mapping out each face. It then examines the eyes, then the eyeballs, and finally the light reflected in each eyeball. It compares, in incredible detail, potential differences in the shape, intensity and other characteristics of the reflected light.
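A minimal sketch of such a pipeline (under stated assumptions, not the authors' code) could use dlib's stock face detector and 68-point landmark model to map the face and locate the eyes, then score the reflections with the highlight_iou helper from the sketch above; the model file name and the 0.5 decision threshold are illustrative assumptions:

```python
import dlib
import numpy as np
from PIL import Image

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

LEFT_EYE, RIGHT_EYE = range(36, 42), range(42, 48)  # 68-point landmark indices

def eye_crop(img: np.ndarray, shape, idxs) -> np.ndarray:
    """Crop a tight bounding box around one eye from its landmark points."""
    pts = np.array([(shape.part(i).x, shape.part(i).y) for i in idxs])
    (x0, y0), (x1, y1) = pts.min(axis=0), pts.max(axis=0)
    return img[y0:y1 + 1, x0:x1 + 1]

def same_size(a: np.ndarray, b: np.ndarray):
    """Resize both eye crops to a common size so the masks align."""
    w, h = min(a.shape[1], b.shape[1]), min(a.shape[0], b.shape[0])
    fit = lambda x: np.asarray(Image.fromarray(x).resize((w, h)))
    return fit(a), fit(b)

def looks_real(path: str, iou_thresh: float = 0.5) -> bool:
    """Map the face, locate both eyes, then compare their reflections
    using highlight_iou from the earlier sketch."""
    img = np.asarray(Image.open(path).convert("RGB"))
    faces = detector(img, 1)
    if not faces:
        raise ValueError("no face found in image")
    shape = predictor(img, faces[0])
    left, right = same_size(eye_crop(img, shape, LEFT_EYE),
                            eye_crop(img, shape, RIGHT_EYE))
    return highlight_iou(left, right) >= iou_thresh
```

Note that, as the limitations below explain, such a comparison only makes sense for well-lit, camera-facing portraits where both corneas and their reflections are clearly visible.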

"Deepfake-o-meter," and a commitment to fighting deepfakes

While promising, Lyu’s technique has limitations.

First, you need a reflected source of light. Also, mismatched light reflections in the eyes can be fixed during image editing. Additionally, the technique looks only at the individual pixels reflected in the eyes – not at the shape of each eye, the shapes within the eyes, or the nature of what is reflected in them.

Finally, the technique compares the reflections in both eyes; if the subject is missing an eye, or the eye is not visible, the technique fails.

Lyu, who has researched machine learning and computer vision projects for more than 20 years, previously showed that deepfake videos tend to have inconsistent or nonexistent blink rates for the video subjects.

In addition to testifying before Congress, he assisted Facebook in 2020 with its global deepfake detection challenge, and he helped create the "Deepfake-o-meter," an online resource that helps the average person test whether a video they have watched is, in fact, a deepfake.

He says identifying deepfakes is increasingly important, especially given a hyper-partisan world full of race- and gender-related tensions and the dangers of disinformation – particularly violence.

"Unfortunately, a great deal of this kind of fake video was created for pornographic purposes, and it did a lot of psychological damage to the victims," Lyu said. "There's also the potential political impact: fake video showing politicians saying or doing something they shouldn't. That's bad."
