A new artificial intelligence tool provides a surprisingly simple way to detect whether what we are seeing is a deepfake or a real image: analyzing how light is reflected in the eyes.
The system was developed by computer researchers at the University of Buffalo. In tests with portrait-style photos, the tool was 94% effective at detecting deepfake images.
Corneal analysis
A deepfake (an English portmanteau of "deep learning" and "fake") is an artificial intelligence technique that produces convincing but fabricated videos or images of people, using unsupervised learning algorithms known as GANs (generative adversarial networks) together with existing videos or images. The human eye can often be fooled by them, which is why this new algorithm has been developed to identify them.
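For readers curious about the mechanics, the sketch below shows the adversarial training loop at the heart of a GAN, written in PyTorch. The network sizes, toy data, and hyperparameters are illustrative assumptions, not details of any particular deepfake system: a generator learns to produce images that a discriminator cannot distinguish from real ones.

```python
# Minimal GAN sketch (illustrative only): generator vs. discriminator.
import torch
import torch.nn as nn

latent_dim = 64
image_dim = 32 * 32  # flattened toy "image"

# Generator: maps random noise to a fake image
generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_dim), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks
discriminator = nn.Sequential(
    nn.Linear(image_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(16, image_dim)  # stand-in for a batch of real images

for step in range(100):
    # Train the discriminator to tell real from fake
    noise = torch.randn(16, latent_dim)
    fake_images = generator(noise).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(16, 1))
              + loss_fn(discriminator(fake_images), torch.zeros(16, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator
    noise = torch.randn(16, latent_dim)
    g_loss = loss_fn(discriminator(generator(noise)), torch.ones(16, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```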

The system exposes the forgeries by analyzing the corneas, which have a mirror-like surface and produce reflective patterns when illuminated.
In a photo of a real face taken with a camera, the reflections in the two eyes will be similar because both eyes are seeing the same scene. Deepfake images, however, generally fail to capture this similarity accurately, showing reflections with different geometric shapes or mismatched locations.

The artificial intelligence system looks for these discrepancies by mapping the face and analyzing the light reflected from each eyeball. It generates a score that serves as a similarity metric: the lower the score, the more likely the face is a deepfake.
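As a rough illustration of this kind of check (not the researchers' own code), the sketch below uses OpenCV to detect the two eyes, isolate the bright specular highlight in each, and compare the highlight masks with an IoU-style similarity score. The thresholds, crop size, and Haar-cascade detector are all assumptions made for the example.

```python
# Sketch: compare corneal reflections of the two eyes in a portrait photo.
import cv2
import numpy as np

def highlight_mask(eye_bgr, thresh=200):
    """Binary mask of the brightest pixels (the corneal reflection) in an eye crop."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return mask

def reflection_similarity(image_bgr):
    """Return an IoU-like score comparing the reflections of the two detected eyes."""
    cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    eyes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) < 2:
        return None  # need both eyes for a comparison

    # Keep the two largest detections, ordered left to right
    eyes = sorted(eyes, key=lambda e: e[2] * e[3], reverse=True)[:2]
    eyes = sorted(eyes, key=lambda e: e[0])

    size = (32, 32)  # normalize crops so the masks are comparable
    masks = []
    for (x, y, w, h) in eyes:
        crop = cv2.resize(image_bgr[y:y + h, x:x + w], size)
        masks.append(highlight_mask(crop) > 0)

    intersection = np.logical_and(*masks).sum()
    union = np.logical_or(*masks).sum()
    return intersection / union if union else 0.0

# Usage: a lower score means the two reflections disagree, hinting at a deepfake.
# score = reflection_similarity(cv2.imread("portrait.jpg"))
```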
Of course, the system has limitations: it is effective only on portrait images. If the face in the image is not looking at the camera, the system is likely to produce false positives. And the most sophisticated deepfakes, where post-processing has been applied to remove discrepancies in the eye reflections, will escape the scrutiny of this new algorithm.