University of Leeds

12/04/2025 | Press release | Distributed by Public on 12/04/2025 15:01

Learning to spot AI-generated faces

Five minutes of training can significantly improve people's ability to identify fake faces created by artificial intelligence, new research shows.

Scientists from the Universities of Leeds, Reading, Greenwich and Lincoln tested 664 participants' ability to distinguish between real human faces and faces generated by computer software called StyleGAN3.

Without any training, super-recognisers (individuals who score significantly higher than average on face recognition tests) correctly identified fake faces 41% of the time, while participants with typical abilities scored just 31%.

Meanwhile, a separate set of participants who received a brief training procedure performed notably better: super-recognisers achieved 64% accuracy in detecting fake faces, while typical participants scored 51%.

AI-generated faces often display rendering mistakes, such as misaligned teeth, unusual hairlines, or misshapen or mismatched ears or earrings. During the training, participants were shown AI-generated images featuring these artefacts.

Dr Eilidh Noyes from the University of Leeds' School of Psychology, said: "AI images are increasingly easy to make and difficult to detect. They can be used for nefarious purposes, therefore it is crucial from a security standpoint that we are testing methods to detect artificial images.

"Our study shows that the use of super-recognisers - people with very high face recognition ability - combined with training may help in the detection of AI faces."

The training benefited both groups equally, suggesting that super-recognisers may use different visual cues than typical observers when identifying synthetic faces, rather than simply being better at spotting rendering errors.

The research, published in Royal Society Open Science, tested faces created by StyleGAN3, the most advanced system available when the study was conducted. This made the task considerably harder than in earlier research using older software, and participants in this study accordingly performed worse than those in previous studies. Future research will examine whether the training effects last over time and how super-recognisers' skills might complement artificial intelligence detection tools.

Dr Katie Gray, lead researcher at the University of Reading, said: "Computer-generated faces pose genuine security risks. They have been used to create fake social media profiles, bypass identity verification systems and create false documents. The faces produced by the latest generation of artificial intelligence software are extremely realistic. People often judge AI-generated faces as more realistic than actual human faces.

"Our training procedure is brief and easy to implement. The results suggest that combining this training with the natural abilities of super-recognisers could help tackle real-world problems, such as verifying identities online."

Media enquiries can be emailed to University of Leeds press officer Lauren Ballinger via [email protected].

Picture: Both real and AI-generated faces that formed part of the research. Further details about the faces are available in the paper.