Swansea University

11/06/2025 | Press release | Distributed by Public on 11/06/2025 09:55

Fake or the real thing? How AI can make it harder to trust the pictures we see

A new study has revealed that artificial intelligence can now generate images of real people that are virtually impossible to tell apart from genuine photographs.

Using AI models ChatGPT and DALL·E, a team of researchers from Swansea University, the University of Lincoln and Ariel University in Israel, created highly realistic images of both fictional and famous faces, including celebrities.

They found that participants were unable to reliably distinguish them from authentic photos, even when they were familiar with the person's appearance.

Across four separate experiments, the researchers found that adding comparison photos, or the participants' prior familiarity with the faces, provided only limited help.

The research has just been published in the journal Cognitive Research: Principles and Implications, and the team say their findings highlight a new level of "deepfake realism," showing that AI can now produce convincing fake images of real people which could erode trust in visual media.

Professor Jeremy Tree, from the School of Psychology, said: "Studies have shown that face images of fictional people generated using AI are indistinguishable from real photographs. But for this research we went further by generating synthetic images of real people.

"The fact that everyday AI tools can do this not only raises urgent concerns about misinformation and trust in visual media but also underlines the pressing need for reliable detection methods."

In one of the experiments, which involved participants from the US, Canada, the UK, Australia and New Zealand, subjects were shown a series of facial images, both real and artificially generated, and asked to identify which was which. The team say the fact that participants mistook the AI-generated novel faces for real photos indicated just how plausible they were.

In another experiment, participants were asked if they could tell genuine pictures of Hollywood stars such as Paul Rudd and Olivia Wilde from computer-generated versions. Again, the study's results showed just how difficult individuals can find it to spot the authentic version.

The researchers say AI's ability to produce novel/synthetic images of real people opens up a number of avenues for use and abuse. For instance, creators might generate images of a celebrity endorsing a certain product or political stance, which could influence public opinion of both the identity and the brand/organisation they are portrayed as supporting.

Professor Tree added: "This study shows that AI can create synthetic images of both new and known faces that most people can't tell apart from real photos. Familiarity with a face or having reference images didn't help much in spotting the fakes, which is why we urgently need to find new ways to detect them.

"While automated systems may eventually outperform humans at this task, for now, it's up to viewers to judge what's real."

Read the paper in full

