Can you spot an AI generated face? Put your skills to the test with our quiz

News Room

Technology is rapidly outpacing the human eye's ability to tell real from fake.

Artificial intelligence can, in the blink of an eye, fabricate images and videos that look completely real to the average person.

However, most people think they are good at distinguishing between what’s real and what’s not, so we invite you to put your skills to the test.

Below are six pairs of images. In each case one is real and the other has been created by AI. Test yourself by trying to pick out which is which; answers are at the bottom of this story.

Things to look out for when checking pictures, according to picture editors at The Post, include: Does a person look too 'polished' for the scenario around them? Is their face too symmetrical and perfect? Does their clothing have natural wrinkles, fabric textures and signs of wear? Are hair strands visible around their head?

John Villasenor, who teaches law and engineering at UCLA, told The Post he suggests looking for “inconsistencies in lighting … and details that don’t actually make sense.”

Extensive testing published in the UK journal Royal Society Open Science showed people with ordinary abilities correctly told AI-generated faces from real ones only 31 percent of the time.

The study also found subjects were overconfident, certain they had spotted fakes far more often than they actually had.

Anatoly Kvitnitsky, CEO and founder of AI or Not, works with corporations to find images that are computer generated. He says the giveaways aren't always in the face itself.

“For the human eye, you should look for things in the background. AI is really good at creating a believable main subject, but in the background people’s faces can look blurred. In video, you’ll see people standing still.

“If there is a car in the background, look at the license plate. It may not be perfect. The subheading of a sign can be gibberish. AI currently does a quick job on the background,” he told The Post.

However, it may not stay that way for long. In the earlier days of AI, people could easily spot distorted teeth, glasses or accessories that merged into skin, or ears that didn't attach properly, but the technology quickly moved beyond that. Kvitnitsky says today's generators even render pores and skin imperfections.

“There’s an arms race between the creators and the detectors,” added Villasenor. “The creation techniques get better and then the detector techniques try to catch up.”

Kvitnitsky's company works with clients such as insurance companies to verify that images of damaged vehicles, ID cards and checks are authentic.

The technology he uses analyzes images at the pixel level to see if they were taken with a real camera.

Images created with publicly available programs such as Google Gemini, Adobe Firefly and ChatGPT are the easiest to catch, as they carry metadata embedded in the file that records which image generator created them and when.
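As a rough illustration of what checking that embedded metadata can look like, here is a minimal Python sketch that scans an image file's raw bytes for provenance markers some generators write into their output, such as the C2PA (Content Credentials) manifest label and the IPTC digital-source value for AI-generated media. The marker list is an illustrative assumption, not a description of how AI or Not or any detection company actually works:

```python
# Minimal sketch: scan an image file's raw bytes for provenance markers
# that some public generators embed in metadata. The marker list below is
# illustrative; real detection (e.g., full C2PA manifest validation) is
# far more involved than a substring search.

AI_MARKERS = [
    b"c2pa",                     # C2PA / Content Credentials manifest label
    b"trainedAlgorithmicMedia",  # IPTC DigitalSourceType value for AI output
]

def find_ai_markers(data: bytes) -> list[str]:
    """Return the known generator markers present in the raw file bytes."""
    return [m.decode() for m in AI_MARKERS if m in data]

def looks_ai_generated(data: bytes) -> bool:
    """True if any known generator marker appears in the file bytes."""
    return bool(find_ai_markers(data))
```

Note that the absence of such markers proves nothing, since metadata is trivially stripped when an image is screenshotted or re-saved; that gap is one reason pixel-level analysis exists at all.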

But for anyone who isn't a computer, the odds are increasingly stacked against them. The UK study, published in Nov. 2025, found even so-called super-recognizers, who have a natural knack for facial recognition, had only a slight edge, correctly identifying the real faces just 54 percent of the time.

The flood of computer-generated images across advertising and social media is also, subconsciously, making people used to seeing AI faces.

When it is misused, the tech can have heavy real-world consequences. In 2024, a finance worker in Hong Kong was lured onto a video conference, apparently with his company's chief financial officer and other colleagues. After being convinced to transfer $25 million out of the company, he found out the CFO and the other workers on the call had been generated by AI. The request was counterfeit, but the money sent was very real.

Kvitnitsky sees the problem as having serious long-run consequences for society as a whole.

“The biggest fear that I have about AI is people doubting what they see and what they hear,” he said.

"We can see something real and then assume it's fake. That throws fuel on our biases. If we just don't want to believe something, we can just dismiss it as AI."

Another real-world example emerged over the last week, following the killing of drug lord Nemesio "El Mencho" Oseguera Cervantes by Mexican authorities.

One day later the Internet lit up with pictures appearing to show a model named Maria Julissa sitting next to him, along with claims they had been romantically involved.

Julissa denied ever knowing or having met El Mencho, but it's easy to see the risks inherent in being falsely associated with a cartel narco-terrorist.

As the lines continue to blur, Kvitnitsky himself acknowledges that, under the right circumstances, even he could be fooled by something AI-generated.

"I have three boys and I am the CEO of an AI detection company, but if I was sent a picture of one of my sons [appearing to show something had happened to them], my emotions would make me forget all of these things I know," he admitted. "I would just react to the visual cue."

ANSWERS: 1) B, 2) B, 3) A, 4) B, 5) A, 6) A
