We live in a world where AI can fashion faces that do not exist, faces that are nearly, if not completely, indistinguishable from real ones. Through deep fake technology, AI can synthesize audio, images, and video from nothing. This article focuses on facial reenactment for the synthesis of faces.
How AI Face Generation Works
Generative adversarial networks (GANs) are responsible for the synthesis of AI content. A GAN pairs two neural networks: a generator and a discriminator.
The first step falls to the generator, which starts from a random noise input. From there, it iteratively learns to create a realistic face. The discriminator plays the role of distinguishing the newly synthesized face from a real one.
If the discriminator can tell the two apart, it penalizes the generator and the process begins again; a few iterations later, the generator has learned to synthesize more realistic faces.
The goal is to create a face that bamboozles even the discriminator into thinking it’s a real face. Only then does the synthesized face qualify as an AI-generated avatar.
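The adversarial loop described above can be sketched in miniature. The toy below is an assumption-laden illustration, not a face generator: the "real data" is a 1-D Gaussian rather than images, the generator is a linear map of noise, and the discriminator is a logistic regression, but the alternating penalize-and-improve structure is the same one GANs use.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def sample_real(n):
    # Toy stand-in for real faces: samples from a Gaussian around 4.
    return rng.normal(4.0, 1.0, n)

# Generator: linear map of noise z. Discriminator: logistic regression.
w_g, b_g = 1.0, 0.0
w_d, b_d = 0.1, 0.0
lr, batch = 0.05, 64

for step in range(3000):
    z = rng.normal(0.0, 1.0, batch)
    x_fake = w_g * z + b_g
    x_real = sample_real(batch)

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real = sigmoid(w_d * x_real + b_d)
    d_fake = sigmoid(w_d * x_fake + b_d)
    w_d -= lr * np.mean((d_real - 1.0) * x_real + d_fake * x_fake)
    b_d -= lr * np.mean((d_real - 1.0) + d_fake)

    # Generator step (non-saturating loss): push d(fake) toward 1,
    # i.e. learn to fool the discriminator.
    d_fake = sigmoid(w_d * x_fake + b_d)
    grad_x = (d_fake - 1.0) * w_d  # gradient through the discriminator
    w_g -= lr * np.mean(grad_x * z)
    b_g -= lr * np.mean(grad_x)

# After training, generated samples drift toward the real distribution.
gen_mean = float(np.mean(w_g * rng.normal(0.0, 1.0, 2000) + b_g))
print(f"generated mean: {gen_mean:.2f} (real mean is 4.0)")
```

In a real face GAN both networks are deep convolutional models and the data points are images, but the generator is trained by exactly this signal: the gradient of the discriminator's verdict.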
Detecting Fake Faces
At the moment, AI-generated faces are good. Telling them apart from real faces is very tricky. When faced with the task, look out for these features:
One quick way of identifying fake faces is how bad a hair day they are having. In AI-generated images, the hair will more often than not fall in clumps or random wisps around the shoulders.
GANs will also paint thick streaks across the forehead. What makes hair hard, even for the best AI algorithms, is how varied and detailed its features are. Even a simple afro can be a headache for the best GAN on the market.
Even in all its glory, AI finds it hard to manage long-distance dependencies in images. This is the case for things that come in pairs, like eyes or accessories, earrings in particular. Each one may match something in the data set, yet the pair may not match within the generated image.
Don’t be surprised to see AI-generated avatars with heterochromia or, even worse, crossed eyes. Earrings also come in pairs, and ears themselves sometimes appear mismatched in height or size.
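The heterochromia check is simple enough to mechanize. The sketch below compares the mean color of two eye regions; the function name and the bounding boxes are hypothetical, and in practice you would get the boxes from a face-landmark detector, which is outside the scope of this sketch.

```python
import numpy as np

def eye_color_mismatch(image, left_box, right_box):
    """Distance between the mean RGB colors of two eye crops.

    image: HxWx3 array; each box: (top, bottom, left, right) pixel bounds.
    A large value hints at a heterochromia-style GAN artifact.
    """
    t, b, l, r = left_box
    left_mean = image[t:b, l:r].reshape(-1, 3).mean(axis=0)
    t, b, l, r = right_box
    right_mean = image[t:b, l:r].reshape(-1, 3).mean(axis=0)
    return float(np.linalg.norm(left_mean - right_mean))

# Demo on synthetic patches: two brown eyes, then one brown and one blue.
img = np.zeros((100, 100, 3))
img[20:30, 20:30] = [120, 80, 40]   # left eye, brown
img[20:30, 70:80] = [120, 80, 40]   # right eye, brown
matched = eye_color_mismatch(img, (20, 30, 20, 30), (20, 30, 70, 80))
img[20:30, 70:80] = [60, 100, 160]  # repaint the right eye blue
mismatched = eye_color_mismatch(img, (20, 30, 20, 30), (20, 30, 70, 80))
print(matched, mismatched)
```

Any threshold for "suspicious" would have to be tuned on real photos, since lighting alone shifts eye color between the two sides of a face.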
AI face-generating algorithms aren’t exactly geniuses with patterns, and patterns can throw them off a lot. This is why you are likely to see strange structures in the background of the images or strange clothing on the subject. I’ll let you in on a secret: always look at any text in the background. More often than not, it will be malformed.
Deep Fake Technology Flaws
Deep fake technology is near-perfect, and its output is sometimes impossible to tell apart from the real thing. It remains only near-perfect because of these shortcomings:
The goal of facial reenactment is to create near-realistic faces. The very first obstacle is matching skin tones perfectly. It is not uncommon to find AI faces sporting very unnatural skin tones (to be fair, perhaps they were at the beach for a fresh tan over the weekend).
That said, it is paramount that you pick a source and a target with identical or near-identical skin tones to make the result believable.
I don’t know how many deep fake videos you have watched, but one thing is clear: the faces are usually unusually blurry. This is because the new face needs to blend with the rest of the image, and the filters applied to blend it end up blurring it.
Another possible reason behind blurry AI-generated avatars is the use of low-resolution source pictures, perhaps due to low budgets.
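Blur itself is measurable. A standard image-forensics heuristic (not specific to deep fakes, but commonly applied to them) is the variance of the Laplacian: sharp regions produce high variance, heavily filtered face regions produce low variance. The sketch below assumes a grayscale image given as a 2-D NumPy array.

```python
import numpy as np

def blur_score(gray):
    """Variance of a discrete 5-point Laplacian; low scores suggest blur."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

# Demo: a noisy (sharp) patch vs. the same patch repeatedly box-blurred.
rng = np.random.default_rng(1)
sharp = rng.random((64, 64))
smooth = sharp.copy()
for _ in range(10):  # ten passes of a 3x3 box blur
    smooth[1:-1, 1:-1] = (smooth[:-2, :-2] + smooth[:-2, 1:-1] + smooth[:-2, 2:]
                          + smooth[1:-1, :-2] + smooth[1:-1, 1:-1] + smooth[1:-1, 2:]
                          + smooth[2:, :-2] + smooth[2:, 1:-1] + smooth[2:, 2:]) / 9.0
print(blur_score(sharp), blur_score(smooth))
```

A practical detector would compare the score of the face region against the rest of the frame: a face noticeably blurrier than its surroundings is the telltale sign described above.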