
Deepfake detection

Author: Vítor Bernardo

A deepfake is the manipulation or artificial generation (synthesis) of audio, video or other forms of digital content to make it appear that a particular event occurred, or that someone behaved or looked differently than they actually did.

The manipulation of photos and videos, once done manually with graphical editing tools, has evolved significantly with the use of artificial intelligence and, in particular, deep learning.

Among the various deepfake creation methods, Generative Adversarial Networks (GANs) are a technology that has shown remarkable results, creating manipulations that are difficult to distinguish from original content. GANs are machine learning (ML) models in which two neural networks - a generator and a discriminator - compete with each other: the generator tries to produce output that the discriminator cannot distinguish from real samples, which, in deepfake generation, drives it towards ever more realistic results.
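To make the adversarial setup concrete, the sketch below shows the two alternating updates of a GAN training loop. It is a minimal, illustrative PyTorch example on toy vector data; a real deepfake generator would operate on images with convolutional networks, and all dimensions and hyperparameters here are assumptions.

```python
import torch
import torch.nn as nn

# Toy setup (assumption): "real" samples are 64-dimensional vectors;
# a real deepfake GAN would generate images instead.
LATENT, DATA = 16, 64

generator = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(), nn.Linear(128, DATA))
discriminator = nn.Sequential(nn.Linear(DATA, 128), nn.ReLU(), nn.Linear(128, 1))

loss = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, DATA) + 3.0          # stand-in for real data
    fake = generator(torch.randn(32, LATENT))   # synthesised samples

    # Discriminator learns to tell real (label 1) from fake (label 0).
    d_loss = loss(discriminator(real), torch.ones(32, 1)) + \
             loss(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Generator learns to make the discriminator label its output as real.
    g_loss = loss(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The key point is the opposing objectives: the discriminator is rewarded for separating real from fake, while the generator is rewarded for fooling it, so each improvement on one side forces the other to improve.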

In addition to the question of content manipulation, there is also the concern that deepfake content could promote disinformation and have a negative impact on people's opinions, with potential political and social consequences. Nude or otherwise offensive depictions of people, hoaxes and financial fraud can also be produced through video manipulation.

Additionally, the ability to impersonate other people, by swapping faces in photos and videos, increases the risk of unauthorised access to services or premises.

Several approaches have been proposed to automatically detect fake videos of people, including eyebrow change detection, eye blink and movement detection, detection of inconsistent corneal specular highlights (i.e., checking whether the eyes' reflections of ambient lighting are consistent), and even heartbeat detection by capturing slight skin colour changes in the video.
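As an illustration of one such cue, the sketch below implements the eye aspect ratio (EAR) measure often used for blink detection. It assumes that six landmarks per eye have already been extracted for each frame (for example with dlib or MediaPipe); the threshold and minimum run length are illustrative values, not validated settings.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio (EAR) over six eye landmarks, as in
    Soukupová & Čech (2016): a low EAR indicates a closed eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])   # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])    # horizontal distance
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.21, min_frames=2) -> int:
    """Count blinks as runs of consecutive frames below the EAR threshold.
    Threshold and run length are illustrative and must be tuned per dataset."""
    blinks, run = 0, 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    return blinks
```

An unnaturally low (or zero) blink count over a clip of known length can then be flagged as suspicious, since early deepfake models reproduced blinking poorly.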

Other techniques have focused on detecting unique traces (fingerprints) that deepfake tools leave in the digital content; these traces are commonly referred to as 'artifacts'.

Classification algorithms are trained on large collections of real and fake audio-visual samples to identify these artifacts.
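A minimal sketch of this supervised setup is shown below, using scikit-learn. The random feature matrix merely stands in for artifact features extracted from real and fake samples (real detectors typically learn such features with deep networks rather than hand-crafting them):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical feature matrix: one row per sample, columns being artifact
# features (e.g., colour statistics, frequency-domain energy, face-boundary
# sharpness). Random values here only stand in for extracted features.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 32))
y = rng.integers(0, 2, size=2000)   # labels: 0 = real, 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test),
                            target_names=["real", "fake"]))
```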

Existing deepfake detectors rely mainly on the signatures of known deepfake content, using ML techniques that include unsupervised clustering and supervised classification, and are therefore less likely to detect unknown deepfake manipulations. Moreover, deepfake detection technology is still not able to provide sufficient assurance: current detectors face challenges, particularly due to incomplete, sparse, and noisy data in their training phases.

Positive impacts foreseen on data protection:

  • Prevention of the impact of deepfakes on individuals

Within the limitations noted above, deepfake detection can be used to identify content that has been manipulated for malicious purposes. Detecting and tagging fake videos and images allows individuals and organisations to take action to stop the spread of potentially damaging misinformation. This can safeguard the reputation and privacy of individuals and prevent the dissemination of fake news, fraud, or cyberbullying.

  • Protection of personal data by preventing deepfake-based attacks

Deepfake manipulations can be used to create convincing impersonations of individuals, potentially leading to identity theft or unauthorised access to sensitive data. As fake videos and audio are used in various forms of cyberattacks, including spear phishing and social engineering, having robust detection mechanisms in place can prevent unauthorised access to sensitive information.

  • Improvement of data accuracy by applying data validation

Deepfake detection can also be used for data validation. The financial, healthcare, and legal sectors are examples where data accuracy is paramount; there, deepfake detection tools can help verify the authenticity of documents, audio recordings, or video footage, ensuring that decisions and actions are based on reliable information.

Negative impacts foreseen on data protection:

  • Lack of fairness and trust

Research indicates that the datasets commonly used to train deepfake detection models lack diversity and, more importantly, that the resulting models can be strongly biased. Existing audio and visual deepfake datasets have been observed to contain imbalanced data across ethnic origins and genders. In some cases, images of people with large lips or noses, heavier builds, or darker skin led to more detection errors than images without these attributes. There is a risk that the application of biased models in the real world could discriminate against certain individuals.
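One way to surface such bias is to compare the detector's error rates across demographic groups in the test set. The sketch below does this with pandas on a hypothetical evaluation log; the column names and values are illustrative:

```python
import pandas as pd

# Hypothetical evaluation log: per-sample ground truth, detector prediction,
# and a demographic attribute for each test subject.
df = pd.DataFrame({
    "label":      [1, 1, 0, 0, 1, 0, 1, 0],   # 1 = fake, 0 = real
    "prediction": [1, 0, 0, 1, 1, 0, 0, 0],
    "group":      ["A", "A", "A", "B", "B", "B", "B", "A"],
})

def error_rates(g: pd.DataFrame) -> pd.Series:
    fakes, reals = g[g.label == 1], g[g.label == 0]
    return pd.Series({
        "false_negative_rate": (fakes.prediction == 0).mean(),  # missed fakes
        "false_positive_rate": (reals.prediction == 1).mean(),  # real flagged fake
    })

# Large gaps between groups indicate uneven detector performance.
print(df.groupby("group").apply(error_rates))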

  • Lack of transparency and fairness in detection methods

Existing deepfake detection approaches are typically designed to perform batch analysis over a large dataset. However, when these techniques are employed in the field, for example by journalists or law enforcement, there may only be a small set of videos available for analysis.

In these situations, an explanation of the numerical score expressing the likelihood that the content is a deepfake may be necessary for the analysis to be trusted before publication or use in possible legal proceedings. However, most deepfake detection methods and tools lack such an explanation, especially those based on deep learning, due to their black-box nature.
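Model-agnostic explanation techniques can partially mitigate this. The sketch below implements occlusion sensitivity: it greys out one image patch at a time and records how much the detector's 'fake' score drops, producing a heat map of the regions the score depends on. The `score_fn` interface is a hypothetical stand-in for any detector, and the image dimensions are assumed to be divisible by the patch size:

```python
import numpy as np

def occlusion_map(image: np.ndarray, score_fn, patch: int = 16) -> np.ndarray:
    """Occlusion sensitivity: grey out each patch in turn and record how much
    the detector's 'fake' score drops. High values mark regions the score
    depends on. `score_fn` maps an HxWxC image to a scalar fake-probability
    (hypothetical interface); H and W are assumed divisible by `patch`."""
    h, w = image.shape[:2]
    base = score_fn(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = image.mean()  # grey patch
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat
```

Overlaying the heat map on the frame gives an analyst at least a visual indication of what drove the score, even when the detector itself is a black box.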

  • Lack of accuracy

Presently, deepfake detection is typically formulated as a binary classification problem, where each sample is either real or fake. In real-world scenarios, however, videos can be altered in ways other than deepfake manipulation (for instance, in post-production), so content not detected as manipulated is not guaranteed to be genuine. Additionally, fake images and videos are usually shared on social networks and therefore undergo heavy transformations, such as compression, resizing, and added noise (a process known as media washing). This can result in a large number of false negatives (i.e., undetected fakes).
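A detector's robustness to media washing can be probed by re-scoring content after simulated sharing transformations. The sketch below uses Pillow and NumPy to apply illustrative downscaling, JPEG re-encoding, and noise; the parameter values are assumptions, not measured social-network settings, and the input is assumed to be an RGB image:

```python
import io
import numpy as np
from PIL import Image

def media_wash(img: Image.Image, quality=35, scale=0.5, noise_std=8.0) -> Image.Image:
    """Simulate the transformations an image suffers when shared on social
    networks: downscale/upscale, re-encode as low-quality JPEG, add noise.
    All parameter values are illustrative assumptions."""
    w, h = img.size
    img = img.resize((int(w * scale), int(h * scale))).resize((w, h))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)        # lossy re-encoding
    buf.seek(0)
    arr = np.asarray(Image.open(buf), dtype=np.float32)
    arr += np.random.normal(0.0, noise_std, arr.shape)   # sensor-like noise
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```

Comparing a detector's score on `img` with its score on `media_wash(img)` exposes how quickly its confidence degrades on washed content.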

Suggestions for further reading:

  • Masood, M., Nawaz, M., Malik, K. M., Javed, A., Irtaza, A., & Malik, H. (2023). Deepfakes generation and detection: State-of-the-art, open challenges, countermeasures, and way forward. Applied Intelligence, 53(4), 3974-4026.
  • Patil, K., Kale, S., Dhokey, J., & Gulhane, A. (2023). Deepfake detection using biological features: a survey. arXiv preprint arXiv:2301.05819.
  • Trinh, L., & Liu, Y. (2021). An examination of fairness of AI models for deepfake detection. arXiv preprint arXiv:2105.00558.