Deepfake Detection


The rapid progress in video manipulation has reached a point where it raises significant concerns about its implications for society. Face manipulations are of special interest because faces play a central role in human communication: a person's face can emphasize a message or even convey a message in its own right. A manipulated video could be used to spread false information or fake news. It is therefore important to develop tools that help verify the authenticity of a video.

To support research in this field, we have created a dataset of facial forgeries, called FaceForensics++, in collaboration with the Technical University of Munich (TUM). The dataset enables researchers to train deep-learning-based approaches in a supervised fashion. It contains manipulations created with four state-of-the-art methods: Face2Face, FaceSwap, DeepFakes, and NeuralTextures. We also examined how realistic state-of-the-art image manipulations are, and how difficult it is to detect them, either automatically or by human observers.

 

 

Datasets

The publicly available video dataset of facial forgeries, FaceForensics++, consists of:
  • 1000 original videos (more than 500K images) downloaded from the YouTube8M dataset;
  • 1000 videos manipulated by Face2Face;
  • 1000 videos manipulated by FaceSwap;
  • 1000 videos manipulated by DeepFakes;
  • 1000 videos manipulated by NeuralTextures.

In total, we considered four face manipulation approaches: two computer graphics-based approaches (Face2Face and FaceSwap) and two learning-based approaches (DeepFakes and NeuralTextures).

[Figure: examples of original and manipulated videos from the dataset]

At the top are two original videos, while the second row shows the four manipulated videos. Face2Face and NeuralTextures perform facial reenactment: the expressions of the source video are transferred to the target video while the identity of the target person is retained. FaceSwap and DeepFakes are instead face-swapping methods that replace the face in the target video with the face from the source video. For more details, see the original paper.

If you would like to download the FaceForensics++ dataset, please fill out this form and, once your request is accepted, we will send you the link to our download script. We are also hosting the DeepFakes Detection Dataset, which includes a variety of high-quality scenes with multiple actors that have been manipulated using DeepFakes. The dataset was donated by Google/Jigsaw to support the community effort on detecting manipulated faces. See this page for more details.
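
Once a local copy has been obtained with the download script, the videos can be enumerated and labelled for supervised training. Below is a minimal sketch that assumes the directory layout produced by the download script (an original_sequences folder and one manipulated_sequences subfolder per method, at a chosen compression level such as c23); the folder names, method spellings, and compression levels are assumptions that may need adjusting to your local copy.

```python
from pathlib import Path

# Assumed layout of a local FaceForensics++ copy (as produced by the download
# script); adjust ROOT, METHODS and COMPRESSION to match your own copy.
ROOT = Path("FaceForensics++")
METHODS = ["Face2Face", "FaceSwap", "Deepfakes", "NeuralTextures"]
COMPRESSION = "c23"  # e.g. c0 (raw), c23 (light) or c40 (strong) compression

def list_videos(root: Path, compression: str):
    """Return (video_path, label) pairs with 0 = original, 1 = manipulated."""
    samples = []
    originals = root / "original_sequences" / "youtube" / compression / "videos"
    samples += [(v, 0) for v in sorted(originals.glob("*.mp4"))]
    for method in METHODS:
        manipulated = root / "manipulated_sequences" / method / compression / "videos"
        samples += [(v, 1) for v in sorted(manipulated.glob("*.mp4"))]
    return samples

if __name__ == "__main__":
    for video, label in list_videos(ROOT, COMPRESSION)[:5]:
        print(label, video)
```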

 

Results 

In the paper, we also propose a strategy to automatically detect face manipulations based on a convolutional neural network. The strategy follows the pipeline shown in the figure below: the input image is processed by a robust face-tracking method; this information is used to extract the region of the image covered by the face; and this region is fed into a learned classification network that outputs the prediction.

[Figure: detection pipeline]
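
As a rough illustration of this crop-then-classify pipeline, the sketch below detects a face with dlib, enlarges the crop around it, and feeds it to a binary real/fake classifier. The face detector, the placeholder network (a plain ResNet-18), the crop enlargement factor, and the input size are illustrative assumptions, not the exact components used in the paper.

```python
import cv2
import dlib
import torch
import torch.nn.functional as F
from torchvision import transforms
from torchvision.models import resnet18

detector = dlib.get_frontal_face_detector()   # stand-in for the paper's face tracker
model = resnet18(num_classes=2)               # placeholder real/fake classifier
model.eval()

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((299, 299)),            # assumed network input size
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

def classify_frame(frame_bgr, scale=1.3):
    """Detect the largest face, enlarge the crop and return P(manipulated), or None."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    faces = detector(rgb, 1)
    if not faces:
        return None
    face = max(faces, key=lambda r: r.width() * r.height())
    # Enlarge the detected box so the crop covers the whole face region.
    cx = (face.left() + face.right()) // 2
    cy = (face.top() + face.bottom()) // 2
    half = int(max(face.width(), face.height()) * scale / 2)
    crop = rgb[max(cy - half, 0):cy + half, max(cx - half, 0):cx + half]
    x = preprocess(crop).unsqueeze(0)
    with torch.no_grad():
        return F.softmax(model(x), dim=1)[0, 1].item()
```

In practice, the per-frame scores can be aggregated over the whole video, for example by averaging, to obtain a single decision per video.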

The proposed method clearly outperforms human observers, even in the presence of strong compression. For more details, see the original paper.

[Figure: detection result on a manipulated video]

Result of the proposed method on a manipulated video in which the journalist on the left was modified with the DeepFakes method. The face detected as manipulated is marked with a red box, while the face detected as pristine is marked with a green box.

 

Results on online videos 

We also tried our approach on some deepfake videos posted online. The results below were obtained with a network trained on the FaceForensics++ dataset and fine-tuned on only about 60 online videos.

[Figures: detection results on online videos]

The faces detected as manipulated are marked with a red box, while the faces detected as pristine are marked with a green box.
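
Purely as an illustration of the adaptation step mentioned above (fine-tuning a FaceForensics++-trained network on a handful of additional videos), a minimal sketch could look as follows; the dataset object, learning rate, batch size, and number of epochs are placeholders, not the settings used for these results.

```python
import torch
from torch.utils.data import DataLoader

def finetune(model, extra_faces, epochs=5, lr=1e-4):
    """Fine-tune a detector on face crops from a few additional videos.

    `extra_faces` is a hypothetical Dataset yielding (image_tensor, label)
    pairs with 0 = pristine, 1 = manipulated.
    """
    loader = DataLoader(extra_faces, batch_size=32, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # placeholder hyper-parameters
    criterion = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```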

 

Benchmark

To standardize the evaluation of detection methods, we propose an automated benchmark for facial manipulation detection. The benchmark is publicly available and contains a hidden test set. If you are interested in testing your approach on unseen data, visit it here.
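
A benchmark submission boils down to running a trained detector on the provided test images and uploading the per-image predictions. The sketch below writes a JSON file mapping each image name to a real/fake label; the helper name, file extension, and exact submission format are assumptions for illustration, so please follow the format specified on the benchmark website.

```python
import json
from pathlib import Path

import cv2

def make_submission(image_dir, classify_frame, out_path="submission.json"):
    """Run the detector on every benchmark image and save the predicted labels."""
    predictions = {}
    for image_path in sorted(Path(image_dir).glob("*.png")):
        frame = cv2.imread(str(image_path))
        prob_fake = classify_frame(frame)      # e.g. the pipeline sketch above
        is_fake = prob_fake is not None and prob_fake >= 0.5
        predictions[image_path.name] = "fake" if is_fake else "real"
    with open(out_path, "w") as f:
        json.dump(predictions, f, indent=2)
```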


Source Code & Contact 

For more information about our code, visit the GitHub repository or contact us by email.