Deepfake Detection
To support research in this field, in collaboration with the Technical University of Munich (TUM), we have created a dataset of facial forgeries called FaceForensics++. The dataset enables researchers to train deep-learning-based approaches in a supervised fashion. It contains manipulations created with four state-of-the-art methods: Face2Face, FaceSwap, DeepFakes, and NeuralTextures. We also examined the realism of state-of-the-art image manipulations and how difficult they are to detect, either automatically or by humans.
Datasets
The publicly available video dataset of facial forgeries, FaceForensics++, covers four face manipulation approaches: two computer-graphics-based approaches (Face2Face and FaceSwap) and two learning-based approaches (DeepFakes and NeuralTextures).
In the figure, the top row shows two original videos, while the second row shows the four manipulated videos. Face2Face and NeuralTextures are facial reenactment methods: the expressions of the source video are transferred to the target video while retaining the identity of the target person. FaceSwap and DeepFakes are instead face-swapping methods that replace the face in the target video with the face in the source video. For more detail, see the original paper. If you would like to download the FaceForensics++ datasets, please fill out this form and, once accepted, we will send you the link to our download script. We are also hosting the DeepFakes Detection Dataset, which includes high-quality scenes of multiple actors that have been manipulated using DeepFakes. This dataset was donated by Google/Jigsaw to support the community effort on detecting manipulated faces. See this page for more details.
Results
In the paper, we also proposed a strategy to automatically detect face manipulations based on Convolutional Neural Networks. The strategy follows the pipeline shown in the figure below: the input image is processed by a robust face tracking method; we use this information to extract the region of the image covered by the face; this region is then fed into a learned classification network that outputs the prediction. The proposed method, even in the presence of strong compression, clearly outperforms human observers. For more detail, see the original paper. The figure shows the result of the proposed method on a manipulated video in which the journalist on the left was modified with the DeepFakes method. A face detected as manipulated is outlined with a red box, while a face detected as pristine is outlined with a green box.
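The detection pipeline described above (track face, crop the face region, classify the crop) can be sketched in PyTorch. This is a minimal illustration under stated assumptions, not the authors' implementation: the face tracker is stubbed out with a fixed bounding box, and `ForgeryClassifier` is a small placeholder CNN rather than the learned network used in the paper.

```python
import torch
import torch.nn as nn

def detect_face(frame):
    """Stub for the face tracker: returns a hypothetical bounding box
    (x, y, w, h). A real pipeline uses a robust face tracking method."""
    h, w = frame.shape[1:]  # frame is (C, H, W)
    return (w // 4, h // 4, w // 2, h // 2)

def crop_face(frame, bbox):
    """Extract the face region that is fed into the classifier."""
    x, y, bw, bh = bbox
    return frame[:, y:y + bh, x:x + bw]

class ForgeryClassifier(nn.Module):
    """Placeholder binary classifier (pristine vs. manipulated)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, 2)  # logits: [pristine, manipulated]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

frame = torch.rand(3, 128, 128)                 # one video frame (C, H, W)
face = crop_face(frame, detect_face(frame))     # face region only
logits = ForgeryClassifier()(face.unsqueeze(0)) # batch of one crop
pred = "manipulated" if logits.argmax(1).item() == 1 else "pristine"
```

Cropping before classification matters: the network sees only the face region, where the manipulation artifacts are, instead of the full frame.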
Results on online videos
We tried our approach on some deepfake videos posted online. The results were obtained with a network trained on the FaceForensics++ dataset and finetuned on only about 60 online videos.
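Finetuning a pretrained detector on a small set of new videos, as described above, is commonly done by freezing most of the network and updating only the final layers. The sketch below illustrates that scheme on a toy model with random stand-in data; the architecture, layer split, and hyperparameters are all assumptions for illustration, not the authors' setup.

```python
import torch
import torch.nn as nn

# Toy stand-in for a pretrained detector: "backbone" layers plus a 2-way head.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 32), nn.ReLU(),  # assumed "backbone"
    nn.Linear(32, 2),                       # classification head
)

# Freeze everything except the final head: a simple finetuning scheme
# suited to a small target set (here, the ~60 online videos).
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

# Random stand-ins for face crops and labels from the new videos.
faces = torch.rand(8, 3, 64, 64)
labels = torch.randint(0, 2, (8,))  # 0 = pristine, 1 = manipulated

opt = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()
for _ in range(5):  # a few finetuning steps
    opt.zero_grad()
    loss = loss_fn(model(faces), labels)
    loss.backward()
    opt.step()
```

Freezing the backbone keeps the features learned on the large source dataset intact and limits overfitting to the few dozen new videos.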
Faces detected as manipulated by the method are outlined with a red box, while faces detected as pristine are outlined with a green box.
Benchmark
To standardize the evaluation of detection methods, we propose an automated benchmark for facial manipulation detection. The benchmark is publicly available and contains a hidden test set. If you are interested in testing your approach on unseen data, visit it here.
Source Code & Contact
For more information about our code, visit the GitHub repository or contact us by email.