COMPUTERS & ELECTRICAL ENGINEERING, cilt.117, 2024 (SCI-Expanded)
In recent years, the rapid development of deep learning tools has made the production of fake video content widespread. Such fake content has the potential to cause serious social problems, so detecting it is of great importance. For this purpose, we present a new method for deepfake video detection. In most existing studies, the image frames selected from a video for use in the detection model are chosen randomly. This randomness can cause frames that could improve detection performance to be missed. The proposed method differs from other studies in the literature by selecting frames from the videos with the help of golden ratio information on the face. The method was developed using three feature extraction networks (VGG19, EfficientNet B0, and EfficientNet B4) and two capsule network models (CapsuleNet and ArCapsNet). Performance was evaluated on Celeb-DF and DFDC-P, two currently challenging deepfake video datasets, and the results were improved by fusing the best-performing models. On the Celeb-DF dataset, 93.63% ACC and 99.14% AUC were obtained; on the DFDC-P dataset, 82.84% ACC and 89.08% AUC.
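The abstract does not detail how golden ratio information guides frame selection; as an illustrative sketch only (not the authors' implementation), one plausible reading is to score each frame by how close a facial proportion, here a hypothetical face length-to-width measurement from an upstream landmark detector, comes to the golden ratio φ, and keep the best-matching frames instead of random ones:

```python
import math

GOLDEN_RATIO = (1 + math.sqrt(5)) / 2  # phi, approximately 1.618

def golden_ratio_score(face_length, face_width):
    """Absolute deviation of the face's length/width ratio from phi.

    Lower scores mean the face in this frame is closer to the golden ratio.
    The two measurements are assumed to come from a facial-landmark
    detector; that upstream step is not shown here.
    """
    if face_width <= 0:
        return float("inf")  # no valid face measurement in this frame
    return abs(face_length / face_width - GOLDEN_RATIO)

def select_frames(frame_measurements, k):
    """Return indices of the k frames whose faces best match phi.

    frame_measurements: one (face_length, face_width) pair per frame.
    """
    ranked = sorted(
        range(len(frame_measurements)),
        key=lambda i: golden_ratio_score(*frame_measurements[i]),
    )
    return ranked[:k]

# Toy example: four frames; frame 2 (ratio 1.62) is closest to phi.
measurements = [(20.0, 10.0), (18.0, 12.0), (16.2, 10.0), (15.0, 11.0)]
print(select_frames(measurements, 2))  # -> [2, 1]
```

The ratio used here (overall face length to width) is an assumption for illustration; the paper's method may combine several facial golden ratio measurements.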