
CAST: Cross-Attentive Spatio-Temporal feature fusion for deepfake detection

2026-03-13

Aryan Thakre, Omkar Nagwekar, Vedang Talekar, Aparna Santra Biswas


Abstract

Deepfakes have emerged as a significant threat to digital media authenticity, increasing the need for advanced detection techniques that can identify subtle, time-dependent manipulations. CNNs are effective at capturing spatial artifacts, while Transformers excel at modeling temporal inconsistencies. However, many existing CNN-Transformer models process spatial and temporal features independently. In particular, attention-based methods often apply separate attention mechanisms to spatial and temporal features and combine them with naive operations such as averaging, addition, or concatenation, limiting the depth of spatio-temporal interaction. To address this challenge, we propose a unified CAST model that leverages cross-attention to fuse spatial and temporal features in a more integrated manner. Our approach allows temporal features to dynamically attend to relevant spatial regions, enhancing the model's ability to detect fine-grained, time-evolving artifacts such as flickering eyes or warped lips. This design enables more precise localization and deeper contextual understanding, leading to improved performance across diverse and challenging scenarios. We evaluate our model on the FaceForensics++, Celeb-DF, DeepfakeDetection, and Deepfake Detection Challenge (DFDC) datasets in both intra- and cross-dataset settings. Our model achieves strong performance with an Area Under the Curve (AUC) of 99.49% and an accuracy of 97.57% in intra-dataset evaluations. In cross-dataset testing, it achieves AUC scores of 93.31% and 81.25% on the unseen DeepfakeDetection and DFDC datasets, respectively. These results highlight the effectiveness of cross-attention-based feature fusion in enhancing the robustness of deepfake video detection.
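The fusion mechanism described above — temporal features acting as queries that attend over spatial features — can be sketched as a standard scaled dot-product cross-attention. This is a minimal NumPy illustration, not the authors' implementation: the shapes, the identity projections standing in for learned query/key/value weights, and the feature dimensions are all assumptions for exposition.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(temporal, spatial):
    """Temporal queries attend over spatial keys/values.

    temporal: (T, d) per-frame temporal features (assumed shape)
    spatial:  (S, d) per-region spatial features (assumed shape)
    Returns (T, d) spatially-informed temporal features.
    """
    d = temporal.shape[-1]
    # Illustrative: identity maps stand in for learned W_q, W_k, W_v projections.
    Q, K, V = temporal, spatial, spatial
    scores = Q @ K.T / np.sqrt(d)       # (T, S) frame-to-region affinities
    weights = softmax(scores, axis=-1)  # each frame's attention over spatial regions
    return weights @ V                  # fused spatio-temporal representation

# Hypothetical feature maps: 8 frames, 7x7=49 spatial patches, 64-dim features.
rng = np.random.default_rng(0)
temporal = rng.standard_normal((8, 64))
spatial = rng.standard_normal((49, 64))
fused = cross_attention(temporal, spatial)
```

In this formulation each time step receives a convex combination of spatial region features, which is what lets temporal cues (e.g. a flicker across frames) be tied back to specific facial regions; the paper's actual projection layers, heads, and dimensions are not specified in the abstract.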
