Low-rank Orthogonal Subspace Intervention for Generalizable Face Forgery Detection
Chi Wang, Xinjue Hu, Boyu Wang, Ziwen He, Zhangjie Fu
Abstract
The generalization problem remains a key challenge in face forgery detection. This paper investigates why vanilla CLIP fails to generalize: in "real vs. fake" detection, the few dominant principal components of the feature space primarily encode forgery-irrelevant information rather than authentic forgery traces. This irrelevant information inevitably induces spurious correlations that severely limit detector performance, a phenomenon we term "low-rank spurious bias". To address it, we propose a low-rank representation-space intervention paradigm, named SeLop, from the perspective of causal representation learning. SeLop gathers the forgery-irrelevant spurious-correlation factors into a low-rank subspace and cuts off the statistical shortcut between that subspace and the label, thereby aligning representation learning with authentic forgery traces. Specifically, we decompose spurious-correlation features into a low-rank subspace via orthogonal low-rank projection, remove this subspace from the original representation, and train on its orthogonal complement to capture forgery-related features. This low-rank projection removal effectively eliminates spurious-correlation factors, ensuring that classification decisions rest on authentic forgery cues. With only 0.39M trainable parameters, our method achieves state-of-the-art performance across several benchmarks, demonstrating strong robustness and generalization.
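The core operation described above, removing a low-rank subspace and keeping its orthogonal complement, can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the authors' implementation: the dimensions `d` and `r`, and the way the basis `U` is obtained, are assumptions for the example.

```python
import numpy as np

# Hypothetical sketch of low-rank projection removal: project features onto
# a low-rank subspace spanned by an orthonormal basis U, then keep only the
# orthogonal complement. d, r, U, and feats are illustrative placeholders.

rng = np.random.default_rng(0)
d, r = 768, 8  # feature dimension and subspace rank (assumed values)

# Orthonormal basis for the low-rank "spurious" subspace (via QR factorization);
# in the paper this subspace would be learned, here it is random for the demo.
U, _ = np.linalg.qr(rng.standard_normal((d, r)))  # U: (d, r), U.T @ U = I_r

feats = rng.standard_normal((4, d))  # a batch of CLIP-like features

# Remove the low-rank component: x_clean = x - U (U^T x)
proj = feats @ U @ U.T    # component lying inside the subspace
clean = feats - proj      # orthogonal complement

# The cleaned features are orthogonal to every basis direction in U
print(np.abs(clean @ U).max() < 1e-8)
```

Classification decisions would then be made on `clean` rather than `feats`, so any statistical shortcut carried by directions in `U` cannot influence the label prediction.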