SOTAVerified

Robust Scene Change Detection Using Visual Foundation Models and Cross-Attention Mechanisms

2024-09-25 · Code Available

Chun-Jung Lin, Sourav Garg, Tat-Jun Chin, Feras Dayoub


Abstract

We present a novel method for scene change detection that leverages the robust feature extraction capabilities of a visual foundation model, DINOv2, and integrates full-image cross-attention to address key challenges such as varying lighting, seasonal changes, and viewpoint differences. To effectively learn correspondences and mis-correspondences between an image pair for the change detection task, we propose to (a) "freeze" the backbone in order to retain the generality of dense foundation features, and (b) employ "full-image" cross-attention to better handle viewpoint variations between the image pair. We evaluate our approach on two benchmark datasets, VL-CMU-CD and PSCD, along with their viewpoint-varied versions. Our experiments demonstrate significant improvements in F1-score, particularly in scenarios involving geometric changes between image pairs. The results indicate our method's superior generalization over existing state-of-the-art approaches: it is robust to photometric and geometric variations and generalizes better overall when fine-tuned to new environments. Detailed ablation studies further validate the contribution of each component in our architecture. Source code will be made publicly available upon acceptance.
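The abstract describes two key design choices: a frozen DINOv2 backbone that supplies general dense features, and full-image cross-attention in which every token of one image attends to all tokens of the other. The sketch below illustrates that combination in PyTorch; it is a minimal illustration of the idea, not the authors' implementation, and the module name, dimensions, and head design are assumptions.

```python
# Hypothetical sketch (not the authors' code) of:
# (a) a frozen backbone providing dense features, and
# (b) "full-image" cross-attention between the feature maps of an image pair.
import torch
import torch.nn as nn


class CrossAttentionChangeHead(nn.Module):
    def __init__(self, dim=384, heads=6, num_classes=2):
        super().__init__()
        # tokens of image 0 attend to ALL tokens of image 1 (full-image attention)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Conv2d(dim * 2, num_classes, kernel_size=1)

    def forward(self, f0, f1):
        # f0, f1: (B, C, H, W) dense features from the frozen backbone
        b, c, h, w = f0.shape
        t0 = f0.flatten(2).transpose(1, 2)  # (B, H*W, C)
        t1 = f1.flatten(2).transpose(1, 2)
        a0, _ = self.attn(t0, t1, t1)       # cross-attend image 0 -> image 1
        fused = torch.cat([t0, a0], dim=-1)                 # (B, H*W, 2C)
        fused = fused.transpose(1, 2).reshape(b, 2 * c, h, w)
        return self.head(fused)             # per-pixel change logits


# Freezing the backbone keeps the dense foundation features general
# (backbone choice here is an assumption for illustration):
# backbone = torch.hub.load("facebookresearch/dinov2", "dinov2_vits14")
# for p in backbone.parameters():
#     p.requires_grad = False
```

Because the attention is computed over all spatial positions of the second image rather than a local window, corresponding regions can still be matched when the two views are not pixel-aligned, which is the viewpoint-robustness property the abstract highlights.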

Tasks

Benchmark Results

Dataset | Model | Metric | Claimed | Verified | Status
Unaligned-VL-CMU-CD (neighbor distance 2) | Robust-Scene-Change-Detection (Diff-View Augmentation) | F1-score | 0.78 | — | Unverified
Unaligned-VL-CMU-CD (neighbor distance 2) | Robust-Scene-Change-Detection | F1-score | 0.74 | — | Unverified

Reproductions