SOTAVerified

LD-DETR: Loop Decoder DEtection TRansformer for Video Moment Retrieval and Highlight Detection

2025-01-18 · Code Available

Pengcheng Zhao, Zhixian He, Fuwei Zhang, Shujin Lin, Fan Zhou


Abstract

Video Moment Retrieval and Highlight Detection aim to find the content in a video that corresponds to a text query. Existing models usually first align video and text features with contrastive learning, then fuse and extract multimodal information, and finally decode the multimodal information with a Transformer Decoder. However, existing methods face several issues: (1) overlapping semantic information between different samples in the dataset hinders the model's multimodal alignment performance; (2) existing models cannot efficiently extract local features of the video; (3) the Transformer Decoder used by existing models cannot adequately decode multimodal features. To address these issues, we propose LD-DETR, a model for Video Moment Retrieval and Highlight Detection. Specifically, we first distill the similarity matrix into the identity matrix to mitigate the impact of overlapping semantic information. Then, we design a method that enables convolutional layers to extract multimodal local features more efficiently. Finally, we feed the output of the Transformer Decoder back into itself to decode multimodal information more adequately. We evaluated LD-DETR on four public benchmarks and conducted extensive experiments to demonstrate the superiority and effectiveness of our approach. Our model outperforms state-of-the-art models on the QVHighlights, Charades-STA and TACoS datasets. Our code is available at https://github.com/qingchen239/ld-detr.
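Two of the mechanisms named in the abstract lend themselves to a short sketch: distilling the in-batch video-text similarity matrix toward the identity matrix, and feeding the Transformer Decoder's output back into itself. The PyTorch code below is a minimal sketch under those assumptions only; all names (`similarity_distillation_loss`, `LoopDecoder`, `temperature`, `loops`) are hypothetical, and this is not the authors' implementation (see the linked repository for the real code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def similarity_distillation_loss(video_emb, text_emb, temperature=0.07):
    # (B, D) embeddings -> (B, B) cosine similarity matrix.
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    sim = v @ t.T / temperature
    # "Distill the similarity matrix into the identity matrix": the soft
    # target is the identity, so each sample should match only its own
    # pair, suppressing overlapping semantics between different samples.
    target = torch.eye(sim.size(0), device=sim.device)
    loss_v2t = -(target * F.log_softmax(sim, dim=1)).sum(dim=1).mean()
    loss_t2v = -(target * F.log_softmax(sim.T, dim=1)).sum(dim=1).mean()
    return (loss_v2t + loss_t2v) / 2

class LoopDecoder(nn.Module):
    # Hypothetical "loop" idea: run the decoder several times, using the
    # previous pass's output as the next pass's query embeddings, so the
    # multimodal memory is decoded more than once.
    def __init__(self, d_model=256, nhead=8, num_layers=2, loops=3):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model, nhead, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers)
        self.loops = loops

    def forward(self, queries, memory):
        out = queries  # (B, num_queries, d_model) learned moment queries
        for _ in range(self.loops):
            out = self.decoder(out, memory)  # output becomes the new query
        return out

if __name__ == "__main__":
    # Smoke test with random tensors: scalar loss and (2, 10, 256) output.
    loss = similarity_distillation_loss(torch.randn(8, 256), torch.randn(8, 256))
    out = LoopDecoder()(torch.randn(2, 10, 256), torch.randn(2, 75, 256))
    print(loss.item(), out.shape)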

Tasks

Video Moment Retrieval · Highlight Detection

Benchmark Results

| Dataset      | Model   | Metric      | Claimed | Verified | Status     |
|--------------|---------|-------------|---------|----------|------------|
| Charades-STA | LD-DETR | R@1 IoU=0.5 | 62.58   |          | Unverified |
| QVHighlights | LD-DETR | mAP         | 46.41   |          | Unverified |
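For context on the R@1 IoU=0.5 metric above: a query counts as a hit when the top-ranked predicted moment overlaps the ground-truth span with a temporal IoU of at least 0.5. The sketch below uses hypothetical helper names and is not the benchmarks' official evaluation code.

```python
def temporal_iou(pred, gt):
    # IoU between two (start, end) spans in seconds.
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_1(top1_preds, gts, threshold=0.5):
    # Fraction of queries whose top-1 moment reaches the IoU threshold.
    hits = sum(temporal_iou(p, g) >= threshold for p, g in zip(top1_preds, gts))
    return hits / len(gts)
```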

Reproductions

None yet. Be the first to reproduce this paper.