SOTAVerified

Uni-Sign: Toward Unified Sign Language Understanding at Scale

2025-01-25 · Code Available

Zecheng Li, Wengang Zhou, Weichao Zhao, Kepeng Wu, Hezhen Hu, Houqiang Li


Abstract

Sign language pre-training has gained increasing attention for its ability to enhance performance across various sign language understanding (SLU) tasks. However, existing methods often suffer from a gap between pre-training and fine-tuning, leading to suboptimal results. To address this, we propose Uni-Sign, a unified pre-training framework that eliminates the gap between pre-training and downstream SLU tasks through a large-scale generative pre-training strategy and a novel fine-tuning paradigm. First, we introduce CSL-News, a large-scale Chinese Sign Language (CSL) dataset containing 1,985 hours of video paired with textual annotations, which enables effective large-scale pre-training. Second, Uni-Sign unifies SLU tasks by treating every downstream task as a single sign language translation (SLT) task during fine-tuning, ensuring seamless knowledge transfer between pre-training and fine-tuning. Furthermore, we incorporate a prior-guided fusion (PGF) module and a score-aware sampling strategy to efficiently fuse pose and RGB information, addressing keypoint inaccuracies and improving computational efficiency. Extensive experiments on multiple SLU benchmarks demonstrate that Uni-Sign achieves state-of-the-art performance across downstream SLU tasks. Dataset and code are available at github.com/ZechengLi19/Uni-Sign.
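The score-aware sampling idea in the abstract (routing only unreliably-posed frames to the expensive RGB branch) can be sketched as follows. This is a minimal illustration under assumed interfaces, not the authors' implementation: the function name, the per-frame confidence format, and the fixed frame budget are all assumptions.

```python
# Hypothetical sketch of score-aware sampling: frames whose keypoint
# detector is least confident are sent to the RGB branch; the rest are
# processed from pose alone. Names and the budget mechanism are assumed.

def score_aware_sample(frame_scores, budget):
    """Pick the `budget` frames with the lowest mean keypoint confidence.

    frame_scores: per-frame lists of keypoint confidences in [0, 1].
    Returns the chosen frame indices in temporal order.
    """
    means = [sum(s) / len(s) for s in frame_scores]
    # Rank frames from least to most confident pose estimates.
    ranked = sorted(range(len(means)), key=lambda i: means[i])
    return sorted(ranked[:budget])

# Example: frames 1 and 3 have noisy keypoints (e.g. occluded hands),
# so they get RGB support; the confident frames stay pose-only.
scores = [
    [0.90, 0.95, 0.92],
    [0.30, 0.40, 0.20],
    [0.80, 0.85, 0.90],
    [0.50, 0.45, 0.60],
]
print(score_aware_sample(scores, budget=2))  # -> [1, 3]
```

Selecting by detector confidence keeps the RGB branch's cost bounded by the budget while concentrating it on exactly the frames where pose keypoints are least trustworthy.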

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CSL-Daily | Uni-Sign | Word Error Rate (WER) | 26 | | Unverified |
| MSASL-1000 | Uni-Sign | P-I Top-1 Accuracy | 78.16 | | Unverified |
| WLASL100 | Uni-Sign | Top-1 Accuracy | 92.25 | | Unverified |
| WLASL-2000 | Uni-Sign | Top-1 Accuracy | 63.52 | | Unverified |
