
Improvement in Sign Language Translation Using Text CTC Alignment

2024-12-12 · Code Available

Sihan Tan, Taro Miyazaki, Nabeela Khan, Kazuhiro Nakadai


Abstract

Current sign language translation (SLT) approaches often rely on gloss-based supervision with Connectionist Temporal Classification (CTC), limiting their ability to handle non-monotonic alignments between sign language video and spoken text. In this work, we propose a novel method combining joint CTC/Attention and transfer learning. The joint CTC/Attention introduces hierarchical encoding and integrates CTC with the attention mechanism during decoding, effectively managing both monotonic and non-monotonic alignments. Meanwhile, transfer learning helps bridge the modality gap between vision and language in SLT. Experimental results on two widely adopted benchmarks, RWTH-PHOENIX-Weather 2014T and CSL-Daily, show that our method achieves results comparable to the state of the art and outperforms the pure-attention baseline. Additionally, this work opens the door to future research on gloss-free SLT using text-based CTC alignment.
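As background on the CTC component the abstract refers to: CTC scores a target sequence by marginalizing over all monotonic frame-to-label alignments via the forward (alpha) recursion. The sketch below is a minimal pure-Python illustration of that recursion, not the authors' implementation; the variable names and the per-frame `log_probs` layout are assumptions for the example.

```python
import math

NEG_INF = float("-inf")

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    xs = [x for x in xs if x > NEG_INF]
    if not xs:
        return NEG_INF
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def ctc_log_prob(log_probs, labels, blank=0):
    """Log-probability of `labels` under CTC, summing over all
    monotonic alignments with the forward (alpha) recursion.

    log_probs: T x V nested list of per-frame log-probabilities.
    labels: target label sequence without blanks.
    """
    # Extend the target with blanks: [b, l1, b, l2, b, ...]
    ext = [blank]
    for l in labels:
        ext += [l, blank]
    S = len(ext)

    # Initialization: an alignment may start on the leading blank
    # or directly on the first label.
    alpha = [NEG_INF] * S
    alpha[0] = log_probs[0][ext[0]]
    if S > 1:
        alpha[1] = log_probs[0][ext[1]]

    for t in range(1, len(log_probs)):
        new = [NEG_INF] * S
        for s in range(S):
            cands = [alpha[s]]                # stay on the same symbol
            if s >= 1:
                cands.append(alpha[s - 1])    # advance by one position
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                cands.append(alpha[s - 2])    # skip over a blank
            new[s] = logsumexp(cands) + log_probs[t][ext[s]]
        alpha = new

    # Valid alignments end on the last label or the trailing blank.
    return logsumexp(alpha[-2:])
```

In joint CTC/Attention systems (following Watanabe et al.'s hybrid framework, which this paper builds on), a loss of this form is typically interpolated with the attention decoder's cross-entropy, e.g. a hypothetical weighting `loss = w * ctc_loss + (1 - w) * attention_loss`, and the two scores are similarly combined during beam-search decoding.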
