SOTAVerified

Temporal Attentive Alignment for Video Domain Adaptation

2019-05-26 · Code Available

Min-Hung Chen, Zsolt Kira, Ghassan AlRegib


Abstract

Although various image-based domain adaptation (DA) techniques have been proposed in recent years, domain shift in videos is still not well-explored. Most previous works only evaluate performance on small-scale datasets which are saturated. Therefore, we first propose a larger-scale dataset with larger domain discrepancy: UCF-HMDB_full. Second, we investigate different DA integration methods for videos, and show that simultaneously aligning and learning temporal dynamics achieves effective alignment even without sophisticated DA methods. Finally, we propose Temporal Attentive Adversarial Adaptation Network (TA3N), which explicitly attends to the temporal dynamics using domain discrepancy for more effective domain alignment, achieving state-of-the-art performance on three video DA datasets. The code and data are released at http://github.com/cmhungsteve/TA3N.
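The key mechanism the abstract describes is attending to temporal dynamics in proportion to domain discrepancy. A minimal NumPy sketch of that idea is below: segments whose domain-discriminator prediction is confident (low entropy, i.e. large domain discrepancy) receive more attention before temporal pooling. The residual weighting form `1 + (1 - entropy)` follows the paper's description of domain attention, but this is an illustrative assumption, not the released implementation.

```python
import numpy as np

def domain_attention(frame_feats, domain_probs):
    """Pool per-segment features, attending more to segments with
    larger domain discrepancy.

    frame_feats:  (T, D) array of per-segment features.
    domain_probs: (T, C) array of domain-discriminator softmax outputs.
    Returns a (D,) pooled video-level feature.
    """
    eps = 1e-8
    # Entropy of the domain prediction, normalized to [0, 1].
    h = -(domain_probs * np.log(domain_probs + eps)).sum(axis=-1)
    h = h / np.log(domain_probs.shape[-1])
    # Residual attention: low entropy (confident domain prediction,
    # i.e. poorly aligned segment) gets weight close to 2; high
    # entropy (already aligned) stays near 1.
    w = 1.0 + (1.0 - h)
    attended = frame_feats * w[:, None]
    return attended.mean(axis=0)  # temporal pooling
```

For example, a segment with domain probabilities `[0.99, 0.01]` (easily told apart, so poorly aligned) is weighted roughly twice as heavily as one with `[0.5, 0.5]`.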


Benchmark Results

| Dataset               | Model | Metric   | Claimed | Verified | Status     |
|-----------------------|-------|----------|---------|----------|------------|
| HMDBfull-to-UCF       | TA3N  | Accuracy | 81.79   | —        | Unverified |
| HMDBsmall-to-UCF      | TA3N  | Accuracy | 99.47   | —        | Unverified |
| Olympic-to-HMDBsmall  | TA3N  | Accuracy | 92.92   | —        | Unverified |
| UCF-to-HMDBfull       | TA3N  | Accuracy | 78.33   | —        | Unverified |
| UCF-to-HMDBsmall      | TA3N  | Accuracy | 99.33   | —        | Unverified |
| UCF-to-Olympic        | TA3N  | Accuracy | 98.15   | —        | Unverified |

Reproductions