SOTAVerified

Revisiting Classifier: Transferring Vision-Language Models for Video Recognition

2022-07-04 · Code Available

Wenhao Wu, Zhun Sun, Wanli Ouyang


Abstract

Transferring knowledge from task-agnostic pre-trained deep models to downstream tasks is an important topic in computer vision research. Along with the growth of computational capacity, we now have open-source vision-language pre-trained models at large scales, in both model architecture and amount of data. In this study, we focus on transferring knowledge for video classification tasks. Conventional methods randomly initialize the linear classifier head for vision classification, but leave the use of the text encoder for downstream visual recognition tasks unexplored. In this paper, we revisit the role of the linear classifier and replace it with knowledge from the pre-trained model. We utilize the well-pretrained language model to generate good semantic targets for efficient transfer learning. Our empirical study shows that this method improves both the performance and the training speed of video classification, with a negligible change to the model. Our simple yet effective tuning paradigm achieves state-of-the-art performance and efficient training on various video recognition scenarios, i.e., zero-shot, few-shot, and general recognition. In particular, our paradigm achieves state-of-the-art accuracy of 87.8% on Kinetics-400, and also surpasses previous methods by 20–50% absolute top-1 accuracy under zero-shot and few-shot settings on five popular video datasets. Code and models can be found at https://github.com/whwu95/Text4Vis .
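The core idea in the abstract can be sketched in a few lines: instead of a randomly initialized linear classifier head, the classifier weights are fixed embeddings of the class names produced by a pre-trained text encoder, so classification reduces to matching a video feature against those semantic targets. This is a minimal illustrative sketch, not the authors' implementation; the toy embeddings, dimensions, and `classify` helper below are hypothetical stand-ins for real text-encoder outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes, dim = 4, 8

# Hypothetical stand-in for text-encoder outputs: one fixed, L2-normalized
# embedding per class name. In the paper's setting these would come from a
# vision-language model's text encoder rather than random vectors.
text_embeds = rng.normal(size=(num_classes, dim))
text_embeds /= np.linalg.norm(text_embeds, axis=1, keepdims=True)

def classify(video_feat: np.ndarray) -> int:
    """Return the class whose text embedding best matches the video feature.

    Cosine similarity between the normalized video feature and each fixed
    class embedding plays the role of the classifier logits.
    """
    video_feat = video_feat / np.linalg.norm(video_feat)
    logits = text_embeds @ video_feat
    return int(np.argmax(logits))

# A feature close to class 2's embedding is assigned class 2.
pred = classify(text_embeds[2] + 0.01 * rng.normal(size=dim))
```

Because the classifier weights are frozen text embeddings rather than learned parameters, only the visual side needs tuning, which is consistent with the abstract's claim of faster training with a negligible change to the model.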

Tasks

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| ActivityNet | Text4Vis (w/ ViT-L) | mAP | 96.9 | — | Unverified |
| UCF101 | Text4Vis | 3-fold Accuracy | 98.2 | — | Unverified |

Reproductions