
AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities

2022-11-12

Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu

Abstract

In this work, we present a conceptually simple and effective method to train a strong bilingual/multilingual multimodal representation model. Starting from the pre-trained multimodal representation model CLIP released by OpenAI, we replaced its text encoder with the pre-trained multilingual text encoder XLM-R and aligned language and image representations through a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluation on a wide range of tasks. We set new state-of-the-art performance on several tasks, including ImageNet-CN, Flickr30k-CN, COCO-CN, and XTD. Further, we obtain performance very close to CLIP's on almost all tasks, suggesting that one can simply alter the text encoder in CLIP to gain extended capabilities such as multilingual understanding. Our models and code are available at https://github.com/FlagAI-Open/FlagAI.
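The sketch below illustrates the two-stage schema described in the abstract. It is a minimal, hypothetical PyTorch illustration, not the authors' implementation: the encoders are stand-in linear layers over dummy features, and all names, dimensions, and the temperature value are assumptions. Stage one distills CLIP's text-embedding space into the multilingual student on parallel text; stage two fine-tunes the student with the standard CLIP-style contrastive loss on image-text pairs.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 512  # shared embedding dimension (assumption)

# Frozen teacher: stands in for CLIP's original (English) text encoder.
teacher = nn.Linear(768, D)
teacher.requires_grad_(False)

# Student: stands in for XLM-R plus a learned projection into CLIP's space.
student = nn.Linear(768, D)
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

# Stage 1: teacher learning. On parallel text, the student's embedding of a
# translation is regressed onto the teacher's embedding of the English source.
en_feats = torch.randn(32, 768)   # features of English sentences (dummy)
xx_feats = torch.randn(32, 768)   # features of their translations (dummy)
with torch.no_grad():
    target = teacher(en_feats)    # CLIP text-space targets
F.mse_loss(student(xx_feats), target).backward()
opt.step(); opt.zero_grad()

# Stage 2: contrastive learning on image-text pairs (CLIP-style InfoNCE).
img_emb = F.normalize(torch.randn(32, D), dim=-1)  # image-encoder output (dummy)
txt_emb = F.normalize(student(xx_feats), dim=-1)
logits = img_emb @ txt_emb.t() / 0.07              # temperature is an assumption
labels = torch.arange(32)
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels)) / 2
loss.backward()
opt.step(); opt.zero_grad()
```

In a real setup the image encoder stays frozen and the symmetric cross-entropy above is exactly the CLIP objective; only the text side is being re-learned.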

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| CN-ImageNet | AltCLIP | Accuracy (Private) | 59.6 | | Unverified |
| CN-ImageNet-A | AltCLIP | Accuracy (Private) | 58.5 | | Unverified |
| CN-ImageNet-R | AltCLIP | Accuracy (Private) | 79.9 | | Unverified |
| CN-ImageNet-Sketch | AltCLIP | Accuracy (Private) | 46.5 | | Unverified |
| CN-ImageNet V2 | AltCLIP | Accuracy (Private) | 50.9 | | Unverified |
| ImageNet | AltCLIP | Accuracy (Private) | 74.5 | | Unverified |
| ImageNet-A | AltCLIP | Accuracy (Private) | 69.5 | | Unverified |
| ImageNet-R | AltCLIP | Accuracy | 87.2 | | Unverified |
| ImageNet-Sketch | AltCLIP | Accuracy (Private) | 58.7 | | Unverified |
| ImageNet V2 | AltCLIP | Accuracy (Private) | 68.1 | | Unverified |
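For context, zero-shot accuracies of the kind claimed above are typically obtained by embedding class-name prompts with the text encoder and assigning each image to the nearest prompt. The sketch below illustrates that generic protocol over precomputed embeddings; it is an assumption-laden illustration, not the authors' or the site's evaluation code.

```python
import torch
import torch.nn.functional as F

def zero_shot_accuracy(image_emb: torch.Tensor,
                       class_emb: torch.Tensor,
                       labels: torch.Tensor) -> float:
    """Generic CLIP-style zero-shot evaluation (illustrative sketch).

    image_emb: (N, D) image embeddings
    class_emb: (C, D) text embeddings of class-name prompts
    labels:    (N,)   ground-truth class indices
    """
    image_emb = F.normalize(image_emb, dim=-1)
    class_emb = F.normalize(class_emb, dim=-1)
    preds = (image_emb @ class_emb.t()).argmax(dim=-1)  # nearest prompt per image
    return (preds == labels).float().mean().item()

# Toy usage with random tensors (purely illustrative).
acc = zero_shot_accuracy(torch.randn(100, 512),
                         torch.randn(10, 512),
                         torch.randint(0, 10, (100,)))
```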

Reproductions

No reproductions yet. Be the first to reproduce this paper.