SOTAVerified

CLIP4STR: A Simple Baseline for Scene Text Recognition with Pre-trained Vision-Language Model

2023-05-23 · Code Available

Shuai Zhao, Ruijie Quan, Linchao Zhu, Yi Yang


Abstract

Pre-trained vision-language models (VLMs) are the de-facto foundation models for various downstream tasks. However, scene text recognition methods still prefer backbones pre-trained on a single modality, namely, the visual modality, despite the potential of VLMs to serve as powerful scene text readers. For example, CLIP can robustly identify regular (horizontal) and irregular (rotated, curved, blurred, or occluded) text in images. With such merits, we transform CLIP into a scene text reader and introduce CLIP4STR, a simple yet effective STR method built upon image and text encoders of CLIP. It has two encoder-decoder branches: a visual branch and a cross-modal branch. The visual branch provides an initial prediction based on the visual feature, and the cross-modal branch refines this prediction by addressing the discrepancy between the visual feature and text semantics. To fully leverage the capabilities of both branches, we design a dual predict-and-refine decoding scheme for inference. We scale CLIP4STR in terms of the model size, pre-training data, and training data, achieving state-of-the-art performance on 13 STR benchmarks. Additionally, a comprehensive empirical study is provided to enhance the understanding of the adaptation of CLIP to STR. Our method establishes a simple yet strong baseline for future STR research with VLMs.
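The dual predict-and-refine decoding described in the abstract can be sketched in a few lines. Everything below is a hypothetical toy stand-in, not the paper's implementation: random weights replace CLIP's image and text encoders, and the two "branches" are plain matrix products. The point is only the control flow: the visual branch produces an initial character prediction from image features alone, and the cross-modal branch then refines it, conditioning on that initial prediction as text input.

```python
import random

# Toy sketch of CLIP4STR's dual predict-and-refine decoding.
# All weights and feature sizes here are made up for illustration;
# the real model uses CLIP encoders with transformer decoders.

CHARSET = "abcdefghijklmnopqrstuvwxyz"
FEAT_DIM, SEQ_LEN = 16, 5
rng = random.Random(0)

# Fixed random "weights" standing in for the two trained branches.
VIS_W = [[rng.gauss(0, 1) for _ in CHARSET] for _ in range(FEAT_DIM)]
TXT_W = {c: [rng.gauss(0, 1) for _ in CHARSET] for c in CHARSET}

def visual_branch(image_features):
    """Initial per-step character logits from visual features alone."""
    return [
        [sum(f * w for f, w in zip(feat, col)) for col in zip(*VIS_W)]
        for feat in image_features
    ]

def cross_modal_branch(image_features, text):
    """Refine: mix visual logits with a (toy) embedding of the initial text."""
    logits = visual_branch(image_features)
    return [
        [l + 0.1 * b for l, b in zip(row, TXT_W[ch])]
        for row, ch in zip(logits, text)
    ]

def decode(logits):
    """Greedy decoding: pick the highest-scoring character per step."""
    return "".join(max(zip(row, CHARSET))[1] for row in logits)

# Predict with the visual branch, then refine with the cross-modal branch.
image_features = [[rng.gauss(0, 1) for _ in range(FEAT_DIM)]
                  for _ in range(SEQ_LEN)]
initial = decode(visual_branch(image_features))
refined = decode(cross_modal_branch(image_features, initial))
print(initial, "->", refined)
```

In the paper's scheme the refinement step is what resolves discrepancies between visual evidence and text semantics; here that is reduced to a small additive bias so the two-stage structure stays visible.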

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| COCO-Text | CLIP4STR-L | 1:1 Accuracy | 81.9 | — | Unverified |
| COCO-Text | CLIP4STR-B | 1:1 Accuracy | 81.1 | — | Unverified |
| CUTE80 | CLIP4STR-L (DataComp-1B) | Accuracy | 99.7 | — | Unverified |
| CUTE80 | CLIP4STR-L | Accuracy | 99 | — | Unverified |
| CUTE80 | CLIP4STR-B | Accuracy | 99.3 | — | Unverified |
| HOST | CLIP4STR-L | 1:1 Accuracy | 82.7 | — | Unverified |
| HOST | CLIP4STR-B | 1:1 Accuracy | 79.8 | — | Unverified |
| IC19-Art | CLIP4STR-L | Accuracy (%) | 85.9 | — | Unverified |
| IC19-Art | CLIP4STR-B | Accuracy (%) | 85.8 | — | Unverified |
| IC19-Art | CLIP4STR-L (DataComp-1B) | Accuracy (%) | 86.4 | — | Unverified |
| ICDAR2013 | CLIP4STR-B | Accuracy | 98.3 | — | Unverified |
| ICDAR2013 | CLIP4STR-L (DataComp-1B) | Accuracy | 99 | — | Unverified |
| ICDAR2013 | CLIP4STR-L | Accuracy | 98.5 | — | Unverified |
| ICDAR2015 | CLIP4STR-L | Accuracy | 90.8 | — | Unverified |
| ICDAR2015 | CLIP4STR-B | Accuracy | 90.6 | — | Unverified |
| ICDAR2015 | CLIP4STR-L (DataComp-1B) | Accuracy | 91.4 | — | Unverified |
| IIIT5k | CLIP4STR-B (DataComp-1B) | Accuracy | 99.5 | — | Unverified |
| IIIT5k | CLIP4STR-L | Accuracy | 99.5 | — | Unverified |
| IIIT5k | CLIP4STR-L (DataComp-1B) | Accuracy | 99.6 | — | Unverified |
| IIIT5k | CLIP4STR-B | Accuracy | 99.2 | — | Unverified |
| SVT | CLIP4STR-L | Accuracy | 98.5 | — | Unverified |
| SVT | CLIP4STR-H (DFN-5B) | Accuracy | 99.1 | — | Unverified |
| SVT | CLIP4STR-B | Accuracy | 98.3 | — | Unverified |
| SVT | CLIP4STR-L (DataComp-1B) | Accuracy | 98.6 | — | Unverified |
| SVTP | CLIP4STR-L (DataComp-1B) | Accuracy | 98.1 | — | Unverified |
| SVTP | CLIP4STR-B | Accuracy | 97.2 | — | Unverified |
| SVTP | CLIP4STR-L | Accuracy | 97.4 | — | Unverified |
| Uber-Text | CLIP4STR-L (DataComp-1B) | Accuracy (%) | 92.2 | — | Unverified |
| Uber-Text | CLIP4STR-B | Accuracy (%) | 86.8 | — | Unverified |
| WOST | CLIP4STR-H (DFN-5B) | 1:1 Accuracy | 90.9 | — | Unverified |
| WOST | CLIP4STR-L (DataComp-1B) | 1:1 Accuracy | 90.6 | — | Unverified |
| WOST | CLIP4STR-L | 1:1 Accuracy | 88.8 | — | Unverified |
| WOST | CLIP4STR-B | 1:1 Accuracy | 87 | — | Unverified |
