SOTAVerified

Revisiting Classification Perspective on Scene Text Recognition

2021-02-22 · Code Available

Hongxiang Cai, Jun Sun, Yichao Xiong

Abstract

The prevalent perspectives on scene text recognition are sequence-to-sequence (seq2seq) and segmentation. However, the former consists of many components, which makes implementation and deployment complicated, while the latter requires character-level annotations, which are expensive. In this paper, we revisit the classification perspective, which models scene text recognition as an image classification problem. The classification perspective has a simple pipeline and needs only word-level annotations. We revive this perspective by devising a scene text recognition model named CSTR, which performs as well as methods from the other perspectives. The CSTR model consists of CPNet (classification perspective network) and SPPN (separated conv with global average pooling prediction network). CSTR is as simple as an image classification model such as ResNet (He et al., 2016), which makes it easy to implement and deploy. We demonstrate the effectiveness of the classification perspective on scene text recognition with extensive experiments. Furthermore, CSTR achieves nearly state-of-the-art performance on six public benchmarks covering both regular and irregular text. The code will be available at https://github.com/Media-Smart/vedastr.
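The abstract's core idea is that word recognition can be cast as plain image classification: a shared backbone feature map is pooled globally and fed to one independent classifier per character position, so no recurrent decoder or character-level masks are needed. A minimal numpy sketch of such a prediction head follows; the feature-map shape, class count, maximum word length, and blank symbol are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

NUM_CLASSES = 37      # 26 letters + 10 digits + 1 blank/padding symbol (assumed)
MAX_LEN = 25          # maximum word length handled by the head (assumed)
C, H, W = 512, 8, 25  # backbone feature-map shape (assumed)

# Backbone output for one image: a (C, H, W) feature map.
features = rng.standard_normal((C, H, W))

# Global average pooling collapses the spatial dims to a single C-vector,
# exactly as in standard image classification.
pooled = features.mean(axis=(1, 2))            # shape (C,)

# One independent linear classifier per character position: the word is
# predicted as MAX_LEN parallel classification problems.
weights = rng.standard_normal((MAX_LEN, NUM_CLASSES, C)) * 0.01
biases = np.zeros((MAX_LEN, NUM_CLASSES))
logits = weights @ pooled + biases             # shape (MAX_LEN, NUM_CLASSES)

# Decode: argmax per position; positions predicting the blank symbol
# (index 0 here, by assumption) would be dropped from the final string.
pred = logits.argmax(axis=1)                   # shape (MAX_LEN,)
```

Because every position is a plain softmax classifier over a pooled vector, training needs only word-level labels padded with the blank symbol, which is the simplicity the abstract emphasizes.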

Benchmark Results

| Dataset    | Model | Metric   | Claimed | Verified | Status     |
|------------|-------|----------|---------|----------|------------|
| ICDAR 2003 | CSTR  | Accuracy | 94.8    | —        | Unverified |
| ICDAR 2013 | CSTR  | Accuracy | 93.2    | —        | Unverified |
| ICDAR 2015 | CSTR  | Accuracy | 81.6    | —        | Unverified |
| SVT        | CSTR  | Accuracy | 90.6    | —        | Unverified |
