SOTAVerified

Self-supervised Implicit Glyph Attention for Text Recognition

2022-03-07 · CVPR 2023 · Code Available

Tongkun Guan, Chaochen Gu, Jingzheng Tu, Xue Yang, Qi Feng, Yudi Zhao, Xiaokang Yang, Wei Shen


Abstract

The attention mechanism has become the de facto module in scene text recognition (STR) methods, due to its capability of extracting character-level representations. These methods fall into two categories, implicit attention based and supervised attention based, depending on how the attention is computed: implicit attention is learned from sequence-level text annotations, while supervised attention is learned from character-level bounding box annotations. Implicit attention, which may extract coarse or even incorrect spatial regions as character attention, is prone to alignment drift. Supervised attention can alleviate this issue, but it is character category-specific, requiring extra laborious character-level bounding box annotations, and would be memory-intensive when handling languages with large character sets. To address these issues, we propose a novel attention mechanism for STR, self-supervised implicit glyph attention (SIGA). SIGA delineates the glyph structures of text images by jointly self-supervised text segmentation and implicit attention alignment, which serve as the supervision to improve attention correctness without extra character-level annotations. Experimental results demonstrate that SIGA performs consistently and significantly better than previous attention-based STR methods, in terms of both attention correctness and final recognition performance, on publicly available context benchmarks and our contributed contextless benchmarks.
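To make the contrast concrete, below is a minimal numpy sketch of the implicit-attention mechanism the abstract describes: one decoder query per character time step attends over flattened visual features, and only sequence-level text supervision constrains where the attention lands (which is why it can drift). All names and shapes here are illustrative assumptions, not the paper's implementation; SIGA's contribution, the self-supervised glyph segmentation that supervises these attention maps, is deliberately omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def implicit_char_attention(feats, queries):
    """Implicit (sequence-supervised) character attention, simplified.

    feats:   (H*W, C) flattened visual feature grid from a backbone.
    queries: (T, C)   one decoder query per character time step.
    Returns: attn (T, H*W) spatial attention per character,
             glimpses (T, C) attention-pooled character features.
    Nothing here ties attn to actual glyph regions; that alignment is
    learned only implicitly from the recognition loss.
    """
    scores = queries @ feats.T / np.sqrt(feats.shape[1])  # (T, H*W)
    attn = softmax(scores, axis=-1)
    glimpses = attn @ feats                               # (T, C)
    return attn, glimpses
```

SIGA's idea, in these terms, is to derive pseudo glyph masks via self-supervised text segmentation and use them as an extra supervision signal on `attn`, so the per-character maps stay on the correct glyphs without character-level box annotations.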

Tasks

Benchmark Results

Dataset     | Model  | Metric   | Claimed | Verified | Status
----------- | ------ | -------- | ------- | -------- | ----------
CUTE80      | SIGA_T | Accuracy | 93.1    |          | Unverified
ICDAR 2003  | SIGA_T | Accuracy | 97      |          | Unverified
ICDAR 2013  | SIGA_T | Accuracy | 97.8    |          | Unverified
ICDAR 2015  | SIGA_S | Accuracy | 87.6    |          | Unverified
IIIT5k      | SIGA_S | Accuracy | 96.9    |          | Unverified
SVT         | SIGA_T | Accuracy | 95.1    |          | Unverified
SVTP        | SIGA_T | Accuracy | 90.5    |          | Unverified

Reproductions