Single Shot Text Detector with Regional Attention

2017-09-01 · ICCV 2017 · Code Available

Pan He, Weilin Huang, Tong He, Qile Zhu, Yu Qiao, Xiaolin Li

Abstract

We present a novel single-shot text detector that directly outputs word-level bounding boxes in a natural image. We propose an attention mechanism which roughly identifies text regions via an automatically learned attentional map. This substantially suppresses background interference in the convolutional features, which is the key to producing accurate inference of words, particularly at extremely small sizes. This results in a single model that essentially works in a coarse-to-fine manner. It departs from recent FCN-based text detectors which cascade multiple FCN models to achieve an accurate prediction. Furthermore, we develop a hierarchical inception module which efficiently aggregates multi-scale inception features. This enhances local details, and also encodes strong context information, allowing the detector to work reliably on multi-scale and multi-orientation text with single-scale images. Our text detector achieves an F-measure of 77% on the ICDAR 2015 benchmark, advancing the state-of-the-art results in [18, 28]. Demo is available at: http://sstd.whuang.org/.
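The attention idea described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function name, shapes, and use of a single-channel sigmoid mask are assumptions. A learned attention map is applied element-wise to the convolutional features, so background positions are suppressed while text regions pass through largely unchanged.

```python
import numpy as np

def apply_text_attention(features, attention_logits):
    """Suppress background in conv features with a learned attention map.

    features: (C, H, W) convolutional feature map.
    attention_logits: (H, W) raw scores from the attention branch
    (names and shapes are illustrative assumptions).
    """
    # Sigmoid turns raw scores into a soft text/background mask in [0, 1].
    attention = 1.0 / (1.0 + np.exp(-attention_logits))
    # Broadcasting multiplies every channel by the same spatial mask.
    return features * attention[np.newaxis, :, :]

# Toy example: a mostly-background image with one small "text" region.
features = np.ones((4, 8, 8))
logits = np.full((8, 8), -10.0)   # strongly background everywhere...
logits[2:5, 2:5] = 10.0           # ...except a small attended region
attended = apply_text_attention(features, logits)
```

In the paper this masking happens inside a single network, which is what lets one model operate coarse-to-fine instead of cascading several FCNs.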

Benchmark Results

Dataset    | Model                              | Metric    | Claimed | Verified | Status
-----------|------------------------------------|-----------|---------|----------|-----------
COCO-Text  | SSTD                               | F-Measure | 37      |          | Unverified
ICDAR 2013 | SSTD                               | F-Measure | 87      |          | Unverified
ICDAR 2015 | EAST + PVANET2x RBOX (multi-scale) | F-Measure | 80.7    |          | Unverified

Reproductions