MASTER: Multi-Aspect Non-local Network for Scene Text Recognition
Ning Lu, Wenwen Yu, Xianbiao Qi, Yihao Chen, Ping Gong, Rong Xiao, Xiang Bai
Code
- github.com/wenwenyu/MASTER-pytorch (official, in paper; PyTorch, ★ 280)
- github.com/jiangxiluning/MASTER-TF (official, in paper; TensorFlow, ★ 0)
- github.com/open-mmlab/mmocr (PyTorch, ★ 4,725)
- github.com/JiaquanYe/TableMASTER-mmocr (PyTorch, ★ 470)
- github.com/mindspore-lab/mindocr (MindSpore, ★ 298)
- github.com/S-HuaBomb/MASTER-paddle (PaddlePaddle, ★ 0)
- github.com/mindee/doctr (PyTorch, ★ 0)
Abstract
Attention-based scene text recognizers have achieved great success by leveraging a compact intermediate representation and learning 1D or 2D attention with an RNN-based encoder-decoder architecture. However, such methods suffer from the attention-drift problem: high similarity among encoded features causes attention confusion under the RNN-based local attention mechanism. Moreover, RNN-based methods parallelize poorly and are therefore inefficient to train. To overcome these problems, we propose MASTER, a self-attention-based scene text recognizer that (1) encodes not only input-output attention but also self-attention, capturing feature-feature and target-target relationships inside the encoder and decoder, (2) learns an intermediate representation that is more powerful and more robust to spatial distortion, and (3) offers high training efficiency through parallelization and high-speed inference through an efficient memory-cache mechanism. Extensive experiments on various benchmarks demonstrate the superior performance of MASTER on both regular and irregular scene text. PyTorch code is available at https://github.com/wenwenyu/MASTER-pytorch, and TensorFlow code is available at https://github.com/jiangxiluning/MASTER-TF.
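The memory-cache mechanism mentioned above can be illustrated with a minimal sketch: during autoregressive decoding, the keys and values of already-decoded positions are cached, so each new step computes attention for a single query instead of re-running the full sequence. The class and variable names below are hypothetical, and the sketch uses plain NumPy rather than the authors' actual PyTorch implementation; it is an illustration of key/value caching under those assumptions, not the paper's code.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # scaled dot-product attention: (1, d) x (t, d) -> (1, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v

class CachedSelfAttention:
    """One self-attention head with a key/value memory cache
    (hypothetical sketch). Each decoding step appends one new
    key/value pair rather than recomputing them for the whole
    prefix, which is what makes cached inference fast."""

    def __init__(self, d, seed=0):
        rng = np.random.default_rng(seed)
        self.wq = rng.standard_normal((d, d)) / np.sqrt(d)
        self.wk = rng.standard_normal((d, d)) / np.sqrt(d)
        self.wv = rng.standard_normal((d, d)) / np.sqrt(d)
        self.k_cache = np.empty((0, d))
        self.v_cache = np.empty((0, d))

    def step(self, x):
        # x: (d,) embedding of the newest target token
        q = x @ self.wq
        self.k_cache = np.vstack([self.k_cache, (x @ self.wk)[None, :]])
        self.v_cache = np.vstack([self.v_cache, (x @ self.wv)[None, :]])
        # attend from the new query over all cached positions
        return attention(q[None, :], self.k_cache, self.v_cache)[0]
```

Because causal self-attention at step t only looks at positions 1..t, the cached step-by-step result for the last position matches a full recomputation over the whole prefix, which is why the cache changes speed but not output.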