E2E-MLT - an Unconstrained End-to-End Method for Multi-Language Scene Text
2018-01-30
Michal Bušta, Yash Patel, Jiri Matas
Code
- github.com/yash0307/E2E-MLT (official, PyTorch)
- github.com/MichalBusta/E2E-MLT (PyTorch)
- github.com/nhh1501/E2E_MLT_VN (PyTorch)
Abstract
An end-to-end trainable (fully differentiable) method for multi-language scene text localization and recognition is proposed. The approach is based on a single fully convolutional network (FCN) with layers shared between both tasks. E2E-MLT is the first published multi-language OCR for scene text. Although trained in a multi-language setup, E2E-MLT demonstrates competitive performance compared to methods trained on English scene text alone. The experiments show that obtaining accurate multi-language, multi-script annotations is a challenging problem.
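The abstract describes a single FCN whose shared layers feed both a localization head and a recognition head. The sketch below illustrates that shared-backbone, two-head pattern in PyTorch; it is not the actual E2E-MLT architecture, and the layer sizes, head designs, and class count are illustrative assumptions only.

```python
import torch
import torch.nn as nn


class SharedFCN(nn.Module):
    """Minimal sketch of a shared-backbone FCN with two task heads.

    NOTE: this is an illustrative assumption, not the E2E-MLT network;
    the real model uses a deeper backbone and task-specific designs.
    """

    def __init__(self, num_classes: int = 100):
        super().__init__()
        # Shared convolutional trunk used by both tasks (fully differentiable).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Localization head: per-location text score plus box geometry (5 channels).
        self.det_head = nn.Conv2d(64, 5, 1)
        # Recognition head: per-location character logits over the charset.
        self.rec_head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        feat = self.backbone(x)  # features computed once, consumed by both heads
        return self.det_head(feat), self.rec_head(feat)


model = SharedFCN(num_classes=100)
det, rec = model(torch.zeros(1, 3, 64, 64))
print(det.shape, rec.shape)  # both heads share the same 16x16 feature grid
```

Because the heads branch off one backbone, the detection and recognition losses can be summed and backpropagated jointly, which is what makes the method end-to-end trainable.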