Visual Semantic Re-ranker for Text Spotting
Ahmed Sabir, Francesc Moreno-Noguer, Lluís Padró
Code: github.com/ahmedssabir/Visual-Semantic-Relatedness-with-Word-Embedding (official, PyTorch)
Abstract
Many current state-of-the-art methods for text recognition rely on purely local information and ignore the semantic correlation between text and its surrounding visual context. In this paper, we propose a post-processing approach that improves the accuracy of text spotting by exploiting the semantic relation between the text and the scene. We first rely on an off-the-shelf deep neural network that provides a set of text hypotheses for each input image. These text hypotheses are then re-ranked according to their semantic relatedness with the objects in the image. As a result of this combination, the performance of the original network is boosted at a very low computational cost. The proposed framework can be used as a drop-in complement for any text-spotting algorithm that outputs a ranking of word hypotheses. We validate our approach on the ICDAR'17 shared task dataset.
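To make the re-ranking idea concrete, below is a minimal sketch of one plausible instantiation: candidate words from a text spotter are re-scored by combining the spotter's recognition score with the cosine similarity, in a pre-trained word-embedding space, between the candidate and the object labels detected in the image. The function names (`rerank`, `cosine`), the `alpha` late-fusion weight, and the simple max-over-objects relatedness are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np


def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))


def rerank(hypotheses, object_labels, embed, alpha=0.5):
    """Re-rank text hypotheses by semantic relatedness with scene objects.

    hypotheses    : list of (word, recognition_score) pairs from a baseline text spotter
    object_labels : object / scene labels predicted by a visual classifier for the image
    embed         : dict mapping a word to its pre-trained embedding vector
    alpha         : illustrative weight between recognition score and visual relatedness
    """
    reranked = []
    for word, rec_score in hypotheses:
        # Relatedness = max similarity between the candidate word and any detected object label.
        rel = max(
            (cosine(embed[word], embed[obj])
             for obj in object_labels
             if word in embed and obj in embed),
            default=0.0,
        )
        # Late fusion of the spotter's confidence and the visual-semantic relatedness.
        reranked.append((word, alpha * rec_score + (1 - alpha) * rel))
    return sorted(reranked, key=lambda x: x[1], reverse=True)


if __name__ == "__main__":
    # Toy embeddings only for illustration; in practice these would come from
    # pre-trained word embeddings (e.g. GloVe or word2vec).
    embed = {
        "coffee": np.array([0.9, 0.1, 0.0]),
        "coffe":  np.array([0.1, 0.9, 0.0]),
        "cup":    np.array([0.8, 0.2, 0.1]),
    }
    hypotheses = [("coffe", 0.62), ("coffee", 0.58)]   # spotter slightly prefers the wrong word
    print(rerank(hypotheses, object_labels=["cup"], embed=embed))
```

In this toy example, the visually related candidate "coffee" overtakes the spotter's slightly higher-scored but unrelated hypothesis once the object label "cup" is taken into account, which is the kind of correction the proposed post-processing aims for.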