Handwritten Text Recognition from Crowdsourced Annotations

2023-06-19 · International Workshop on Historical Document Imaging and Processing 2023

Solène Tarride, Tristan Faine, Mélodie Boillet, Harold Mouchère, Christopher Kermorvant

Abstract

In this paper, we explore different ways of training a model for handwritten text recognition when multiple imperfect or noisy transcriptions are available. We consider various training configurations, such as selecting a single transcription, retaining all transcriptions, or computing an aggregated transcription from all available annotations. In addition, we evaluate the impact of quality-based data selection, where samples with low inter-annotator agreement are removed from the training set. Our experiments are carried out on municipal registers of the city of Belfort (France) written between 1790 and 1946. The results show that computing a consensus transcription and training on multiple transcriptions are both good alternatives. However, selecting training samples based on the degree of agreement between annotators introduces a bias in the training data and does not improve the results. Our dataset is publicly available on Zenodo: https://zenodo.org/record/8041668.
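Both the reported metric (CER) and the agreement-based selection described above reduce to edit distance between strings. The following sketch is not the authors' code, but illustrates how CER can be computed and how the same distance can score agreement between two annotators' transcriptions:

```python
# Hypothetical sketch (not the paper's implementation): character error rate
# (CER) via Levenshtein edit distance.
def levenshtein(a: str, b: str) -> int:
    """Minimum number of character insertions, deletions, and substitutions
    needed to turn string a into string b (dynamic programming, two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                   # deletion
                            curr[j - 1] + 1,               # insertion
                            prev[j - 1] + (ca != cb)))     # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """CER (%) = edit distance / reference length * 100."""
    return 100.0 * levenshtein(reference, hypothesis) / max(len(reference), 1)

# The same distance can serve as an agreement score between two annotators:
# a low CER between their transcriptions means high agreement, so samples
# above some threshold could be dropped in a quality-based selection scheme.
```

This only sketches pairwise agreement; aggregating more than two transcriptions into a consensus (e.g. with ROVER, as in the paper) requires aligning all hypotheses and voting per position.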

Benchmark Results

Dataset | Model                                                 | Metric  | Claimed | Verified | Status
Belfort | PyLaia (all transcriptions + agreement-based split)   | CER (%) | 4.34    | —        | Unverified
Belfort | PyLaia (ROVER consensus + agreement-based split)      | CER (%) | 4.95    | —        | Unverified
Belfort | PyLaia (human transcriptions + agreement-based split) | CER (%) | 5.57    | —        | Unverified
Belfort | PyLaia (human transcriptions + random split)          | CER (%) | 10.54   | —        | Unverified