Handwritten Text Recognition
Handwritten Text Recognition (HTR) is the task of automatically identifying and transcribing handwritten text from images or scanned documents into machine-readable text. The goal is to develop a system capable of accurately interpreting diverse handwriting styles, accounting for variations in alignment, stroke, spacing, and noise. This task involves detecting handwritten regions within an image, extracting the text content, and converting it into a structured digital format, enabling further search, indexing, or data analysis.
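HTR systems are typically scored by Character Error Rate (CER), the metric used throughout the benchmark tables below: the Levenshtein (edit) distance between the predicted transcription and the reference, divided by the reference length. A minimal sketch of that computation (a standard dynamic-programming edit distance, not any specific paper's implementation):

```python
def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate: edit distance / reference length."""
    m, n = len(reference), len(hypothesis)
    # prev[j] holds the edit distance between reference[:i-1] and hypothesis[:j]
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,         # deletion
                          curr[j - 1] + 1,     # insertion
                          prev[j - 1] + cost)  # substitution
        prev = curr
    return prev[n] / m if m else float(n > 0)

# One deleted character out of 11 -> CER of 1/11, i.e. about 9.09%
print(round(100 * cer("handwritten", "handwriten"), 2))  # → 9.09
```

CER values in the tables below are percentages, so a reported CER of 4.7 corresponds to roughly one character error per 21 reference characters.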
Datasets: IAM, LAM (line-level), IAM (line-level), READ2016 (line-level), Belfort, READ 2016, Bentham, Digital Peter, HKR, IAM-B, IAM-D, Saint Gall
Benchmark Results
Each table below lists claimed results for one of the benchmark datasets above; none of the figures have been independently verified.
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Transformer w/ CNN | CER | 7.62 | — | Unverified |
| 2 | FPHR Paragraph Level (~145 dpi) | CER | 6.7 | — | Unverified |
| 3 | Leaky LP Cell | CER | 6.6 | — | Unverified |
| 4 | FPHR+Aug Line Level (~145 dpi) | CER | 6.5 | — | Unverified |
| 5 | Start, Follow, Read | CER | 6.4 | — | Unverified |
| 6 | Decoupled Attention Network | CER | 6.4 | — | Unverified |
| 7 | FPHR+Aug Paragraph Level (~145 dpi) | CER | 6.3 | — | Unverified |
| 8 | Easter2.0 | CER | 6.21 | — | Unverified |
| 9 | HTR-VT (line-level) | CER | 4.7 | — | Unverified |
| 10 | Transformer w/ CNN (+synth) | CER | 4.67 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GFCN | Test CER | 5.2 | — | Unverified |
| 2 | TrOCR | Test CER | 3.6 | — | Unverified |
| 3 | OrigamiNet-18 | Test CER | 3.1 | — | Unverified |
| 4 | OrigamiNet-12 | Test CER | 3.1 | — | Unverified |
| 5 | OrigamiNet-24 | Test CER | 3.0 | — | Unverified |
| 6 | HTR-VT | Test CER | 2.8 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | GFCN | Test CER | 8.0 | — | Unverified |
| 2 | OrigamiNet-12 | Test CER | 6.0 | — | Unverified |
| 3 | VAN | Test CER | 5.0 | — | Unverified |
| 4 | HTR-VT | Test CER | 4.7 | — | Unverified |
| 5 | TrOCR | Test CER | 3.4 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | CNN + BLSTM | Test CER | 4.7 | — | Unverified |
| 2 | Span | Test CER | 4.6 | — | Unverified |
| 3 | DAN | Test CER | 4.1 | — | Unverified |
| 4 | VAN | Test CER | 4.1 | — | Unverified |
| 5 | HTR-VT | Test CER | 3.9 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | PyLaia (human transcriptions + random split) | CER (%) | 10.54 | — | Unverified |
| 2 | PyLaia (human transcriptions + agreement-based split) | CER (%) | 5.57 | — | Unverified |
| 3 | PyLaia (rover consensus + agreement-based split) | CER (%) | 4.95 | — | Unverified |
| 4 | PyLaia (all transcriptions + agreement-based split) | CER (%) | 4.34 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | HTR-VT (line-level) | CER (%) | 3.9 | — | Unverified |
| 2 | DAN | CER (%) | 3.22 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | StackMix+Blots | CER | 1.73 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | StackMix+Blots | CER | 2.5 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | StackMix+Blots | CER | 3.49 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | StackMix+Blots | CER | 3.77 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | StackMix+Blots | CER | 3.01 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | StackMix+Blots | CER | 3.65 | — | Unverified |
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DAN | CER (%) | 6.46 | — | Unverified |