Morpheme Segmentation
Successful systems segment a given word or sentence into its sequence of constituent morphemes.
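As a minimal sketch of the task and a common way to score it, the snippet below segments a word into morphemes and computes F1 over the predicted versus gold morpheme multisets. The example word, its segmentation, and the multiset-overlap scoring are illustrative assumptions, not the shared task's official scorer.

```python
from collections import Counter

def morpheme_f1(pred, gold):
    """F1 over morpheme multisets for one word (illustrative scoring choice)."""
    overlap = sum((Counter(pred) & Counter(gold)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

# Hypothetical example: the gold segmentation of "unhappiness"
# versus a system output that missed one boundary.
gold = ["un", "happi", "ness"]
pred = ["un", "happiness"]
print(round(morpheme_f1(pred, gold), 2))
```

Here precision is 1/2 (one of two predicted morphemes is correct) and recall is 1/3, giving F1 = 0.4.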
Benchmark Results
| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | Subword-ULM transformer (DeepSPIN-3; soft attention, 1.5-entmax) | macro avg (subtask 1) | 97.29 | — | Unverified |
| 2 | Char LSTM (DeepSPIN-2; soft attention, 1.5-entmax) | macro avg (subtask 1) | 97.15 | — | Unverified |
| 3 | Ensemble of hard-attention transducers (CLUZH) | macro avg (subtask 1) | 96.85 | — | Unverified |
| 4 | Char LSTM (DeepSPIN-1; soft-attention) | macro avg (subtask 1) | 96.32 | — | Unverified |
| 5 | BiLSTM for seq labelling (Tü_Seg-1) | macro avg (subtask 1) | 96.06 | — | Unverified |
| 6 | Bidirectional GRU + Morfessor features (AUUH_F) | macro avg (subtask 1) | 93.72 | — | Unverified |
| 7 | AUUH_B | f1 macro avg (subtask 2) | 89.77 | — | Unverified |
| 8 | AUUH_A | f1 macro avg (subtask 2) | 89 | — | Unverified |
| 9 | CLUZH-3 | f1 macro avg (subtask 2) | 88.14 | — | Unverified |
| 10 | CLUZH-2 | f1 macro avg (subtask 2) | 87.93 | — | Unverified |
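The "macro avg" figures above are plausibly unweighted means of per-language F1 scores, so every language counts equally regardless of its test-set size. A sketch under that assumption, with made-up per-language scores:

```python
# Hypothetical per-language F1 scores (not taken from the leaderboard).
per_language_f1 = {"eng": 95.1, "fra": 97.4, "hun": 98.0}

# Macro average: unweighted mean across languages.
macro_avg = sum(per_language_f1.values()) / len(per_language_f1)
print(round(macro_avg, 2))
```

A micro average would instead pool predictions across languages before scoring, letting high-resource languages dominate the figure.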