An Information Extraction Study: Take In Mind the Tokenization!
Christos Theodoropoulos, Marie-Francine Moens
Code Available
- github.com/christos42/inductive_bias_IE (official, PyTorch)
Abstract
Research on the advantages and trade-offs of using characters, instead of tokenized text, as input for deep learning models has evolved substantially. New token-free models remove the traditional tokenization step; however, their efficiency remains unclear. Moreover, the effect of tokenization is relatively unexplored in sequence tagging tasks. To this end, we investigate the impact of tokenization when extracting information from documents and present a comparative study and analysis of subword-based and character-based models. Specifically, we study Information Extraction (IE) from biomedical texts. The main outcome is twofold: tokenization patterns can introduce inductive bias that results in state-of-the-art performance, and character-based models produce promising results; thus, transitioning to token-free IE models is feasible.
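To make the contrast concrete, the sketch below (not the paper's code) shows the two input representations the study compares: a character-based model consumes the raw character sequence, while a subword-based model consumes learned vocabulary pieces. The subword segmentation shown is a hypothetical WordPiece-style split; actual pieces depend on the trained vocabulary.

```python
def char_tokenize(text):
    # Character-based models bypass tokenization entirely and
    # operate on the raw character sequence.
    return list(text)

# A biomedical term of the kind found in the ADE corpus.
word = "nephrotoxicity"

chars = char_tokenize(word)
print(chars)       # 14 single-character tokens

# Hypothetical subword segmentation (WordPiece-style '##' continuation
# markers); the exact split depends on the learned subword vocabulary.
subwords = ["nephro", "##toxic", "##ity"]
print(subwords)
```

The tokenization patterns of such subword splits (e.g. a shared `##toxic` piece across drug-reaction terms) are one source of the inductive bias the paper studies.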
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Adverse Drug Events (ADE) Corpus | PFN (ALBERT XXL, average aggregation) | RE+ Macro F1 | 83.9 | — | Unverified |