
Improving Tokenisation by Alternative Treatment of Spaces

2021-12-17 · ACL ARR December 2022

Anonymous



Abstract

Tokenisation is the first step in almost all NLP tasks, and state-of-the-art transformer-based language models all use subword tokenisation algorithms to process input text. Existing algorithms have problems, often producing tokenisations of limited linguistic validity and representing equivalent strings differently depending on their position within a word. We hypothesise that these problems hinder the ability of transformer-based models to handle complex words, and suggest that they are a result of allowing tokens to include spaces. We thus experiment with an alternative tokenisation approach in which spaces are always treated as individual tokens, and find that it alleviates the existing problems and improves model performance. Concretely, we apply a modification to the BPE and Unigram algorithms which implements this approach, and find it gives more morphologically correct tokenisations, in particular when handling prefixes. In addition, we show that the modified algorithms give improved performance on downstream NLP tasks that involve handling complex words, whilst having no detrimental effect on performance in general natural language understanding tasks. Given the results of our experiments, we advocate for always treating spaces as individual tokens as a superior tokenisation method.
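The positional inconsistency the abstract describes can be illustrated with pre-tokenisation. The sketch below is a hypothetical, simplified illustration (not the paper's actual implementation): conventional BPE-style pre-tokenisation attaches a leading space to the following word (shown here with GPT-2's 'Ġ' marker), so the same word becomes a different string depending on whether it starts the sentence, whereas treating each space as its own token keeps the word's representation identical everywhere.

```python
import re

def space_as_token_pretokenise(text):
    """Hypothetical sketch of the paper's approach: every space is
    emitted as a standalone token, so a word is never merged with a
    preceding space."""
    return [t for t in re.split(r"( )", text) if t]

def gpt2_style_pretokenise(text):
    """Conventional pre-tokenisation: a leading space is attached to
    the following word (marked with 'Ġ', as in GPT-2's BPE)."""
    words = text.split(" ")
    return [w if i == 0 else "Ġ" + w for i, w in enumerate(words)]

sentence = "please undo it"
# Space attached: 'undo' mid-sentence becomes a different string ('Ġundo')
# than 'undo' at sentence start.
print(gpt2_style_pretokenise(sentence))      # ['please', 'Ġundo', 'Ġit']
# Space as its own token: 'undo' is the same string in every position.
print(space_as_token_pretokenise(sentence))  # ['please', ' ', 'undo', ' ', 'it']
```

Under the space-attached scheme, "undo it" and "please undo it" yield two distinct vocabulary items for the same word ("undo" vs "Ġundo"); under the space-as-token scheme they share one, which is the consistency property the paper argues for.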
