
Trankit: A Light-Weight Transformer-based Toolkit for Multilingual Natural Language Processing

2021-01-09 · EACL 2021 · Code Available

Minh Van Nguyen, Viet Dac Lai, Amir Pouran Ben Veyseh, Thien Huu Nguyen


Abstract

We introduce Trankit, a light-weight Transformer-based Toolkit for multilingual Natural Language Processing (NLP). It provides a trainable pipeline for fundamental NLP tasks over 100 languages, and 90 pretrained pipelines for 56 languages. Built on a state-of-the-art pretrained language model, Trankit significantly outperforms prior multilingual NLP pipelines on sentence segmentation, part-of-speech tagging, morphological feature tagging, and dependency parsing while maintaining competitive performance for tokenization, multi-word token expansion, and lemmatization over 90 Universal Dependencies treebanks. Despite the use of a large pretrained transformer, our toolkit is still efficient in memory usage and speed. This is achieved by our novel plug-and-play mechanism with Adapters, where a multilingual pretrained transformer is shared across pipelines for different languages. Our toolkit, along with pretrained models and code, is publicly available at: https://github.com/nlp-uoregon/trankit. A demo website for our toolkit is also available at: http://nlp.uoregon.edu/trankit. Finally, we create a demo video for Trankit at: https://youtu.be/q0KGP3zGjGc.
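The memory savings come from the adapter design sketched in the abstract: one large multilingual transformer is loaded once and shared, while each language contributes only a small set of adapter weights that are swapped in on demand. The following is a minimal, self-contained sketch of that idea, not Trankit's actual code; all names here (`SharedEncoder`, `Adapter`, `MultilingualPipeline`, `set_active`) and the toy arithmetic are illustrative assumptions.

```python
class Adapter:
    """Tiny per-language module; only these weights differ across languages."""
    def __init__(self, lang, scale):
        self.lang = lang
        self.scale = scale  # stand-in for the adapter's learned parameters

    def transform(self, hidden):
        return [h * self.scale for h in hidden]


class SharedEncoder:
    """Stand-in for the large multilingual transformer shared by all pipelines."""
    def encode(self, tokens):
        # Toy "embeddings": one number per token.
        return [float(len(t)) for t in tokens]


class MultilingualPipeline:
    def __init__(self):
        self.encoder = SharedEncoder()  # loaded once, shared across languages
        self.adapters = {}              # language -> lightweight adapter
        self.active = None

    def add(self, lang, scale):
        self.adapters[lang] = Adapter(lang, scale)

    def set_active(self, lang):
        # Cheap switch: no second transformer is loaded, only the adapter changes.
        self.active = self.adapters[lang]

    def __call__(self, tokens):
        return self.active.transform(self.encoder.encode(tokens))


pipe = MultilingualPipeline()
pipe.add("english", scale=1.0)
pipe.add("french", scale=2.0)

pipe.set_active("english")
en_out = pipe(["hello", "world"])

pipe.set_active("french")
fr_out = pipe(["bonjour"])
```

Because the encoder is constructed once and every language shares it, adding a new language costs only the adapter's parameters rather than another copy of the transformer.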

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| UD2.5 test | Stanza | Macro-averaged F1 | 83.06 | — | Unverified |
| UD2.5 test | Trankit | Macro-averaged F1 | 87.06 | — | Unverified |
| UD2.5 test | Trankit | Macro-averaged F1 | 95.65 | — | Unverified |
| UD2.5 test | Stanza | Macro-averaged F1 | 94.21 | — | Unverified |
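The metric in the table, macro-averaged F1, averages per-treebank F1 scores with equal weight per treebank, so small treebanks count as much as large ones. A minimal sketch of the computation; the helper names and the numbers in the example are illustrative, not the paper's per-treebank scores:

```python
def f1(precision, recall):
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def macro_f1(per_treebank):
    """per_treebank: list of (precision, recall) pairs, one per treebank.

    Each treebank contributes equally, regardless of its size.
    """
    scores = [f1(p, r) for p, r in per_treebank]
    return sum(scores) / len(scores)


# Example with three hypothetical treebanks:
score = macro_f1([(0.90, 0.80), (0.70, 0.75), (0.95, 0.95)])
```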
