
Adaptation of Biomedical and Clinical Pretrained Models to French Long Documents: A Comparative Study

2024-02-26

Adrien Bazoge, Emmanuel Morin, Béatrice Daille, Pierre-Antoine Gourraud


Abstract

Recently, pretrained language models based on BERT have been introduced for the French biomedical domain. Although these models have achieved state-of-the-art results on biomedical and clinical NLP tasks, they are constrained by a limited input sequence length of 512 tokens, which poses challenges when applied to clinical notes. In this paper, we present a comparative study of three adaptation strategies for long-sequence models, leveraging the Longformer architecture. We conducted evaluations of these models on 16 downstream tasks spanning both biomedical and clinical domains. Our findings reveal that further pre-training an English clinical model with French biomedical texts can outperform both converting a French biomedical BERT to the Longformer architecture and pre-training a French biomedical Longformer from scratch. The results underscore that long-sequence French biomedical models improve performance across most downstream tasks regardless of sequence length, but BERT-based models remain the most efficient for named entity recognition tasks.
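One of the three strategies compared above, converting an existing BERT checkpoint to a long-context model, is commonly carried out by extending the learned position embeddings and replacing full self-attention with Longformer's sliding-window attention (Beltagy et al., 2020). The sketch below illustrates only the position-embedding extension step with Hugging Face transformers; the checkpoint name bert-base-uncased and the 4,096-token target are illustrative placeholders, not the paper's actual configuration.

```python
# Hedged sketch: the position-embedding extension step when converting a
# 512-token BERT checkpoint toward a long-context (Longformer-style) model.
# The checkpoint name and MAX_POS = 4096 are illustrative assumptions; a
# full conversion would also swap in Longformer sliding-window attention.
import torch
from transformers import BertModel, BertTokenizerFast

MAX_POS = 4096  # assumed target context length (the paper's value may differ)

model = BertModel.from_pretrained("bert-base-uncased")  # placeholder checkpoint
tokenizer = BertTokenizerFast.from_pretrained(
    "bert-base-uncased", model_max_length=MAX_POS
)

# Tile the 512 learned position embeddings to cover MAX_POS positions.
old_pos = model.embeddings.position_embeddings.weight.data  # (512, hidden)
new_pos = old_pos.new_empty(MAX_POS, old_pos.size(1))
step = old_pos.size(0)
for i in range(0, MAX_POS, step):
    n = min(step, MAX_POS - i)
    new_pos[i : i + n] = old_pos[:n]

model.embeddings.position_embeddings = torch.nn.Embedding.from_pretrained(
    new_pos, freeze=False
)
# Re-register the buffers that BertEmbeddings slices on each forward pass,
# so inputs longer than 512 tokens no longer fail at the embedding layer.
model.embeddings.register_buffer(
    "position_ids", torch.arange(MAX_POS).unsqueeze(0), persistent=False
)
model.embeddings.register_buffer(
    "token_type_ids", torch.zeros(1, MAX_POS, dtype=torch.long), persistent=False
)
model.config.max_position_embeddings = MAX_POS

# Sanity check: a forward pass on an input longer than the original 512 limit.
inputs = tokenizer(" ".join(["note"] * 1000), return_tensors="pt")
print(model(**inputs).last_hidden_state.shape)  # (1, ~1002, hidden_size)
```

Note that this sketch keeps BERT's full quadratic self-attention; the remaining step in the Longformer conversion recipe, replacing each layer's self-attention with sliding-window attention, is what reduces the memory cost from quadratic to linear in sequence length.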
