
Classifying Long Clinical Documents with Pre-trained Transformers

2021-05-14

Xin Su, Timothy Miller, Xiyu Ding, Majid Afshar, Dmitriy Dligach


Abstract

Automatic phenotyping is the task of identifying cohorts of patients that match a predefined set of criteria. Phenotyping typically involves classifying long clinical documents that contain thousands of tokens. At the same time, recent state-of-the-art transformer-based pre-trained language models limit the input to a few hundred tokens (e.g., 512 tokens for BERT). We evaluate several strategies for incorporating pre-trained sentence encoders into document-level representations of clinical text, and find that hierarchical transformers without pre-training are competitive with task-pre-trained models.
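The core idea described in the abstract, encoding a long document hierarchically by splitting it into encoder-sized chunks and pooling the chunk representations, can be sketched as below. This is an illustrative toy, not the paper's implementation: `encode_chunk` is a deterministic stand-in for a pre-trained encoder (a real system would use something like BERT's `[CLS]` vector), and mean pooling stands in for the transformer layer the hierarchical variant would apply over chunk embeddings. All names and sizes here are assumptions for illustration.

```python
import numpy as np

CHUNK_LEN = 512   # per-chunk token limit (BERT-style input cap)
EMB_DIM = 16      # toy embedding size for the sketch

def chunk_tokens(token_ids, chunk_len=CHUNK_LEN):
    """Split a long document (a list of token ids) into pieces of <= chunk_len."""
    return [token_ids[i:i + chunk_len] for i in range(0, len(token_ids), chunk_len)]

def encode_chunk(chunk, dim=EMB_DIM):
    """Toy stand-in for a pre-trained chunk encoder: a fixed random embedding
    table averaged over the chunk's tokens (a real system would run BERT and
    take the [CLS] vector)."""
    table = np.random.default_rng(0).standard_normal((30522, dim))  # fake vocab
    return table[np.asarray(chunk) % 30522].mean(axis=0)

def encode_document(token_ids):
    """Hierarchical encoding: encode each chunk, then pool the chunk vectors.
    Mean pooling is used here; the hierarchical-transformer variant would
    instead run a small transformer over the sequence of chunk embeddings."""
    chunk_vecs = np.stack([encode_chunk(c) for c in chunk_tokens(token_ids)])
    return chunk_vecs.mean(axis=0)

doc = list(range(1300))            # a "document" of 1300 token ids
chunks = chunk_tokens(doc)
doc_vec = encode_document(doc)
print(len(chunks), doc_vec.shape)  # 3 chunks of <= 512 tokens, one 16-d vector
```

The fixed-length document vector can then feed any classifier head, which is what makes this scheme applicable to phenotyping documents far longer than the encoder's input limit.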
