
Attending to Characters in Neural Sequence Labeling Models

2016-11-14 · COLING 2016

Marek Rei, Gamal K. O. Crichton, Sampo Pyysalo


Abstract

Sequence labeling architectures use word embeddings for capturing similarity, but suffer when handling previously unseen or rare words. We investigate character-level extensions to such models and propose a novel architecture for combining alternative word representations. By using an attention mechanism, the model is able to dynamically decide how much information to use from a word- or character-level component. We evaluated different architectures on a range of sequence labeling datasets, and character-level extensions were found to improve performance on every benchmark. In addition, the proposed attention-based architecture delivered the best results even with a smaller number of trainable parameters.
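The abstract describes a gating-style attention that interpolates, per dimension, between a word embedding and a character-composed embedding. Below is a minimal NumPy sketch of that idea; the weight names `W1`, `W2`, `W3` and the exact gate formulation are illustrative assumptions, not the paper's verified implementation.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def combine(word_emb, char_emb, W1, W2, W3):
    # Gate z in (0,1)^d decides, per dimension, how much information
    # to take from the word-level vs. character-level representation.
    z = sigmoid(W3 @ np.tanh(W1 @ word_emb + W2 @ char_emb))
    return z * word_emb + (1.0 - z) * char_emb

d = 4
rng = np.random.default_rng(0)
x = rng.standard_normal(d)            # word embedding
m = rng.standard_normal(d)            # character-composed embedding
W1, W2, W3 = (rng.standard_normal((d, d)) for _ in range(3))
combined = combine(x, m, W1, W2, W3)  # same shape as either input
```

Because the gate is a sigmoid, each output dimension is a convex combination of the two inputs, so a rare word with an unreliable word embedding can lean on its character representation.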


Benchmark Results

| Dataset       | Model              | Metric   | Claimed | Verified | Status     |
|---------------|--------------------|----------|---------|----------|------------|
| Penn Treebank | Bi-LSTM + char attn | Accuracy | 97.27   | —        | Unverified |
