
Syntactically Informed Text Compression with Recurrent Neural Networks

2016-08-08

David Cox


Abstract

We present a self-contained system for constructing natural language models for use in text compression. Our system improves upon previous neural-network-based models by using recent advances in syntactic parsing (Google's SyntaxNet) to augment character-level recurrent neural networks. RNNs have proven exceptional at modeling sequence data such as text, since their architecture captures long-range contextual information.
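The principle behind model-based text compression is that a predictive model assigns a probability to each next character, and an entropy coder (e.g., an arithmetic coder) then stores each character in roughly -log2(p) bits, so better predictions yield smaller output. The abstract does not give implementation details, so the following is only an illustrative sketch: a smoothed bigram frequency model stands in for the character-level RNN, and we compute the ideal entropy-coded size rather than running a full coder. All names here are hypothetical.

```python
import math
from collections import defaultdict

def train_bigram(text):
    # Count character bigrams to estimate P(next_char | prev_char).
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(text, text[1:]):
        counts[prev][nxt] += 1
    return counts

def code_length_bits(text, counts, alphabet_size=256):
    # Ideal entropy-coded length in bits: sum of -log2 P(c | prev),
    # with add-one smoothing so unseen bigrams get nonzero probability.
    bits = 0.0
    for prev, nxt in zip(text, text[1:]):
        total = sum(counts[prev].values())
        p = (counts[prev][nxt] + 1) / (total + alphabet_size)
        bits += -math.log2(p)
    return bits

sample = "abracadabra abracadabra abracadabra"
model = train_bigram(sample)
print(code_length_bits(sample, model))
```

A stronger predictive model, such as the syntax-augmented RNN the paper describes, would assign higher probabilities to the observed characters and thus drive the total code length down further.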
