Trellis Networks for Sequence Modeling
Shaojie Bai, J. Zico Kolter, Vladlen Koltun
Code
- github.com/locuslab/trellisnet (official implementation, referenced in paper, PyTorch)
Abstract
We present trellis networks, a new architecture for sequence modeling. On the one hand, a trellis network is a temporal convolutional network with special structure, characterized by weight tying across depth and direct injection of the input into deep layers. On the other hand, we show that truncated recurrent networks are equivalent to trellis networks with special sparsity structure in their weight matrices. Thus trellis networks with general weight matrices generalize truncated recurrent networks. We leverage these connections to design high-performing trellis networks that absorb structural and algorithmic elements from both recurrent and convolutional models. Experiments demonstrate that trellis networks outperform current state-of-the-art methods on a variety of challenging benchmarks, including word-level and character-level language modeling, as well as stress tests designed to evaluate long-term memory retention. The code is available at https://github.com/locuslab/trellisnet.
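To make the two defining ingredients concrete, the sketch below shows a minimal trellis-network-style layer in PyTorch: a single causal temporal convolution whose weights are reused at every depth level (weight tying across depth), with the raw input concatenated into the convolution at every level (direct input injection). This is an illustrative reduction, not the authors' reference implementation; the class name `MinimalTrellisNet` and all dimensions are assumptions, and a plain ReLU stands in for the gated, LSTM-style activation used in the full model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MinimalTrellisNet(nn.Module):
    """Minimal sketch of a trellis network layer stack (illustrative only)."""

    def __init__(self, input_dim, hidden_dim, depth, kernel_size=2):
        super().__init__()
        self.depth = depth
        self.kernel_size = kernel_size
        self.hidden_dim = hidden_dim
        # One shared causal convolution applied at every level: weight tying across depth.
        # Its input is the injected raw input concatenated with the previous level's hidden state.
        self.conv = nn.Conv1d(input_dim + hidden_dim, hidden_dim, kernel_size)

    def forward(self, x):
        # x: (batch, input_dim, seq_len)
        batch, _, seq_len = x.shape
        h = torch.zeros(batch, self.hidden_dim, seq_len, device=x.device)
        for _ in range(self.depth):
            # Direct input injection: the original input x enters every level, not just the first.
            z = torch.cat([x, h], dim=1)
            # Left-pad so the convolution is causal (no access to future time steps).
            z = F.pad(z, (self.kernel_size - 1, 0))
            # The same self.conv is reused at every depth level.
            h = torch.relu(self.conv(z))
        return h


# Usage: a batch of 8 sequences, 16 input features, length 50.
net = MinimalTrellisNet(input_dim=16, hidden_dim=32, depth=4)
out = net(torch.randn(8, 16, 50))
print(out.shape)  # torch.Size([8, 32, 50])
```

Restricting the shared weight matrix to a particular sparsity pattern recovers a truncated recurrent network, which is the equivalence the abstract refers to; the general dense weights used above are what make trellis networks a strict generalization.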
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Penn Treebank (Character Level) | Trellis Network | Bits per character (BPC) | 1.16 | — | Unverified |
| Penn Treebank (Word Level) | Trellis Network | Test perplexity | 54.19 | — | Unverified |
| WikiText-103 | Trellis Network | Test perplexity | 29.19 | — | Unverified |