DCT: Dynamic Compressive Transformer for Modeling Unbounded Sequence

2021-10-10

Kai-Po Chang, Wei-Yun Ma

Abstract

In this paper, we propose the Dynamic Compressive Transformer (DCT), a transformer-based framework for modeling unbounded sequences. In contrast to previous baselines, which append every sentence representation to memory, conditionally selecting which representations to append is a more reasonable way to handle arbitrarily long sequences. Our model learns a policy that decides, during training, whether each sequence should be kept in memory in a compressed state or discarded. By retaining only semantically meaningful sentence information in the memory system, DCT outperforms the previous state-of-the-art (SOTA) model on the Enwik8 benchmark.
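The keep-or-discard mechanism described above can be sketched as a small PyTorch module. This is a minimal illustration, not the paper's actual implementation: the class name, the linear scorer, the 0.5 threshold, and mean-pooling as the compression function are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class DynamicMemoryPolicy(nn.Module):
    """Hypothetical sketch of a DCT-style keep-or-discard policy.

    A learned scorer rates each incoming sentence representation; those
    scoring above a threshold are compressed (here, by mean-pooling
    consecutive token states) and appended to memory, the rest discarded.
    """

    def __init__(self, d_model: int, compress_rate: int = 4):
        super().__init__()
        self.scorer = nn.Linear(d_model, 1)  # policy head: keep vs. discard
        self.compress_rate = compress_rate   # pooling window for compression

    def forward(self, sent_repr: torch.Tensor, memory: list) -> list:
        # sent_repr: (seq_len, d_model) token states for one sentence.
        # Score the pooled sentence vector to decide whether to keep it.
        keep_prob = torch.sigmoid(self.scorer(sent_repr.mean(dim=0)))
        if keep_prob.item() > 0.5:
            # Compress: pad to a multiple of compress_rate, then
            # mean-pool every compress_rate consecutive token states.
            seq_len, d = sent_repr.shape
            pad = (-seq_len) % self.compress_rate
            padded = torch.cat([sent_repr, sent_repr.new_zeros(pad, d)])
            compressed = padded.view(-1, self.compress_rate, d).mean(dim=1)
            memory.append(compressed)
        return memory
```

In a full model, the binary keep/discard decision would be trained with a policy-gradient or relaxation technique, since the hard threshold above is not differentiable.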