
An Empirical Study on Context Length for Open-Domain Dialog Generation

2024-08-31

Xinyi Shen, Zuoquan Lin


Abstract

Transformer-based open-domain dialog models have become increasingly popular in recent years. These models typically represent context as a concatenation of the dialog history. However, there is no criterion for deciding how many utterances should be kept in the context. We investigate how the choice of context length affects the model. We experiment on three questions, from coarse to fine: (i) Does a longer context help model training? (ii) Is it necessary to change the training context length when dealing with dialogs of different context lengths? (iii) Do different dialog samples have the same preference for context length? Our experimental results show that context length, an often overlooked setting, deserves attention when implementing Transformer-based dialog models.
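The abstract's notion of "context as a concatenation of the dialog history" can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `<eou>` separator token, the `build_context` helper, and the default `max_turns` value are all assumptions for the example.

```python
# Sketch of the context construction the abstract describes: keep only the
# most recent `max_turns` utterances of the dialog history and concatenate
# them into a single input string for a Transformer dialog model.
# The separator token and function name are illustrative, not from the paper.
EOU = " <eou> "  # assumed end-of-utterance separator


def build_context(history, max_turns=3):
    """Concatenate the last `max_turns` utterances (all of them if None)."""
    kept = history[-max_turns:] if max_turns is not None else history
    return EOU.join(kept)


dialog = [
    "Hi!",
    "Hello, how are you?",
    "Great, and you?",
    "Fine. Seen any good movies lately?",
]

short_context = build_context(dialog, max_turns=2)  # truncated history
full_context = build_context(dialog, max_turns=None)  # entire history
```

The training context length studied in the paper corresponds to the `max_turns` cutoff here: varying it changes how much of the history each training sample sees.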
