
Neural Composition: Learning to Generate from Multiple Models

2020-07-10

Denis Filimonov, Ravi Teja Gadde, Ariya Rastrow


Abstract

Decomposing models into multiple components is critically important in many applications such as language modeling (LM), as it enables adapting individual components separately and biasing some components toward a user's personal preferences. Conventionally, contextual and personalized adaptation of language models is achieved either through class-based factorization, which requires class-annotated data, or through biasing toward individual phrases, which is limited in scale. In this paper, we propose a system that combines model-defined components by learning, directly from unlabeled text data, when to activate the generation process of each individual component and how to combine the probability distributions from the components.
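The abstract does not specify the architecture, but the combination step it describes, learned weights over per-component next-token distributions, can be sketched as a gated mixture. All function and variable names below are hypothetical illustrations, not the paper's actual implementation:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    z = x - x.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def combine_components(component_logits, gate_logits):
    """Mix next-token distributions from several component models.

    component_logits: (num_components, vocab_size) raw scores produced
        by each component model for the next token.
    gate_logits: (num_components,) scores from a hypothetical learned
        gating network that decides how much each component contributes
        at this generation step.
    Returns a single (vocab_size,) probability distribution.
    """
    probs = softmax(component_logits)   # per-component distributions
    weights = softmax(gate_logits)      # learned mixture weights
    return weights @ probs              # convex combination

# Toy usage: two components over a 4-token vocabulary, equally weighted.
comp = np.array([[2.0, 0.1, 0.1, 0.1],
                 [0.1, 0.1, 2.0, 0.1]])
gate = np.array([1.0, 1.0])
p = combine_components(comp, gate)
assert np.isclose(p.sum(), 1.0)
```

Because the mixture is a convex combination of valid distributions, the output is itself a valid distribution; in the paper's setting, both the gate and the combination would be trained end-to-end from unlabeled text rather than fixed as here.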
