A Hybrid Model for Globally Coherent Story Generation

2019-08-01 · WS 2019

Fangzhou Zhai, Vera Demberg, Pavel Shkadzko, Wei Shi, Asad Sayeed

Abstract

Automatically generating globally coherent stories is a challenging problem. Neural text generation models have been shown to produce fluent sentences, but they usually fail to maintain the overall coherence of a story beyond a couple of sentences. Existing work that incorporates a text planning module has succeeded in generating recipes and dialogues, but appears quite data-demanding. We propose a novel story generation approach that produces globally coherent stories from a fairly small corpus. The model exploits a symbolic text planning module to produce text plans, thus reducing the demand for data; a neural surface realization module then generates fluent text conditioned on the text plan. Human evaluation showed that our model outperforms various baselines by a wide margin and generates stories that are both fluent and globally coherent.
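The abstract describes a two-stage architecture: a symbolic planner fixes the global event structure, and a surface realizer verbalizes each planned event in order. The following is a minimal illustrative sketch of that plan-then-realize pipeline, not the paper's implementation: the event inventory, the template-based realizer (standing in for the paper's neural realizer), and all function names are assumptions introduced for illustration.

```python
# Sketch of a plan-then-realize pipeline (hypothetical names throughout).
# Global coherence comes from the symbolic plan; fluency comes from the
# realizer, which here is a toy template lookup rather than a neural model.

def plan_story(script):
    """Symbolic text planning: choose an ordered sequence of story events.
    A real planner would search over event selections and orderings; this
    sketch simply keeps the script's canonical order."""
    return list(script)

def realize(event, templates):
    """Surface realization: map one abstract event to a fluent sentence."""
    return templates[event]

def generate(script, templates):
    plan = plan_story(script)  # the plan fixes the story's global structure
    return " ".join(realize(event, templates) for event in plan)

# Toy "restaurant" script, in the spirit of script-style event knowledge.
script = ["enter", "order", "eat", "pay"]
templates = {
    "enter": "Anna entered the restaurant.",
    "order": "She ordered a bowl of soup.",
    "eat": "She ate slowly, enjoying every spoonful.",
    "pay": "Finally, she paid the bill and left.",
}

print(generate(script, templates))
```

Because the realizer only ever sees one plan step at a time, coherence across sentences is guaranteed by the planner rather than by the text generator, which is the division of labor the abstract argues keeps the data requirements small.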
