
End-to-End Offline Goal-Oriented Dialog Policy Learning via Policy Gradient

2017-12-07

Li Zhou, Kevin Small, Oleg Rokhlenko, Charles Elkan


Abstract

Learning a goal-oriented dialog policy is generally performed offline with supervised learning algorithms or online with reinforcement learning (RL). Additionally, as companies accumulate massive quantities of dialog transcripts between customers and trained human agents, encoder-decoder methods have gained popularity as agent utterances can be directly treated as supervision without the need for utterance-level annotations. However, one potential drawback of such approaches is that they myopically generate the next agent utterance without regard for dialog-level considerations. To resolve this concern, this paper describes an offline RL method for learning from unannotated corpora that can optimize a goal-oriented policy at both the utterance and dialog level. We introduce a novel reward function and use both on-policy and off-policy policy gradient to learn a policy offline without requiring online user interaction or an explicit state space definition.
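The abstract describes learning a dialog policy offline from logged agent utterances using on-policy and off-policy policy gradient. As a minimal sketch of the off-policy idea only (not the paper's actual model, reward function, or state representation), the snippet below runs an importance-weighted REINFORCE update over a log of discrete actions collected under a uniform behavior policy. The toy setup, action space, and all function names are invented for illustration:

```python
import math
import random

NUM_ACTIONS = 3  # hypothetical tiny discrete action space

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def off_policy_pg_update(theta, episodes, behavior_prob, lr=0.1):
    """One off-policy REINFORCE step on logged (action, reward) pairs.

    The log was collected under a behavior policy that picked each
    action with probability `behavior_prob`; the importance weight
    pi_theta(a) / behavior_prob corrects for that mismatch so the
    update is unbiased for the target policy pi_theta.
    """
    grad = [0.0] * NUM_ACTIONS
    for action, reward in episodes:
        probs = softmax(theta)
        w = probs[action] / behavior_prob  # importance weight
        for a in range(NUM_ACTIONS):
            indicator = 1.0 if a == action else 0.0
            # grad of log pi_theta(action) w.r.t. theta[a], scaled
            # by the importance-weighted reward
            grad[a] += w * reward * (indicator - probs[a])
    n = len(episodes)
    return [t + lr * g / n for t, g in zip(theta, grad)]

random.seed(0)
# Simulated offline log: only action 2 yields reward; actions were
# chosen uniformly at random by the behavior policy.
log = [(a, 1.0 if a == 2 else 0.0)
       for a in (random.randrange(NUM_ACTIONS) for _ in range(200))]

theta = [0.0] * NUM_ACTIONS
for _ in range(50):
    theta = off_policy_pg_update(theta, log, behavior_prob=1.0 / NUM_ACTIONS)

probs = softmax(theta)
print(probs.index(max(probs)))  # the learned policy should favor action 2
```

Because the update never queries a live user, it matches the offline setting the abstract emphasizes: the gradient is estimated entirely from the fixed log, with importance weights standing in for online interaction.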
