What Makes A Good Story? Designing Composite Rewards for Visual Storytelling
Junjie Hu, Yu Cheng, Zhe Gan, Jingjing Liu, Jianfeng Gao, Graham Neubig
Code: github.com/JunjieHu/ReCo-RL (official implementation, PyTorch)
Abstract
Previous storytelling approaches have mostly focused on optimizing traditional metrics such as BLEU, ROUGE and CIDEr. In this paper, we re-examine the problem from a different angle, by looking closely at what defines a realistically natural and topically coherent story. To this end, we propose three assessment criteria — relevance, coherence and expressiveness — which our empirical analysis suggests together constitute a "high-quality" story to the human eye. Following this quality guideline, we propose a reinforcement learning framework, ReCo-RL, with reward functions designed to capture the essence of these quality criteria. Experiments on the Visual Storytelling Dataset (VIST) with both automatic and human evaluations demonstrate that ReCo-RL outperforms state-of-the-art baselines on both traditional metrics and the proposed new criteria.
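The composite-reward idea can be sketched as a weighted combination of the three criteria, producing a scalar reward suitable for a policy-gradient update. The scorer functions, weights, and function names below are illustrative placeholders, not the paper's actual reward definitions:

```python
# Illustrative sketch of a composite reward for visual storytelling.
# The three scorers and their weights are hypothetical toy proxies,
# NOT ReCo-RL's actual relevance/coherence/expressiveness rewards.

def relevance(sentence, image_tags):
    """Toy proxy: fraction of image tags mentioned in the sentence."""
    words = set(sentence.lower().split())
    return sum(t in words for t in image_tags) / max(len(image_tags), 1)

def coherence(prev_sentence, sentence):
    """Toy proxy: Jaccard word overlap with the previous sentence."""
    a = set(prev_sentence.lower().split())
    b = set(sentence.lower().split())
    return len(a & b) / max(len(a | b), 1)

def expressiveness(sentence, story_so_far):
    """Toy proxy: penalize word repetition against the story so far."""
    words = sentence.lower().split()
    seen = set(story_so_far.lower().split())
    repeated = sum(w in seen for w in words)
    return 1.0 - repeated / max(len(words), 1)

def composite_reward(sentence, prev_sentence, story_so_far, image_tags,
                     weights=(1.0, 1.0, 1.0)):
    """Weighted sum used as the scalar reward in a REINFORCE-style update."""
    w_r, w_c, w_e = weights
    return (w_r * relevance(sentence, image_tags)
            + w_c * coherence(prev_sentence, sentence)
            + w_e * expressiveness(sentence, story_so_far))
```

In an RL training loop, this scalar would weight the log-likelihood of the sampled sentence; the relative weights control the trade-off among the three criteria.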
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| VIST | BLEU-RL | BLEU-4 | 14.4 | — | Unverified |
| VIST | MLE | BLEU-4 | 14.3 | — | Unverified |
| VIST | AREL | BLEU-4 | 13.6 | — | Unverified |
| VIST | ReCo-RL | BLEU-4 | 12.4 | — | Unverified |
| VIST | HSRL | BLEU-4 | 9.8 | — | Unverified |