SOTAVerified

Visual Storytelling via Predicting Anchor Word Embeddings in the Stories

2020-01-13

Bowen Zhang, Hexiang Hu, Fei Sha

Abstract

We propose a learning model for the task of visual storytelling. The main idea is to predict anchor word embeddings from the images and use the embeddings and the image features jointly to generate narrative sentences. We use the embeddings of randomly sampled nouns from the ground-truth stories as the target anchor word embeddings to learn the predictor. To narrate a sequence of images, we use the predicted anchor word embeddings and the image features as the joint input to a seq2seq model. In contrast to state-of-the-art methods, the proposed model is simple in design, easy to optimize, and attains the best results on most automatic evaluation metrics. In human evaluation, the method also outperforms competing methods.
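The core training idea described above (regressing an image feature onto the embedding of a noun sampled from the paired ground-truth story) can be sketched as follows. This is a hypothetical, minimal NumPy illustration, not the authors' implementation: the dimensions, the single linear predictor, and the random stand-in word embeddings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, not taken from the paper.
img_dim, embed_dim, vocab = 64, 32, 100

# Pretrained word embeddings for the vocabulary (random stand-ins here).
word_emb = rng.normal(size=(vocab, embed_dim))

# Anchor predictor: a single linear map from image feature to embedding space.
W = rng.normal(scale=0.1, size=(img_dim, embed_dim))

def predict_anchor(img_feat):
    """Predict an anchor word embedding from an image feature vector."""
    return img_feat @ W

# Training signal: the embedding of a noun randomly sampled from the
# ground-truth story paired with this image.
img_feat = rng.normal(size=(img_dim,))
noun_id = rng.integers(vocab)
target = word_emb[noun_id]

# One gradient step on the mean squared error between prediction and target.
pred = predict_anchor(img_feat)
grad_W = np.outer(img_feat, 2.0 * (pred - target)) / embed_dim
W -= 0.01 * grad_W

# At narration time, the predicted anchor embedding is concatenated with the
# image feature and fed as the joint input to a seq2seq decoder (omitted here).
joint_input = np.concatenate([img_feat, predict_anchor(img_feat)])
print(joint_input.shape)  # (96,)
```

The seq2seq decoder itself is standard and is left out; the sketch only shows why the approach is simple to optimize: the anchor predictor reduces to a regression problem with a fixed embedding target.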

Tasks

Visual Storytelling

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| VIST | StoryAnchor: w/ Predicted Nouns | BLEU-4 | 14 | — | Unverified |
