
A Hierarchical Approach for Generating Descriptive Image Paragraphs

2016-11-20 · CVPR 2017 · Code Available

Jonathan Krause, Justin Johnson, Ranjay Krishna, Li Fei-Fei


Abstract

Recent progress on image captioning has made it possible to generate novel sentences describing images in natural language, but compressing an image into a single sentence can describe visual content in only coarse detail. While one new captioning approach, dense captioning, can potentially describe images in finer levels of detail by captioning many regions within an image, it in turn is unable to produce a coherent story for an image. In this paper we overcome these limitations by generating entire paragraphs for describing images, which can tell detailed, unified stories. We develop a model that decomposes both images and paragraphs into their constituent parts, detecting semantic regions in images and using a hierarchical recurrent neural network to reason about language. Linguistic analysis confirms the complexity of the paragraph generation task, and thorough experiments on a new dataset of image and paragraph pairs demonstrate the effectiveness of our approach.
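The abstract's two-level decomposition can be illustrated with a toy sketch: pooled region features drive a sentence-level RNN that emits one topic vector per sentence, and each topic seeds a word-level RNN that greedily emits word ids. This is a minimal NumPy illustration of the hierarchy only; the dimensions, vanilla-RNN cells, random weights, and greedy decoding are assumptions for demonstration, not the paper's trained model or its region-detection pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
D_REGION, D_HID, VOCAB = 8, 16, 20
MAX_SENTENCES, MAX_WORDS = 3, 5

def rnn_step(x, h, Wx, Wh):
    """One vanilla-RNN step: h' = tanh(Wx x + Wh h)."""
    return np.tanh(Wx @ x + Wh @ h)

def generate_paragraph(region_feats):
    """Two-level decoding: a sentence RNN emits a topic vector per
    sentence; a word RNN expands each topic into word ids (greedy)."""
    pooled = region_feats.mean(axis=0)  # pool detected-region features
    # Random weights stand in for trained parameters.
    Wx_s = rng.standard_normal((D_HID, D_REGION))
    Wh_s = rng.standard_normal((D_HID, D_HID))
    W_topic = rng.standard_normal((D_HID, D_HID))
    Wx_w = rng.standard_normal((D_HID, D_HID))
    Wh_w = rng.standard_normal((D_HID, D_HID))
    W_out = rng.standard_normal((VOCAB, D_HID))

    h_s = np.zeros(D_HID)
    paragraph = []
    for _ in range(MAX_SENTENCES):
        h_s = rnn_step(pooled, h_s, Wx_s, Wh_s)  # sentence-level state
        topic = W_topic @ h_s                    # topic vector for this sentence
        h_w = np.zeros(D_HID)
        sentence = []
        for _ in range(MAX_WORDS):
            h_w = rnn_step(topic, h_w, Wx_w, Wh_w)
            sentence.append(int(np.argmax(W_out @ h_w)))  # greedy word id
        paragraph.append(sentence)
    return paragraph

regions = rng.standard_normal((4, D_REGION))  # e.g. 4 detected regions
para = generate_paragraph(regions)
```

In the actual model the sentence RNN also predicts a continue/stop distribution that decides how many sentences to generate, rather than using a fixed count as here.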

Tasks

Image Paragraph Captioning

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
| --- | --- | --- | --- | --- | --- |
| Image Paragraph Captioning | Regions-Hierarchical (ours) | BLEU-4 | 8.69 | — | Unverified |

Reproductions