
Multimodal Convolutional Neural Networks for Matching Image and Sentence

2015-04-23 · ICCV 2015 · Code Available

Lin Ma, Zhengdong Lu, Lifeng Shang, Hang Li

Abstract

In this paper, we propose multimodal convolutional neural networks (m-CNNs) for matching image and sentence. Our m-CNN provides an end-to-end framework with convolutional architectures to exploit image representation, word composition, and the matching relations between the two modalities. More specifically, it consists of one image CNN encoding the image content and one matching CNN learning the joint representation of image and sentence. The matching CNN composes words into different semantic fragments and learns the inter-modal relations between the image and the composed fragments at different levels, thus fully exploiting the matching relations between image and sentence. Experimental results on benchmark databases for bidirectional image and sentence retrieval demonstrate that the proposed m-CNNs can effectively capture the information necessary for matching image and sentence. Specifically, our proposed m-CNNs achieve state-of-the-art performance for bidirectional image and sentence retrieval on the Flickr30K and Microsoft COCO databases.

Tasks

Bidirectional image and sentence retrieval (image-to-sentence and sentence-to-image).

Benchmark Results

Dataset           | Model | Metric | Claimed | Verified | Status
Flickr30K 1K test | mCNN  | R@1    | 26.2    | (none)   | Unverified

Reproductions

No reproductions have been submitted yet.