
A Deep Visual Correspondence Embedding Model for Stereo Matching Costs

2015-12-01 · ICCV 2015

Zhuoyuan Chen, Xun Sun, Liang Wang, Yinan Yu, Chang Huang


Abstract

This paper presents a data-driven matching cost for stereo matching. A novel deep visual correspondence embedding model is trained via a Convolutional Neural Network on a large set of stereo images with ground-truth disparities. This deep embedding model leverages appearance data to learn visual similarity relationships between corresponding image patches, and explicitly maps intensity values into an embedding feature space in which pixel dissimilarities are measured. Experimental results on the KITTI and Middlebury data sets demonstrate the effectiveness of our model. First, we show that the new measure of pixel dissimilarity outperforms traditional matching costs. Furthermore, when integrated with a global stereo framework, our method ranks in the top 3 among all two-frame algorithms on the KITTI benchmark. Finally, cross-validation results show that our model makes correct predictions on unseen data outside its labeled training set.
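To make the core idea concrete: the matching cost is a dissimilarity computed between learned embedding vectors of left- and right-image patches rather than between raw intensities. The following is a minimal sketch, not the authors' network: precomputed vectors stand in for the CNN's patch embeddings, Euclidean distance stands in for the learned metric, and `matching_cost` / `best_disparity` are hypothetical names.

```python
import numpy as np

def matching_cost(e_left, e_right):
    """Dissimilarity between two patch embeddings.

    The paper measures pixel dissimilarity in a learned embedding
    space; Euclidean distance is used here as a stand-in metric.
    """
    return np.linalg.norm(e_left - e_right)

def best_disparity(e_left, right_embeddings):
    """Winner-take-all disparity choice (no global optimization):
    pick the candidate whose embedding is closest to the left patch.
    """
    costs = [matching_cost(e_left, e) for e in right_embeddings]
    return int(np.argmin(costs)), costs

# Toy example: 4 candidate disparities, each with an 8-D embedding.
rng = np.random.default_rng(0)
candidates = rng.normal(size=(4, 8))
left = candidates[2] + 0.01 * rng.normal(size=8)  # true match at d = 2
d, costs = best_disparity(left, candidates)
print(d)  # disparity with the lowest embedding-space cost
```

In the paper, this per-pixel cost volume is then fed into a global stereo framework rather than decided per pixel as above.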
