
Multi-view shape estimation of transparent containers

2019-11-27 · Code Available

Alessio Xompero, Ricardo Sanchez-Matilla, Apostolos Modas, Pascal Frossard, Andrea Cavallaro


Abstract

The 3D localisation of an object and the estimation of its properties, such as shape and dimensions, are challenging under varying degrees of transparency and lighting conditions. In this paper, we propose a method for jointly localising container-like objects and estimating their dimensions using two wide-baseline, calibrated RGB cameras. Under the assumption of circular symmetry along the vertical axis, we estimate the dimensions of an object with a generative 3D sampling model of sparse circumferences, iterative shape fitting and image re-projection, verifying the sampling hypotheses in each camera using semantic segmentation masks. We evaluate the proposed method on a novel dataset of objects with different degrees of transparency, captured under different backgrounds and illumination conditions. Our method, which uses only RGB images, outperforms a deep-learning-based approach that uses depth maps in terms of both localisation success and dimension estimation accuracy.
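The core geometric idea in the abstract — sampling sparse circumferences around a vertical symmetry axis, re-projecting them into each calibrated camera, and scoring the hypothesis against a segmentation mask — can be sketched roughly as below. This is a minimal illustration, not the authors' implementation; all function names, the pinhole camera model, and the mask-overlap score are assumptions for the sake of the example.

```python
import numpy as np

def sample_circumferences(radii, heights, n_pts=36):
    """Sample 3D points on horizontal circles around the vertical (z) axis,
    one circle per (radius, height) pair; circular symmetry is assumed."""
    ang = np.linspace(0.0, 2 * np.pi, n_pts, endpoint=False)
    circles = [
        np.stack([r * np.cos(ang), r * np.sin(ang), np.full(n_pts, h)], axis=1)
        for r, h in zip(radii, heights)
    ]
    return np.concatenate(circles, axis=0)  # (len(radii) * n_pts, 3)

def project(pts, K, R, t):
    """Pinhole projection of Nx3 world points into pixel coordinates
    for a camera with intrinsics K and extrinsics (R, t)."""
    cam = pts @ R.T + t           # world frame -> camera frame
    uvw = cam @ K.T               # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]

def mask_score(pts, K, R, t, mask):
    """Score a shape hypothesis in one view: the fraction of projected
    sample points that land inside the semantic segmentation mask."""
    uv = np.round(project(pts, K, R, t)).astype(int)
    h, w = mask.shape
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    hits = mask[uv[inside, 1], uv[inside, 0]]
    return float(hits.mean()) if hits.size else 0.0
```

A joint two-view score would then combine `mask_score` over both cameras (e.g. their product or minimum), and an iterative fitting loop would perturb the radius/height samples to maximise it. Actual details of the sampling model and the fitting procedure are in the paper.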
