
Visual Compositional Learning for Human-Object Interaction Detection

2020-07-24 · ECCV 2020 · Code Available

Zhi Hou, Xiaojiang Peng, Yu Qiao, Dacheng Tao

Abstract

Human-Object Interaction (HOI) detection aims to localize and infer relationships between humans and objects in an image. It is challenging because the enormous number of possible combinations of object and verb types forms a long-tailed distribution. We devise a deep Visual Compositional Learning (VCL) framework, a simple yet efficient framework for addressing this problem. VCL first decomposes an HOI representation into object-specific and verb-specific features, and then composes new interaction samples in the feature space by stitching the decomposed features together. This integration of decomposition and composition enables VCL to share object and verb features across different HOI samples and images, and to generate new interaction samples and new HOI types, which largely alleviates the long-tail distribution problem and benefits low-shot and zero-shot HOI detection. Extensive experiments demonstrate that VCL effectively improves the generalization of HOI detection on HICO-DET and V-COCO and outperforms recent state-of-the-art methods on HICO-DET. Code is available at https://github.com/zhihou7/VCL.

Tasks

Human-Object Interaction Detection

Benchmark Results

Dataset                      Model   Metric         Claimed   Verified   Status
HICO-DET                     VCL     COCO-Val2017   36.74     -          Unverified
HICO-DET (Unknown Concepts)  VCL     COCO-Val2017   28.71     -          Unverified

Reproductions

None yet. Be the first to reproduce this paper.