
Affordance Transfer Learning for Human-Object Interaction Detection

2021-04-07 · CVPR 2021 · Code Available

Zhi Hou, Baosheng Yu, Yu Qiao, Xiaojiang Peng, Dacheng Tao


Abstract

Reasoning about human-object interactions (HOI) is essential for deeper scene understanding, while object affordances (or functionalities) are of great importance for humans to discover unseen HOIs with novel objects. Inspired by this, we introduce an affordance transfer learning approach to jointly detect HOIs with novel objects and recognize affordances. Specifically, HOI representations can be decoupled into a combination of affordance and object representations, making it possible to compose novel interactions by combining affordance representations with novel object representations from additional images, i.e., transferring the affordance to novel objects. With the proposed affordance transfer learning, the model is also capable of inferring the affordances of novel objects from known affordance representations. The proposed method can thus be used to 1) improve the performance of HOI detection, especially for HOIs with unseen objects; and 2) infer the affordances of novel objects. Experimental results on two datasets, HICO-DET and HOI-COCO (built from V-COCO), demonstrate significant improvements over recent state-of-the-art methods for both HOI detection and object affordance detection. Code is available at https://github.com/zhihou7/HOI-CL
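The composition idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature dimensions, the concatenation-based composition, and the helper names are all assumptions made for clarity.

```python
import numpy as np

def compose_hoi(affordance_feat, object_feat):
    """Compose an HOI representation from a decoupled affordance (verb)
    feature and an object feature. Here composition is simple
    concatenation; the actual composition operator is a modeling choice."""
    return np.concatenate([affordance_feat, object_feat])

# Affordance feature extracted from a known HOI (e.g. "ride" from "ride horse").
ride = np.random.rand(128)

# Object feature from an additional image of a novel object (e.g. "camel").
camel = np.random.rand(128)

# Transfer the affordance to the novel object: a composite "ride camel" HOI
# representation, which can then be scored by an HOI classifier.
novel_hoi = compose_hoi(ride, camel)
print(novel_hoi.shape)  # (256,)
```

The same decoupling runs in reverse for affordance inference: pairing a novel object's feature with each known affordance feature and checking which composites the HOI classifier accepts indicates which affordances the object supports.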

Benchmark Results

| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| HICO-DET | ATL | COCO-Val2017 | 52.01 | — | Unverified |
| HICO-DET (Unknown Concepts) | ATL | COCO-Val2017 | 36.8 | — | Unverified |
