
DARA: Domain- and Relation-aware Adapters Make Parameter-efficient Tuning for Visual Grounding

2024-05-10 · Code Available

Ting Liu, Xuyang Liu, Siteng Huang, Honggang Chen, Quanjun Yin, Long Qin, Donglin Wang, Yue Hu


Abstract

Visual grounding (VG) is the challenging task of localizing an object in an image based on a textual description. The recent surge in the scale of VG models has substantially improved performance, but it has also made fine-tuning computationally expensive. In this paper, we explore applying parameter-efficient transfer learning (PETL) to efficiently transfer pre-trained vision-language knowledge to VG. Specifically, we propose DARA, a novel PETL method comprising Domain-aware Adapters (DA Adapters) and Relation-aware Adapters (RA Adapters) for VG. DA Adapters first transfer intra-modality representations to be more fine-grained for the VG domain. RA Adapters then share weights to bridge the relation between the two modalities, improving spatial reasoning. Empirical results on widely-used benchmarks demonstrate that DARA achieves the best accuracy while updating far fewer parameters than full fine-tuning and other PETL methods. Notably, with only 2.13% tunable backbone parameters, DARA improves average accuracy by 0.81% across the three benchmarks compared to the baseline model. Our code is available at https://github.com/liuting20/DARA.
