SOTAVerified

CapeLLM: Support-Free Category-Agnostic Pose Estimation with Multimodal Large Language Models

2024-11-11

Junho Kim, Hyungjin Chung, Byung-Hoon Kim


Abstract

Category-agnostic pose estimation (CAPE) has traditionally relied on support images with annotated keypoints, a process that is often cumbersome and may fail to fully capture the correspondences needed across diverse object categories. Recent efforts have begun exploring text-based queries, which eliminate the need for support keypoints. However, the optimal use of textual descriptions for keypoints remains underexplored. In this work, we introduce CapeLLM, a novel approach that leverages a text-based multimodal large language model (MLLM) for CAPE. Our method takes only the query image and detailed text descriptions as input to estimate category-agnostic keypoints. We conduct extensive experiments to systematically explore the design space of LLM-based CAPE, investigating factors such as the optimal keypoint descriptions, neural network architecture, and training strategy. Thanks to the advanced reasoning capabilities of the pre-trained MLLM, CapeLLM demonstrates superior generalization and robust performance. Our approach sets a new state of the art on the MP-100 benchmark in the challenging 1-shot setting, marking a significant advance in category-agnostic pose estimation.
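The abstract describes an inference interface that takes a query image plus per-keypoint text descriptions and returns keypoint coordinates, with no support image required. The sketch below illustrates that interface shape only; the prompt wording, coordinate convention, and output parsing are assumptions for illustration, not the paper's actual implementation.

```python
# Hypothetical sketch of support-free CAPE inference with an MLLM.
# Prompt format, answer format, and normalized [0, 1] coordinates are
# assumptions; the real model call is mocked here.
import re


def build_prompt(category: str, keypoint_descriptions: list[str]) -> str:
    """Compose a text-only query: no support image or annotated keypoints."""
    lines = [f"Locate the following keypoints of the {category} in the image."]
    for i, desc in enumerate(keypoint_descriptions, 1):
        lines.append(f"{i}. {desc}")
    lines.append("Answer with one '(x, y)' pair per keypoint, normalized to [0, 1].")
    return "\n".join(lines)


def parse_keypoints(model_output: str) -> list[tuple[float, float]]:
    """Extract '(x, y)' pairs from the MLLM's free-form text response."""
    pairs = re.findall(r"\(\s*([0-9.]+)\s*,\s*([0-9.]+)\s*\)", model_output)
    return [(float(x), float(y)) for x, y in pairs]


# Example with a mocked model response for two keypoint descriptions.
prompt = build_prompt("bird", ["tip of the beak", "base of the tail"])
response = "1. (0.42, 0.31)\n2. (0.77, 0.58)"  # stand-in for the MLLM output
keypoints = parse_keypoints(response)  # [(0.42, 0.31), (0.77, 0.58)]
```

Because the query is pure text, adding a new category only requires writing keypoint descriptions, which is the "support-free" property the title refers to.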

Benchmark Results

| Dataset | Model   | Metric                  | Claimed | Verified | Status     |
|---------|---------|-------------------------|---------|----------|------------|
| MP-100  | CapeLLM | Mean PCK@0.2 (1-shot)   | 92.6    | —        | Unverified |
