
VGAS: Value-Guided Action-Chunk Selection for Few-Shot Vision-Language-Action Adaptation

2026-02-07

Changhua Xu, Jie Lu, Junyu Xuan, En Yu


Abstract

Vision-Language-Action (VLA) models bridge multimodal reasoning with physical control, but adapting them to new tasks with scarce demonstrations remains unreliable. While fine-tuned VLA policies often produce semantically plausible trajectories, failures frequently arise from unresolved geometric ambiguities, where near-miss action candidates lead to divergent execution outcomes under limited supervision. We study few-shot VLA adaptation from a generation-selection perspective and propose VGAS (Value-Guided Action-chunk Selection), a novel framework that performs inference-time best-of-N selection to identify action chunks that are both semantically faithful and geometrically precise. Specifically, VGAS employs a fine-tuned VLA as a high-recall proposal generator and introduces the Q-Chunk-Former, a geometrically grounded Transformer critic that resolves fine-grained geometric ambiguities. In addition, we propose Explicit Geometric Regularization (EGR), which explicitly shapes a discriminative value landscape to preserve action-ranking resolution among near-miss candidates while mitigating value instability under scarce supervision. Experiments and theoretical analysis demonstrate that VGAS consistently improves success rates and robustness under limited demonstrations and distribution shifts. Our code is available at https://github.com/Jyugo-15/VGAS.
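The abstract describes a generation-selection loop: the fine-tuned VLA proposes N candidate action chunks, and a learned critic picks the one with the highest predicted value. The sketch below illustrates that best-of-N selection step in Python; `vla_policy.sample_chunk` and the `critic` interface are hypothetical stand-ins, not the paper's actual API, and details such as the number of candidates are assumptions.

```python
import torch

def select_action_chunk(vla_policy, critic, observation, instruction, num_candidates=16):
    """Value-guided best-of-N selection over sampled action chunks.

    `vla_policy` and `critic` are hypothetical stand-ins: the policy is assumed
    to expose sample_chunk(observation, instruction) returning one action chunk
    of shape (chunk_len, action_dim), and the critic is assumed to map
    (observation, batch of action chunks) to scalar value estimates.
    """
    # Generation: draw N candidate action chunks from the fine-tuned VLA policy.
    candidates = torch.stack([
        vla_policy.sample_chunk(observation, instruction)
        for _ in range(num_candidates)
    ])  # shape: (N, chunk_len, action_dim)

    # Selection: score every candidate with the critic and execute the chunk
    # with the highest predicted value.
    with torch.no_grad():
        values = critic(observation, candidates)  # shape: (N,)
    best = torch.argmax(values)
    return candidates[best]
```

In this framing, the policy only needs high recall (some sampled chunk is good), while the critic supplies the fine-grained geometric discrimination among near-miss candidates.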
