VGAS: Value-Guided Action-Chunk Selection for Few-Shot Vision-Language-Action Adaptation
Changhua Xu, Jie Lu, Junyu Xuan, En Yu
Abstract
Vision-Language-Action (VLA) models bridge multimodal reasoning with physical control, but adapting them to new tasks with scarce demonstrations remains unreliable. While fine-tuned VLA policies often produce semantically plausible trajectories, failures frequently arise from unresolved geometric ambiguities, where near-miss action candidates lead to divergent execution outcomes under limited supervision. We study few-shot VLA adaptation from a generation-selection perspective and propose VGAS (Value-Guided Action-chunk Selection), a novel framework that performs inference-time best-of-N selection to identify action chunks that are both semantically faithful and geometrically precise. Specifically, VGAS employs a fine-tuned VLA as a high-recall proposal generator and introduces the Q-Chunk-Former, a geometrically grounded Transformer critic that resolves fine-grained geometric ambiguities. In addition, we propose Explicit Geometric Regularization (EGR), which shapes a discriminative value landscape to preserve action-ranking resolution among near-miss candidates while mitigating value instability under scarce supervision. Experiments and theoretical analysis demonstrate that VGAS consistently improves success rates and robustness under limited demonstrations and distribution shifts. Our code is available at https://github.com/Jyugo-15/VGAS.
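To make the generation-selection idea concrete, the sketch below illustrates inference-time best-of-N action-chunk selection in pure Python. The generator and critic here (`sample_chunk`, `q_value`) are hypothetical stand-ins for the fine-tuned VLA and the Q-Chunk-Former, not the paper's actual models; only the selection loop reflects the described mechanism.

```python
import math
import random

random.seed(0)

# Hypothetical dimensions for illustration only.
HORIZON, ACTION_DIM = 8, 7

def sample_chunk():
    """Stand-in for the VLA proposal generator: one candidate
    action chunk of shape (HORIZON, ACTION_DIM)."""
    return [[random.gauss(0.0, 1.0) for _ in range(ACTION_DIM)]
            for _ in range(HORIZON)]

def q_value(chunk):
    """Stand-in for the Q-Chunk-Former critic: score a whole chunk.
    Here a toy heuristic preferring low-magnitude actions; the real
    critic is a learned, geometrically grounded Transformer."""
    return -math.sqrt(sum(a * a for step in chunk for a in step))

def best_of_n(n=16):
    """Inference-time selection: draw N candidate chunks and
    execute the one the critic ranks highest."""
    candidates = [sample_chunk() for _ in range(n)]
    return max(candidates, key=q_value)

chunk = best_of_n()
print(len(chunk), len(chunk[0]))  # 8 7
```

The key design point is that the policy is used only as a high-recall proposer; disambiguation among near-miss candidates is delegated entirely to the critic's ranking.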