SOTAVerified

Vision-Language-Action

Papers

Showing 131–140 of 157 papers

Title | Status | Hype
Uni-NaVid: A Video-based Vision-Language-Action Model for Unifying Embodied Navigation Tasks | | 0
NaVILA: Legged Robot Vision-Language-Action Model for Navigation | | 0
Quantization-Aware Imitation-Learning for Resource-Efficient Robotic Control | | 0
CogACT: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation | | 0
GRAPE: Generalizing Robot Policy via Preference Alignment | | 0
π_0: A Vision-Language-Action Flow Model for General Robot Control | | 0
A Dual Process VLA: Efficient Robotic Manipulation Leveraging VLM | | 0
Vision-Language-Action Model and Diffusion Policy Switching Enables Dexterous Control of an Anthropomorphic Hand | | 0
Towards Synergistic, Generalized, and Efficient Dual-System for Robotic Manipulation | | 0
LADEV: A Language-Driven Testing and Evaluation Platform for Vision-Language-Action Models in Robotic Manipulation | | 0
