
Vision-Language-Action

Papers

Showing 111–120 of 157 papers

Title | Status | Hype
VLABench: A Large-Scale Benchmark for Language-Conditioned Robotics Manipulation with Long-Horizon Reasoning Tasks | - | 0
QUART-Online: Latency-Free Large Multimodal Language Model for Quadruped Robot Learning | - | 0
Towards Generalist Robot Policies: What Matters in Building Vision-Language-Action Models | Code | 3
RoboMIND: Benchmark on Multi-embodiment Intelligence Normative Data for Robot Manipulation | - | 0
Modality-Driven Design for Multi-Step Dexterous Manipulation: Insights from Neuroscience | - | 0
TraceVLA: Visual Trace Prompting Enhances Spatial-Temporal Awareness for Generalist Robotic Policies | - | 0
Uni-NaVid: A Video-based Vision-Language-Action Model for Unifying Embodied Navigation Tasks | - | 0
NaVILA: Legged Robot Vision-Language-Action Model for Navigation | - | 0
Quantization-Aware Imitation-Learning for Resource-Efficient Robotic Control | - | 0
RoboMatrix: A Skill-centric Hierarchical Framework for Scalable Robot Task Planning and Execution in Open-World | Code | 2
Page 12 of 16

No leaderboard results yet.