SOTAVerified

Vision-Language-Action Papers

Showing 81-90 of 157 papers

Title | Status | Hype
RoboMamba: Efficient Vision-Language-Action Model for Robotic Reasoning and Manipulation | - | 0
RoboMIND: Benchmark on Multi-embodiment Intelligence Normative Data for Robot Manipulation | - | 0
RoboMonkey: Scaling Test-Time Sampling and Verification for Vision-Language-Action Models | - | 0
Robotic Control via Embodied Chain-of-Thought Reasoning | - | 0
Robotic Policy Learning via Human-assisted Action Preference Optimization | - | 0
ROSA: Harnessing Robot States for Vision-Language and Action Alignment | - | 0
RT-cache: Efficient Robot Trajectory Retrieval System | - | 0
Run-time Observation Interventions Make Vision-Language-Action Models More Visually Robust | - | 0
SAFE: Multitask Failure Detection for Vision-Language-Action Models | - | 0
SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning | - | 0
Page 9 of 16

No leaderboard results yet.