SOTAVerified

Vision-Language-Action

Papers

Showing 71–80 of 157 papers

| Title | Status | Hype |
|---|---|---|
| Probing a Vision-Language-Action Model for Symbolic States and Integration into a Cognitive Architecture | | 0 |
| Quantization-Aware Imitation-Learning for Resource-Efficient Robotic Control | | 0 |
| QUART-Online: Latency-Free Large Multimodal Language Model for Quadruped Robot Learning | | 0 |
| QUAR-VLA: Vision-Language-Action Model for Quadruped Robots | | 0 |
| ReAgent-V: A Reward-Driven Multi-Agent Framework for Video Understanding | | 0 |
| ReBot: Scaling Robot Learning with Real-to-Sim-to-Real Robotic Video Synthesis | | 0 |
| Refined Policy Distillation: From VLA Generalists to RL Experts | | 0 |
| ReVLA: Reverting Visual Domain Limitation of Robotic Foundation Models | | 0 |
| RLRC: Reinforcement Learning-based Recovery for Compressed Vision-Language-Action Models | | 0 |
| RoboCerebra: A Large-scale Benchmark for Long-horizon Robotic Manipulation Evaluation | | 0 |
Page 8 of 16
