| Paper | Date | Tags |
| --- | --- | --- |
| VLABench: A Large-Scale Benchmark for Language-Conditioned Robotics Manipulation with Long-Horizon Reasoning Tasks | Dec 24, 2024 | Common Sense Reasoning, Transfer Learning |
| QUART-Online: Latency-Free Large Multimodal Language Model for Quadruped Robot Learning | Dec 20, 2024 | Language Modeling |
| RoboMIND: Benchmark on Multi-embodiment Intelligence Normative Data for Robot Manipulation | Dec 18, 2024 | Diversity, Imitation Learning |
| Modality-Driven Design for Multi-Step Dexterous Manipulation: Insights from Neuroscience | Dec 15, 2024 | Vision-Language-Action |
| TraceVLA: Visual Trace Prompting Enhances Spatial-Temporal Awareness for Generalist Robotic Policies | Dec 13, 2024 | Robot Manipulation, Vision-Language-Action |
| Uni-NaVid: A Video-based Vision-Language-Action Model for Unifying Embodied Navigation Tasks | Dec 9, 2024 | Vision-Language-Action |
| NaVILA: Legged Robot Vision-Language-Action Model for Navigation | Dec 5, 2024 | Navigate, Vision and Language Navigation |
| Quantization-Aware Imitation-Learning for Resource-Efficient Robotic Control | Dec 2, 2024 | Autonomous Driving, Decision Making |
| CogACT: A Foundational Vision-Language-Action Model for Synergizing Cognition and Action in Robotic Manipulation | Nov 29, 2024 | Quantization, Vision-Language-Action |
| GRAPE: Generalizing Robot Policy via Preference Alignment | Nov 28, 2024 | Vision-Language-Action |
| π_0: A Vision-Language-Action Flow Model for General Robot Control | Oct 31, 2024 | Language Modeling |
| A Dual Process VLA: Efficient Robotic Manipulation Leveraging VLM | Oct 21, 2024 | Decision Making, Vision-Language-Action |
| Vision-Language-Action Model and Diffusion Policy Switching Enables Dexterous Control of an Anthropomorphic Hand | Oct 17, 2024 | Vision-Language-Action |
| Towards Synergistic, Generalized, and Efficient Dual-System for Robotic Manipulation | Oct 10, 2024 | Robot Manipulation, Vision-Language-Action |
| LADEV: A Language-Driven Testing and Evaluation Platform for Vision-Language-Action Models in Robotic Manipulation | Oct 7, 2024 | Vision-Language-Action |
| Run-time Observation Interventions Make Vision-Language-Action Models More Visually Robust | Oct 2, 2024 | Vision-Language-Action |
| ReVLA: Reverting Visual Domain Limitation of Robotic Foundation Models | Sep 23, 2024 | Vision-Language-Action |
| Manipulation Facing Threats: Evaluating Physical Vulnerabilities in End-to-End Vision Language Action Models | Sep 20, 2024 | Vision-Language-Action |
| HiRT: Enhancing Robotic Control with Hierarchical Robot Transformers | Sep 12, 2024 | Vision-Language-Action |
| OccLLaMA: An Occupancy-Language-Action Generative World Model for Autonomous Driving | Sep 5, 2024 | Autonomous Driving, Motion Planning |
| CoVLA: Comprehensive Vision-Language-Action Dataset for Autonomous Driving | Aug 19, 2024 | Autonomous Driving, Caption Generation |
| Robotic Control via Embodied Chain-of-Thought Reasoning | Jul 11, 2024 | Vision-Language-Action |
| Mobility VLA: Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs | Jul 10, 2024 | Common Sense Reasoning, Vision-Language-Action |
| OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents | Jun 27, 2024 | Decoder, Imitation Learning |
| Towards Natural Language-Driven Assembly Using Foundation Models | Jun 23, 2024 | Friction, Vision-Language-Action |