| Title | Date | Tags |
|---|---|---|
| LoHoVLA: A Unified Vision-Language-Action Model for Long-Horizon Embodied Tasks | May 31, 2025 | Task Planning, Vision-Language-Action |
| Manipulation Facing Threats: Evaluating Physical Vulnerabilities in End-to-End Vision Language Action Models | Sep 20, 2024 | Vision-Language-Action |
| Mobility VLA: Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs | Jul 10, 2024 | Common Sense Reasoning, Vision-Language-Action |
| Modality-Driven Design for Multi-Step Dexterous Manipulation: Insights from Neuroscience | Dec 15, 2024 | Vision-Language-Action |
| MoLe-VLA: Dynamic Layer-skipping Vision Language Action Model via Mixture-of-Layers for Efficient Robot Manipulation | Mar 26, 2025 | Knowledge Distillation, Mixture-of-Experts |
| MoManipVLA: Transferring Vision-language-action Models for General Mobile Manipulation | Mar 17, 2025 | Motion Planning, Vision-Language-Action |
| MoRE: Unlocking Scalability in Reinforcement Learning for Quadruped Vision-Language-Action Models | Mar 11, 2025 | Large Language Model, Mixture-of-Experts |
| NaVILA: Legged Robot Vision-Language-Action Model for Navigation | Dec 5, 2024 | Navigate, Vision and Language Navigation |
| NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks | Apr 28, 2025 | Task Planning, Vision-Language-Action |
| Object-Centric Prompt-Driven Vision-Language-Action Model for Robotic Manipulation | Jan 1, 2025 | Vision-Language-Action |
| Object-Focus Actor for Data-efficient Robot Generalization Dexterous Manipulation | May 21, 2025 | Object, Pose Estimation |
| ObjectVLA: End-to-End Open-World Object Manipulation Without Demonstration | Feb 26, 2025 | Imitation Learning, Object |
| OccLLaMA: An Occupancy-Language-Action Generative World Model for Autonomous Driving | Sep 5, 2024 | Autonomous Driving, Motion Planning |
| OG-VLA: 3D-Aware Vision Language Action Model via Orthographic Image Generation | Jun 1, 2025 | Image Generation, Large Language Model |
| OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents | Jun 27, 2024 | Decoder, Imitation Learning |
| OPAL: Encoding Causal Understanding of Physical Systems for Robot Learning | Apr 9, 2025 | Vision-Language-Action |
| OTTER: A Vision-Language-Action Model with Text-Aware Visual Feature Extraction | Mar 5, 2025 | Vision-Language-Action, Zero-shot Generalization |
| π_0.5: a Vision-Language-Action Model with Open-World Generalization | Apr 22, 2025 | Transfer Learning, Vision-Language-Action |
| π_0: A Vision-Language-Action Flow Model for General Robot Control | Oct 31, 2024 | Language Modeling |
| Pixel Motion as Universal Representation for Robot Control | May 12, 2025 | Vision-Language-Action |
| Probing a Vision-Language-Action Model for Symbolic States and Integration into a Cognitive Architecture | Feb 6, 2025 | Object, Vision-Language-Action |
| Quantization-Aware Imitation-Learning for Resource-Efficient Robotic Control | Dec 2, 2024 | Autonomous Driving, Decision Making |
| QUART-Online: Latency-Free Large Multimodal Language Model for Quadruped Robot Learning | Dec 20, 2024 | Language Modeling |
| QUAR-VLA: Vision-Language-Action Model for Quadruped Robots | Dec 22, 2023 | Decision Making, Vision-Language-Action |
| ReAgent-V: A Reward-Driven Multi-Agent Framework for Video Understanding | Jun 2, 2025 | Action Recognition, Video Understanding |
| ReBot: Scaling Robot Learning with Real-to-Sim-to-Real Robotic Video Synthesis | Mar 15, 2025 | Domain Generalization, Robot Manipulation |
| Refined Policy Distillation: From VLA Generalists to RL Experts | Mar 6, 2025 | Vision-Language-Action |
| ReVLA: Reverting Visual Domain Limitation of Robotic Foundation Models | Sep 23, 2024 | Vision-Language-Action |
| RLRC: Reinforcement Learning-based Recovery for Compressed Vision-Language-Action Models | Jun 21, 2025 | Model Compression, Quantization |
| RoboCerebra: A Large-scale Benchmark for Long-horizon Robotic Manipulation Evaluation | Jun 7, 2025 | Vision-Language-Action |
| RoboMamba: Efficient Vision-Language-Action Model for Robotic Reasoning and Manipulation | Jun 6, 2024 | Common Sense Reasoning, Mamba |
| RoboMIND: Benchmark on Multi-embodiment Intelligence Normative Data for Robot Manipulation | Dec 18, 2024 | Diversity, Imitation Learning |
| RoboMonkey: Scaling Test-Time Sampling and Verification for Vision-Language-Action Models | Jun 21, 2025 | Synthetic Data Generation, Vision-Language-Action |
| Robotic Control via Embodied Chain-of-Thought Reasoning | Jul 11, 2024 | Vision-Language-Action |
| Robotic Policy Learning via Human-assisted Action Preference Optimization | Jun 8, 2025 | Vision-Language-Action |
| ROSA: Harnessing Robot States for Vision-Language and Action Alignment | Jun 16, 2025 | State Estimation, Vision-Language-Action |
| RT-cache: Efficient Robot Trajectory Retrieval System | May 14, 2025 | Retrieval, Vision-Language-Action |
| Run-time Observation Interventions Make Vision-Language-Action Models More Visually Robust | Oct 2, 2024 | Vision-Language-Action |
| SAFE: Multitask Failure Detection for Vision-Language-Action Models | Jun 11, 2025 | Conformal Prediction, Vision-Language-Action |
| SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning | Mar 5, 2025 | Safe Reinforcement Learning, Safety Alignment |
| SARA-RT: Scaling up Robotics Transformers with Self-Adaptive Robust Attention | Dec 4, 2023 | Vision-Language-Action |
| SOLAMI: Social Vision-Language-Action Modeling for Immersive Interaction with 3D Autonomous Characters | Jan 1, 2025 | Vision-Language-Action |
| Survey on Vision-Language-Action Models | Feb 7, 2025 | Review Generation, Survey |
| Towards a Generalizable Bimanual Foundation Policy via Flow-based Video Prediction | May 30, 2025 | Action Generation, Optical Flow Estimation |
| Towards Natural Language-Driven Assembly Using Foundation Models | Jun 23, 2024 | Friction, Vision-Language-Action |
| A Taxonomy for Evaluating Generalist Robot Policies | Mar 3, 2025 | Robot Manipulation, Vision-Language-Action |
| TraceVLA: Visual Trace Prompting Enhances Spatial-Temporal Awareness for Generalist Robotic Policies | Dec 13, 2024 | Robot Manipulation, Vision-Language-Action |
| TrackVLA: Embodied Visual Tracking in the Wild | May 29, 2025 | Language Modeling |
| Unified Vision-Language-Action Model | Jun 24, 2025 | Autonomous Driving, Model |
| Uni-NaVid: A Video-based Vision-Language-Action Model for Unifying Embodied Navigation Tasks | Dec 9, 2024 | Vision-Language-Action |