| Title | Date | Topics | Status | | |
| --- | --- | --- | --- | --- | --- |
| DriveMoE: Mixture-of-Experts for Vision-Language-Action Model in End-to-End Autonomous Driving | May 22, 2025 | Autonomous Driving, Bench2Drive | Unverified | 0 | 0 |
| EfficientVLA: Training-Free Acceleration and Compression for Vision-Language-Action Models | Jun 11, 2025 | Vision-Language-Action | Unverified | 0 | 0 |
| Embodied AI with Foundation Models for Mobile Service Robots: A Systematic Review | May 26, 2025 | Decision Making Under Uncertainty, Sensor Fusion | Unverified | 0 | 0 |
| EndoVLA: Dual-Phase Vision-Language-Action Model for Autonomous Tracking in Endoscopy | May 21, 2025 | Motion Planning, Vision-Language-Action | Unverified | 0 | 0 |
| Evolution 6.0: Evolving Robotic Capabilities Through Generative Design | Feb 24, 2025 | Action Generation, Text to 3D | Unverified | 0 | 0 |
| FAST: Efficient Action Tokenization for Vision-Language-Action Models | Jan 16, 2025 | Vision-Language-Action | Unverified | 0 | 0 |
| FLARE: Robot Learning with Implicit World Modeling | May 21, 2025 | Imitation Learning, Vision-Language-Action | Unverified | 0 | 0 |
| ForceVLA: Enhancing VLA Models with a Force-aware MoE for Contact-rich Manipulation | May 28, 2025 | Contact-rich Manipulation, Mixture-of-Experts | Unverified | 0 | 0 |
| From Intention to Execution: Probing the Generalization Boundaries of Vision-Language-Action Models | Jun 11, 2025 | Imitation Learning, Vision-Language-Action | Unverified | 0 | 0 |
| General-purpose foundation models for increased autonomy in robot-assisted surgery | Jan 1, 2024 | Vision-Language-Action | Unverified | 0 | 0 |