| Title | Date | Topics | Code | Count |
|---|---|---|---|---|
| Pixel Motion as Universal Representation for Robot Control | May 12, 2025 | Vision-Language-Action | Unverified | 0 |
| UniVLA: Learning to Act Anywhere with Task-centric Latent Actions | May 9, 2025 | Robot Manipulation, Vision-Language-Action | Code Available | 5 |
| 3D CAVLA: Leveraging Depth and 3D Context to Generalize Vision Language Action Models for Unseen Tasks | May 9, 2025 | Vision-Language-Action | Unverified | 0 |
| Benchmarking Vision, Language, & Action Models in Procedurally Generated, Open Ended Action Environments | May 8, 2025 | Benchmarking, Prompt Engineering | Code Available | 1 |
| Vision-Language-Action Models: Concepts, Progress, Applications and Challenges | May 7, 2025 | Autonomous Vehicles, Natural Language Understanding | Unverified | 0 |
| OpenHelix: A Short Survey, Empirical Analysis, and Open-Source Dual-System VLA Model for Robotic Manipulation | May 6, 2025 | Robot Manipulation, Vision-Language-Action | Code Available | 3 |
| Automated Data Curation Using GPS & NLP to Generate Instruction-Action Pairs for Autonomous Vehicle Vision-Language Navigation Datasets | May 6, 2025 | Autonomous Vehicles | Unverified | 0 |
| NORA: A Small Open-Sourced Generalist Vision Language Action Model for Embodied Tasks | Apr 28, 2025 | Task Planning, Vision-Language-Action | Unverified | 0 |
| π_0.5: a Vision-Language-Action Model with Open-World Generalization | Apr 22, 2025 | Transfer Learning, Vision-Language-Action | Unverified | 0 |
| GUI-R1: A Generalist R1-Style Vision-Language Action Model for GUI Agents | Apr 14, 2025 | Vision-Language-Action | Code Available | 3 |