| Title | Date | Tags | Code | Count |
| --- | --- | --- | --- | --- |
| Robotic Control via Embodied Chain-of-Thought Reasoning | Jul 11, 2024 | Vision-Language-Action | Unverified | 0 |
| Mobility VLA: Multimodal Instruction Navigation with Long-Context VLMs and Topological Graphs | Jul 10, 2024 | Common Sense Reasoning, Vision-Language-Action | Unverified | 0 |
| LLaRA: Supercharging Robot Learning Data for Vision-Language Policy | Jun 28, 2024 | Vision-Language-Action, World Knowledge | Code Available | 3 |
| OmniJARVIS: Unified Vision-Language-Action Tokenization Enables Open-World Instruction Following Agents | Jun 27, 2024 | Decoder, Imitation Learning | Unverified | 0 |
| Towards Natural Language-Driven Assembly Using Foundation Models | Jun 23, 2024 | Friction, Vision-Language-Action | Unverified | 0 |
| OpenVLA: An Open-Source Vision-Language-Action Model | Jun 13, 2024 | Imitation Learning, Language Modelling | Code Available | 9 |
| RoboMamba: Efficient Vision-Language-Action Model for Robotic Reasoning and Manipulation | Jun 6, 2024 | Common Sense Reasoning, Mamba | Unverified | 0 |
| Vision-Language Meets the Skeleton: Progressively Distillation with Cross-Modal Knowledge for 3D Action Representation Learning | May 31, 2024 | Action Recognition, Contrastive Learning | Code Available | 0 |
| A Survey on Vision-Language-Action Models for Embodied AI | May 23, 2024 | Image Captioning, Instruction Following | Code Available | 4 |
| LEGENT: Open Platform for Embodied Agents | Apr 28, 2024 | Vision-Language-Action | Unverified | 0 |