SOTAVerified

Action Generation

Papers

Showing 50 of 111 papers

Title | Status | Hype
SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics | Code | 11
Large Action Models: From Inception to Implementation | Code | 9
Fine-Tuning Vision-Language-Action Models: Optimizing Speed and Success | Code | 5
WorldVLA: Towards Autoregressive Action World Model | Code | 4
PokeLLMon: A Human-Parity Agent for Pokemon Battles with Large Language Models | Code | 3
Distilling LLM Agent into Small Models with Retrieval and Code Tools | Code | 3
AutoVLA: A Vision-Language-Action Model for End-to-End Autonomous Driving with Adaptive Reasoning and Reinforcement Fine-Tuning | Code | 3
Affordance-based Robot Manipulation with Flow Matching | Code | 3
Flow Q-Learning | Code | 3
AutoScraper: A Progressive Understanding Web Agent for Web Scraper Generation | Code | 3
Parallels Between VLA Model Post-Training and Human Motor Learning: Progress, Challenges, and Trends | Code | 2
What Makes a Good Diffusion Planner for Decision Making? | Code | 2
Driving with LLMs: Fusing Object-Level Vector Modality for Explainable Autonomous Driving | Code | 2
Agent models: Internalizing Chain-of-Action Generation into Reasoning models | Code | 2
Prior Does Matter: Visual Navigation via Denoising Diffusion Bridge Models | Code | 2
Learning Physically Realizable Skills for Online Packing of General 3D Shapes | Code | 2
LiteWebAgent: The Open-Source Suite for VLM-Based Web-Agent Applications | Code | 2
InfiGUI-R1: Advancing Multimodal GUI Agents from Reactive Actors to Deliberative Reasoners | Code | 2
AICL: Action In-Context Learning for Video Diffusion Model | Code | 1
Time to Talk: LLM Agents for Asynchronous Group Communication in Mafia Games | Code | 1
Human Action Generation with Generative Adversarial Networks | Code | 1
COMMA: Modeling Relationship among Motivations, Emotions and Actions in Language-based Human Activities | Code | 1
Mini Diffuser: Fast Multi-task Diffusion Policy Training Using Two-level Mini-batches | Code | 1
Wonderful Team: Zero-Shot Physical Task Planning with Visual LLMs | Code | 1
Structure-Aware Human-Action Generation | Code | 1
Graph Constrained Reinforcement Learning for Natural Language Action Spaces | Code | 1
Generative Adversarial Graph Convolutional Networks for Human Action Synthesis | Code | 1
OWMM-Agent: Open World Mobile Manipulation With Multi-modal Agentic Data Synthesis | Code | 1
Large Language Models for Multi-Robot Systems: A Survey | Code | 1
LLM-Explorer: Towards Efficient and Affordable LLM-based Exploration for Mobile Apps | Code | 1
Benchmarking Vision, Language, & Action Models on Robotic Learning Tasks | Code | 1
Keep CALM and Explore: Language Models for Action Generation in Text-based Games | Code | 1
EPO: Hierarchical LLM Agents with Environment Preference Optimization | Code | 1
ACT: Empowering Decision Transformer with Dynamic Programming via Advantage Conditioning | Code | 1
MUGL: Large Scale Multi Person Conditional Action Generation with Locomotion | Code | 1
Action2Motion: Conditioned Generation of 3D Human Motions | Code | 1
Efficient Listener: Dyadic Facial Motion Synthesis via Action Diffusion | — | 0
A Survey on (M)LLM-Based GUI Agents | — | 0
Active Generation Network of Human Skeleton for Action Recognition | — | 0
Actions Generation from Captions | — | 0
Distilled Thompson Sampling: Practical and Efficient Thompson Sampling via Imitation Learning | — | 0
A Survey on GUI Agents with Foundation Models Enhanced by Reinforcement Learning | — | 0
Diffuse-CLoC: Guided Diffusion for Physics-based Character Look-ahead Control | — | 0
Imagine, Initialize, and Explore: An Effective Exploration Method in Multi-Agent Reinforcement Learning | — | 0
Context-aware taxi dispatching at city-scale using deep reinforcement learning | — | 0
AssistGUI: Task-Oriented PC Graphical User Interface Automation | — | 0
ITCMA: A Generative Agent Based on a Computational Consciousness Structure | — | 0
H^3DP: Triply-Hierarchical Diffusion Policy for Visuomotor Learning | — | 0
IMLE Policy: Fast and Sample Efficient Visuomotor Policy Learning via Implicit Maximum Likelihood Estimation | — | 0
Hierarchical Instruction-aware Embodied Visual Tracking | — | 0
Page 1 of 3

No leaderboard results yet.