SOTAVerified

Robot Manipulation

Papers

Showing 1–10 of 430 papers

| Title | Status | Hype |
|---|---|---|
| DreamVLA: A Vision-Language-Action Model Dreamed with Comprehensive World Knowledge | Code | 3 |
| Geometry-aware 4D Video Generation for Robot Manipulation | — | 0 |
| CapsDT: Diffusion-Transformer for Capsule Robot Manipulation | — | 0 |
| Robust Instant Policy: Leveraging Student's t-Regression Model for Robust In-context Imitation Learning of Robot Manipulation | — | 0 |
| SENIOR: Efficient Query Selection and Preference-Guided Exploration in Preference-based Reinforcement Learning | — | 0 |
| What Matters in Learning from Large-Scale Datasets for Robot Manipulation | — | 0 |
| Demonstrating Multi-Suction Item Picking at Scale via Multi-Modal Learning of Pick Success | — | 0 |
| BridgeVLA: Input-Output Alignment for Efficient 3D Manipulation Learning with Vision-Language Models | — | 0 |
| 3DFlowAction: Learning Cross-Embodiment Manipulation from 3D Flow World Model | Code | 1 |
| OG-VLA: 3D-Aware Vision Language Action Model via Orthographic Image Generation | — | 0 |
Page 1 of 43

Benchmark Results

| # | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| 1 | DreamVLA | avg. sequence length (D to D) | 4.44 | — | Unverified |
| 2 | VPP | avg. sequence length (D to D) | 4.29 | — | Unverified |
| 3 | RoboVLMs | avg. sequence length (D to D) | 4.25 | — | Unverified |
| 4 | Openhelix | avg. sequence length (D to D) | 4.08 | — | Unverified |
| 5 | UP-VLA | avg. sequence length (D to D) | 4.08 | — | Unverified |
| 6 | GR-MG | avg. sequence length (D to D) | 4.04 | — | Unverified |
| 7 | MoDE | avg. sequence length (D to D) | 4.01 | — | Unverified |
| 8 | RoboUniView | avg. sequence length (D to D) | 3.86 | — | Unverified |
| 9 | UniVLA | avg. sequence length (D to D) | 3.8 | — | Unverified |
| 10 | RoboDual | avg. sequence length (D to D) | 3.66 | — | Unverified |