SOTAVerified

Robot Manipulation

Papers

Showing 51–100 of 430 papers

Title | Status | Hype
se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains | Code | 1
Skill-Based Reinforcement Learning with Intrinsic Reward Matching | Code | 1
RPMArt: Towards Robust Perception and Manipulation for Articulated Objects | Code | 1
MRHER: Model-based Relay Hindsight Experience Replay for Sequential Object Manipulation Tasks with Sparse Rewards | Code | 1
CALVIN: A Benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks | Code | 1
CACTI: A Framework for Scalable Multi-Task Multi-Scene Visual Imitation Learning | Code | 1
BusyBot: Learning to Interact, Reason, and Plan in a BusyBoard Environment | Code | 1
SCENEREPLICA: Benchmarking Real-World Robot Manipulation by Creating Replicable Scenes | Code | 1
ABNet: Attention BarrierNet for Safe and Scalable Robot Learning | Code | 1
SELP: Generating Safe and Efficient Task Plans for Robot Agents with Large Language Models | Code | 1
Coarse-to-Fine Q-attention: Efficient Learning for Visual Robotic Manipulation via Discretisation | Code | 1
Coarse-to-fine Q-attention with Tree Expansion | Code | 1
BundleTrack: 6D Pose Tracking for Novel Objects without Instance or Category-Level 3D Models | Code | 1
RoboCLIP: One Demonstration is Enough to Learn Robot Policies | Code | 1
Re-Mix: Optimizing Data Mixtures for Large Scale Imitation Learning | Code | 1
Relay Hindsight Experience Replay: Self-Guided Continual Reinforcement Learning for Sequential Object Manipulation Tasks with Sparse Rewards | Code | 1
Reward Uncertainty for Exploration in Preference-based Reinforcement Learning | Code | 1
Diffusion-Reinforcement Learning Hierarchical Motion Planning in Multi-agent Adversarial Games | Code | 1
DrS: Learning Reusable Dense Rewards for Multi-Stage Tasks | Code | 1
Diff-IP2D: Diffusion-Based Hand-Object Interaction Prediction on Egocentric Videos | Code | 1
Distilling Motion Planner Augmented Policies into Visual Control Policies for Robot Manipulation | Code | 1
PixL2R: Guiding Reinforcement Learning Using Natural Language by Mapping Pixels to Rewards | Code | 1
On the Efficacy of 3D Point Cloud Reinforcement Learning | Code | 1
OSCAR: Data-Driven Operational Space Control for Adaptive and Robust Robot Manipulation | Code | 1
PolarNet: 3D Point Clouds for Language-Guided Robotic Manipulation | Code | 1
Motion Policy Networks | Code | 1
3DFlowAction: Learning Cross-Embodiment Manipulation from 3D Flow World Model | Code | 1
Multimodal Fusion and Vision-Language Models: A Survey for Robot Vision | Code | 1
ManiSkill2: A Unified Benchmark for Generalizable Manipulation Skills | Code | 1
Mean Shift Mask Transformer for Unseen Object Instance Segmentation | Code | 1
LTLDoG: Satisfying Temporally-Extended Symbolic Constraints for Safe Diffusion-based Planning | Code | 1
Modeling Fine-Grained Hand-Object Dynamics for Egocentric Video Representation Learning | Code | 1
DeepIM: Deep Iterative Matching for 6D Pose Estimation | Code | 1
Deep SE(3)-Equivariant Geometric Reasoning for Precise Placement Tasks | Code | 1
One-Shot Object Affordance Detection in the Wild | Code | 1
Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains | Code | 1
Demonstration-Guided Reinforcement Learning with Learned Skills | Code | 1
Auto-Lambda: Disentangling Dynamic Task Relationships | Code | 1
Learning 3D Dynamic Scene Representations for Robot Manipulation | Code | 1
Language Reward Modulation for Pretraining Reinforcement Learning | Code | 1
DexArt: Benchmarking Generalizable Dexterous Manipulation with Articulated Objects | Code | 1
Bingham Policy Parameterization for 3D Rotations in Reinforcement Learning | Code | 1
Learning Neuro-symbolic Programs for Language Guided Robot Manipulation | Code | 1
Cross-Embodiment Robot Manipulation Skill Transfer using Latent Space Alignment | Code | 1
A Universal Semantic-Geometric Representation for Robotic Manipulation | Code | 1
Instruction-driven history-aware policies for robotic manipulations | Code | 1
GUARD: A Safe Reinforcement Learning Benchmark | Code | 1
HO-Cap: A Capture System and Dataset for 3D Reconstruction and Pose Tracking of Hand-Object Interaction | Code | 1
Language-Conditioned Imitation Learning for Robot Manipulation Tasks | Code | 1
Leveraging Locality to Boost Sample Efficiency in Robotic Manipulation | Code | 1
Page 2 of 9

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | DreamVLA | avg. sequence length (D to D) | 4.44 | – | Unverified
2 | VPP | avg. sequence length (D to D) | 4.29 | – | Unverified
3 | RoboVLMs | avg. sequence length (D to D) | 4.25 | – | Unverified
4 | Openhelix | avg. sequence length (D to D) | 4.08 | – | Unverified
5 | UP-VLA | avg. sequence length (D to D) | 4.08 | – | Unverified
6 | GR-MG | avg. sequence length (D to D) | 4.04 | – | Unverified
7 | MoDE | avg. sequence length (D to D) | 4.01 | – | Unverified
8 | RoboUniView | avg. sequence length (D to D) | 3.86 | – | Unverified
9 | UniVLA | avg. sequence length (D to D) | 3.8 | – | Unverified
10 | RoboDual | avg. sequence length (D to D) | 3.66 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | EquAct | Succ. Rate (18 tasks, 100 demo/task) | 89.4 | – | Unverified
2 | SAM2Act | Succ. Rate (18 tasks, 100 demo/task) | 86.8 | – | Unverified
3 | ARP+ | Succ. Rate (18 tasks, 100 demo/task) | 84.9 | – | Unverified
4 | 3D-LOTUS | Succ. Rate (18 tasks, 100 demo/task) | 83.1 | – | Unverified
5 | RVT-2 | Succ. Rate (18 tasks, 100 demo/task) | 81.4 | – | Unverified
6 | 3D Diffuser Actor | Succ. Rate (18 tasks, 100 demo/task) | 81.3 | – | Unverified
7 | Mini Diffuser | Succ. Rate (18 tasks, 100 demo/task) | 77.6 | – | Unverified
8 | SAM-E | Succ. Rate (18 tasks, 100 demo/task) | 70.6 | – | Unverified
9 | Auto-λ | Succ. Rate (10 tasks, 100 demos/task) | 69.3 | – | Unverified
10 | Act3D | Succ. Rate (18 tasks, 100 demo/task) | 65 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SoFar | Visual Matching | 0.75 | – | Unverified
2 | SpatialVLA | Visual Matching | 0.72 | – | Unverified
3 | Dita-300M | Visual Matching | 0.69 | – | Unverified
4 | RT-2-X | Visual Matching | 0.61 | – | Unverified
5 | RoboVLM | Visual Matching | 0.56 | – | Unverified
6 | RT-1-X | Visual Matching | 0.53 | – | Unverified
7 | TraceVLA | Visual Matching | 0.46 | – | Unverified
8 | OpenVLA | Visual Matching | 0.28 | – | Unverified
9 | Octo-Base | Visual Matching | 0.17 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SDP | Succ. Rate (12 tasks, 100 demo/task) | 76 | – | Unverified
2 | EquiDiff (Voxel) | Succ. Rate (12 tasks, 100 demo/task) | 63.9 | – | Unverified
3 | EquiDiff (Image) | Succ. Rate (12 tasks, 100 demo/task) | 53.7 | – | Unverified
4 | DP (Evaluated in EquiDiff) | Succ. Rate (12 tasks, 100 demo/task) | 42 | – | Unverified
5 | DP3 (Evaluated in EquiDiff) | Succ. Rate (12 tasks, 100 demo/task) | 23.9 | – | Unverified
6 | BC RNN (Evaluated in EquiDiff) | Succ. Rate (12 tasks, 100 demo/task) | 22.9 | – | Unverified
7 | ACT (Evaluated in EquiDiff) | Succ. Rate (12 tasks, 100 demo/task) | 21.3 | – | Unverified
# | Model | Metric | Claimed | Verified | Status
1 | SoFar | Average | 0.58 | – | Unverified
2 | SpatialVLA | Average | 0.34 | – | Unverified
3 | Octo-Small | Average | 0.3 | – | Unverified
4 | Octo-Base | Average | 0.16 | – | Unverified
5 | RoboVLM | Average | 0.14 | – | Unverified
6 | RT-1-X | Average | 0.01 | – | Unverified
7 | OpenVLA | Average | 0.01 | – | Unverified