SOTAVerified

Robotic Grasping

This task involves using deep learning to determine how best to grasp objects with a robotic arm across different scenarios. It is a complex task because it may involve dynamic environments and objects the network has never seen.
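Concretely, many grasp-detection networks output a planar (top-down) grasp per candidate location: a centre, a gripper rotation, an opening width, and a predicted quality score, with the highest-scoring candidate executed. A minimal sketch of that representation; the class and function names here are illustrative, not taken from any specific paper:

```python
from dataclasses import dataclass


@dataclass
class PlanarGrasp:
    """A top-down ("planar") grasp of the kind many grasp-detection
    networks predict per pixel or per candidate region."""
    x: float        # grasp centre, image or world coordinates
    y: float
    theta: float    # gripper rotation about the vertical axis, in radians
    width: float    # gripper opening width
    quality: float  # predicted probability that the grasp succeeds


def best_grasp(candidates):
    # Execute the candidate with the highest predicted quality score.
    return max(candidates, key=lambda g: g.quality)
```

For example, given two candidates with quality scores 0.4 and 0.9, `best_grasp` returns the second.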

Papers

Showing 1–10 of 246 papers

Title | Status | Hype
MTF-Grasp: A Multi-tier Federated Learning Approach for Robotic Grasping | | 0
Consensus-Driven Uncertainty for Robotic Grasping based on RGB Perception | Code | 0
JENGA: Object selection and pose estimation for robotic grasping from a stack | | 0
You Only Estimate Once: Unified, One-stage, Real-Time Category-level Articulated Object 6D Pose Estimation for Robotic Grasping | | 0
Category-Level 6D Object Pose Estimation in Agricultural Settings Using a Lattice-Deformation Framework and Diffusion-Augmented Synthetic Data | | 0
SR3D: Unleashing Single-view 3D Reconstruction for Transparent and Specular Object Grasping | | 0
Spatial RoboGrasp: Generalized Robotic Grasping Control Policy | | 0
ViTaPEs: Visuotactile Position Encodings for Cross-Modal Alignment in Multimodal Transformers | | 0
Grasp the Graph (GtG) 2.0: Ensemble of GNNs for High-Precision Grasp Pose Detection in Clutter | | 0
Category-Level and Open-Set Object Pose Estimation for Robotics | | 0

Benchmark Results

# | Model | Metric | Claimed | Verified | Status
1 | Efficient-Grasping | Accuracy (%) | 95.6 | | Unverified
2 | GR-ConvNet | Accuracy (%) | 94.6 | | Unverified
3 | grasp_det_seg_cnn (rgb only) | Accuracy (%) | 92.95 | | Unverified
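Grasp-detection accuracy of this kind is typically reported with the rectangle metric used on the Cornell grasping dataset: a predicted grasp counts as correct if its orientation is within 30° of some ground-truth grasp and the Jaccard index (intersection over union) of the two grasp rectangles exceeds 0.25. A minimal sketch of that check, assuming grasps are given as (x, y, theta, width, height) tuples in image coordinates; the helper names are illustrative:

```python
import math


def rect_corners(x, y, theta, w, h):
    # Corners of a rotated rectangle, in counter-clockwise order.
    c, s = math.cos(theta), math.sin(theta)
    pts = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    return [(x + c * px - s * py, y + s * px + c * py) for px, py in pts]


def clip(subject, clipper):
    # Sutherland-Hodgman clipping of a polygon against a convex,
    # counter-clockwise clip polygon; returns their intersection.
    def inside(p, a, b):
        return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0]) >= 0

    def intersect(p1, p2, a, b):
        dx1, dy1 = p2[0] - p1[0], p2[1] - p1[1]
        dx2, dy2 = b[0] - a[0], b[1] - a[1]
        t = ((a[0] - p1[0]) * dy2 - (a[1] - p1[1]) * dx2) / (dx1 * dy2 - dy1 * dx2)
        return (p1[0] + t * dx1, p1[1] + t * dy1)

    output = subject
    for i in range(len(clipper)):
        a, b = clipper[i], clipper[(i + 1) % len(clipper)]
        input_list, output = output, []
        if not input_list:
            break
        for j in range(len(input_list)):
            p1, p2 = input_list[j], input_list[(j + 1) % len(input_list)]
            if inside(p2, a, b):
                if not inside(p1, a, b):
                    output.append(intersect(p1, p2, a, b))
                output.append(p2)
            elif inside(p1, a, b):
                output.append(intersect(p1, p2, a, b))
    return output


def area(poly):
    # Shoelace formula for polygon area.
    n = len(poly)
    return 0.5 * abs(sum(poly[i][0] * poly[(i + 1) % n][1]
                         - poly[(i + 1) % n][0] * poly[i][1] for i in range(n)))


def grasp_iou(g1, g2):
    r1, r2 = rect_corners(*g1), rect_corners(*g2)
    inter = area(clip(r1, r2))
    return inter / (area(r1) + area(r2) - inter)


def is_correct(pred, gt, iou_thresh=0.25, angle_thresh=math.radians(30)):
    # Angle difference is taken modulo 180 deg: a two-fingered gripper
    # rotated by pi grasps the same way.
    dtheta = abs(pred[2] - gt[2]) % math.pi
    dtheta = min(dtheta, math.pi - dtheta)
    return dtheta < angle_thresh and grasp_iou(pred, gt) > iou_thresh
```

For instance, two identical rectangles give an IoU of 1.0, while a 2x1 rectangle shifted sideways by half its width overlaps its original with IoU 1/3, which still passes the 0.25 threshold.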