Real-Time Grasp Detection Using Convolutional Neural Networks
Joseph Redmon, Anelia Angelova
Abstract
We present an accurate, real-time approach to robotic grasp detection based on convolutional neural networks. Our network performs single-stage regression to graspable bounding boxes without using standard sliding window or region proposal techniques. The model outperforms state-of-the-art approaches by 14 percentage points and runs at 13 frames per second on a GPU. Our network can simultaneously perform classification so that in a single step it recognizes the object and finds a good grasp rectangle. A modification to this model predicts multiple grasps per object by using a locally constrained prediction mechanism. The locally constrained model performs significantly better, especially on objects that can be grasped in a variety of ways.
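The abstract's single-stage regression means the network emits a grasp rectangle directly, rather than scoring sliding windows. As a hedged sketch of what decoding such an output could look like: the layout below (center `x, y`, orientation regressed as `sin 2θ, cos 2θ` to avoid angle wrap-around, then height and width) is an assumption for illustration, and `decode_grasp` is a hypothetical helper, not the paper's exact head.

```python
import math

def decode_grasp(pred):
    """Decode a raw 6-value regression output into a grasp rectangle.

    Assumed layout: [x, y, sin(2*theta), cos(2*theta), h, w].
    Regressing (sin 2θ, cos 2θ) instead of θ itself is one common way
    to handle the 180° symmetry of a two-fingered grasp; the paper's
    actual parameterization may differ.
    """
    x, y, s2, c2, h, w = pred
    theta = 0.5 * math.atan2(s2, c2)  # recover θ in (-π/2, π/2]
    return {"x": x, "y": y, "theta": theta, "h": h, "w": w}

# Example: a prediction centered at (160, 120), rotated 45 degrees.
pred = [160.0, 120.0, math.sin(math.pi / 2), math.cos(math.pi / 2), 30.0, 80.0]
grasp = decode_grasp(pred)
```

Because the whole image maps to one rectangle in a single forward pass, inference cost is a single network evaluation, which is what makes the reported 13 frames per second plausible on a GPU.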
Benchmark Results
| Dataset | Model | Metric | Claimed | Verified | Status |
|---|---|---|---|---|---|
| Cornell Grasp Dataset | AlexNet, MultiGrasp | Accuracy (%), 5-fold cross-validation | 88 | — | Unverified |