Semantic Scholar · Open Access · 2013 · 1,740 citations

Deep learning for detecting robotic grasps

Ian Lenz Honglak Lee Ashutosh Saxena

Abstract

We consider the problem of detecting robotic grasps in an RGB-D view of a scene containing objects. In this work, we apply a deep learning approach to solve this problem, which avoids time-consuming hand-design of features. This presents two main challenges. First, we need to evaluate a huge number of candidate grasps. In order to make detection fast and robust, we present a two-step cascaded system with two deep networks, where the top detections from the first are re-evaluated by the second. The first network has fewer features, is faster to run, and can effectively prune out unlikely candidate grasps. The second, with more features, is slower but has to run only on the top few detections. Second, we need to handle multimodal inputs effectively, for which we present a method that applies structured regularization on the weights based on multimodal group regularization. We show that our method improves performance on an RGBD robotic grasping dataset, and can be used to successfully execute grasps on two different robotic platforms.
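The two-step cascade described above can be sketched in a few lines: a cheap scorer is run over all candidate grasps, only the top detections survive to a more expensive scorer, and the best survivor is returned. This is a minimal illustrative sketch, not the paper's implementation; the linear scorers stand in for the two deep networks, and all names, feature sizes, and the `top_k` cutoff are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical candidate grasps, each with a small (stage-1) and a
# larger (stage-2) feature vector; sizes here are illustrative only.
n_candidates, d_small, d_large = 1000, 16, 64
feats_small = rng.normal(size=(n_candidates, d_small))
feats_large = rng.normal(size=(n_candidates, d_large))

# Stand-ins for the two trained networks: simple linear scorers.
w_small = rng.normal(size=d_small)
w_large = rng.normal(size=d_large)

def cascade_detect(feats_small, feats_large, w_small, w_large, top_k=50):
    """Two-stage cascade: a fast scorer prunes, a slower one re-ranks."""
    # Stage 1: score every candidate with the small, fast model.
    coarse_scores = feats_small @ w_small
    # Prune: keep only the top_k candidates for the expensive stage.
    survivors = np.argsort(coarse_scores)[-top_k:]
    # Stage 2: re-score only the survivors with the larger model.
    fine_scores = feats_large[survivors] @ w_large
    best = survivors[np.argmax(fine_scores)]
    return best, survivors

best, survivors = cascade_detect(feats_small, feats_large, w_small, w_large)
```

The design point is that the expensive model runs on `top_k` candidates instead of all `n_candidates`, which is what makes exhaustive grasp detection tractable.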

Authors (3)

Ian Lenz

Honglak Lee

Ashutosh Saxena

Citation Format

Lenz, I., Lee, H., Saxena, A. (2013). Deep learning for detecting robotic grasps. https://doi.org/10.1177/0278364914549607

Journal Information
Publication Year: 2013
Language: en
Total Citations: 1,740
Source Database: Semantic Scholar
DOI: 10.1177/0278364914549607
Access: Open Access ✓