• Media type: Doctoral Thesis; Electronic Thesis; E-Book
  • Title: Efficient Robotic Grasping in Unstructured Environments
  • Contributor: Breyer, Michel [Author]
  • Published: ETH Zurich, 2022
  • Language: English
  • DOI: https://doi.org/10.3929/ethz-b-000597371; Handle: https://hdl.handle.net/20.500.11850/597371
  • Keywords: Data processing; manipulation; robotics; grasping; computer science; deep learning
  • Footnote: This data source also contains holdings records that do not lead to a full text.
  • Description: Robots that physically interact with their environment have the potential to automate repetitive tasks in warehouses, hospitals, and homes. However, current applications of robotic manipulation are mostly constrained to well-controlled environments such as assembly lines. In less structured environments, objects vary widely in shape, size, and material, making it impractical to assume prior knowledge of their properties. This, together with uncertainty in perception, makes automation challenging. In this thesis, we aim to enable robots to manipulate arbitrary objects in such unstructured environments. In particular, we focus on grasping, which is a critical component of many manipulation tasks. First, we present an end-to-end reinforcement learning approach to train a robot to grasp objects in a simulated environment. We propose several mechanisms to improve sample efficiency and show that the trained policies transfer directly to a real-world setup. Second, we use a modular approach based on a convolutional neural network that generates grasp poses from a volumetric reconstruction of unknown scenes. By training the network to predict grasp configurations at every voxel of the volume in parallel, our method detects grasps within milliseconds (a minimal illustration of this idea is sketched after this record). Third, we extend the previous approach with active perception, moving a camera mounted on the robot arm to informative views in order to grasp a partially occluded target. Our method continuously attempts to find a good grasp pose or adapts the robot's trajectory for further exploration. We show that, compared to fixed camera placements, our approach reduces grasp execution time without compromising grasp success rates, and can even handle situations where those methods fail.
  • Access State: Open Access
  • Rights information: In Copyright - Non-commercial Use Permitted
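
The per-voxel grasp prediction described in the abstract lends itself to a short sketch. The following is a minimal, hypothetical PyTorch illustration of the idea, not the thesis code: a fully convolutional 3D network maps a volumetric reconstruction to dense per-voxel outputs, so one forward pass scores every voxel in parallel. All layer widths, the 40³ grid size, and the output parameterization (quality score, quaternion, gripper width) are assumptions made here for illustration.

```python
# Minimal sketch of per-voxel grasp prediction from a volumetric input.
# Layer sizes, grid resolution, and output heads are illustrative assumptions.
import torch
import torch.nn as nn

class VoxelGraspNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared 3D convolutional encoder over the input volume.
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv3d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # 1x1x1 heads evaluated at every voxel of the grid in parallel:
        self.quality = nn.Conv3d(32, 1, kernel_size=1)   # grasp success score
        self.rotation = nn.Conv3d(32, 4, kernel_size=1)  # orientation quaternion
        self.width = nn.Conv3d(32, 1, kernel_size=1)     # gripper opening width

    def forward(self, volume):
        # volume: (B, 1, D, H, W) volumetric reconstruction, e.g. a TSDF grid.
        features = self.encoder(volume)
        quality = torch.sigmoid(self.quality(features))
        rotation = nn.functional.normalize(self.rotation(features), dim=1)
        width = self.width(features)
        return quality, rotation, width

# Usage: a single forward pass yields a grasp hypothesis at every voxel;
# candidates can then be selected by thresholding the quality volume.
net = VoxelGraspNet()
quality, rotation, width = net(torch.zeros(1, 1, 40, 40, 40))
print(quality.shape)  # torch.Size([1, 1, 40, 40, 40])
```

Because the network is fully convolutional, inference cost is a single batched pass over the grid rather than one evaluation per candidate, which is consistent with the millisecond-scale detection the abstract reports.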