• Media type: Doctoral Thesis; Electronic Thesis; E-Book
  • Title: Learning to segment in images and videos with different forms of supervision
  • Contributor: Khoreva, Anna [Author]
  • Published: Saarländische Universitäts- und Landesbibliothek, 2017
  • Language: English
  • DOI: https://doi.org/10.22028/D291-26995
  • Keywords: Deep learning ; Image segmentation ; Object boundary detection ; Semantic labelling ; Instance segmentation ; Tracking ; Video segmentation ; Weakly-supervised learning ; Bildsegmentierung ; Segmentierung
  • Footnote: This data source also contains holdings records that do not lead to a full text.
  • Description: Much progress has been made in image and video segmentation over recent years. To a large extent, this success can be attributed to strong appearance models learned entirely from data, in particular with deep learning methods. However, to perform at their best, these methods require large, representative training datasets with expensive pixel-level annotations, which are prohibitive to obtain for videos. There is therefore a need to relax this constraint and to consider alternative forms of supervision that are easier and cheaper to collect. In this thesis, we aim to develop algorithms for learning to segment in images and videos with different levels of supervision. First, we develop approaches for training convolutional networks with weaker forms of supervision, such as bounding boxes or image labels, for object boundary estimation and semantic/instance labelling tasks. We propose to generate pixel-level approximate ground truth from these weaker annotations to train a network, which allows us to achieve high-quality results comparable to full supervision without any modification of the network architecture or the training procedure (an illustrative sketch follows this record). Second, we address the excessive computational and memory costs inherent in solving video segmentation via graphs. We propose approaches that improve runtime and memory efficiency, as well as output segmentation quality, by learning the best representation of the graph from the available training data. In particular, we contribute methods for learning must-link constraints, the topology and edge weights of the graph, and for enhancing the graph nodes (superpixels) themselves. Third, we tackle the task of pixel-level object tracking and address the limited amount of densely annotated video data for training convolutional networks. We introduce an architecture that allows training with static images only and propose an elaborate data synthesis scheme that creates a large number of training examples close to the target ...
  • Access State: Open Access
  • Rights information: Attribution - Non Commercial (CC BY-NC)
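
The abstract only names the idea of converting bounding-box annotations into approximate pixel-level ground truth; the record does not say how the thesis does this. As a minimal, hypothetical sketch (not the thesis's actual pipeline), a classical box-to-mask tool such as OpenCV's GrabCut could produce such approximate masks; the function name `box_to_approximate_mask` and its parameters are assumptions for illustration only:

```python
import numpy as np
import cv2  # OpenCV; GrabCut is one plausible box-to-mask choice, not necessarily the thesis's method


def box_to_approximate_mask(image, box, iterations=5):
    """Derive an approximate foreground mask from a bounding-box annotation.

    image: H x W x 3 uint8 BGR image
    box:   (x, y, w, h) bounding box in pixel coordinates
    Returns an H x W binary mask usable as approximate ground truth.
    """
    mask = np.zeros(image.shape[:2], dtype=np.uint8)
    bgd_model = np.zeros((1, 65), dtype=np.float64)
    fgd_model = np.zeros((1, 65), dtype=np.float64)

    # Initialize GrabCut from the box: everything outside is background,
    # pixels inside are refined into (probable) foreground/background.
    cv2.grabCut(image, mask, box, bgd_model, fgd_model,
                iterations, cv2.GC_INIT_WITH_RECT)

    # Keep definite and probable foreground as the approximate label.
    fg = (mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)
    return fg.astype(np.uint8)
```

Masks produced this way could then stand in for manual pixel-level labels when training a segmentation network, which is the general idea the abstract describes.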