• Media type: E-article
  • Title: ConsInstancy: learning instance representations for semi-supervised panoptic segmentation of concrete aggregate particles
  • Contributors: Coenen, Max; Schack, Tobias; Beyer, Dries; Heipke, Christian; Haist, Michael
  • Published: Springer Science and Business Media LLC, 2022
  • Published in: Machine Vision and Applications
  • Language: English
  • DOI: 10.1007/s00138-022-01313-x
  • ISSN: 0932-8092; 1432-1769
  • Keywords: Computer Science Applications ; Computer Vision and Pattern Recognition ; Hardware and Architecture ; Software
  • Description (Abstract): We present a semi-supervised method for panoptic segmentation based on ConsInstancy regularisation, a novel strategy for semi-supervised learning. It leverages completely unlabelled data by enforcing consistency between predicted instance representations and semantic segmentations during training in order to improve segmentation performance. To this end, we also propose new types of instance representations that can be predicted in one simple forward pass through a fully convolutional network (FCN), delivering a convenient and simple-to-train framework for panoptic segmentation. More specifically, we propose the prediction of a three-dimensional instance orientation map as an intermediate representation and two complementary distance transform maps as the final representation, providing unique instance representations for panoptic segmentation. We test our method on two challenging data sets of both hardened and fresh concrete, the latter proposed by the authors in this paper, demonstrating the effectiveness of our approach and outperforming state-of-the-art methods for semi-supervised segmentation. In particular, we show that leveraging completely unlabelled data in our semi-supervised approach increases the achieved overall accuracy (OA) by up to 5% compared to entirely supervised training using only labelled data. Furthermore, we exceed the OA achieved by state-of-the-art semi-supervised methods by up to 1.5%.
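The abstract's core idea of encoding each instance via a distance transform over its mask can be sketched in a few lines. The following is only an illustrative toy example of a distance-transform-based instance representation (using SciPy's Euclidean distance transform); it is not the paper's actual pair of complementary maps, and the function name and normalisation scheme are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def instance_distance_map(instance_labels):
    """Illustrative sketch: per-instance normalised inner distance transform.

    For each labelled instance, every pixel stores its distance to the
    instance boundary, normalised to [0, 1] within that instance. This is
    a generic instance representation in the spirit of the abstract, not
    the paper's exact formulation.
    """
    dist = np.zeros(instance_labels.shape, dtype=np.float32)
    for inst_id in np.unique(instance_labels):
        if inst_id == 0:  # 0 = background / "stuff"
            continue
        mask = instance_labels == inst_id
        d = distance_transform_edt(mask)  # distance to nearest non-mask pixel
        if d.max() > 0:
            d = d / d.max()  # normalise per instance
        dist[mask] = d[mask]
    return dist

# Toy example: two square "particles" on a background
labels = np.zeros((8, 8), dtype=np.int32)
labels[1:4, 1:4] = 1
labels[4:7, 4:7] = 2
dmap = instance_distance_map(labels)
```

Maps like this are attractive for panoptic segmentation because they can be regressed by a single FCN forward pass, and instance boundaries can be recovered from their ridges and zero crossings.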