• Media type: E-Article; Text
  • Title: Towards better classification of land cover and land use based on convolutional neural networks
  • Contributor: Yang, C. [Author]; Rottensteiner, F. [Author]; Heipke, C. [Author]; Vosselman, G. [Author]; Oude Elberink, S.J. [Author]; Yang, M.Y. [Author]
  • Imprint: Göttingen : Copernicus, 2019
  • Published in: ISPRS Geospatial Week 2019 ; The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences ; 42-2/W13
  • Issue: Published version
  • Language: English
  • DOI: https://doi.org/10.15488/10183; https://doi.org/10.5194/isprs-archives-XLII-2-W13-139-2019
  • ISSN: 1682-1750
  • Keywords: Konferenzschrift ; Convolutional neural network ; Landuse classifications ; Antennas ; Convolution ; High-resolution aerial images ; CNN ; Neural networks ; Land use ; Aerial photography ; Remote sensing ; Land use and land cover ; geospatial land use database ; Database systems ; Land use database ; Land cover classification ; aerial imagery ; Classification (of information) ; semantic segmentation ; Semantics ; Land use classification
  • Footnote: This data source also contains holdings records that do not lead to a full text.
  • Description: Land use and land cover are two important variables in remote sensing. Commonly, land use information is stored in geospatial databases. In order to update such databases, we present a new approach to determine the land cover and to classify land use objects using convolutional neural networks (CNN). High-resolution aerial images and derived data such as digital surface models serve as input. An encoder-decoder-based CNN is used for land cover classification. We found that a composite including the infrared band and height data outperforms RGB images in land cover classification. We also propose a CNN-based methodology for the prediction of land use labels for objects from geospatial databases, where we use masks representing object shape, the RGB images, and the pixel-wise class scores of land cover as input. For this task, we developed a two-branch network in which the first branch considers the whole area of an image, while the second branch focuses on a smaller relevant area. We evaluated our methods on two test sites and achieved overall accuracies of up to 89.6% and 81.7% for land cover and land use, respectively. We also tested our method for land cover classification on the Vaihingen dataset of the ISPRS 2D semantic labelling challenge and achieved an overall accuracy of 90.7%. © Authors 2019.
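  The abstract's input preparation (stacking RGB, infrared and height data into a composite, and cropping a smaller relevant area around an object mask for the second branch) can be sketched as follows. This is a minimal illustration assuming NumPy arrays; the function names, the `margin` parameter, and the channel ordering are assumptions for illustration, not the authors' code.

    ```python
    import numpy as np

    def make_composite(rgb: np.ndarray, ir: np.ndarray, height: np.ndarray) -> np.ndarray:
        """Stack RGB (H, W, 3), infrared (H, W) and height (H, W) into one
        (H, W, 5) composite, as suggested by the abstract's input description."""
        return np.concatenate([rgb, ir[..., None], height[..., None]], axis=-1)

    def crop_to_mask(image: np.ndarray, mask: np.ndarray, margin: int = 2) -> np.ndarray:
        """Crop `image` to the bounding box of the binary object `mask`,
        padded by `margin` pixels, to mimic the second branch's focus on a
        smaller relevant area around the land use object."""
        rows = np.any(mask, axis=1)
        cols = np.any(mask, axis=0)
        r0, r1 = np.where(rows)[0][[0, -1]]
        c0, c1 = np.where(cols)[0][[0, -1]]
        r0 = max(r0 - margin, 0)
        c0 = max(c0 - margin, 0)
        r1 = min(r1 + margin, mask.shape[0] - 1)
        c1 = min(c1 + margin, mask.shape[1] - 1)
        return image[r0:r1 + 1, c0:c1 + 1]
    ```

  In this sketch, the first branch of the two-branch network would receive the full composite while the second receives `crop_to_mask(composite, mask)`.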
  • Access State: Open Access
  • Rights information: Attribution (CC BY)