• Media type: E-Article
  • Title: Mapping Underwater Aquatic Vegetation Using Foundation Models With Air- and Space-Borne Images: The Case of Polyphytos Lake
  • Contributor: Alagialoglou, Leonidas; Manakos, Ioannis; Papadopoulou, Sofia; Chadoulis, Rizos-Theodoros; Kita, Afroditi
  • Imprint: MDPI AG, 2023
  • Published in: Remote Sensing
  • Language: English
  • DOI: 10.3390/rs15164001
  • ISSN: 2072-4292
  • Keywords: General Earth and Planetary Sciences
  • Description: Mapping underwater aquatic vegetation (UVeg) is crucial for understanding the dynamics of freshwater ecosystems. The advancement of artificial intelligence (AI) techniques has shown great potential for improving the accuracy and efficiency of UVeg mapping from remote sensing data. This paper presents a comparative study of the performance of classical and modern AI tools, including logistic regression, random forest, and a visual-prompt-tuned foundation model, the Segment Anything Model (SAM), for mapping UVeg by analyzing air- and space-borne images in the few-shot learning regime, i.e., using limited annotations. The findings demonstrate the effectiveness of the SAM foundation model on air-borne imagery (GSD = 3–6 cm), with an F1 score of 86.5% ± 4.1% when trained with as few as 40 positive/negative pairs of pixels, compared to 54.0% ± 9.2% for the random forest model and 42.8% ± 6.2% for logistic regression. However, adapting SAM to space-borne images (WorldView-2 and Sentinel-2) remains challenging, and it could not outperform the classical pixel-wise random forest and logistic regression methods in our task. The findings provide valuable insights into the strengths and limitations of AI models for UVeg mapping, aiding researchers and practitioners in selecting the most suitable tools for their specific applications.
  • Access State: Open Access
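
The few-shot pixel-wise baselines described in the abstract (logistic regression and random forest trained on 40 positive/negative pairs of pixels, scored with F1) can be sketched as below. This is a minimal, hypothetical illustration using synthetic "spectral" features, not the paper's actual imagery or code; the band means, counts, and model hyperparameters are assumptions for demonstration only.

```python
# Hypothetical sketch of the pixel-wise few-shot baseline comparison:
# logistic regression vs. random forest on a handful of labelled pixels.
# Features are synthetic stand-ins for air-/space-borne image bands.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Synthetic 4-band reflectance-like features: vegetation (1) vs. background (0).
n_bands = 4
veg = rng.normal(0.6, 0.1, size=(500, n_bands))   # assumed class statistics
bg = rng.normal(0.3, 0.1, size=(500, n_bands))
X = np.vstack([veg, bg])
y = np.array([1] * 500 + [0] * 500)

# Few-shot regime: 40 positive and 40 negative training pixels.
pos_idx = rng.choice(500, size=40, replace=False)
neg_idx = rng.choice(500, size=40, replace=False) + 500
train = np.concatenate([pos_idx, neg_idx])
test = np.setdiff1d(np.arange(len(y)), train)

for model in (LogisticRegression(),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    print(f"{type(model).__name__}: F1 = {f1_score(y[test], pred):.3f}")
```

On well-separated synthetic data both baselines score highly; the paper's point is that on real imagery in this regime SAM substantially outperforms them for air-borne data, while the classical models remain competitive on space-borne data.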