• Media type: E-Article
  • Title: Estimation of the Surface Fluxes for Heat and Momentum in Unstable Conditions with Machine Learning and Similarity Approaches for the LAFE Data Set
  • Contributor: Wulfmeyer, Volker; Pineda, Juan Manuel Valencia; Otte, Sebastian; Karlbauer, Matthias; Butz, Martin V.; Lee, Temple R.; Rajtschan, Verena
  • Imprint: Springer Science and Business Media LLC, 2023
  • Published in: Boundary-Layer Meteorology
  • Language: English
  • DOI: 10.1007/s10546-022-00761-2
  • ISSN: 0006-8314; 1573-1472
  • Keywords: Atmospheric Science
  • Description: Abstract: Measurements from three flux towers operated during the Land–Atmosphere Feedback Experiment (LAFE) are used to investigate relationships between surface fluxes and variables of the land–atmosphere system. We study these relations by means of two machine learning (ML) techniques: multilayer perceptrons (MLP) and extreme gradient boosting (XGB). We compare their flux-derivation performance with Monin–Obukhov similarity theory (MOST) and a similarity relationship using the bulk Richardson number (BRN). The ML approaches outperform MOST and BRN. The best agreement with the observations is achieved for the friction velocity. For the sensible heat flux, and even more so for the latent heat flux, MOST and BRN deviate from the observations, while MLP and XGB yield more accurate predictions. For the latent heat flux, MOST and BRN give root mean square errors (RMSE) of 107 W m⁻² and 121 W m⁻², respectively, and the intercepts of their regression lines are ≈110 W m⁻². For the ML methods, the RMSE reduces to 31 W m⁻² for MLP and 33 W m⁻² for XGB, and the intercepts drop to just 4 W m⁻² for MLP and −1 W m⁻² for XGB, with regression-line slopes close to 1. These results indicate significant deficiencies of MOST and BRN, particularly for the derivation of the latent heat flux. In fact, in contrast to the established theories, feature-importance weighting demonstrates that the ML methods base their improved derivations on net radiation, the incoming and outgoing shortwave radiation, the air temperature gradient, and the available water content, but not on the water vapor gradient. The results imply that further studies of surface fluxes and other turbulent variables with ML techniques hold great promise for deriving advanced flux parameterizations and implementing them in land–atmosphere system models.
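The abstract evaluates each flux-derivation method by its RMSE against observations and by the slope and intercept of a regression line between predicted and observed fluxes. A minimal sketch of how those three metrics are computed, using synthetic numbers rather than the LAFE data set (the flux values and noise level here are purely illustrative assumptions):

```python
import numpy as np

# Synthetic stand-in for observed latent heat fluxes (W m^-2) and a
# hypothetical model that predicts them with ~30 W m^-2 of scatter.
rng = np.random.default_rng(0)
observed = rng.uniform(0.0, 400.0, size=200)
predicted = observed + rng.normal(0.0, 30.0, size=200)

# Root mean square error between predictions and observations.
rmse = np.sqrt(np.mean((predicted - observed) ** 2))

# Least-squares regression line predicted = slope * observed + intercept;
# an unbiased method has slope near 1 and intercept near 0 W m^-2.
slope, intercept = np.polyfit(observed, predicted, deg=1)

print(f"RMSE      = {rmse:.1f} W m^-2")
print(f"slope     = {slope:.2f}")
print(f"intercept = {intercept:.1f} W m^-2")
```

Under this reading, the large ≈110 W m⁻² intercepts reported for MOST and BRN indicate a systematic offset in the derived latent heat flux, whereas the near-zero intercepts and near-unity slopes of the ML methods indicate largely unbiased predictions.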