• Media type: E-article
  • Title: Validation and generalizability of machine learning prediction models on attrition in longitudinal studies
  • Contributors: Jankowsky, Kristin; Schroeders, Ulrich
  • Published: SAGE Publications, 2022
  • Published in: International Journal of Behavioral Development
  • Language: English
  • DOI: 10.1177/01650254221075034
  • ISSN: 0165-0254; 1464-0651
  • Keywords: Developmental and Educational Psychology ; Life-span and Life-course Studies ; Developmental Neuroscience ; Social Psychology ; Social Sciences (miscellaneous) ; Education
  • Description: Attrition in longitudinal studies is a major threat to the representativeness of the data and the generalizability of the findings. Typical approaches to addressing systematic nonresponse are either expensive and unsatisfactory (e.g., oversampling) or rely on the unrealistic assumption of data missing at random (e.g., multiple imputation). Models that accurately predict which respondents are most likely to drop out at subsequent measurement occasions might therefore offer the opportunity to take countermeasures (e.g., incentives). With the current study, we introduce a longitudinal model validation approach and examine whether attrition in two nationally representative longitudinal panel studies can be predicted accurately. We compare the performance of a basic logistic regression model with a more flexible, data-driven machine learning algorithm: gradient boosting machines. Our results show almost no difference in accuracy between the two modeling approaches, which contradicts claims from similar studies on survey attrition. Prediction models could not be generalized across surveys and were less accurate when tested at a later survey wave. We discuss the implications of these findings for survey retention and for the use of complex machine learning algorithms, and we offer recommendations for dealing with study attrition.
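
As a minimal sketch of the model comparison the abstract describes, the Python snippet below pits a logistic regression baseline against a gradient boosting machine for predicting dropout. It assumes scikit-learn and synthetic stand-in data with a single train/test split; it is illustrative only and is not the authors' code, data, or longitudinal validation design.

    # Illustrative sketch only (not the authors' code): compare a logistic
    # regression baseline with a gradient boosting machine for predicting
    # dropout. Synthetic data and a 70/30 split are assumptions; the study
    # itself validates across survey waves and across panel studies.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in: respondent features at wave t and a binary
    # dropout indicator for wave t+1 (roughly 20% dropout).
    X, y = make_classification(n_samples=5000, n_features=20,
                               n_informative=8, weights=[0.8],
                               random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=42)

    # Baseline: plain logistic regression.
    logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # Flexible, data-driven alternative: gradient boosting.
    gbm = GradientBoostingClassifier(random_state=42).fit(X_train, y_train)

    # AUC is used here as one example performance metric.
    for name, model in [("logistic regression", logit),
                        ("gradient boosting", gbm)]:
        auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
        print(f"{name}: AUC = {auc:.3f}")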