• Media type: E-Article
  • Title: Exploring new depths: Applying machine learning for the analysis of student argumentation in chemistry
  • Contributor: Martin, Paul P.; Kranz, David; Wulff, Peter; Graulich, Nicole
  • Imprint: Wiley, 2023
  • Published in: Journal of Research in Science Teaching
  • Language: English
  • DOI: 10.1002/tea.21903
  • ISSN: 0022-4308; 1098-2736
  • Description: Abstract: Constructing arguments is essential in science subjects like chemistry. For example, students in organic chemistry should learn to argue about the plausibility of competing chemical reactions by including various sources of evidence and justifying the derived information with reasoning. While doing so, students face significant challenges in coherently structuring their arguments and integrating chemical concepts. For this reason, a reliable assessment of students' argumentation is critical. However, as arguments are usually presented in open-ended tasks, scoring assessments manually is resource-consuming and conceptually difficult. To augment human diagnostic capabilities, artificial intelligence techniques such as machine learning or natural language processing offer novel possibilities for an in-depth analysis of students' argumentation. In this study, we extensively evaluated students' written arguments about the plausibility of competing chemical reactions based on a methodological approach called computational grounded theory. By using an unsupervised clustering technique, we sought to evaluate students' argumentation patterns in detail, providing new insights into the modes of reasoning and levels of granularity applied in students' written accounts. Based on this analysis, we developed a holistic 20-category rubric by combining the data-driven clusters with a theory-driven framework to automate the analysis of the identified argumentation patterns. Pre-trained large language models in conjunction with deep neural networks provided almost perfect machine-human score agreement and well-interpretable results, which underpins the potential of the applied state-of-the-art deep learning techniques in analyzing students' argument complexity. The findings demonstrate an approach to combining human and computer-based analysis in uncovering written argumentation.
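
The abstract describes an unsupervised clustering step over students' written arguments but gives no implementation details. A minimal sketch of how such a step might look, assuming sentence-transformer embeddings and k-means; the model name, cluster count, and example responses are illustrative assumptions, not the authors' actual pipeline:

```python
# Illustrative sketch only: cluster student argument texts by semantic
# similarity, in the spirit of the unsupervised step the abstract describes.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Hypothetical student responses about reaction plausibility.
responses = [
    "The carbocation is stabilized by resonance, so pathway A is favored.",
    "Reaction B is faster because the leaving group is better.",
    "Sterics block the backside attack, making SN2 unlikely here.",
]

# Encode each response as a dense sentence embedding (assumed model choice).
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(responses)

# Group responses into k candidate argumentation patterns.
kmeans = KMeans(n_clusters=2, random_state=0, n_init=10)
labels = kmeans.fit_predict(embeddings)

for text, label in zip(responses, labels):
    print(label, text)
```

In a computational-grounded-theory workflow, clusters found this way would then be read and interpreted by humans before being folded into a coding rubric, as the abstract indicates.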
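The phrase "almost perfect machine-human score agreement" echoes the Landis and Koch interpretation bands for Cohen's kappa, where values above 0.80 are conventionally read as almost perfect. A small sketch of how such agreement is typically computed; the scores below are made-up examples, not data from the study:

```python
# Illustrative sketch: Cohen's kappa quantifies chance-corrected agreement
# between a human rater and a machine scorer on rubric categories.
from sklearn.metrics import cohen_kappa_score

human_scores = [0, 1, 2, 2, 1, 0, 2, 1]    # hypothetical human rubric codes
machine_scores = [0, 1, 2, 1, 1, 0, 2, 1]  # hypothetical model predictions

kappa = cohen_kappa_score(human_scores, machine_scores)
print(f"Cohen's kappa: {kappa:.2f}")  # > 0.80 is read as "almost perfect"
```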