Media type:
E-Article
Title:
Semantic Spaces Are Not Created Equal – How Should We Weigh Them in the Sequel? : On Composites in Automated Creativity Scoring
Contributor:
Forthmann, Boris;
Beaty, Roger E.;
Johnson, Dan R.
Published:
Hogrefe Publishing Group, 2023
Published in:
European Journal of Psychological Assessment, 39 (2023) 6, pp. 449-461
Language:
English
DOI:
10.1027/1015-5759/a000723
ISSN:
1015-5759;
2151-2426
Description:
Abstract: Semantic distance scoring provides an attractive alternative to other scoring approaches for responses in creative thinking tasks. In addition, evidence in support of semantic distance scoring has increased over the last few years. In one recent approach, it has been proposed to combine multiple semantic spaces to better balance the idiosyncratic influences of each space. The final semantic distance score for each response is then represented by a composite or factor score. However, semantic spaces are not necessarily equally weighted in mean scores, and the use of factor scores requires high levels of factor determinacy (i.e., the correlation between estimates and true factor scores). Hence, in this work, we examined the weighting underlying mean scores, mean scores of standardized variables, factor loadings, weights that maximize reliability, and equally effective weights on common verbal creative thinking tasks. Both empirical and simulated factor determinacy, as well as Gilmer-Feldt’s composite reliability, were mostly good to excellent (i.e., > .80) across two task types (Alternate Uses and Creative Word Association), eight samples of data, and all weighting approaches. Person-level validity findings were further highly comparable across weighting approaches. Observed nuances and challenges of the different weightings, as well as the question of using composites vs. factor scores, are discussed in detail.
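To illustrate the contrast between the two simplest weighting approaches mentioned in the abstract, the following minimal sketch compares a unit-weighted mean of raw scores with a mean of standardized variables. The data and space count are purely hypothetical and not taken from the article; this is only an illustration of why raw means weight semantic spaces unequally when their variances differ.

```python
import numpy as np

# Hypothetical per-response semantic distance scores from four semantic
# spaces (rows = responses, columns = spaces); values are made up.
scores = np.array([
    [0.62, 0.55, 0.71, 0.48],
    [0.35, 0.41, 0.30, 0.44],
    [0.80, 0.74, 0.69, 0.77],
    [0.51, 0.58, 0.46, 0.60],
])

# Unit-weighted mean of raw scores: spaces with larger variance
# implicitly receive more weight in the composite.
mean_composite = scores.mean(axis=1)

# Mean of standardized (z-scored) variables: each space is rescaled to
# unit variance before averaging, so all spaces contribute equally.
z = (scores - scores.mean(axis=0)) / scores.std(axis=0, ddof=1)
standardized_composite = z.mean(axis=1)

print("raw mean composite:", mean_composite)
print("standardized mean composite:", standardized_composite)
```

The other weightings examined in the article (factor loadings, reliability-maximizing weights, equally effective weights) would replace the simple averages above with space-specific weights estimated from a measurement model.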