• Media type: E-Article
  • Title: Vector-Space Models of Semantic Representation From a Cognitive Perspective: A Discussion of Common Misconceptions
  • Contributor: Günther, Fritz; Rinaldi, Luca; Marelli, Marco
  • Imprint: SAGE Publications, 2019
  • Published in: Perspectives on Psychological Science
  • Language: English
  • DOI: 10.1177/1745691619861372
  • ISSN: 1745-6916; 1745-6924
  • Description: Models that represent meaning as high-dimensional numerical vectors, such as latent semantic analysis (LSA), hyperspace analogue to language (HAL), bound encoding of the aggregate language environment (BEAGLE), topic models, global vectors (GloVe), and word2vec, have been introduced as extremely powerful machine-learning proxies for human semantic representations and have seen an explosive rise in popularity over the past two decades. However, despite their considerable advancement and spread in the cognitive sciences, problems persist in how some of their features are presented and understood. Indeed, when these models are examined from a cognitive perspective, a number of unfounded arguments tend to appear in the psychological literature. In this article, we review the most common of these arguments and discuss (a) what exactly these models represent at the implementational level and their plausibility as a cognitive theory, (b) how they deal with various aspects of meaning such as polysemy or compositionality, and (c) how they relate to the debate on embodied and grounded cognition. We identify common misconceptions that arise from incomplete descriptions, outdated arguments, and unclear distinctions between the theory and the implementation of the models. We clarify and amend these points to provide a theoretical basis for future research and discussions on vector models of semantic representation.
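
As a minimal illustration of the core idea the abstract describes (words represented as numerical vectors, with semantic relatedness read off their geometry), the Python sketch below compares toy word vectors by cosine similarity. The 4-dimensional vectors and the `cosine` helper are hypothetical examples for exposition, not vectors or code from the article or from any trained model such as word2vec or GloVe.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors (1.0 = identical direction)."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical low-dimensional embeddings; real vector-space models learn
# hundreds of dimensions from word co-occurrence statistics in large corpora.
vectors = {
    "dog":   np.array([0.9, 0.1, 0.4, 0.0]),
    "cat":   np.array([0.8, 0.2, 0.5, 0.1]),
    "piano": np.array([0.1, 0.9, 0.0, 0.7]),
}

print(cosine(vectors["dog"], vectors["cat"]))    # high: related meanings
print(cosine(vectors["dog"], vectors["piano"]))  # lower: unrelated meanings
```

In such models, "meaning" is thus operationalized as position in a high-dimensional space, and semantic similarity as proximity in that space; this is the representational assumption whose cognitive interpretation the article examines.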