• Media type: E-Article
  • Title: Misogynoir: challenges in detecting intersectional hate
  • Contributor: Kwarteng, Joseph; Perfumi, Serena Coppolino; Farrell, Tracie; Third, Aisling; Fernandez, Miriam
  • Imprint: Springer Science and Business Media LLC, 2022
  • Published in: Social Network Analysis and Mining
  • Language: English
  • DOI: 10.1007/s13278-022-00993-7
  • ISSN: 1869-5450; 1869-5469
  • Keywords: Computer Science Applications ; Human-Computer Interaction ; Media Technology ; Communication ; Information Systems
  • Description: Abstract: “Misogynoir” is a term that refers to the anti-Black forms of misogyny that Black women experience. To explore how current automated hate speech detection approaches perform in detecting this type of hate, we evaluated two state-of-the-art detection tools, HateSonar and Google’s Perspective API, on two datasets: a balanced dataset of 300 tweets, half of which are examples of misogynoir and half examples of support for Black women, and an imbalanced dataset of 3138 tweets, of which 162 are examples of misogynoir and 2976 are examples of allyship. We aim to determine whether these tools flag such messages under any of their classifications of hateful speech (e.g. “hate speech”, “offensive language”, “toxicity”). Close analysis of the classifications and errors shows that current hate speech detection tools are ineffective in detecting misogynoir: they lack sensitivity to context, which is an essential component of misogynoir detection. We found that tweets likely to be classified as hate speech explicitly reference racism or sexism or use profane or aggressive words; subtle tweets without references to these topics are more challenging to classify. The lack of sensitivity to context may make such tools not only ineffective but potentially harmful to Black women.