• Media type: E-book
  • Title: Uncovering The Source of Machine Bias
  • Contributors: Hu, Xiyang [author]; Huang, Yan [author]; Li, Beibei [author]; Lu, Tian [author]
  • Published: [S.l.]: SSRN, 2022
  • Extent: 1 online resource (35 p.)
  • Language: English
  • DOI: 10.2139/ssrn.4195014
  • Subjects: Algorithmic Bias; Human Bias; Machine Learning; Structural Modeling; Micro-lending
  • Notes: According to information from SSRN, the original version of the document was created on August 19, 2022
  • Description: Emerging artificial intelligence (AI) and human-AI interaction have attracted great attention as means of improving decision-making effectiveness. A common practice is that humans make decisions (at least initially) to generate training data, and machine learning (ML) algorithms are then trained on these historical data to make final decisions. Yet whether human bias exists in such decision making, and how machines inherit that bias, remain open questions. In this study, using a longitudinal data set from an online micro-lending setting, we develop a structural econometric model to capture the decision dynamics of human evaluators assessing borrower credit risk. We find that two types of gender bias (in favor of female borrowers), namely preference-based bias and belief-based bias, are present in human evaluators' decisions. Through counterfactual simulations, we quantify the effect of gender bias on both profits and fairness. When either the preference-based or the belief-based bias is mitigated, the platform earns higher profits. These gains stem mainly from an increased approval probability for male borrowers, especially those who would eventually repay their loans. Accordingly, eliminating either bias narrows the gender gap in approval rates and in fairness (measured by true positive rates) in credit risk evaluation. Moreover, we train ML algorithms on both a real-world data set and a generated counterfactual data set. By comparing the decisions produced under these different settings, we find that ML algorithms can mitigate both the preference-based and the belief-based bias, although the effects differ for new and repeat applicants. Based on our findings, we propose a two-step human-AI collaboration framework that helps practitioners reduce decision bias most effectively.
  • Access status: Open access
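The abstract's fairness metric, the gender gap in true positive rates, can be illustrated with a minimal sketch. This is not the authors' code; the function, group names, and all data below are hypothetical, and the true positive rate is simply the share of ultimately creditworthy applicants who were approved.

```python
def true_positive_rate(decisions, outcomes):
    """Share of applicants who repaid (outcome=1) that were approved (decision=1)."""
    approved_among_repayers = [d for d, o in zip(decisions, outcomes) if o == 1]
    if not approved_among_repayers:
        return 0.0
    return sum(approved_among_repayers) / len(approved_among_repayers)

# Hypothetical approval decisions and repayment outcomes for two gender groups.
female = {"decisions": [1, 1, 1, 0, 1], "outcomes": [1, 1, 0, 1, 1]}
male   = {"decisions": [1, 0, 0, 0, 1], "outcomes": [1, 1, 0, 1, 0]}

tpr_female = true_positive_rate(female["decisions"], female["outcomes"])
tpr_male = true_positive_rate(male["decisions"], male["outcomes"])

# A smaller absolute gap indicates a fairer evaluation under this metric;
# the paper's counterfactual simulations compare this gap with and without bias.
tpr_gap = abs(tpr_female - tpr_male)
```

Under this reading, mitigating a bias that favors female borrowers raises approvals for creditworthy male applicants, which increases `tpr_male` and shrinks `tpr_gap`.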