• Media type: E-Article
  • Title: WE RAN 9 BILLION REGRESSIONS: ELIMINATING FALSE POSITIVES THROUGH COMPUTATIONAL MODEL ROBUSTNESS
  • Contributor: Muñoz, John; Young, Cristobal
  • Published: SAGE Publishing, 2018
  • Published in: Sociological Methodology, 48 (2018), pages 1-33
  • Language: English
  • ISSN: 0081-1750; 1467-9531
  • Keywords: Symposium on Model Uncertainty and Model Selection
  • Description: False positive findings are a growing problem in many research literatures. We argue that excessive false positives often stem from model uncertainty. There are many plausible ways of specifying a regression model, but researchers typically report only a few preferred estimates. This raises the concern that such research reveals only a small fraction of the possible results and may easily lead to nonrobust, false positive conclusions. It is often unclear how much the results are driven by model specification and how much the results would change if a different plausible model were used. Computational model robustness analysis addresses this challenge by estimating all possible models from a theoretically informed model space. We use large-scale random noise simulations to show (1) the problem of excess false positive errors under model uncertainty and (2) that computational robustness analysis can identify and eliminate false positives caused by model uncertainty. We also draw on a series of empirical applications to further illustrate issues of model uncertainty and estimate instability. Computational robustness analysis offers a method for relaxing modeling assumptions and improving the transparency of applied research.
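  • Illustration: The abstract describes enumerating every plausible specification from a model space and checking whether the focal estimate is stable across all of them. The sketch below is a minimal illustration of that idea, not the authors' actual procedure or code: it assumes a hypothetical focal predictor x, a small pool of candidate controls, and a pure-noise outcome, so any "significant" result across the model space is by construction a false positive.

```python
# Minimal sketch of computational model robustness analysis.
# Assumptions (not from the article): one focal predictor, four candidate
# controls, OLS models, and a pure-noise outcome with true effect zero.
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)              # hypothetical focal predictor
controls = rng.normal(size=(n, 4))  # pool of plausible control variables
y = rng.normal(size=n)              # pure noise: the true effect of x is zero


def focal_t_stat(y, X):
    """Fit OLS by least squares; return the t-statistic of the first column."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = X.shape[0] - X.shape[1]
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(X.T @ X)
    return beta[0] / np.sqrt(cov[0, 0])


# Model space: every subset of the candidate controls (2^4 = 16 models),
# each model including the focal predictor and an intercept.
t_stats = []
for k in range(controls.shape[1] + 1):
    for subset in itertools.combinations(range(controls.shape[1]), k):
        X = np.column_stack([x, np.ones(n)] + [controls[:, j] for j in subset])
        t_stats.append(focal_t_stat(y, X))

t_stats = np.array(t_stats)
# A robust finding should hold across (nearly) all plausible models, not
# just in a few preferred specifications; with a noise outcome, the share
# of significant models shows how far chance alone can carry a result.
print(f"share of models with |t| > 1.96: {np.mean(np.abs(t_stats) > 1.96):.2f}")
```

    Repeating this over many simulated noise outcomes, as the article does at much larger scale, traces how often at least one specification in the model space yields a spuriously significant estimate.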