Menkveld, Albert J. [Author]; Dreber, Anna [Author]; Holzmeister, Felix [Author]; Huber, Jürgen [Author]; Johannesson, Magnus [Author]; Kirchler, Michael [Author]; Neusüß, Sebastian [Author]; Razen, Michael [Author]; Weitzel, Utz [Author]
Description:
In statistics, samples are drawn from a population in a data-generating process (DGP). Standard errors measure the uncertainty in sample estimates of population parameters. In science, evidence is generated to test hypotheses in an evidence-generating process (EGP). We claim that EGP variation across researchers adds uncertainty: non-standard errors. To study them, we let 164 teams test six hypotheses on the same sample. We find that non-standard errors are sizeable, on par with standard errors. Their size (i) co-varies only weakly with team merits, reproducibility, or peer rating, (ii) declines significantly after peer feedback, and (iii) is underestimated by participants.
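To make the DGP/EGP distinction concrete, the minimal sketch below contrasts the two notions of uncertainty. It assumes non-standard errors are operationalized as the dispersion (standard deviation) of point estimates across teams analyzing the same sample, which is one natural reading of the abstract; the team estimates used here are hypothetical illustration values, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Standard error: sampling uncertainty in ONE team's estimate,
# arising from the data-generating process (DGP).
sample = rng.normal(loc=0.5, scale=2.0, size=1000)
estimate = sample.mean()
standard_error = sample.std(ddof=1) / np.sqrt(sample.size)

# Non-standard error: dispersion of point estimates across teams that
# analyze the SAME sample but make different analytic choices in the
# evidence-generating process (EGP). Numbers below are hypothetical.
team_estimates = np.array([0.48, 0.55, 0.31, 0.62, 0.50, 0.44])
non_standard_error = team_estimates.std(ddof=1)

print(f"standard error:     {standard_error:.3f}")
print(f"non-standard error: {non_standard_error:.3f}")
```

Under these assumptions, the two quantities can be of similar magnitude even though they stem from entirely different sources of variation, which is the sense in which the paper reports non-standard errors to be "on par with" standard errors.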