• Media type: E-article
  • Titel: Assessing Effects of Unreplicated Perturbations: No Simple Solutions
  • Contributors: Stewart-Oaten, Allan; Bence, James R.; Osenberg, Craig W.
  • Published: Wiley, 1992
  • Published in: Ecology, 73 (1992) 4, pages 1396-1404
  • Language: English
  • DOI: 10.2307/1940685
  • ISSN: 0012-9658; 1939-9170
  • Description: We address the task of determining the effects, on mean population density or other parameters, of an unreplicated perturbation, such as arises in environmental assessments and some ecosystem-level experiments. Our context is the Before-After-Control-Impact-Pairs design (BACIP): on several dates Before and After the perturbation, samples are collected simultaneously at both the Impact site and a nearby "Control." One approach is to test whether the mean of the Impact-Control difference has changed from Before to After the perturbation. If a conventional test is used, checks of its assumptions are an important and messy part of the analysis, since BACIP data do not necessarily satisfy them. It has been suggested that these checks are not needed for randomization tests, because they are insensitive to some of these assumptions and can be adjusted to allow for others. A major aim of this paper is to refute this suggestion: there is no panacea for the difficult and messy technical problems in the analysis of data from assessments or unreplicated experiments. We compare the randomization t test with the standard t test and the modified (Welch-Satterthwaite-Aspin) t test, which allows for unequal variances. We conclude that the randomization t test is less likely to yield valid inferences than is the Welch t test, because it requires identical distributions for small sample sizes and either equal variances or equal sample sizes for larger ones. The formal requirement of Normality is not crucial to the Welch t test. Both parametric and randomization tests require that time and location effects be additive and that the Impact-Control differences on different dates be independent. These assumptions should be tested; if they are seriously wrong, alternative analyses are needed. This will often require a long time series of data. Finally, for assessing the importance of a perturbation, the P value of a hypothesis test is rarely as useful as an estimate of the size of the effect. Especially if effect size varies with time and conditions, flexible estimation methods with approximate answers are preferable to formally exact P values.
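A minimal sketch of the BACIP comparison the abstract describes, using hypothetical Impact-Control differences (all numbers below are invented for illustration, not data from the paper): compute a Welch t statistic for the Before-vs-After change in the mean difference, a randomization P value obtained by shuffling the Before/After labels, and a plain effect-size estimate.

```python
import random
import statistics

# Hypothetical Impact - Control differences on each sampling date
# (invented values: 8 dates Before, 10 dates After the perturbation).
diff_before = [1.2, 0.8, 1.5, 1.0, 0.9, 1.3, 1.1, 0.7]
diff_after = [-0.4, 0.1, -0.6, -0.2, -0.5, 0.0, -0.3, -0.7, -0.1, -0.4]

def welch_t(x, y):
    """Welch (Welch-Satterthwaite-Aspin) t statistic; allows unequal variances."""
    vx, vy = statistics.variance(x), statistics.variance(y)
    se = (vx / len(x) + vy / len(y)) ** 0.5
    return (statistics.mean(x) - statistics.mean(y)) / se

# Effect-size estimate: change in mean Impact-Control difference.
# The paper argues this is usually more informative than a P value.
observed = statistics.mean(diff_after) - statistics.mean(diff_before)

# Randomization t test: shuffle the Before/After labels of the differences
# and recompute the statistic; the P value is the share of shuffles whose
# statistic is at least as extreme as the observed one.
pooled = diff_before + diff_after
n_before = len(diff_before)
rng = random.Random(0)
n_iter = 5000
count = 0
for _ in range(n_iter):
    rng.shuffle(pooled)
    stat = statistics.mean(pooled[n_before:]) - statistics.mean(pooled[:n_before])
    if abs(stat) >= abs(observed):
        count += 1
p_rand = count / n_iter

print(f"effect estimate = {observed:.2f}")
print(f"Welch t = {welch_t(diff_after, diff_before):.2f}")
print(f"randomization P = {p_rand:.4f}")
```

Note that the randomization test here exchanges labels across the Before/After boundary, which is only justified under the identical-distribution (or equal-variance/equal-n) conditions the paper criticizes; the Welch statistic relaxes the equal-variance requirement.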