• Media type: E-Book
  • Title: Evaluating Macro-Modelling Systems : An Application to the Area Wide Model
  • Contributor: McAdam, Peter [Author]; Mestre, Ricardo [Author]
  • Published: [S.l.]: SSRN, 2007
  • Extent: 1 online resource (56 p.)
  • Language: English
  • DOI: 10.2139/ssrn.970534
  • Footnote: According to information from SSRN, the original version of this document was created in September 2002
  • Description: This paper considers approaches to assessing the validity of the overall structure of macro-econometric models - specifically, the Area Wide Model of the European Central Bank. By structure, we refer to the dynamic (business-cycle) and steady-state features that the model purports to capture. This leads to two types of tests. The first, drawing on the DSGE literature, is concerned with whether the model matches business-cycle (high-frequency) data characteristics. This is implemented by a Cholesky bootstrap, whereby the steady state of the model is stochastically simulated using historically consistent covariances. The generated data are analysed for fit with stylised facts and, similarly, using the model's implied spectral characteristics, for congruence with the data in terms of persistence, periodicity and spectral fit. Moment matching, however, is only one aspect of overall model evaluation. Consequently, we move to tests that combine high-frequency aspects (short-horizon forecasts) with long-run features (such as the existence and identification of steady states and trends). Recursive forecasting tests form the second part. The forecasts attempt to measure the accuracy of model-based forecasts both simulated out-of-sample and in an in-sample exercise. The out-of-sample exercise analyses the 1- to 8-step-ahead forecasting ability of the model. For this, the model is re-estimated each time on a subset of the original sample, and the re-estimated model is used to generate a forecast over the remaining sample. The in-sample exercise omits the re-estimation step but performs a more thorough exercise, covering 1- to 12-step-ahead forecasts over a larger part of the original sample. For this latter exercise, a thorough analysis of the sources of forecast error is made. Both exercises incorporate alternative residual-projection methods, in order to assess the importance of unaccounted-for breaks in forecast accuracy. 
The conclusions reached are that, on the one hand, in-sample exercises should be preferred for systems of this size and typical samples; and, on the other, that mechanical residual adjustment or model re-estimation should be avoided except under strong evidence of mis-specification. The paper regards the testing procedure as applicable to the class of large macro models and therefore of general interest.
  • Access State: Open Access
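The "Cholesky bootstrap" mentioned in the description - drawing simulation shocks whose covariance matches the historical residual covariance via its Cholesky factor - can be sketched as follows. This is a minimal illustrative sketch of the general technique, not the authors' code; all function and variable names are hypothetical:

```python
import numpy as np

def cholesky_bootstrap_shocks(residuals, n_periods, seed=None):
    """Draw n_periods of shocks with the historical residual covariance.

    residuals: (T, k) array of historical model residuals.
    Returns an (n_periods, k) array of draws from N(0, Sigma), where
    Sigma is the sample covariance of the residuals, obtained by
    scaling standard-normal draws with the Cholesky factor of Sigma.
    """
    rng = np.random.default_rng(seed)
    sigma = np.cov(residuals, rowvar=False)   # historical covariance matrix
    chol = np.linalg.cholesky(sigma)          # sigma = chol @ chol.T
    z = rng.standard_normal((n_periods, chol.shape[0]))
    return z @ chol.T                         # covariance of rows ~ sigma

# Toy usage: fabricate "historical" residuals, then check that the
# bootstrapped shocks reproduce their covariance structure.
hist = np.random.default_rng(0).multivariate_normal(
    mean=[0.0, 0.0], cov=[[1.0, 0.5], [0.5, 2.0]], size=500)
shocks = cholesky_bootstrap_shocks(hist, n_periods=10000, seed=1)
print(np.allclose(np.cov(shocks, rowvar=False),
                  np.cov(hist, rowvar=False), atol=0.15))
```

In a model-evaluation setting, such shocks would be fed repeatedly through the model's steady state to generate artificial data whose moments and spectra can then be compared with the actual data, as the description outlines.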