NEP: New Economics Papers on Forecasting |
By: | Athanasopoulos, George; Guillén, Osmani Teixeira de Carvalho; Issler, João Victor; Vahid, Farshid |
Abstract: | We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties as well as the traditional ones. We suggest a new two-step model selection procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties, and we prove its consistency. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank using our proposed procedure, relative to an unrestricted VAR or a cointegrated VAR estimated by the commonly used procedure of selecting the lag length only and then testing for cointegration. Two empirical applications, forecasting Brazilian inflation and U.S. macroeconomic aggregates growth rates respectively, show the usefulness of the model-selection strategy proposed here. The gains in different measures of forecasting accuracy are substantial, especially for short horizons. |
Date: | 2010–09–13 |
URL: | http://d.repec.org/n?u=RePEc:fgv:epgewp:707&r=for |
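To make the model-selection step above concrete, here is a minimal sketch (not the authors' code) of choosing a VAR lag length by information criteria; the paper extends exactly this kind of search to cover the cointegrating rank and the rank of the short-run matrix jointly. The data, lag grid, and criteria forms below are standard textbook choices, not taken from the paper.

```python
import numpy as np

def var_ic(y, p_max):
    """Return AIC/BIC/HQ for VAR(p), p = 1..p_max, on a T x k array y."""
    T, k = y.shape
    out = {}
    for p in range(1, p_max + 1):
        Y = y[p:]                                   # regressand, (T-p) x k
        X = np.hstack([y[p - i: T - i] for i in range(1, p + 1)])
        X = np.hstack([np.ones((T - p, 1)), X])     # intercept + p lags
        B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # OLS equation by equation
        U = Y - X @ B
        sigma = (U.T @ U) / (T - p)                 # residual covariance
        n_par = k * (k * p + 1)                     # free parameters
        ll = np.log(np.linalg.det(sigma))
        out[p] = {"AIC": ll + 2 * n_par / (T - p),
                  "BIC": ll + np.log(T - p) * n_par / (T - p),
                  "HQ":  ll + 2 * np.log(np.log(T - p)) * n_par / (T - p)}
    return out

rng = np.random.default_rng(0)
y = rng.standard_normal((200, 2)).cumsum(axis=0)    # toy I(1) data
print(var_ic(np.diff(y, axis=0), p_max=4))
```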
By: | Feldkircher, Martin (Oesterreichische Nationalbank) |
Abstract: | In this study the forecast performance of model averaged forecasts is compared to that of alternative single models. Following Eklund and Karlsson (2007) we form posterior model probabilities - the weights for the combined forecast - based on the predictive likelihood. Extending the work of Fernández et al. (2001a) we carry out a prior sensitivity analysis for a key parameter in Bayesian model averaging (BMA): Zellner's g. The main results based on a simulation study are fourfold: First, the predictive likelihood always does better than the traditionally employed 'marginal' likelihood in settings where the true model is not part of the model space. Second, and more strikingly, forecast accuracy as measured by the root mean square error (rmse) is maximized for the median probability model put forward by Barbieri and Berger (2003). Third, model averaging excels in predicting the direction of changes, a finding that is in line with Crespo Cuaresma (2007). Lastly, our recommendation concerning the prior on g is to choose the prior proposed by Laud and Ibrahim (1995), with a hold-out sample size of 25% to minimize the rmse (median model) and 75% to optimize direction-of-change forecasts (model averaging). We finally forecast the monthly industrial production output of six Central Eastern and South Eastern European (CESEE) economies at a one-step-ahead forecasting horizon. Following the aforementioned forecasting recommendations improves the out-of-sample statistics over a 30-period horizon, beating the first-order autoregressive benchmark model for almost all countries. |
Keywords: | Forecast Combination; Bayesian Model Averaging; Median Probability Model; Predictive Likelihood; Industrial Production; Model Uncertainty |
JEL: | C11 C15 C53 |
Date: | 2010–09–15 |
URL: | http://d.repec.org/n?u=RePEc:ris:sbgwpe:2010_014&r=for |
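A hedged sketch of the combination step described above: weights proportional to each model's predictive (hold-out) likelihood, then a weighted forecast. The model set, the Gaussian predictive densities, and all numbers below are simplified stand-ins; the paper's treatment of Zellner's g and the full BMA machinery are omitted.

```python
import numpy as np

def predictive_weights(pred_means, pred_vars, y_holdout):
    """Gaussian predictive log-likelihood per model -> posterior weights."""
    pm, pv = np.asarray(pred_means), np.asarray(pred_vars)
    ll = -0.5 * (np.log(2 * np.pi * pv)
                 + (y_holdout - pm) ** 2 / pv).sum(axis=1)
    w = np.exp(ll - ll.max())          # stabilise before normalising
    return w / w.sum()

# two toy models' hold-out predictive means/variances over 5 periods
y_h = np.array([0.1, 0.3, -0.2, 0.0, 0.4])
means = [[0.0, 0.2, -0.1, 0.1, 0.3], [0.5, 0.6, 0.4, 0.5, 0.6]]
w = predictive_weights(means, np.ones((2, 5)), y_h)
point_forecasts = np.array([0.2, 0.7])             # next-period forecasts
print(w, w @ point_forecasts)                      # combined forecast
```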
By: | Francesco Ravazzolo (Norges Bank (Central Bank of Norway)); Philip Rothman (East Carolina University) |
Abstract: | We study the real-time Granger-causal relationship between crude oil prices and US GDP growth through a simulated out-of-sample (OOS) forecasting exercise; we also provide strong evidence of in-sample predictability from oil prices to GDP. Comparing our benchmark model "without oil" against alternatives "with oil," we strongly reject the null hypothesis of no OOS predictability from oil prices to GDP via our point forecast comparisons from the mid-1980s through the Great Recession. Further analysis shows that these results may be due to our oil price measures serving as proxies for a recently developed measure of global real economic activity omitted from the alternatives to the benchmark forecasting models in which we only use lags of GDP growth. By way of density forecast OOS comparisons, we find evidence of such oil price predictability for GDP for our full 1970-2009 OOS period. Examination of the density forecasts reveals a massive increase in forecast uncertainty following the 1973 post-Yom Kippur War crude oil price increases. |
Date: | 2010–09–15 |
URL: | http://d.repec.org/n?u=RePEc:bno:worpap:2010_18&r=for |
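An illustrative recursive out-of-sample comparison in the spirit of this exercise: an autoregressive benchmark for GDP growth against the same model augmented with lagged oil price changes, compared by RMSE. All data here are simulated, and the single-lag OLS setup is a deliberate simplification of the paper's models.

```python
import numpy as np

def recursive_rmse(y, x=None, first=60):
    """One-step-ahead OLS forecasts of y from its lag (and x's lag)."""
    errs = []
    for t in range(first, len(y) - 1):
        Z = np.column_stack([np.ones(t), y[:t]])
        if x is not None:
            Z = np.column_stack([Z, x[:t]])
        b, *_ = np.linalg.lstsq(Z, y[1:t + 1], rcond=None)
        z_next = [1.0, y[t]] + ([x[t]] if x is not None else [])
        errs.append(y[t + 1] - np.dot(z_next, b))
    return np.sqrt(np.mean(np.square(errs)))

rng = np.random.default_rng(1)
oil = rng.standard_normal(200)                     # toy oil price changes
gdp = 0.3 * np.roll(oil, 1) + rng.standard_normal(200)
print("benchmark:", recursive_rmse(gdp), "with oil:", recursive_rmse(gdp, oil))
```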
By: | Roland Döhrn; Christoph M. Schmidt |
Abstract: | The accuracy of macroeconomic forecasts depends on various factors, most importantly the mix of analytical methods used by the individual forecasters, the way their personal experience shapes their identification strategies, but also their efficiency in translating new information into revised forecasts. In this paper we use a broad sample of forecasts of German GDP and its components to analyze the impact of institutions and information on forecast accuracy. We find that forecast errors are a linear function of the forecast horizon. This result is robust over a variety of different specifications. As better information seems to be the key to achieving better forecasts, approaches for acquiring reliable information early seem to be a good investment. By contrast, the institutional factors tend to be small and statistically insignificant. It remains an open question whether this is the consequence of efficiency-enhancing competition among German research institutions or rather the reflection of an abundance of forecast suppliers. |
Keywords: | Forecast accuracy, Forecast Revisions, Forecast Horizon, Economic Activity |
JEL: | C53 E27 E01 |
Date: | 2010–09 |
URL: | http://d.repec.org/n?u=RePEc:rwi:repape:0201&r=for |
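A sketch of the kind of regression behind the "errors are linear in the horizon" finding: pooled absolute forecast errors regressed on the horizon. The data below are simulated stand-ins for the German GDP forecast panel, and the coefficients are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)
horizon = np.repeat(np.arange(1, 9), 40)            # 1..8 quarters ahead
abs_err = 0.15 * horizon + 0.3 * rng.random(horizon.size)

X = np.column_stack([np.ones_like(horizon, dtype=float), horizon])
beta, *_ = np.linalg.lstsq(X, abs_err, rcond=None)
print(f"abs error = {beta[0]:.2f} + {beta[1]:.2f} * horizon")
```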
By: | Dobrislav P. Dobrev; Pawel J. Szerszen |
Abstract: | We demonstrate that the parameters controlling skewness and kurtosis in popular equity return models estimated at daily frequency can be obtained almost as precisely as if volatility were observable, simply by incorporating the strong information content of realized volatility measures extracted from high-frequency data. For this purpose, we introduce asymptotically exact volatility measurement equations in state space form and propose a Bayesian estimation approach. Our highly efficient estimates lead in turn to substantial gains for forecasting various risk measures at horizons ranging from a few days to a few months ahead, also taking parameter uncertainty into account. As a practical rule of thumb, we find that two years of high frequency data often suffice to obtain the same level of precision as twenty years of daily data, thereby making our approach particularly useful in finance applications where only short data samples are available or economically meaningful to use. Moreover, we find that compared to model inference without high-frequency data, our approach largely eliminates underestimation of risk during bad times or overestimation of risk during good times. We assess the attainable improvements in VaR forecast accuracy on simulated data and provide an empirical illustration on stock returns during the financial crisis of 2007-2008. |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedgfe:2010-45&r=for |
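A minimal state-space sketch of the idea of a volatility measurement equation: the latent log-volatility h_t is measured indirectly through daily returns and almost directly through log realized volatility, whose measurement-error variance shrinks with intraday sampling. The AR(1) state dynamics and all parameter values are made-up illustrations, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(3)
T, phi, s_h, s_rv = 500, 0.98, 0.1, 0.05
h = np.zeros(T)
for t in range(1, T):                   # state: h_t = phi*h_{t-1} + eta_t
    h[t] = phi * h[t - 1] + s_h * rng.standard_normal()
ret = np.exp(h / 2) * rng.standard_normal(T)       # return equation
log_rv = h + s_rv * rng.standard_normal(T)         # RV measurement equation

# RV tracks the latent state far more closely than squared returns do
print(np.corrcoef(h, log_rv)[0, 1], np.corrcoef(h, np.log(ret ** 2))[0, 1])
```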
By: | Jeroen Rombouts; Lars Peter Stentoft |
Abstract: | This paper uses asymmetric heteroskedastic normal mixture models to fit return data and to price options. The models can be estimated straightforwardly by maximum likelihood, have high statistical fit when used on S&P 500 index return data, and allow for substantial negative skewness and time varying higher order moments of the risk neutral distribution. When forecasting out-of-sample a large set of index options between 1996 and 2009, substantial improvements are found compared to several benchmark models in terms of dollar losses and the ability to explain the smirk in implied volatilities. Overall, the dollar root mean squared error of the best performing benchmark component model is 39% larger than for the mixture model. When considering the recent financial crisis this difference increases to 69%. |
Keywords: | Asymmetric heteroskedastic models, finite mixture models, option pricing, out-of-sample prediction, statistical fit |
JEL: | C11 C15 C22 G13 |
Date: | 2010–09–01 |
URL: | http://d.repec.org/n?u=RePEc:cir:cirwor:2010s-38&r=for |
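A toy illustration of the return distribution these models build on: a two-component normal mixture in which a small, volatile, negative-mean component generates the negative skewness and fat tails the paper exploits. The mixture weights and moments below are arbitrary, and the GARCH dynamics and risk-neutralization are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 100_000, 0.1                                 # 10% "crash" component
comp = rng.random(n) < p
r = np.where(comp, rng.normal(-0.02, 0.03, n), rng.normal(0.001, 0.01, n))
skew = np.mean((r - r.mean()) ** 3) / r.std() ** 3
kurt = np.mean((r - r.mean()) ** 4) / r.std() ** 4
print(f"skewness {skew:.2f}, kurtosis {kurt:.2f}")  # < 0 and > 3
```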
By: | Massimiliano Caporin; Michael McAleer (University of Canterbury) |
Abstract: | This paper focuses on the selection and comparison of alternative non-nested volatility models. We review the traditional in-sample methods commonly applied in the volatility framework, namely diagnostic checking procedures, information criteria, and conditions for the existence of moments and asymptotic theory, as well as out-of-sample model selection approaches, such as mean squared error and Model Confidence Set approaches. The paper develops some innovative loss functions which are based on Value-at-Risk forecasts. Finally, we present an empirical application based on simple univariate volatility models, namely GARCH, GJR, EGARCH, and Stochastic Volatility, which are widely used to capture asymmetry and leverage. |
Keywords: | Volatility model selection; volatility model comparison; non-nested models; model confidence set; Value-at-Risk forecasts; asymmetry; leverage |
JEL: | C11 C22 C52 |
Date: | 2010–09–01 |
URL: | http://d.repec.org/n?u=RePEc:cbt:econwp:10/58&r=for |
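One plausible form of a VaR-based loss function (the paper develops several of its own, not reproduced here): the asymmetric "tick" loss for a tau-quantile forecast, which penalizes VaR violations more heavily than conservative calls. The normal returns and fixed VaR levels below are illustrative assumptions.

```python
import numpy as np

def tick_loss(returns, var_forecasts, tau=0.01):
    """Average quantile (tick) loss of VaR forecasts at level tau."""
    r, q = np.asarray(returns), np.asarray(var_forecasts)
    return np.mean((tau - (r < q)) * (r - q))

rng = np.random.default_rng(5)
r = rng.standard_normal(1000)
print(tick_loss(r, np.full(1000, -2.326)))   # correct 1% normal VaR
print(tick_loss(r, np.full(1000, -1.0)))     # too shallow -> higher loss
```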
By: | Di Kuang (Aon, 8 Devonshire Square, London EC2M 4PL, U.K.); Bent Nielsen (Nuffield College, Oxford OX1 1NF, U.K.); Jens Perch Nielsen (Cass Business School, City University London, 106 Bunhill Row, London EC1Y 8TZ, U.K.) |
Abstract: | Reserving in general insurance is often done using chain-ladder-type methods. We propose a method aimed at situations where there is a sudden change in the economic environment affecting the policies for all accident years in the reserving triangle. It is shown that methods for forecasting non-stationary time series are helpful. We illustrate the method using data published in Barnett and Zehnwirth (2000). These data illustrate features we also found in data from the general insurer RSA during the recent credit crunch. |
Keywords: | Calendar effect, canonical parameter, extended chain-ladder, identification problem, forecasting. |
Date: | 2010–06–24 |
URL: | http://d.repec.org/n?u=RePEc:nuf:econwp:1005&r=for |
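For readers unfamiliar with the baseline method, here is a bare-bones chain ladder on a small run-off triangle (illustrative numbers, not the Barnett and Zehnwirth data): development factors from column ratios, then completion of the triangle. The paper's point is that calendar-time shocks break exactly this kind of extrapolation.

```python
import numpy as np

tri = np.array([[100., 150., 175.],        # cumulative claims by accident
                [110., 168., np.nan],      # year (rows) and development
                [120., np.nan, np.nan]])   # year (columns)

n = tri.shape[0]
for j in range(1, n):
    obs = ~np.isnan(tri[:, j])             # rows with both columns observed
    f = tri[obs, j].sum() / tri[obs, j - 1].sum()   # development factor
    fill = np.isnan(tri[:, j])
    tri[fill, j] = tri[fill, j - 1] * f    # roll the triangle forward
print(tri)                                 # completed triangle
```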
By: | James A. Feigenbaum; Geng Li |
Abstract: | We propose a novel approach to estimate household income uncertainty at various future horizons and characterize how the estimated uncertainty evolves over the life cycle. We measure income uncertainty as the variance of linear forecast errors conditional on information available to households prior to observing the realized income. This approach is semiparametric because we impose essentially no restrictions on the statistical properties of the forecast errors. Relative to previous studies, we find lower and less persistent income uncertainties that call for a life cycle consumption profile with a less pronounced hump. |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedgfe:2010-42&r=for |
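A sketch of the measurement idea: project realized income on variables known to the household in advance, and take the variance of the projection errors as the income uncertainty at that horizon. The variable list and data-generating process below are made-up stand-ins for the paper's household data.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 2_000
past_income = rng.standard_normal(n)
age = rng.uniform(25, 60, n)
income = 0.6 * past_income + 0.01 * age + 0.5 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), past_income, age])  # household info set
b, *_ = np.linalg.lstsq(X, income, rcond=None)
uncertainty = np.var(income - X @ b)                 # forecast-error variance
print(f"estimated income uncertainty: {uncertainty:.3f}")   # ~0.25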
By: | Koumparoulis, Dimitrios |
Abstract: | At the Bank of Greece, econometric modeling started in 1975, when the Bank's first model of the Greek economy was developed under the leadership of the outgoing Governor of the Bank, C. Garganas. The model was extensively used for many years in forecasting as well as in policy analysis and proved to be an indispensable tool for the policy decisions of the Bank over a broad spectrum of issues. Modeling the Greek economy is an ongoing activity, fuelled by changes in the economy as well as by theoretical advances in modeling. This paper describes and documents the use of the current version of the Bank of Greece model. |
Keywords: | econometric modeling; cointegration techniques |
JEL: | C50 E17 |
Date: | 2010 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:25040&r=for |
By: | Berg, Nathan; Biele, Guido; Gigerenzer, Gerd |
Abstract: | Subjective beliefs and behavior regarding the Prostate Specific Antigen (PSA) test for prostate cancer were surveyed among attendees of the 2006 meeting of the American Economic Association. Logical inconsistency was measured in percentage deviations from a restriction imposed by Bayes' Rule on pairs of conditional beliefs. Economists with inconsistent beliefs tended to be more accurate than average, and consistent Bayesians were substantially less accurate. Within a loss function framework, we look for, but cannot find, evidence that inconsistent beliefs cause economic losses. Subjective beliefs about cancer risks do not predict PSA testing decisions, but social influences do. |
Keywords: | logical consistency; predictive accuracy; elicitation; non-Bayesian; ecological rationality |
JEL: | D6 D8 |
Date: | 2010–08–11 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:24976&r=for |
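The consistency restriction behind the survey is that Bayes' rule forces P(A|B) P(B) = P(B|A) P(A), so a pair of elicited conditionals can be scored by its percentage deviation from that identity. The sketch below uses made-up probabilities and is not the paper's exact scoring rule.

```python
def bayes_inconsistency(p_a_given_b, p_b, p_b_given_a, p_a):
    """Percentage deviation of the two implied joint probabilities."""
    lhs, rhs = p_a_given_b * p_b, p_b_given_a * p_a
    return 100.0 * abs(lhs - rhs) / ((lhs + rhs) / 2)

# e.g. P(cancer|positive PSA)=0.30, P(positive)=0.10,
#      P(positive|cancer)=0.80, P(cancer)=0.04   (hypothetical answers)
print(f"{bayes_inconsistency(0.30, 0.10, 0.80, 0.04):.1f}% inconsistent")
```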
By: | Bent Jesper Christensen (Aarhus University and CREATES); Paolo Santucci de Magistris (University of Pavia and CREATES) |
Abstract: | We propose a simple model in which realized stock market return volatility and implied volatility backed out of option prices are subject to common level shifts corresponding to movements between bull and bear markets. The model is estimated using the Kalman filter in a generalization to the multivariate case of the univariate level shift technique by Lu and Perron (2008). An application to the S&P500 index and a simulation experiment show that the recently documented empirical properties of strong persistence in volatility and forecastability of future realized volatility from current implied volatility, which have been interpreted as long memory (or fractional integration) in volatility and fractional cointegration between implied and realized volatility, are accounted for by occasional common level shifts. |
Keywords: | Common level shifts, fractional cointegration, fractional VECM, implied volatility, long memory, options, realized volatility. |
JEL: | C32 G13 G14 |
Date: | 2010–09–09 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2010-60&r=for |
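A simulation in the spirit of the paper's argument: two short-memory series that share occasional common level shifts display the slowly decaying autocorrelations usually read as long memory, plus strong comovement. The shift probability and noise scale below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)
T = 2_000
shifts = np.cumsum(rng.random(T) < 0.005) * 0.5      # rare common shifts
rv = shifts + 0.3 * rng.standard_normal(T)           # "realized" log-vol
iv = shifts + 0.3 * rng.standard_normal(T)           # "implied" log-vol

def acf(x, lag):
    x = x - x.mean()
    return (x[:-lag] * x[lag:]).mean() / x.var()

print([round(acf(rv, L), 2) for L in (1, 50, 200)])  # slow decay
print(round(np.corrcoef(rv, iv)[0, 1], 2))           # common component
```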