Papers on Forecasting
By: | Smets, Frank; Warne, Anders; Wouters, Raf |
Abstract: | This paper analyses the real-time forecasting performance of the New Keynesian DSGE model of Galí, Smets, and Wouters (2012) estimated on euro area data. It investigates to what extent forecasts of inflation, GDP growth and unemployment by professional forecasters improve the model's forecasting performance. We consider two approaches for conditioning on such information. Under the “noise” approach, the mean professional forecasts are assumed to be noisy indicators of the rational expectations forecasts implied by the DSGE model. Under the “news” approach, it is assumed that the forecasts reveal the presence of expected future structural shocks in line with those estimated over the past. The forecasts of the DSGE model are compared with those from a Bayesian VAR model and a random walk. JEL Classification: E24, E31, E32 |
Keywords: | Bayesian methods, DSGE model, estimated New Keynesian model, macroeconomic forecasting, real-time data, survey data |
Date: | 2013–08 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131571&r=for |
By: | Laurent L. Pauwels (The University of Sydney Business School); Andrey Vasnev |
Abstract: | This paper proposes the use of forecast combination to improve predictive accuracy in forecasting the U.S. business cycle index, as published by the Business Cycle Dating Committee of the NBER. It focuses on one-step-ahead out-of-sample monthly forecasts utilising the well-established coincident indicators and yield curve models, allowing for dynamics and real-time data revisions. Forecast combinations use log-score and quadratic-score based weights, which change over time. This paper finds that forecast accuracy improves when the probability forecasts of the coincident indicators model and the yield curve model are combined, compared with each model's own forecasting performance. |
Keywords: | U.S. business cycle, Forecast combination, Density forecast, Probit models, Yield curve, Coincident indicators. |
Date: | 2013–03 |
URL: | http://d.repec.org/n?u=RePEc:syb:wpbsba:05/2013&r=for |
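The score-based weighting described in this abstract is easy to illustrate. Below is a minimal Python sketch, not taken from the paper: two hypothetical recession-probability forecasts are combined with weights proportional to the exponentiated average log score over a rolling window. All names, data and window lengths are illustrative assumptions.

```python
import numpy as np

def log_score_weights(p1, p2, y, window=60):
    """Log-score weights for two probability forecasts.

    p1, p2: arrays of predicted recession probabilities up to time t;
    y: 0/1 realizations. Weights are proportional to the exponential
    of the average log score over the most recent `window` periods.
    """
    eps = 1e-12
    ls1 = np.mean(np.log(np.clip(np.where(y == 1, p1, 1 - p1), eps, None))[-window:])
    ls2 = np.mean(np.log(np.clip(np.where(y == 1, p2, 1 - p2), eps, None))[-window:])
    w1 = np.exp(ls1) / (np.exp(ls1) + np.exp(ls2))
    return w1, 1.0 - w1

# toy example: model 1 (coincident indicators) vs model 2 (yield curve)
rng = np.random.default_rng(0)
y = (rng.random(120) < 0.15).astype(int)            # recession dummies
p1 = np.clip(0.15 + 0.5 * y + 0.1 * rng.standard_normal(120), 0.01, 0.99)
p2 = np.clip(0.15 + 0.3 * y + 0.2 * rng.standard_normal(120), 0.01, 0.99)
w1, w2 = log_score_weights(p1, p2, y)
p_comb = w1 * p1[-1] + w2 * p2[-1]                  # one-step-ahead combination
print(f"weights: {w1:.2f}, {w2:.2f}; combined probability: {p_comb:.2f}")
```

Because the weights are recomputed each period from recent scores, they change over time as in the paper.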
By: | Christiane Baumeister; Lutz Kilian; Xiaoqing Zhou |
Abstract: | Notwithstanding a resurgence in research on out-of-sample forecasts of the price of oil in recent years, there is one important approach to forecasting the real price of oil which has not been studied systematically to date. This approach is based on the premise that demand for crude oil derives from the demand for refined products such as gasoline or heating oil. Oil industry analysts such as Philip Verleger and financial analysts widely believe that there is predictive power in the product spread, defined as the difference between suitably weighted refined product market prices and the price of crude oil. Our objective is to evaluate this proposition. We derive from first principles a number of alternative forecasting model specifications involving product spreads and compare these models to the no-change forecast of the real price of oil. We show that not all product spread models are useful for out-of-sample forecasting, but some models are, even at horizons between one and two years. The most accurate model is a time-varying parameter model of gasoline and heating oil spot spreads that allows the marginal product market to change over time. We document mean-squared prediction error reductions as high as 20 per cent and directional accuracy as high as 63 per cent at the two-year horizon, making product spread models a good complement to forecasting models based on economic fundamentals, which work best at short horizons. |
Keywords: | Econometric and statistical methods; International topics |
JEL: | Q43 C53 G15 |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:bca:bocawp:13-25&r=for |
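As a rough illustration of the comparison the abstract describes, here is a hedged Python sketch: an h-step-ahead forecast of the log real oil price from a lagged product spread, evaluated against the no-change benchmark by the MSPE ratio. The constant-parameter regression form and the toy data are assumptions for illustration, not the authors' preferred specification (which has time-varying parameters).

```python
import numpy as np

def mspe_ratio(oil, spread, h=12, burn=60):
    """Compare a product-spread forecast of the real oil price with the
    no-change forecast, via the ratio of mean-squared prediction errors.

    oil: log real price of crude; spread: log product-crude spread.
    Forecast: oil[t+h] = oil[t] + a + b * spread[t], estimated recursively.
    """
    e_model, e_rw = [], []
    for t in range(burn, len(oil) - h):
        X = np.column_stack([np.ones(t - h), spread[:t - h]])
        y = oil[h:t] - oil[:t - h]                      # h-step growth
        a, b = np.linalg.lstsq(X, y, rcond=None)[0]
        fc = oil[t] + a + b * spread[t]
        e_model.append(oil[t + h] - fc)
        e_rw.append(oil[t + h] - oil[t])                # no-change forecast
    return np.mean(np.square(e_model)) / np.mean(np.square(e_rw))

# toy data in which the spread leads oil price growth
rng = np.random.default_rng(1)
spread = rng.standard_normal(240) * 0.1
oil = np.cumsum(0.5 * np.roll(spread, 1) + 0.05 * rng.standard_normal(240))
print(f"MSPE ratio (model / no-change): {mspe_ratio(oil, spread):.2f}")
```

A ratio below one favours the spread model over the no-change forecast.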
By: | Kenny, Geoff; Kostka, Thomas; Masera, Federico |
Abstract: | We propose methods to evaluate the risk assessments collected as part of the ECB Survey of Professional Forecasters (SPF). Our approach focuses on direction-of-change predictions as well as the prediction of relatively more extreme macroeconomic outcomes located in the upper and lower regions of the predictive densities. For inflation and GDP growth, we find that the surveyed densities are informative about the future direction of change. Regarding more extreme high and low outcome events, the surveys are only informative about GDP growth outcomes, and only at short horizons. The upper and lower regions of the predictive densities for inflation are much less informative. JEL Classification: C22, C53 |
Keywords: | calibration error, forecast evaluation, probability forecasts, Survey of Professional Forecasters |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131540&r=for |
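The basic objects in such an evaluation can be sketched in a few lines. The paper uses more formal calibration tests; the Python snippet below only shows a Brier score and a hit rate for event probabilities derived from the surveyed densities (e.g. "GDP growth falls in the lower region"), with toy data standing in for the SPF.

```python
import numpy as np

def brier_and_hit_rate(p_event, outcomes):
    """Evaluate probability forecasts of a binary event (e.g. a fall in
    inflation, or GDP growth landing in the lower density region).

    p_event: forecast probabilities implied by the surveyed densities;
    outcomes: 0/1 realizations of the event.
    """
    brier = np.mean((p_event - outcomes) ** 2)          # lower is better
    hits = np.mean((p_event > 0.5) == outcomes.astype(bool))
    return brier, hits

# toy check against an uninformative benchmark that always issues
# the unconditional event frequency
rng = np.random.default_rng(2)
outcomes = (rng.random(200) < 0.3).astype(int)
p_informative = np.clip(0.3 + 0.4 * (outcomes - 0.3), 0.01, 0.99)
p_climatology = np.full(200, outcomes.mean())
print(brier_and_hit_rate(p_informative, outcomes))
print(brier_and_hit_rate(p_climatology, outcomes))
```

An informative survey should beat the constant-probability benchmark on both measures.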
By: | Andrey L. Vasnev (The University of Sydney Business School); Laurent L. Pauwels |
Abstract: | The problem of finding appropriate weights to combine several density forecasts is an important issue currently debated in the forecast combination literature. Hall and Mitchell (IJF, 2007) propose combining density forecasts with optimal weights obtained from solving an optimization problem. This paper studies the properties of this optimization problem when the number of forecasting periods is relatively small and finds that it often produces corner solutions, allocating all the weight to a single density forecast. This paper's practical recommendation is to use an additional training sample period for the optimal weights. While reserving a portion of the data for parameter estimation and making pseudo-out-of-sample forecasts are common practices in the empirical literature, employing a separate training sample for the optimal weights is novel, and it is suggested because it decreases the chances of corner solutions. Alternative log-score or quadratic-score weighting schemes do not have this training sample requirement. |
Keywords: | Forecast combination; Density forecast; Optimization; Optimal weight; Discrete choice models |
Date: | 2013–01 |
URL: | http://d.repec.org/n?u=RePEc:syb:wpbsba:01/2013&r=for |
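The Hall-Mitchell optimization is a simplex-constrained log-score maximization, sketched below in Python using scipy's SLSQP solver (a choice of solver assumed here, not prescribed by the paper). With a short evaluation sample, as the abstract notes, the maximizer frequently sits at a corner of the simplex.

```python
import numpy as np
from scipy.optimize import minimize

def optimal_weights(dens):
    """Hall-Mitchell style weights: maximize the average log score of the
    combined density over the unit simplex.

    dens: (T, K) array of predictive density values, each column one model
    evaluated at the realizations. With small T, the optimum often sits at
    a corner (all weight on one model).
    """
    T, K = dens.shape
    obj = lambda w: -np.mean(np.log(dens @ w + 1e-12))
    cons = ({'type': 'eq', 'fun': lambda w: w.sum() - 1.0},)
    res = minimize(obj, np.full(K, 1.0 / K), bounds=[(0, 1)] * K,
                   constraints=cons, method='SLSQP')
    return res.x

# toy: two similar models, short sample -> a corner solution is likely
rng = np.random.default_rng(3)
dens = np.exp(rng.standard_normal((15, 2)) * 0.3)
print(optimal_weights(dens))
```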
By: | Asimakopoulos, Stylianos; Paredes, Joan; Warmedinger, Thomas |
Abstract: | Given the increased importance of fiscal monitoring, this study amends the existing literature in the field of intra-annual fiscal data in two main dimensions. First, we use quarterly fiscal data to forecast a very disaggregated set of fiscal series at annual frequency. This makes the analysis useful in the typical forecasting environment of large institutions, which employ a "bottom-up" or disaggregated framework. Aside from this practical consideration, we find that forecasting total revenues and expenditures via their subcomponents can actually be more accurate than a direct forecast of the aggregate. Second, we employ a Mixed Data Sampling (MiDaS) approach to analyze mixed-frequency fiscal data, which is a methodological novelty. It is shown that MiDaS is the best approach for the analysis of mixed-frequency fiscal data compared to two alternative approaches. The results regarding the information content of quarterly fiscal data confirm previous work that such data should be taken into account as it becomes available throughout the year to improve the end-year forecast. For instance, once data for the third quarter is incorporated, the annual forecast becomes very accurate (very close to actual data). We also benchmark against the European Commission's forecast and find that the results compare favourably, particularly considering that they stem from a simple univariate framework. JEL Classification: C22, C53, E62, H68 |
Keywords: | aggregated vs disaggregated forecast, Fiscal Policy, Mixed frequency data, short-term forecasting |
Date: | 2013–05 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131550&r=for |
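A hedged sketch of the MIDAS idea the abstract refers to: an annual fiscal aggregate regressed on quarterly data through an exponential Almon lag polynomial, fitted by nonlinear least squares. This is the textbook MIDAS form under toy data, not necessarily the paper's exact specification.

```python
import numpy as np
from scipy.optimize import minimize

def exp_almon(theta1, theta2, n_lags):
    """Exponential Almon lag polynomial, normalized to sum to one."""
    j = np.arange(1, n_lags + 1)
    w = np.exp(theta1 * j + theta2 * j ** 2)
    return w / w.sum()

def midas_fit(y_annual, x_quarterly, n_lags=4):
    """Fit y_t = b0 + b1 * sum_j w_j(theta) x_{t,j} by nonlinear least squares.

    y_annual: (T,) annual fiscal aggregate; x_quarterly: (T, n_lags) matrix
    of the quarterly releases available within each year.
    """
    def sse(params):
        b0, b1, t1, t2 = params
        xw = x_quarterly @ exp_almon(t1, t2, n_lags)
        return np.sum((y_annual - b0 - b1 * xw) ** 2)
    return minimize(sse, x0=np.array([0.0, 1.0, 0.0, 0.0]),
                    method='Nelder-Mead').x

# toy usage: four quarterly observations per year explain the annual total
rng = np.random.default_rng(6)
xq = rng.standard_normal((30, 4))
ya = xq @ np.array([0.4, 0.3, 0.2, 0.1]) + 0.05 * rng.standard_normal(30)
print(midas_fit(ya, xq))
```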
By: | Warne, Anders; Coenen, Günter; Christoffel, Kai |
Abstract: | This paper shows how to compute the h-step-ahead predictive likelihood for any subset of the observed variables in parametric discrete time series models estimated with Bayesian methods. The subset of variables may vary across forecast horizons and the problem thereby covers marginal and joint predictive likelihoods for a fixed subset as special cases. The basic idea is to utilize well-known techniques for handling missing data when computing the likelihood function, such as a Kalman filter that is consistent with missing observations for linear Gaussian models, but it also extends to nonlinear, non-normal state-space models. The predictive likelihood can thereafter be calculated via Monte Carlo integration using draws from the posterior distribution. As an empirical illustration, we use euro area data and compare the forecasting performance of the New Area-Wide Model, a small-open-economy DSGE model, to DSGE-VARs and to reduced-form linear Gaussian models. JEL Classification: C11, C32, C52, C53, E37 |
Keywords: | Bayesian inference, forecasting, Kalman filter, Missing data, Monte Carlo integration |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131536&r=for |
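The missing-data trick is easy to sketch: a Kalman filter that drops the rows of the measurement equation corresponding to missing entries, whose per-period Gaussian densities can be accumulated into a log predictive likelihood for the observed subset of variables. A minimal version in Python, with all system matrices supplied by the caller; the paper's own treatment is more general (and extends beyond the linear Gaussian case):

```python
import numpy as np

def kalman_loglik_missing(y, Z, H, T, Q, a0, P0):
    """Log-likelihood of a linear Gaussian state-space model
        y_t = Z a_t + e_t,  a_{t+1} = T a_t + u_t,
    where NaN entries of y_t are treated as missing by dropping the
    corresponding rows of Z and H at each step. Summing the per-period
    contributions over a forecast sample gives the (log) predictive
    likelihood for the observed subset of variables.
    """
    a, P, loglik = a0.copy(), P0.copy(), 0.0
    for yt in y:
        obs = ~np.isnan(yt)
        if obs.any():
            Zt, yt_o = Z[obs], yt[obs]
            Ht = H[np.ix_(obs, obs)]
            v = yt_o - Zt @ a                      # prediction error
            F = Zt @ P @ Zt.T + Ht                 # its covariance
            K = P @ Zt.T @ np.linalg.inv(F)        # Kalman gain
            loglik += -0.5 * (len(yt_o) * np.log(2 * np.pi)
                              + np.linalg.slogdet(F)[1]
                              + v @ np.linalg.solve(F, v))
            a, P = a + K @ v, P - K @ F @ K.T      # measurement update
        a, P = T @ a, T @ P @ T.T + Q              # time update
    return loglik
```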
By: | Chen, Nan-Kuang; Chen, Shiu-Sheng; Chou, Yu-Hsi |
Abstract: | In this paper, we revisit bear market predictability by employing a number of variables widely used in forecasting stock returns. In particular, we focus on variables related to the presence of imperfect credit markets. We evaluate prediction performance using in-sample and out-of-sample tests. Empirical evidence from the US stock market suggests that among the variables we investigate, the default yield spread, inflation, and the term spread are useful in predicting bear markets. Further, we find that the default yield spread provides superior out-of-sample predictability for bear markets one to three months ahead, which suggests that the external finance premium carries informative content about the financial market. |
Keywords: | Bear markets, stock returns, Markov-switching models |
JEL: | C53 G10 |
Date: | 2013–08–15 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:49093&r=for |
By: | Ca' Zorzi, Michele; Muck, Jakub; Rubaszek, Michał |
Abstract: | This paper brings three new insights into the Purchasing Power Parity (PPP) debate. First, we show that a half-life PPP model is able to forecast real exchange rates (RER) better than the random walk (RW) model at both short- and long-term horizons. Second, we find that this result holds only if the speed of adjustment to the sample mean is calibrated at reasonable values rather than estimated. Finally, we find that it is also preferable to calibrate, rather than to elicit as a prior, the parameter determining the speed of adjustment to PPP. JEL Classification: C32, F31, F37 |
Keywords: | Exchange rate forecasting, half-life, purchasing power parity |
Date: | 2013–08 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131576&r=for |
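The calibrated half-life forecast is simple enough to state in a few lines of Python. A sketch, assuming quarterly data and a half-life expressed in years (the specific numbers are illustrative, not the paper's calibration):

```python
import numpy as np

def half_life_forecast(q_t, q_bar, h, half_life=3.0, periods_per_year=4):
    """h-step-ahead real exchange rate forecast under calibrated mean
    reversion: q_{t+h} = q_bar + rho^h (q_t - q_bar), with rho implied by
    the assumed half-life (in years) of deviations from PPP.

    The random walk benchmark simply predicts q_{t+h} = q_t.
    """
    rho = 0.5 ** (1.0 / (half_life * periods_per_year))
    return q_bar + rho ** h * (q_t - q_bar)

# example: RER 10% above its sample mean, forecast 8 quarters ahead
print(half_life_forecast(q_t=0.10, q_bar=0.0, h=8))   # ~ 0.063
```

Calibrating the half-life pins down rho directly, avoiding the noisy in-sample estimates that the abstract argues against.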
By: | Bańbura, Marta; Giannone, Domenico; Modugno, Michele; Reichlin, Lucrezia |
Abstract: | The term now-casting is a contraction of now and forecasting and has been used for a long time in meteorology and recently also in economics. In this paper we survey recent developments in economic now-casting with special focus on those models that formalize key features of how market participants and policy makers read macroeconomic data releases in real time, which involves: monitoring many data series, forming expectations about them, and revising the assessment of the state of the economy whenever realizations diverge sizeably from those expectations. (Prepared for G. Elliott and A. Timmermann, eds., Handbook of Economic Forecasting, Volume 2, Elsevier-North Holland). JEL Classification: E32, E37, C01, C33, C53 |
Keywords: | macroeconomic forecasting, Macroeconomic news, mixed frequency, real-time data, state space models |
Date: | 2013–07 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131564&r=for |
By: | Jan R. Magnus (The University of Sydney Business School); Andrey L. Vasnev |
Abstract: | Sensitivity analysis is important for its own sake and also in combination with diagnostic testing. We consider the question of how to use sensitivity statistics in practice, in particular how to judge whether sensitivity is large or small. For this purpose we distinguish between absolute and relative sensitivity and highlight the context-dependent nature of any sensitivity analysis. Relative sensitivity is then applied in the context of forecast combination, and sensitivity-based weights are introduced. All concepts are illustrated using the European yield curve. In this context it is natural to look at sensitivity to autocorrelation and normality assumptions. Different forecasting models are combined with equal, fit-based and sensitivity-based weights, and compared with the multivariate and random walk benchmarks. We show that the fit-based weights and the sensitivity-based weights are complementary. For long-term maturities the sensitivity-based weights perform better than other weights. |
Keywords: | Sensitivity analysis, Forecast combination, Yield curve prediction |
Date: | 2013–03 |
URL: | http://d.repec.org/n?u=RePEc:syb:wpbsba:04/2013&r=for |
By: | Binder, Michael; Gross, Marco |
Abstract: | The purpose of the paper is to develop a Regime-Switching Global Vector Autoregressive (RS-GVAR) model. The RS-GVAR model allows for recurring or non-recurring structural changes in all or a subset of countries. It can be used to generate regime-dependent impulse response functions which are conditional upon a regime constellation across countries. Coupling the RS and GVAR methodologies significantly improves out-of-sample forecast accuracy in an application to real GDP, price inflation, and stock prices. JEL Classification: C32, E17, G20 |
Keywords: | forecasting and simulation, Global macroeconometric modeling, nonlinear modeling, Regime switching |
Date: | 2013–08 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131569&r=for |
By: | Amisano, Gianni; Geweke, John |
Abstract: | Prediction of macroeconomic aggregates is one of the primary functions of macroeconometric models, including dynamic factor models, dynamic stochastic general equilibrium models, and vector autoregressions. This study establishes methods that improve the predictions of these models, using a representative model from each class and a canonical 7-variable postwar US data set. It focuses on prediction over the period 1966 through 2011. It measures the quality of prediction by the probability densities assigned to the actual values of these variables, one quarter ahead, by the predictive distributions of the models in real time. Two steps lead to substantial improvement. The first is to use full Bayesian predictive distributions rather than substituting a "plug-in" posterior mode for the parameters. Across models and quarters, this leads to a mean improvement in probability of 50.4%. The second is to use an equally weighted pool of predictive densities from the three models, which leads to a mean improvement in probability of 41.9% over the full Bayesian predictive distributions of the individual models. This improvement is much better than that afforded by Bayesian model averaging. The study uses several analytical tools, including pooling, analysis of predictive variance, and probability integral transform tests, to understand and interpret the improvements. JEL Classification: C11, C51, C53 |
Keywords: | Analysis of variance, Bayesian model averaging, dynamic factor model, dynamic stochastic general equilibrium model, prediction pools, probability integral transform test, vector autoregression model |
Date: | 2013–04 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131537&r=for |
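The equally weighted pool is the simplest object in the paper's toolkit: a linear combination of the models' predictive densities, scored by the average log density assigned to the realizations. A Python sketch with toy density values standing in for the three models' real-time predictive densities:

```python
import numpy as np

def log_predictive_score(dens, weights=None):
    """Average log score of a linear pool of predictive densities.

    dens: (T, K) predictive density values of K models at the realized
    outcomes; with weights omitted, an equally weighted pool is used.
    """
    T, K = dens.shape
    w = np.full(K, 1.0 / K) if weights is None else np.asarray(weights)
    return np.mean(np.log(dens @ w))

# toy illustration: the pool can outperform each of its members
rng = np.random.default_rng(4)
dens = np.exp(rng.standard_normal((200, 3)) * 0.5)
print([log_predictive_score(dens[:, [k]]) for k in range(3)])
print(log_predictive_score(dens))
```

By Jensen's inequality the pool's score is at least the average of the member scores at each date, which is one intuition for why pooling helps.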
By: | Gross, Marco |
Abstract: | This paper illustrates how the weight matrices that are needed to construct foreign variable vectors in Global Vector Autoregressive (GVAR) models can be estimated jointly with the GVAR's parameters. An application to real GDP and consumption expenditure price inflation, as well as a controlled Monte Carlo simulation, serve to highlight that 1) in the application at hand, the estimated weights for some countries differ significantly from the trade-based ones that are traditionally employed in that context; 2) misspecified weights may bias the GVAR estimates and therefore distort its dynamics; 3) using estimated GVAR weights instead of trade-based ones (to the extent that they differ and the latter bias the global model estimates) should enhance the out-of-sample forecast performance of the GVAR. Devising a method for estimating GVAR weights is particularly useful in contexts where it is not obvious how weights could otherwise be constructed from data. JEL Classification: C33, C53, C61, E17 |
Keywords: | forecasting and simulation, Global macroeconometric modeling, models with panel data |
Date: | 2013–03 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131523&r=for |
By: | Sousa, João; Sousa, Ricardo M. |
Abstract: | The goal of this paper is to analyze the predictability of future asset returns in the context of model uncertainty. Using data for the euro area, the US and the UK, we show that one can improve the forecasts of stock returns using a model averaging approach, and that there is a large amount of model uncertainty. The empirical evidence for the euro area suggests that several macroeconomic, financial and macro-financial variables are consistently among the most prominent determinants of the risk premium. As for the US, only a few predictors play an important role. In the case of the UK, future stock returns are better forecast by financial variables. JEL Classification: E21, G11, E44 |
Keywords: | Bayesian model averaging, Model uncertainty, Stock returns |
Date: | 2013–08 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131575&r=for |
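A hedged sketch of the model-averaging mechanics: posterior model weights approximated by BIC (a common shortcut to the marginal likelihood under equal prior model probabilities, not necessarily the paper's exact prior setup), computed over all small subsets of candidate return predictors.

```python
import numpy as np
from itertools import combinations

def bma_weights(y, X, max_size=2):
    """Approximate Bayesian model averaging weights over all regressions of
    returns y on subsets of predictors X, using the BIC approximation to
    the marginal likelihood with equal prior model probabilities.
    """
    n, k = X.shape
    models, bics = [], []
    for size in range(1, max_size + 1):
        for cols in combinations(range(k), size):
            Xi = np.column_stack([np.ones(n), X[:, cols]])
            beta = np.linalg.lstsq(Xi, y, rcond=None)[0]
            sse = np.sum((y - Xi @ beta) ** 2)
            bic = n * np.log(sse / n) + Xi.shape[1] * np.log(n)
            models.append(cols); bics.append(bic)
    w = np.exp(-0.5 * (np.array(bics) - min(bics)))
    return models, w / w.sum()
```

The resulting weights can then be used to average the individual models' return forecasts, and their dispersion gives a rough read on the degree of model uncertainty.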
By: | Mawuli Segnon; Thomas Lux |
Abstract: | This chapter provides an overview of the recently developed so-called multifractal (MF) approach to modeling and forecasting volatility. We outline the genesis of this approach from similar models of turbulent flows in statistical physics and provide details on different specifications of multifractal time series models in finance, available methods for their estimation, and the current state of their empirical applications. |
Keywords: | Multifractal processes, random measures, stochastic volatility, forecasting |
JEL: | C20 F37 G15 |
Date: | 2013–08 |
URL: | http://d.repec.org/n?u=RePEc:kie:kieliw:1860&r=for |
By: | Meub, Lukas; Proeger, Till; Bizer, Kilian |
Abstract: | Behavioral biases in forecasting, particularly the lack of adjustment from current values and the overall clustering of forecasts, are increasingly explained as resulting from the anchoring heuristic. Nonetheless, the classical anchoring experiments presented in support of this interpretation lack external validity for economic domains, as they omit monetary incentives, feedback that allows for learning effects, and a rational strategy of unbiased predictions. We introduce an experimental design that implements central aspects of forecasting to close the gap between empirical studies on forecasting quality and the laboratory evidence for anchoring effects. Comprising more than 5,000 individual forecasts by 455 participants, our study shows significant anchoring effects. Without monetary incentives, the share of rational predictions drops from 42% to 15% in the anchor's presence. Monetary incentives reduce the average bias to one-third of its original value. Additionally, the average anchor bias is doubled when task complexity is increased, and quadrupled when the underlying risk is increased. The variance of forecasts is significantly reduced by the anchor once risk or cognitive load is increased. Subjects with higher cognitive abilities are on average less biased toward the anchor when task complexity is high. The anchoring bias in our repeated game is not influenced by learning effects, although feedback is provided. Our results support the assumption that biased forecasts and their specific variance can be ascribed to anchoring effects. |
Keywords: | anchoring, cognitive ability, forecasting, heuristics and biases, incentives, laboratory experiment |
JEL: | C90 D03 D80 G17 |
Date: | 2013 |
URL: | http://d.repec.org/n?u=RePEc:zbw:cegedp:166&r=for |
By: | Wolfgang Karl Härdle; Dedy Dwi Prastyo; Dieter |
Abstract: | Probability of default prediction is one of the important tasks of rating agencies, as well as of banks and other financial companies, to measure the default risk of their counterparties. Knowing which predictors significantly contribute to default prediction provides better insight into the fundamentals of credit risk analysis. Default prediction and default predictor selection are two related issues, but many existing approaches address them separately. We employ a unified procedure, a regularization approach with logit as the underlying model, which simultaneously selects the default predictors and optimizes all the parameters within the model. We use Lasso and elastic-net penalty functions as the regularization approach. The methods are applied to predict default of companies in the industrial sector of Southeast Asian countries. The empirical results show that the proposed method has very high prediction accuracy, particularly for companies operating in Indonesia, Singapore, and Thailand. The relevant default predictors across the countries reveal that credit risk analysis is sample specific. A small number of predictors have counter-intuitive sign estimates. |
Keywords: | Default risk, Predictor selection, logit, Lasso, Elastic-net |
JEL: | C13 C61 G33 |
Date: | 2013–08 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2013-037&r=for |
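The regularized logit the abstract describes maps directly onto scikit-learn's elastic-net-penalized LogisticRegression. A self-contained sketch on simulated firm data; the toy data, tuning values, and variable names are assumptions, not the paper's estimates:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

# toy data standing in for firm-level financial ratios and default flags
rng = np.random.default_rng(5)
X = rng.standard_normal((500, 20))                  # candidate predictors
beta = np.zeros(20); beta[:3] = [1.5, -1.0, 0.8]    # only three matter
y = (rng.random(500) < 1 / (1 + np.exp(-(X @ beta - 2.0)))).astype(int)

X_std = StandardScaler().fit_transform(X)

# elastic-net-penalized logit: l1_ratio=1.0 recovers the Lasso special case
clf = LogisticRegression(penalty='elasticnet', solver='saga',
                         l1_ratio=0.5, C=0.5, max_iter=5000)
clf.fit(X_std, y)
selected = np.flatnonzero(clf.coef_[0])             # predictors kept by the penalty
print(selected, clf.coef_[0][selected].round(2))
```

Selection and estimation happen in one fit, which is the unified procedure the abstract contrasts with two-step approaches.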
By: | Villa, Stefania |
Abstract: | This paper compares from a Bayesian perspective three dynamic stochastic general equilibrium models in order to analyse whether financial frictions are empirically relevant in the Euro Area (EA) and, if so, which type of financial frictions is preferred by the data. The models are: (i) Smets and Wouters (2007) (SW); (ii) a SW model with financial frictions originating in non-financial firms à la Bernanke et al. (1999), (SWBGG); and (iii) a SW model with financial frictions originating in financial intermediaries, à la Gertler and Karadi (2011), (SWGK). The comparison between the three estimated models is made along different dimensions: (i) the Bayes factor; (ii) business cycle moments; and (iii) impulse response functions. The analysis of the Bayes factor and of simulated moments provides evidence in favour of the SWGK model. This paper also finds that the SWGK model outperforms the SWBGG model in forecasting EA inflationary pressures in a Phillips curve specification. JEL Classification: C11, E44 |
Keywords: | Bayesian estimation, DSGE Models, Financial Frictions |
Date: | 2013–03 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131521&r=for |
By: | Gross, Marco; Kok Sørensen, Christoffer |
Abstract: | This paper illustrates how a Mixed-Cross-Section Global Vector Autoregressive (MCS-GVAR) model can be set up and solved for the purpose of forecasting and scenario simulation. The application involves two cross-sections, sovereigns and banks, for which we model their credit default swap spreads. Our MCS-GVAR comprises 23 sovereigns and 41 international banks from Europe, the US and Japan. The model is used to conduct systematic shock simulations and thereby compute a measure of spill-over potential within and across the groups of sovereigns and banks. The results point to a number of salient facts: i) spill-over potential in the CDS market was particularly pronounced in 2008 and more recently in 2011-12; ii) while in 2008 contagion primarily went from banks to sovereigns, the direction reversed in 2011-12 in the course of the sovereign debt crisis; iii) the index of spill-over potential suggests that the system of banks and sovereigns has become more densely connected over time. Had large shocks of a size similar to those experienced in the early phase of the crisis hit the system in 2011-12, considerably more pronounced and more synchronized adverse responses across banks and sovereigns would have had to be expected. JEL Classification: C33, C53, C61, E17 |
Keywords: | Contagion, forecasting and simulation, Global macroeconometric modeling, macro-financial linkages, models with panel data, network analysis, spill-overs |
Date: | 2013–08 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20131570&r=for |