on Forecasting |
By: | Charles Rahal |
Abstract: | We take a computational approach to forecasting real and nominal house prices, comparing a large number of models varying by the choice of factors, 'observable endogenous variables' and the number of lags, in addition to classical and modern econometric models. We utilize various optimal model selection and model averaging techniques, comparing them against classical benchmarks. Using six original datasets with large cross-sectional dimensions, we include recent developments in the factor literature as part of our model set. Within a 'pseudo real-time' out-of-sample forecasting context, results show that model averaging across a large set of candidate factor models is able to consistently generate forecasts with favorable properties, but is unable to consistently achieve the single lowest forecast error. Other results include forecast errors that increase with horizon, error magnitudes that vary by country (with the variance of the underlying series), and errors that are lower for nominal, as opposed to real, indexes. |
Keywords: | House Prices, Forecasting, Factor Error Correction Models, FAVARs |
JEL: | C53 R30 |
Date: | 2015–01 |
URL: | http://d.repec.org/n?u=RePEc:bir:birmec:15-05&r=for |
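A minimal sketch of the forecast-combination step at the heart of the paper above, combining hypothetical point forecasts from candidate factor models with either equal or inverse-MSE weights (function name and numbers are illustrative, not from the paper):

    import numpy as np

    def combine_forecasts(forecasts, past_mse=None):
        # Average candidate-model point forecasts. If historical MSEs are
        # supplied, weight each model by inverse MSE; otherwise weight equally.
        forecasts = np.asarray(forecasts, dtype=float)
        if past_mse is None:
            weights = np.full(forecasts.size, 1.0 / forecasts.size)
        else:
            inv = 1.0 / np.asarray(past_mse, dtype=float)
            weights = inv / inv.sum()
        return weights @ forecasts

    # four hypothetical factor-model forecasts of next-period house price growth
    print(combine_forecasts([1.8, 2.1, 2.4, 1.9], past_mse=[0.5, 0.9, 1.2, 0.4]))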
By: | Fornaro, Paolo |
Abstract: | In this paper, I use a large set of macroeconomic and financial predictors to forecast U.S. recession periods. I adopt a Bayesian methodology with shrinkage in the parameters of the probit model for the binary time series tracking the state of the economy. The in-sample and out-of-sample results show that utilizing a large cross-section of indicators yields superior U.S. recession forecasts in comparison to a number of parsimonious benchmark models. Moreover, data-rich models with shrinkage manage to beat the forecasts obtained with the factor-augmented probit model employed in past research. |
Keywords: | Bayesian shrinkage, Business Cycles, Probit model, large cross-sections |
JEL: | C11 C25 E32 E37 |
Date: | 2015–03–15 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:62973&r=for |
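As a rough stand-in for the Bayesian shrinkage above (the paper's sampler is not reproduced here), the posterior mode of a probit under a Gaussian prior is ridge-penalized maximum likelihood. A self-contained sketch on simulated data:

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 50))     # large cross-section of indicators
    y = (X[:, 0] - 0.5 * X[:, 1] + rng.standard_normal(200) > 0).astype(float)

    def neg_log_posterior(beta, X, y, tau2=0.1):
        # probit log-likelihood plus a Gaussian (shrinkage) prior, negated
        p = norm.cdf(X @ beta).clip(1e-10, 1 - 1e-10)
        loglik = np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
        return -(loglik - 0.5 * np.sum(beta ** 2) / tau2)

    res = minimize(neg_log_posterior, np.zeros(X.shape[1]), args=(X, y),
                   method="L-BFGS-B")
    print("fitted recession probability, last obs:", norm.cdf(X[-1] @ res.x))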
By: | Meyer, Brent (Federal Reserve Bank of Atlanta); Tasci, Murat (Federal Reserve Bank of Cleveland) |
Abstract: | This paper evaluates the ability of autoregressive models, professional forecasters, and models that incorporate unemployment flows to forecast the unemployment rate. We pay particular attention to two flows-based approaches, the more reduced-form approach of Barnichon and Nekarda (2012) and the more structural method of Tasci (2012), to assess whether data on unemployment flows are useful in forecasting the unemployment rate. We find that any approach that considers unemployment inflow and outflow rates performs well in the near term. Over longer forecast horizons, Tasci (2012) appears to be a useful framework, even though it was designed mainly as a tool to uncover long-run labor market dynamics such as the "natural" rate. Its usefulness is amplified at specific points in the business cycle, when the unemployment rate is away from the longer-run natural rate. Judgmental forecasts from professional economists tend to be the single best predictor of future unemployment rates. However, combining those judgments with flows-based approaches yields significant gains in forecasting accuracy. |
Keywords: | unemployment forecasting; natural rate; unemployment flows; labor market search |
JEL: | C53 E24 E32 J64 |
Date: | 2015–02–01 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedawp:2015-01&r=for |
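Flows-based forecasts of the kind compared above typically iterate the stock-flow law of motion for the unemployment rate, which converges to the "natural" rate s/(s+f). A minimal sketch with hypothetical constant monthly inflow and outflow rates:

    def forecast_unemployment(u0, s, f, horizon):
        # iterate u_{t+1} = u_t + s*(1 - u_t) - f*u_t, where s is the inflow
        # (separation) rate and f the outflow (job-finding) rate per period;
        # the fixed point is the steady-state rate s / (s + f)
        path, u = [], u0
        for _ in range(horizon):
            u = u + s * (1 - u) - f * u
            path.append(round(u, 4))
        return path

    # e.g. 2% inflow, 30% outflow, starting at 7%: converges toward 6.25%
    print(forecast_unemployment(0.07, 0.02, 0.30, 12))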
By: | Marcos Álvarez-Díaz (Department of Economics, University of Vigo, Galicia, Spain); Rangan Gupta (Department of Economics, University of Pretoria) |
Abstract: | The objective of this paper is to predict, both in-sample and out-of-sample, the consumer price index (CPI) of the United States (US) economy based on monthly data covering the period of 1980:1-2013:12, using a variety of linear (random walk (RW), autoregressive (AR) and seasonally-adjusted autoregressive moving average (SARIMA)) and nonlinear (artificial neural network (ANN) and genetic programming (GP)) univariate models. Our results show that the SARIMA model is superior to the other linear and nonlinear models, as it tends to produce smaller forecast errors; statistically, however, these forecasting gains are not significant relative to higher-order AR and nonlinear models, though simple benchmarks like the RW and AR(1) models are statistically outperformed. Overall, we show that in terms of forecasting the US CPI, accounting for nonlinearity does not necessarily provide us with any statistical gains. |
Keywords: | Linear, Nonlinear, Forecasting, Consumer Price Index |
JEL: | C22 C45 C53 E31 |
Date: | 2015–03 |
URL: | http://d.repec.org/n?u=RePEc:pre:wpaper:201512&r=for |
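A minimal pseudo out-of-sample race between a random walk and a seasonal ARIMA, in the spirit of the linear benchmarks above (toy monthly data, not the 1980:1-2013:12 CPI; the orders are illustrative):

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    y = np.cumsum(0.2 + rng.standard_normal(240))   # toy monthly log price index

    history, test = list(y[:216]), y[216:]
    rw_err, sarima_err = [], []
    for obs in test:
        rw_err.append(obs - history[-1])            # RW forecast: last value
        fit = sm.tsa.SARIMAX(history, order=(1, 1, 1),
                             seasonal_order=(0, 1, 1, 12)).fit(disp=False)
        sarima_err.append(obs - fit.forecast(1)[0]) # one-step-ahead forecast
        history.append(obs)                         # expanding window

    print("RMSE RW:    ", np.sqrt(np.mean(np.square(rw_err))))
    print("RMSE SARIMA:", np.sqrt(np.mean(np.square(sarima_err))))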
By: | Roberto Casarin (Department of Economics, University of Venice Cà Foscari); Federico Bassetti (Department of Mathematics, University of Pavia); Francesco Ravazzolo (Norges Bank and BI Norwegian Business School) |
Abstract: | We introduce a Bayesian approach to predictive density calibration and combination that accounts for parameter uncertainty and model set incompleteness through the use of random calibration functionals and random combination weights. Building on the work of Ranjan and Gneiting (2010) and Gneiting and Ranjan (2013), we use infinite beta mixtures for the calibration. The proposed Bayesian nonparametric approach takes advantage of the flexibility of Dirichlet process mixtures to achieve any continuous deformation of linearly combined predictive distributions. The inference procedure is based on Gibbs sampling and allows us to account for uncertainty in the number of mixture components, the mixture weights, and the calibration parameters. The weak posterior consistency of the Bayesian nonparametric calibration is established under suitable conditions on the unknown true density. We study the methodology in simulation examples with fat tails and multimodal densities, and apply it to density forecasts of daily S&P returns and daily maximum wind speed at the Frankfurt airport. |
Keywords: | Forecast calibration, Forecast combination, Density forecast, Beta mixtures, Bayesian nonparametrics, Slice sampling. |
JEL: | C13 C14 C51 C53 |
Date: | 2015 |
URL: | http://d.repec.org/n?u=RePEc:ven:wpaper:2015:04&r=for |
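The calibration step deforms a linear pool of predictive CDFs through a beta mixture: G(y) = sum_k w_k BetaCDF(F_pool(y); a_k, b_k). A sketch with two hypothetical Gaussian predictive densities and fixed (not estimated) calibration parameters:

    import numpy as np
    from scipy.stats import beta, norm

    def calibrated_cdf(y, means, sds, pool_w, mix_w, a, b):
        # linear pool of Gaussian predictive CDFs, then a beta-mixture deformation
        F_pool = sum(w * norm.cdf(y, m, s) for w, m, s in zip(pool_w, means, sds))
        return sum(wk * beta.cdf(F_pool, ak, bk)
                   for wk, ak, bk in zip(mix_w, a, b))

    grid = np.linspace(-4, 4, 9)
    print(calibrated_cdf(grid, means=[0.0, 1.0], sds=[1.0, 2.0],
                         pool_w=[0.6, 0.4], mix_w=[0.5, 0.5],
                         a=[2.0, 1.0], b=[1.0, 2.0]))

In the paper the mixture is infinite (a Dirichlet process mixture) and the weights and beta parameters are sampled; the sketch fixes a two-component mixture purely for illustration.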
By: | Alessandro Giovannelli (Department of Economics and Finance, University of Rome "Tor Vergata"); Tommaso Proietti (DEF and CEIS, Università di Roma "Tor Vergata") |
Abstract: | We address the problem of selecting the common factors that are relevant for forecasting macroeconomic variables. In economic forecasting using diffusion indexes, the factors are ordered according to their importance in terms of relative variability, and are the same for each variable to predict, i.e. the process of selecting the factors is not supervised by the predictand. We propose a simple and operational supervised method, based on selecting the factors on the basis of their significance in the regression of the predictand on the predictors. Given a potentially large number of predictors, we consider linear transformations obtained by principal components analysis. The orthogonality of the components implies that the standard t-statistics for the inclusion of a particular component are independent, and thus applying a selection procedure that takes into account the multiplicity of the hypothesis tests is both correct and computationally feasible. We focus on three main multiple testing procedures: Holm's sequential method, controlling the family-wise error rate; the Benjamini-Hochberg method, controlling the false discovery rate; and a procedure for incorporating prior information on the ordering of the components, based on weighting the p-values according to the eigenvalues associated with the components. We compare the empirical performance of these methods with the classical diffusion index (DI) approach proposed by Stock and Watson, conducting a pseudo-real-time forecasting exercise that assesses the predictions of 8 macroeconomic variables using factors extracted from a U.S. dataset consisting of 121 quarterly time series. The overall conclusion is that nature is tricky, but essentially benign: the information that is relevant for prediction is effectively condensed by the first few factors. However, variable selection, leading to the exclusion of some of the low-order principal components, can lead to a sizable improvement in forecasting in specific cases. Only in one instance, real personal income, were we able to detect a significant contribution from high-order components. |
Keywords: | Variable selection; Multiple testing; p-value weighting. |
JEL: | C22 C52 C58 |
Date: | 2015–03–12 |
URL: | http://d.repec.org/n?u=RePEc:rtv:ceisrp:332&r=for |
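Because principal components are orthogonal, the t-statistics for including each component in the predictive regression can be tested separately, and standard multiple-testing corrections apply directly. A sketch of the supervised selection step on simulated data, using statsmodels' multipletests (Benjamini-Hochberg shown; method="holm" works the same way):

    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.multitest import multipletests

    rng = np.random.default_rng(2)
    X = rng.standard_normal((150, 40))              # large predictor panel
    y = X[:, :3] @ np.array([1.0, -0.5, 0.3]) + rng.standard_normal(150)

    Xs = (X - X.mean(0)) / X.std(0)                 # standardize predictors
    _, _, Vt = np.linalg.svd(Xs, full_matrices=False)
    F = Xs @ Vt.T[:, :10]                           # first 10 principal components

    ols = sm.OLS(y, sm.add_constant(F)).fit()
    keep = multipletests(ols.pvalues[1:], alpha=0.05, method="fdr_bh")[0]
    print("components retained for forecasting:", np.flatnonzero(keep))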
By: | O'Hare, Colin; Li, Youwei |
Abstract: | In recent years the issue of life expectancy has become of utmost importance to pension providers, insurance companies and government bodies in the developed world. Significant and consistent improvements in mortality rates, and hence life expectancy, have led to unprecedented increases in the cost of providing for older ages. This has resulted in an explosion of stochastic mortality models forecasting trends in mortality data in order to anticipate future life expectancy and hence quantify the costs of providing for future ageing populations. Many stochastic models of mortality rates identify linear trends in mortality rates by time, age and cohort, and forecast these trends into the future using standard statistical methods. These approaches rely on the assumption that structural breaks in the trend do not exist or do not have a significant impact on the mortality forecasts. Recent literature has started to question this assumption. In this paper we carry out a comprehensive investigation of the presence or otherwise of structural breaks in a selection of leading mortality models. We find that structural breaks are present in the majority of cases. In particular, where a structural break is present, we find that allowing for it improves the forecast results significantly. |
Keywords: | Mortality; stochastic models; forecasting; structural breaks |
JEL: | C51 C52 C53 G22 G23 J11 |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:62994&r=for |
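A rough stand-in for the structural-break analysis above is a sup-F (Quandt) scan over candidate break dates in a linear time trend, sketched here on a simulated mortality index with a slope break (the paper tests actual fitted mortality models, not this toy series):

    import numpy as np

    def sup_f_break(y, trim=0.15):
        # largest Chow-type F statistic over interior break dates for the
        # model y_t = a + b*t + e_t (two parameters per regime)
        n = len(y)
        X = np.column_stack([np.ones(n), np.arange(n, dtype=float)])
        rss_full = np.sum(np.linalg.lstsq(X, y, rcond=None)[1])
        best = (-np.inf, None)
        for k in range(int(trim * n), int((1 - trim) * n)):
            rss_split = sum(np.sum(np.linalg.lstsq(X[i], y[i], rcond=None)[1])
                            for i in (slice(None, k), slice(k, None)))
            f_stat = ((rss_full - rss_split) / 2) / (rss_split / (n - 4))
            best = max(best, (f_stat, k))
        return best                                  # (sup-F value, break date)

    rng = np.random.default_rng(3)
    kappa = np.concatenate([-0.5 * np.arange(40),          # pre-break trend
                            -20.0 - 1.5 * np.arange(40)])  # steeper post-break
    print(sup_f_break(kappa + rng.standard_normal(80)))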
By: | Golinski, Adam; Madeira, Joao; Rambaccussing, Dooruj |
Abstract: | We re-examine the dynamics of returns and dividend growth within the present-value framework of stock prices. We find that the finite-sample order of integration of returns is approximately equal to the order of integration of the first-differenced price-dividend ratio. As such, the traditional return forecasting regressions based on the price-dividend ratio are invalid. Moreover, the nonstationary long memory behaviour of the price-dividend ratio induces antipersistence in returns. This suggests that expected returns should be modelled as an ARFIMA process, and we show that this improves the forecasting ability of the present-value model both in-sample and out-of-sample. |
Keywords: | price-dividend ratio, persistence, fractional integration, return predictability, present-value model. |
JEL: | C32 C58 G12 |
Date: | 2014–09–13 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:58554&r=for |
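The building block of the ARFIMA modelling proposed above is the fractional difference operator (1-L)^d, whose weights follow the recursion pi_0 = 1, pi_k = pi_{k-1}(k-1-d)/k. A minimal sketch (toy series, not the paper's price-dividend data):

    import numpy as np

    def frac_diff(x, d):
        # truncated fractional difference (1-L)^d via the binomial recursion
        w = np.empty(len(x))
        w[0] = 1.0
        for k in range(1, len(x)):
            w[k] = w[k - 1] * (k - 1 - d) / k
        return np.array([w[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

    rng = np.random.default_rng(4)
    pd_ratio = np.cumsum(rng.standard_normal(300))  # toy nonstationary pd ratio
    returns_proxy = frac_diff(pd_ratio, d=0.7)      # d in (0.5, 1): long memory
    print(returns_proxy[:5])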
By: | Boriss Siliverstovs (KOF Swiss Economic Institute, ETH Zurich, Switzerland) |
Abstract: | In this paper we extend the targeted-regressor approach suggested in Bai and Ng (2008) for variables sampled at the same frequency to mixed-frequency data. Our MIDASSO approach is a combination of the unrestricted MIxed-frequency DAta-Sampling approach (U-MIDAS) (see Foroni et al., 2015; Castle et al., 2009; Bec and Mogliani, 2013), and the LASSO-type penalised regression used in Bai and Ng (2008), called the elastic net (Zou and Hastie, 2005). We illustrate our approach by forecasting the quarterly real GDP growth rate in Switzerland. |
Keywords: | LASSO, Switzerland, Forecasting, Real-time data, MIDAS |
JEL: | C22 C53 |
Date: | 2015–03 |
URL: | http://d.repec.org/n?u=RePEc:kof:wpskof:15-375&r=for |
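U-MIDAS gives every high-frequency lag its own column in the low-frequency regression and lets the penalized estimator pick the weights; with scikit-learn's ElasticNetCV standing in for the elastic net step, a sketch looks like this (simulated data; the lag length and l1_ratio are illustrative):

    import numpy as np
    from sklearn.linear_model import ElasticNetCV

    rng = np.random.default_rng(5)
    monthly = rng.standard_normal(360)       # 30 years of a monthly indicator
    gdp = rng.standard_normal(120)           # toy quarterly GDP growth rates

    # U-MIDAS: the 6 most recent monthly observations per quarter, unrestricted
    n_lags = 6
    rows = [monthly[3 * q + 3 - n_lags:3 * q + 3][::-1] for q in range(2, 120)]
    X, y = np.array(rows), gdp[2:]

    model = ElasticNetCV(l1_ratio=0.5, cv=5).fit(X, y)
    print("selected lag coefficients:", model.coef_)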
By: | Joshua C.C. Chan |
Abstract: | This paper generalizes the popular stochastic volatility in mean model of Koopman and Hol Uspensky (2002) to allow for time-varying parameters in the conditional mean. The estimation of this extension is nontrivial since the volatility appears in both the conditional mean and the conditional variance, and its coefficient in the former is time-varying. We develop an efficient Markov chain Monte Carlo algorithm based on band and sparse matrix algorithms instead of the Kalman filter to estimate this more general variant. We illustrate the methodology with an application involving US, UK and German inflation. The estimation results show substantial time-variation in the coefficient associated with the volatility, highlighting the empirical relevance of the proposed extension. Moreover, in a pseudo out-of-sample forecasting exercise, the proposed variant also forecasts better than various standard benchmarks. |
Keywords: | nonlinear, state space, inflation forecasting, inflation uncertainty |
JEL: | C11 C15 C53 C58 E31 |
Date: | 2015–03 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2015-07&r=for |
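The band-matrix idea replaces Kalman recursions with one banded solve: under a random-walk state equation the joint posterior precision of all states is tridiagonal, so both the posterior mean and a full joint draw come from banded Cholesky factorizations. A sketch for a generic time-varying-coefficient regression (not Chan's full TVP-SVM; the variances are fixed here rather than sampled):

    import numpy as np
    from scipy.linalg import cholesky_banded, solve_banded

    def draw_tvp_states(y, x, sig2=0.5, omega2=0.1, rng=None):
        # Model: y_t = x_t*b_t + e_t, b_t = b_{t-1} + u_t. Joint precision of
        # b_1..b_T is K = H'H/omega2 + diag(x^2)/sig2 with H = I - L, which is
        # tridiagonal: no Kalman filter needed.
        rng = rng or np.random.default_rng(6)
        T = len(y)
        diag = np.full(T, 2.0 / omega2) + x ** 2 / sig2
        diag[-1] -= 1.0 / omega2           # H'H has 1, not 2, in the last cell
        sub = np.full(T, -1.0 / omega2); sub[-1] = 0.0
        C = cholesky_banded(np.vstack([diag, sub]), lower=True)  # K = C C'
        upper = np.vstack([np.concatenate([[0.0], C[1][:-1]]), C[0]])
        w = solve_banded((1, 0), C, x * y / sig2)    # solve C w = X'y/sig2
        mean = solve_banded((0, 1), upper, w)        # solve C' m = w
        # draw from N(mean, K^{-1}) by solving C' z = standard normal noise
        return mean + solve_banded((0, 1), upper, rng.standard_normal(T))

    rng = np.random.default_rng(7)
    x = rng.standard_normal(100)
    b = np.cumsum(0.2 * rng.standard_normal(100))    # true drifting coefficient
    y = x * b + np.sqrt(0.5) * rng.standard_normal(100)
    print(draw_tvp_states(y, x)[:5])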
By: | Michael P Clements (ICMA Centre, Henley Business School, University of Reading) |
Abstract: | We consider a number of ways of testing whether macroeconomic forecasters herd or anti-herd, i.e., whether they shade their forecasts towards those of others or purposefully exaggerate their differences. When applied to survey respondents' expectations of inflation and output growth, the tests indicate conflicting behaviour. We show that this can be explained in terms of a simple model in which differences between forecasters are primarily due to idiosyncratic factors or reporting errors rather than imitative behaviour. Models of forecaster heterogeneity that stress informational rigidities will also falsely indicate imitative behaviour. |
Date: | 2014–10 |
URL: | http://d.repec.org/n?u=RePEc:rdg:icmadp:icma-dp2014-12&r=for |
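One common regression-based check for imitation (a stand-in for the battery of tests the paper considers, not a test taken from it) regresses a forecaster's revision toward the consensus on the consensus-minus-prior gap: a slope strictly between 0 and 1 is consistent with herding, a slope above 1 with anti-herding. A toy sketch on simulated forecasts:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n = 500
    prior = rng.standard_normal(n)                 # forecaster's earlier forecast
    consensus = prior + 0.5 * rng.standard_normal(n)
    lam = 0.4                                      # true degree of imitation
    forecast = prior + lam * (consensus - prior) + 0.1 * rng.standard_normal(n)

    res = sm.OLS(forecast - prior,
                 sm.add_constant(consensus - prior)).fit()
    print("herding coefficient:", res.params[1], "s.e.:", res.bse[1])

As the abstract notes, idiosyncratic noise or reporting error can also move such a coefficient, which is exactly the identification problem the paper highlights.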
By: | Michael V. Klibanov; Andrey V. Kuzhuget |
Abstract: | A new mathematical model for the Black-Scholes equation is proposed to forecast option prices. This model includes a new interval for the price of the underlying stock as well as new initial and boundary conditions. Conventional notions of maturity time and strike price are not used. The Black-Scholes equation is solved as a parabolic equation with reversed time, which is an ill-posed problem; thus, a regularization method is used to solve it. This idea is verified on real market data for twenty liquid options. A trading strategy is proposed. This strategy indicates that our method is profitable on at least those twenty options. We conjecture that our method might lead to significant profits for those financial institutions which trade large amounts of options. We caution, however, that detailed further studies are necessary to verify this conjecture. |
Date: | 2015–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1503.03567&r=for |
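Backward-in-time parabolic problems are ill-posed because high-frequency components are amplified without bound, so some regularization must damp them. Since the Black-Scholes equation reduces to the heat equation under a standard change of variables, a heavily simplified illustration is Tikhonov regularization of the backward heat problem on a periodic grid (illustrative only; not the authors' regularization method or trading strategy):

    import numpy as np

    def backward_heat_tikhonov(u_final, t, L=1.0, delta=1e-4):
        # Recover u(x, 0) from u(x, t) for u_t = u_xx. Exact inversion scales
        # Fourier mode k by exp(k^2 t), which explodes; Tikhonov replaces it
        # by exp(k^2 t) / (1 + delta*exp(2 k^2 t)), damping high frequencies.
        n = len(u_final)
        k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
        a = np.minimum(k ** 2 * t, 350.0)       # clip to avoid overflow; the
        mult = np.exp(a) / (1 + delta * np.exp(2 * a))  # clipped modes are ~0
        return np.fft.ifft(np.fft.fft(u_final) * mult).real

    x = np.linspace(0, 1, 128, endpoint=False)
    u_T = (np.exp(-100 * (x - 0.5) ** 2)        # smooth 'observed' profile ...
           + 1e-3 * np.random.default_rng(9).standard_normal(128))  # ... plus noise
    u_0 = backward_heat_tikhonov(u_T, t=0.01)
    print(u_0[60:68].round(3))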