New Economics Papers on Forecasting
By: | Souhaib Ben Taieb; James W. Taylor; Rob J. Hyndman
Abstract: | Many applications require forecasts for a hierarchy comprising a set of time series along with aggregates of subsets of these series. Although forecasts can be produced independently for each series in the hierarchy, typically this does not lead to coherent forecasts -- the property that forecasts add up appropriately across the hierarchy. State-of-the-art hierarchical forecasting methods usually reconcile the independently generated forecasts to satisfy the aggregation constraints. A fundamental limitation of prior research is that it has considered only the problem of forecasting the mean of each time series. We consider the situation where probabilistic forecasts are needed for each series in the hierarchy. We define forecast coherency in this setting, and propose an algorithm to compute predictive distributions for each series in the hierarchy. Our algorithm has the advantage of synthesizing information from different levels in the hierarchy through a sparse forecast combination and a probabilistic hierarchical aggregation. We evaluate the accuracy of our forecasting algorithm on both simulated data and large-scale electricity smart meter data. The results show consistent performance gains compared to state-of-the-art methods.
Keywords: | Forecast combination, probabilistic forecast, copula, machine learning
JEL: | C53 Q47 C32 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2017-3&r=for |
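A minimal numerical sketch (not the authors' algorithm, and with hypothetical series names and numbers) of what the coherency property in the abstract above means for a two-level hierarchy, and how a simple bottom-up aggregation restores it:

    import numpy as np

    # Independently produced base forecasts for one horizon (hypothetical numbers).
    base = {"total": 103.0, "A": 60.0, "B": 45.0}

    # Incoherent: the children do not add up to the parent.
    print(base["A"] + base["B"] - base["total"])   # 2.0, so the base forecasts are not coherent

    # Bottom-up reconciliation: drop the base forecast of the total and aggregate the
    # bottom-level forecasts, so the hierarchy adds up by construction.
    S = np.array([[1, 1],    # total = A + B
                  [1, 0],    # A
                  [0, 1]])   # B
    bottom = np.array([base["A"], base["B"]])
    coherent = S @ bottom
    print(coherent)          # [105.  60.  45.] -- coherent by construction

Reconciliation methods generalize the bottom-up step to combinations of base forecasts from all levels of the hierarchy; the paper above extends the idea from point forecasts to full predictive distributions.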
By: | Ruipeng Liu (Department of Finance, Deakin Business School, Deakin University, Melbourne, Australia); Riza Demirer (Department of Economics & Finance, Southern Illinois University Edwardsville, Edwardsville, USA); Rangan Gupta (Department of Economics, University of Pretoria, Pretoria, South Africa and IPAG Business School, Paris, France); Mark E. Wohar (College of Business Administration, University of Nebraska at Omaha, Omaha, USA and School of Business and Economics, Loughborough University, Leicestershire,UK) |
Abstract: | This paper examines volatility linkages and forecasting for stock and foreign exchange (FX) markets from a novel perspective by utilizing a bivariate Markov-switching multifractal model (MSM) that accounts for possible interactions between stock and FX markets. Examining daily data from the advanced G6 and emerging BRICS nations, we compare the out-of-sample volatility forecasts from GARCH, univariate MSM and bivariate MSM models. Our findings show that the GARCH model generally offers superior volatility forecasts for short horizons, particularly for FX returns in advanced markets. Multifractal models, on the other hand, offer significant improvements for longer forecast horizons, consistently across most markets. Finally, the bivariate MSM provides superior forecasts compared to the univariate alternative in most G6 countries and more consistently for FX returns, while its benefits are limited in the case of emerging markets. Overall, our findings suggest that multifractal models can indeed improve out-of-sample volatility forecasts, particularly for longer horizons, while the bivariate specification can potentially extend the superior forecast performance to shorter horizons as well.
Keywords: | Long memory, multifractal models, simulation based inference, volatility forecasting, BRICS |
JEL: | C11 C13 G15 |
Date: | 2017–04 |
URL: | http://d.repec.org/n?u=RePEc:pre:wpaper:201728&r=for |
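As a point of reference for the GARCH benchmark mentioned above, the sketch below fits a GARCH(1,1) and produces multi-horizon variance forecasts with the open-source arch package on a simulated return series; the Markov-switching multifractal models themselves rely on simulation-based inference that is not reproduced here.

    import numpy as np
    from arch import arch_model

    rng = np.random.default_rng(0)
    returns = rng.standard_normal(1500)          # placeholder daily returns, in percent

    am = arch_model(returns, vol="GARCH", p=1, q=1, dist="normal")
    res = am.fit(disp="off")

    # Out-of-sample variance forecasts for horizons 1..20 from the end of the sample;
    # the multifractal comparison targets the longer of these horizons.
    fcast = res.forecast(horizon=20)
    print(fcast.variance.iloc[-1])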
By: | Yin-Wong Cheung; Menzie D. Chinn; Antonio Garcia Pascual; Yi Zhang |
Abstract: | Previous assessments of nominal exchange rate determination, following Meese and Rogoff (1983), have focused upon a narrow set of models. Cheung et al. (2005) augmented the usual suspects with productivity-based models and "behavioral equilibrium exchange rate" models, and assessed performance at horizons of up to 5 years. In this paper, we further expand the set of models to include Taylor rule fundamentals and yield curve factors, and incorporate shadow rates and risk and liquidity factors. The performance of these models is compared against the random walk benchmark. The models are estimated in error correction and first-difference specifications. We examine model performance at various forecast horizons (1 quarter, 4 quarters, 20 quarters) using differing metrics (mean squared error, direction of change), as well as the “consistency” test of Cheung and Chinn (1998). No model consistently outperforms a random walk by a mean squared error measure, although purchasing power parity does fairly well. Moreover, along a direction-of-change dimension, certain structural models do outperform a random walk with statistical significance. While one finds that these forecasts are cointegrated with the actual values of exchange rates, in most cases the elasticity of the forecasts with respect to the actual values is different from unity. Overall, model/specification/currency combinations that work well in one period will not necessarily work well in another period.
JEL: | F31 F47 |
Date: | 2017–03 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:23267&r=for |
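The two headline evaluation metrics in the abstract above, the mean squared error ratio against a driftless random walk and the direction-of-change hit rate, can be computed as in this sketch; the exchange rate series and the stand-in "model" forecasts are simulated, and none of the paper's structural models are estimated.

    import numpy as np

    rng = np.random.default_rng(1)
    log_fx = np.cumsum(0.01 * rng.standard_normal(120))      # hypothetical quarterly log exchange rate

    h = 4                                                     # 4-quarter horizon
    actual = log_fx[h:]
    rw_forecast = log_fx[:-h]                                 # random walk: no change over h quarters
    model_forecast = rw_forecast + 0.005 * rng.standard_normal(actual.size)  # placeholder "model"

    # Ratio below 1 would mean the model beats the random walk on mean squared error.
    mse_ratio = np.mean((model_forecast - actual) ** 2) / np.mean((rw_forecast - actual) ** 2)

    # Direction of change: share of periods where the model calls the sign of the h-period change.
    direction_hit = np.mean(np.sign(model_forecast - log_fx[:-h]) == np.sign(actual - log_fx[:-h]))
    print(round(mse_ratio, 3), round(direction_hit, 3))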
By: | Travis J. Berge |
Abstract: | Survey-based measures of inflation expectations are not informationally efficient, yet they carry important information about future inflation. This paper explores the economic significance of the informational inefficiencies of survey expectations. A model selection algorithm is applied to the inflation expectations of households and professionals using a large panel of macroeconomic data. The expectations of professionals are best described by different indicators than the expectations of households. A forecast experiment finds that it is difficult to exploit the informational inefficiencies to improve inflation forecasts, suggesting that the economic cost of the surveys' deviation from rationality is not large.
Keywords: | Informational efficiency ; Phillips curve ; Survey based inflation expectations ; Boosting ; Inflation forecasting ; Machine learning |
JEL: | C53 E31 E37 |
Date: | 2017–04 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedgfe:2017-46&r=for |
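A hedged illustration of the efficiency test implicit in the abstract above: if survey expectation errors can be predicted out of sample from macro indicators known at survey time, the expectations are not informationally efficient. The sketch uses simulated data and scikit-learn's gradient boosting as a stand-in for the paper's model selection algorithm.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    n, k = 400, 30
    X = rng.standard_normal((n, k))                 # stand-in macro indicators known at survey time
    # Survey expectation error with a small predictable component in indicator 0.
    error = 0.3 * X[:, 0] + rng.standard_normal(n)

    X_tr, X_te, y_tr, y_te = train_test_split(X, error, test_size=0.25, random_state=0)
    gb = GradientBoostingRegressor(n_estimators=200, max_depth=2, learning_rate=0.05)
    gb.fit(X_tr, y_tr)

    # Out-of-sample R^2 of the error regression: a value near zero would suggest little
    # exploitable inefficiency, echoing the paper's forecast experiment.
    print(gb.score(X_te, y_te))
    # Indicators the boosting procedure leaned on most heavily.
    print(np.argsort(gb.feature_importances_)[::-1][:5])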
By: | Alireza Ermagun; David Levinson (Nexus (Networks, Economics, and Urban Systems) Research Group, Department of Civil Engineering, University of Minnesota) |
Abstract: | This paper systematically reviews studies that forecast short-term traffic conditions using spatial dependence between links. We synthesize 130 extracted research papers from two perspectives: (1) methodological framework, and (2) approach for capturing and incorporating spatial information. On the methodological side, spatial information boosts the accuracy of prediction, particularly in congested traffic regimes and for longer horizons. There is a broad and longstanding agreement that non-parametric methods outperform naive statistical methods such as the historical average, real-time profiles, and exponential smoothing. However, to draw a definitive conclusion regarding the performance of neural network methods against STARIMA family models, more research is needed in this field. On the spatial dependency detection side, we believe that a large gulf exists between the realistic spatial dependence of traffic links on a real network and the networks studied to date. This systematic review highlights that the field is approaching its maturity, while it is still as crude as it is perplexing. It is perplexing in its conceptual methodology, and it is crude in its capture of spatial information.
Keywords: | Traffic Forecasting, Spatial Correlation, Systematic Review, Traffic Network, Life-cycle |
JEL: | R40 C21 C22 B23 |
Date: | 2016 |
URL: | http://d.repec.org/n?u=RePEc:nex:wpaper:spatiotemporal&r=for |
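The simplest way of "incorporating spatial information" discussed in reviews of this kind is to add lagged observations from neighboring links to a link's own lags. The sketch below compares the two feature sets with a linear model on simulated data; the network, data, and lag structure are all hypothetical and no method from the reviewed literature is reproduced.

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(3)
    T = 500
    upstream = 50 + 5 * rng.standard_normal(T)               # speed on a hypothetical upstream link
    target = np.empty(T)
    target[0] = 50.0
    for t in range(1, T):                                     # target link partly driven by its neighbor
        target[t] = 0.5 * target[t - 1] + 0.4 * upstream[t - 1] + 5 + rng.standard_normal()

    lag = 1
    y = target[lag:]
    own_lags = target[:-lag].reshape(-1, 1)                   # temporal features only
    spatio_temporal = np.column_stack([target[:-lag], upstream[:-lag]])  # add the neighbor's lag

    split = 400
    for name, X in [("own lags only", own_lags), ("own + upstream lags", spatio_temporal)]:
        model = LinearRegression().fit(X[:split], y[:split])
        rmse = np.sqrt(np.mean((model.predict(X[split:]) - y[split:]) ** 2))
        print(name, round(rmse, 3))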