on Forecasting |
By: | Gargano, Antonio; Timmermann, Allan G |
Abstract: | We compare different approaches to accounting for parameter instability in macroeconomic forecasting models, contrasting models that assume small, frequent parameter changes with models whose parameters exhibit large, rare changes. An empirical out-of-sample forecasting exercise for U.S. GDP growth and inflation suggests that models allowing for parameter instability generate more accurate density forecasts than constant-parameter models, although they fail to produce better point forecasts. Model combinations deliver similar gains in predictive performance, although they fail to improve on the predictive accuracy of the single best model, a specification that allows for time-varying parameters and stochastic volatility. (A stylized sketch of this density-versus-point distinction follows this entry.) |
Keywords: | GDP growth; inflation; regime switching; stochastic volatility; time-varying parameters |
JEL: | C22 C53 |
Date: | 2016–06 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:11355&r=for |
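The contrast above between density and point forecasts can be illustrated with a small simulation. The sketch below is not the authors' code: the data-generating process, window length and variable names are all illustrative assumptions. The point forecast (the mean) is held fixed, so RMSE is unchanged, while the average log score improves once the volatility estimate is allowed to vary.

```python
# Illustrative sketch: why allowing for changing volatility can improve density forecasts
# without improving point forecasts. DGP, window and names are hypothetical.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
T = 400
sigma = np.exp(np.cumsum(0.05 * rng.standard_normal(T)))  # slowly drifting volatility
y = 0.5 + sigma * rng.standard_normal(T)                   # constant mean, changing variance

window = 40
log_score_const, log_score_tv = [], []
for t in range(window, T):
    mu = y[:t].mean()                      # same point forecast for both forecast densities
    s_const = y[:t].std(ddof=1)            # constant-volatility density
    s_tv = y[t - window:t].std(ddof=1)     # rolling-window volatility proxy
    log_score_const.append(norm.logpdf(y[t], mu, s_const))
    log_score_tv.append(norm.logpdf(y[t], mu, s_tv))

print("avg log score, constant variance:   ", np.mean(log_score_const))
print("avg log score, time-varying variance:", np.mean(log_score_tv))
# Point forecasts are identical, so RMSE is unchanged; only the predictive densities differ.
```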
By: | Elliott, Graham; Timmermann, Allan G |
Abstract: | Practices used to address economic forecasting problems have undergone substantial changes over recent years. We review how such changes have influenced the ways in which a range of forecasting questions are being addressed. We also discuss the promises and challenges arising from access to big data. Finally, we review empirical evidence and the experience accumulated from applying forecasting methods to a range of economic and financial variables. |
Keywords: | Big Data; Forecast evaluation; Forecast models; Model Instability; Parameter Estimation |
Date: | 2016–06 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:11354&r=for |
By: | Michal Franta |
Abstract: | Iterated multi-step forecasts are usually constructed assuming the same model in each forecasting iteration. In this paper, the model coefficients are allowed to change across forecasting iterations according to the in-sample prediction performance at a particular forecasting horizon. The technique can thus be viewed as a combination of iterated and direct forecasting. The superior point and density forecasting performance of this approach is demonstrated on a standard medium-scale vector autoregression employing variables used in the Smets and Wouters (2007) model of the US economy. Model estimation and forecasting are carried out in a Bayesian framework on data covering the period 1959Q1–2016Q1. (A stylized univariate sketch of the idea follows this entry.) |
Keywords: | Bayesian estimation, direct forecasting, iterated forecasting, multi-step forecasts, VAR |
JEL: | C11 C32 C53 |
Date: | 2016–06 |
URL: | http://d.repec.org/n?u=RePEc:cnb:wpaper:2016/05&r=for |
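The mechanism described above, letting the coefficients used at horizon h depend on in-sample performance at that horizon, can be sketched in a stylized univariate setting. The code below is not the paper's Bayesian VAR; the AR(1) data, the selection rule and all names are illustrative assumptions.

```python
# Minimal sketch of mixing iterated and direct forecasting at each horizon: at horizon h, use
# whichever coefficient (iterated AR(1) power vs direct h-step projection) fits the in-sample
# h-step predictions better. A stylized stand-in for the paper's VAR setting.
import numpy as np

rng = np.random.default_rng(1)
T, H = 300, 4
y = np.zeros(T)
for t in range(1, T):
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()   # toy AR(1) data

def ols_slope(x, z):
    return np.dot(x, z) / np.dot(x, x)

phi1 = ols_slope(y[:-1], y[1:])                      # one-step coefficient (iterated building block)
forecasts = []
for h in range(1, H + 1):
    iterated_coef = phi1 ** h                        # iterate the one-step model h times
    direct_coef = ols_slope(y[:-h], y[h:])           # direct h-step projection
    # pick the coefficient with the smaller in-sample h-step squared error
    sse_iter = np.sum((y[h:] - iterated_coef * y[:-h]) ** 2)
    sse_dir = np.sum((y[h:] - direct_coef * y[:-h]) ** 2)
    coef = iterated_coef if sse_iter < sse_dir else direct_coef
    forecasts.append(coef * y[-1])
print("h-step forecasts:", np.round(forecasts, 3))
```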
By: | Brave, Scott (Federal Reserve Bank of Chicago); Butters, R. Andrew (Indiana University); Justiniano, Alejandro (Federal Reserve Bank of Chicago) |
Abstract: | Mixed frequency Bayesian vector autoregressions (MF-BVARs) allow forecasters to incorporate a large number of mixed frequency indicators into forecasts of economic activity. This paper evaluates the forecast performance of MF-BVARs relative to surveys of professional forecasters and investigates the influence of certain specification choices on this performance. We leverage a novel real-time dataset to conduct an out-of-sample forecasting exercise for U.S. real gross domestic product (GDP). MF-BVARs are shown to provide an attractive alternative to surveys of professional forecasters for forecasting GDP growth. However, certain specification choices such as model size and prior selection can affect their relative performance. |
Keywords: | Mixed frequency; Bayesian VAR; Real-time data; Nowcasting |
JEL: | C32 C53 E37 |
Date: | 2016–05–20 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedhwp:wp-2016-05&r=for |
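As a rough illustration of the mixed-frequency problem the MF-BVARs above address, the sketch below aligns simulated monthly indicators with quarterly GDP through a simple bridge regression. This is not an MF-BVAR, which handles the frequency mismatch in a Bayesian state-space framework; it only shows the frequency-alignment step, and all data and names are hypothetical.

```python
# Simplified bridge-equation sketch: average the monthly indicators within each quarter, then
# regress quarterly GDP on them and nowcast the latest quarter. Data are simulated.
import numpy as np

rng = np.random.default_rng(2)
n_quarters = 80
monthly = rng.standard_normal((n_quarters * 3, 2))            # two monthly indicators
quarterly_x = monthly.reshape(n_quarters, 3, 2).mean(axis=1)  # within-quarter averages
gdp = 0.5 * quarterly_x[:, 0] - 0.3 * quarterly_x[:, 1] + 0.2 * rng.standard_normal(n_quarters)

X = np.column_stack([np.ones(n_quarters - 1), quarterly_x[:-1]])   # hold out the last quarter
beta, *_ = np.linalg.lstsq(X, gdp[:-1], rcond=None)
nowcast = np.array([1.0, *quarterly_x[-1]]) @ beta                  # nowcast for the latest quarter
print("GDP nowcast:", round(float(nowcast), 3), "actual:", round(float(gdp[-1]), 3))
```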
By: | Geert Dhaene; Jianbin Wu |
Abstract: | We introduce and evaluate mixed-frequency multivariate GARCH models for forecasting low-frequency (weekly or monthly) multivariate volatility based on high-frequency intra-day returns (at five-minute intervals) and on the overnight returns. The low-frequency conditional volatility matrix is modelled as a weighted sum of an intra-day and an overnight component, driven by the intra-day and the overnight returns, respectively. The components are specified as multivariate GARCH (1,1) models of the BEKK type, adapted to the mixed-frequency data setting. For the intra-day component, the squared high-frequency returns enter the GARCH model through a parametrically specified mixed-data sampling (MIDAS) weight function or through the sum of the intra-day realized volatilities. For the overnight component, the squared overnight returns enter the model with equal weights. Alternatively, the low-frequency conditional volatility matrix may be modelled as a single-component BEKK-GARCH model where the overnight returns and the high-frequency returns enter through the weekly realized volatility (defined as the unweighted sum of squares of overnight and high-frequency returns), or where the overnight returns are simply ignored. All model variants may further be extended by allowing for a non-parametrically estimated slowly-varying long-run volatility matrix. The proposed models are evaluated using five-minute and overnight return data on four DJIA stocks (AXP, GE, HD, and IBM) from January 1988 to November 2014. The focus is on forecasting weekly volatilities (defined as the low frequency). The mixed-frequency GARCH models are found to systematically dominate the low-frequency GARCH model in terms of in-sample fit and out-of-sample forecasting accuracy. They also exhibit much lower low-frequency volatility persistence than the low-frequency GARCH model. Among the mixed-frequency models, the low-frequency persistence estimates decrease as the data frequency increases from daily to five-minute frequency, and as overnight returns are included. That is, ignoring the available high-frequency information leads to spuriously high volatility persistence. Among the other findings are that the single-component model variants perform worse than the two-component variants; that the overnight volatility component exhibits more persistence than the intra-day component; and that MIDAS weighting performs better than not weighting at all (i.e., than realized volatility). |
Date: | 2016–06 |
URL: | http://d.repec.org/n?u=RePEc:ete:ceswps:544330&r=for |
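A stylized rendering of the two-component structure described above, with an intra-day BEKK component driven by MIDAS-weighted five-minute returns and an overnight BEKK component with equal weights. It is based only on the abstract's description; the paper's exact parameterization and scaling may differ.

```latex
% Stylized two-component mixed-frequency BEKK, reconstructed from the abstract's description.
\[
H_t = H^{\mathrm{id}}_t + H^{\mathrm{on}}_t,
\]
\[
H^{\mathrm{id}}_t = C_1 C_1' + A\Bigl(\sum_{i=1}^{m} w_i(\theta)\, r^{(i)}_{t-1} r^{(i)\prime}_{t-1}\Bigr)A' + B\,H^{\mathrm{id}}_{t-1}B',
\]
\[
H^{\mathrm{on}}_t = C_2 C_2' + \tilde{A}\Bigl(\tfrac{1}{n}\sum_{j=1}^{n} s^{(j)}_{t-1} s^{(j)\prime}_{t-1}\Bigr)\tilde{A}' + \tilde{B}\,H^{\mathrm{on}}_{t-1}\tilde{B}',
\]
% H_t: low-frequency (weekly) conditional covariance matrix;
% r^{(i)}_{t-1}: five-minute intra-day return vectors of period t-1, weighted by a parametric
% MIDAS function w_i(theta); s^{(j)}_{t-1}: overnight return vectors, entering with equal weights.
```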
By: | Jack Fosten (University of East Anglia) |
Abstract: | This paper provides an extension of Diebold-Mariano-West (DMW) forecast accuracy tests to allow for factor-augmented models to be compared with non-nested benchmarks. The out-of-sample approach to forecast evaluation requires that both the factors and the forecasting model parameters are estimated in a rolling fashion, which poses several new challenges that we address in this paper. Firstly, we show the convergence rates of factors estimated in different rolling windows, and then give conditions under which the asymptotic distribution of the DMW test statistic is not affected by factor estimation error. Secondly, we draw attention to the issue of "sign-changing" across rolling windows of factor estimates and factor-augmented model coefficients, caused by the lack of sign identification when using Principal Components Analysis to estimate the factors. We show that arbitrary sign-changing does not affect the distribution of the DMW test statistic, but it does prohibit the construction of valid bootstrap critical values using existing procedures. We solve this problem by proposing a novel normalization for rolling factor estimates, which has the effect of matching the sign of factors estimated in different rolling windows. We establish the first-order validity of a simple-to-implement block bootstrap procedure and illustrate its properties using Monte Carlo simulations and an empirical application to forecasting U.S. CPI inflation. (A sketch of the sign-matching idea follows this entry.) |
Keywords: | bootstrap, diffusion index, factor model, predictive ability |
JEL: | C12 C22 C38 C53 |
Date: | 2016–01–28 |
URL: | http://d.repec.org/n?u=RePEc:uea:ueaeco:2016_05&r=for |
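The "sign-changing" problem described above is easy to reproduce: the first principal component is identified only up to sign, so factors estimated in consecutive rolling windows can flip. The sketch below re-signs each window's factor so that it correlates positively with the previous window's factor over their overlap; this illustrates the idea but is not necessarily the paper's proposed normalization, and all data and names are simulated assumptions.

```python
# Sketch of sign-alignment for factors estimated by principal components in rolling windows.
import numpy as np

rng = np.random.default_rng(3)
T, N, window = 200, 20, 120
f = np.cumsum(rng.standard_normal(T)) / 10                     # persistent latent factor
X = np.outer(f, rng.standard_normal(N)) + rng.standard_normal((T, N))

def pca_factor(Z):
    Z = Z - Z.mean(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[0]                       # first principal component scores (sign arbitrary)

prev = None
for start in range(0, T - window + 1, 20):
    fac = pca_factor(X[start:start + window])
    if prev is not None:
        overlap = window - 20              # observations shared with the previous window
        if np.dot(fac[:overlap], prev[20:]) < 0:
            fac = -fac                     # flip so the factor's sign matches across windows
    prev = fac
print("last window factor (first 5 values):", np.round(prev[:5], 3))
```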
By: | Chris McDonald; Craig Thamotheram; Shaun P. Vahey; Elizabeth C. Wakerly (Reserve Bank of New Zealand) |
Abstract: | We consider the fundamental issue of what makes a 'good' probability forecast for a central bank operating within an inflation targeting framework. We provide two examples in which the candidate forecasts comfortably outperform those from benchmark specifications by conventional statistical metrics such as root mean squared prediction errors and average logarithmic scores. Our assessment of economic significance uses an explicit loss function that relates economic value to a forecast communication problem for an inflation targeting central bank. In our first example, we analyse the Bank of England's inflation forecasts during the period in which the central bank operated within a strict inflation targeting framework. In our second example, we consider forecasts for inflation in New Zealand generated from vector autoregressions, when the central bank operated within a flexible inflation targeting framework. In both cases, the economic significance of the performance differential exhibits sensitivity to the parameters of the loss function and, for some values, the differentials are economically negligible. |
Date: | 2016–06 |
URL: | http://d.repec.org/n?u=RePEc:nzb:nzbdps:2016/10&r=for |
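To make the statistical-versus-economic distinction above concrete, the sketch below scores simulated candidate and benchmark inflation densities by RMSPE and average log score, and then by a stylized asymmetric band loss around a 2% target. The band loss is a hypothetical stand-in, not the communication-based loss function used in the paper, and all data are simulated.

```python
# Illustrative comparison of statistical metrics versus an explicit (stylized) loss function.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
T = 200
inflation = 2.0 + 0.6 * rng.standard_normal(T)
cand_mu, cand_sd = inflation + 0.3 * rng.standard_normal(T), 0.6   # candidate density forecasts
bench_mu, bench_sd = np.full(T, 2.0), 1.0                          # naive benchmark

def rmspe(mu): return np.sqrt(np.mean((inflation - mu) ** 2))
def avg_log_score(mu, sd): return np.mean(norm.logpdf(inflation, mu, sd))

def band_loss(mu, sd, k_low=2.0, k_high=1.0, band=1.0):
    # expected loss: misses below (target - band) cost k_low, above (target + band) cost k_high
    p_low = norm.cdf(2.0 - band, mu, sd)
    p_high = 1.0 - norm.cdf(2.0 + band, mu, sd)
    return np.mean(k_low * p_low + k_high * p_high)

for name, mu, sd in [("candidate", cand_mu, cand_sd), ("benchmark", bench_mu, bench_sd)]:
    print(name, "RMSPE:", round(rmspe(mu), 3),
          "log score:", round(avg_log_score(mu, sd), 3),
          "band loss:", round(band_loss(mu, sd), 3))
```

Changing the loss parameters (k_low, k_high, band) changes how large the economic differential looks, even though the statistical rankings are fixed, which is the sensitivity the abstract describes.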
By: | Tamal Datta Chaudhuri; Indranil Ghosh |
Abstract: | Any discussion of exchange rate movements and forecasting should include explanatory variables from both the current account and the capital account of the balance of payments. In this paper, we include such factors to forecast the value of the Indian rupee vis-à-vis the US Dollar. Further, we incorporate factors reflecting political instability and weak contract enforcement, which can affect both foreign direct investment and portfolio investment. The explanatory variables chosen are the 3-month Rupee-Dollar futures exchange rate (FX4), NIFTY returns (NIFTYR), Dow Jones Industrial Average returns (DJIAR), Hang Seng returns (HSR), DAX returns (DR), crude oil price (COP), CBOE VIX (CV) and India VIX (IV). To forecast the exchange rate, we use two classes of frameworks: Artificial Neural Network (ANN) based models and time series econometric models. The ANN models are the Multilayer Feed Forward Neural Network (MLFFNN) and the Nonlinear Autoregressive with Exogenous Input (NARX) Neural Network. The time series econometric methods are the Generalized Autoregressive Conditional Heteroskedasticity (GARCH) and Exponential GARCH (EGARCH) techniques. Our results indicate that, although both classes of models forecast the exchange rate quite well, MLFFNN and NARX are the most accurate. (A minimal MLP sketch follows this entry.) |
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1607.02093&r=for |
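A minimal sketch of the feed-forward (MLFFNN) leg of the exercise above, using simulated stand-ins for the listed explanatory variables and a small scikit-learn MLP; it is not the authors' specification, and the NARX, GARCH and EGARCH models are not reproduced here.

```python
# Feed-forward regression of a simulated exchange rate on simulated stand-ins for the
# explanatory variables named in the abstract. Data, coefficients and network size are assumptions.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
T = 500
cols = ["FX4", "NIFTYR", "DJIAR", "HSR", "DR", "COP", "CV", "IV"]   # variable labels from the abstract
X = rng.standard_normal((T, len(cols)))                              # simulated explanatory variables
usd_inr = 65 + X @ rng.normal(0, 0.5, len(cols)) + 0.3 * rng.standard_normal(T)

split = 400
scaler = StandardScaler().fit(X[:split])
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(scaler.transform(X[:split]), usd_inr[:split])
pred = model.predict(scaler.transform(X[split:]))
rmse = np.sqrt(np.mean((usd_inr[split:] - pred) ** 2))
print("out-of-sample RMSE:", round(float(rmse), 3))
```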
By: | Nick Taylor |
Abstract: | The benefits associated with modelling Box-Cox transformed realised variance data are assessed. In particular, the quality of realised variance forecasts with and without this transformation applied is examined in an out-of-sample forecasting competition. Using various realised variance measures, data transformations, volatility models and assessment methods, and controlling for data mining issues, the results indicate that data transformations can be economically and statistically significant. Moreover, the quartic transformation appears to be the most effective in this regard. The conditions under which the effectiveness of using transformed data varies are identified. |
JEL: | C22 C53 C58 G17 |
Date: | 2016–06–10 |
URL: | http://d.repec.org/n?u=RePEc:bri:accfin:16/4&r=for |
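The Box-Cox idea above can be sketched as: transform realised variance, forecast in the transformed space, and invert. Reading the "quartic" transformation as lambda = 1/4 is an assumption, the data are simulated, and the naive inversion below ignores the bias correction a careful implementation would apply.

```python
# Sketch: Box-Cox transform realized variance, fit an AR(1) in the transformed space, forecast,
# then invert. Lambda = 1/4 and the simulated series are assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(6)
T = 500
log_rv = np.zeros(T)
for t in range(1, T):
    log_rv[t] = 0.95 * log_rv[t - 1] + 0.3 * rng.standard_normal()
rv = np.exp(log_rv - 4)                                        # simulated realized variance series

def boxcox(x, lam): return (x ** lam - 1) / lam if lam != 0 else np.log(x)
def inv_boxcox(z, lam): return (lam * z + 1) ** (1 / lam) if lam != 0 else np.exp(z)

lam = 0.25
z = boxcox(rv, lam)
zc = z - z.mean()
phi = np.dot(zc[:-1], zc[1:]) / np.dot(zc[:-1], zc[:-1])       # AR(1) coefficient in transformed space
z_fcst = z.mean() + phi * (z[-1] - z.mean())                   # one-step-ahead forecast
print("RV forecast (back-transformed):", float(inv_boxcox(z_fcst, lam)))
```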
By: | Jack Fosten (University of East Anglia) |
Abstract: | This paper provides consistent information criteria for the selection of forecasting models which use a subset of both the idiosyncratic and common factor components of a big dataset. This hybrid model approach has been explored by recent empirical studies to relax the strictness of pure factor-augmented model approximations, but no formal model selection procedures have been developed. The main difference to previous factor-augmented model selection procedures is that we must account for estimation error in the idiosyncratic component as well as the factors. Our first contribution shows that this combined estimation error vanishes at a slower rate than in the case of pure factor-augmented models in circumstances in which N is of larger order than sqrt(T), where N and T are the cross-section and time series dimensions respectively. Under these circumstances we show that existing factor-augmented model selection criteria are inconsistent, and the standard BIC is inconsistent regardless of the relationship between N and T. Our main contribution solves this issue by proposing new information criteria which account for the additional source of estimation error, whose properties are explored through a Monte Carlo simulation study. We conclude with an empirical application to long-horizon exchange rate forecasting using a recently proposed model with country-specific idiosyncratic components from a panel of global exchange rates. |
Keywords: | forecasting, factor model, model selection, information criteria, idiosyncratic |
JEL: | C13 C22 C38 C52 C53 |
Date: | 2016–03–14 |
URL: | http://d.repec.org/n?u=RePEc:uea:ueaeco:2016_07&r=for |
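A schematic of criterion-based selection over hybrid models that combine an estimated factor with estimated idiosyncratic components, as described above. The extra penalty term `penalty_extra` is a hypothetical placeholder for the correction the paper proposes (which depends on N and T), not the paper's formula; the data and candidate set are simulated assumptions.

```python
# Schematic information-criterion selection among hybrid factor/idiosyncratic forecasting models.
import numpy as np

rng = np.random.default_rng(7)
T, N = 200, 100
f = rng.standard_normal(T)
X = np.outer(f, rng.standard_normal(N)) + rng.standard_normal((T, N))
y = 0.8 * f + 0.5 * X[:, 0] + rng.standard_normal(T)           # target uses the factor and one idiosyncratic term

# estimated factor (first principal component) and estimated idiosyncratic components
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
fhat = Xc @ Vt[0]
ehat = Xc - np.outer(fhat, Vt[0])

def criterion(regressors):
    Z = np.column_stack([np.ones(T)] + regressors)
    resid = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    k = Z.shape[1]
    # placeholder for the paper's extra estimation-error penalty (NOT the paper's formula)
    penalty_extra = k * np.log(min(np.sqrt(T), N)) / min(np.sqrt(T), N)
    return np.log(resid @ resid / T) + k * np.log(T) / T + penalty_extra

candidates = {"factor only": [fhat],
              "idiosyncratic only": [ehat[:, 0]],
              "factor + idiosyncratic": [fhat, ehat[:, 0]]}
scores = {name: criterion(regs) for name, regs in candidates.items()}
print("selected model:", min(scores, key=scores.get))
```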