New Economics Papers on Forecasting
By: | Lanne, Markku; Nyberg, Henri; Saarinen, Erkka |
Abstract: | In this paper, we compare the forecasting performance of univariate noncausal and conventional causal autoregressive models for a comprehensive data set consisting of 170 monthly U.S. macroeconomic and financial time series. The noncausal models consistently outperform the causal models in terms of the mean square and mean absolute forecast errors. For a set of 18 quarterly time series, the improvement in forecast accuracy due to allowing for noncausality is found to be even greater. |
Keywords: | Noncausal autoregression; forecast comparison; macroeconomic variables; financial variables |
JEL: | C53 C22 E37 E47 |
Date: | 2011–04–05 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:30254&r=for |
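The mixed causal-noncausal autoregression referred to in the abstract above is commonly written in the following generic form (a textbook-style sketch; the notation is assumed here, and the lag orders r and s as well as the error distribution are the paper's own modelling choices):

% Mixed causal-noncausal AR(r,s): B is the backshift operator and B^{-1} the forward-shift operator.
% The polynomials \phi and \varphi have their roots outside the unit circle, and \epsilon_t is an
% i.i.d. non-Gaussian error (e.g. Student's t), which is what makes the noncausal (lead) part
% identifiable; the conventional causal AR model is the special case s = 0.
\[
  \phi(B)\,\varphi(B^{-1})\, y_t = \epsilon_t ,
  \qquad
  \phi(B) = 1 - \phi_1 B - \dots - \phi_r B^{r}, \quad
  \varphi(B^{-1}) = 1 - \varphi_1 B^{-1} - \dots - \varphi_s B^{-s}.
\]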
By: | Dimitris Korobilis (Université Catholique de Louvain; The Rimini Centre for Economic Analysis (RCEA)) |
Abstract: | This paper builds on a simple unified representation of shrinkage Bayes estimators based on hierarchical Normal-Gamma priors. Various popular penalized least squares estimators for shrinkage and selection in regression models can be recovered from this single hierarchical Bayes formulation. Using 129 U.S. macroeconomic quarterly variables for the period 1959–2010, I exhaustively evaluate the forecasting properties of Bayesian shrinkage in regressions with many predictors. Results show that, for particular data series, hierarchical shrinkage dominates factor model forecasts and is hence a valuable addition to existing methods for handling large dimensional data. |
Keywords: | Forecasting; shrinkage; factor model; variable selection; Bayesian LASSO |
JEL: | C11 C22 C52 C53 C63 E37 |
Date: | 2011–04 |
URL: | http://d.repec.org/n?u=RePEc:rim:rimwps:21_11&r=for |
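A generic statement of the hierarchical Normal-Gamma shrinkage prior mentioned in the abstract above (a schematic sketch; the hyperparameters a and b and the treatment of the error variance are left unspecified rather than taken from the paper):

% Regression with many predictors: y_t = x_t' \beta + u_t. Each coefficient has a Normal prior
% whose variance is itself Gamma-distributed; integrating out \tau_j^2 gives heavy-tailed
% marginal priors that shrink small coefficients strongly toward zero while leaving large ones
% relatively untouched. The Bayesian LASSO is the special case with exponential mixing
% (Gamma shape parameter equal to one).
\[
  \beta_j \mid \tau_j^2 \sim N\!\left(0, \tau_j^2\right), \qquad
  \tau_j^2 \sim \mathrm{Gamma}(a, b), \qquad j = 1, \dots, p .
\]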
By: | Louzis, Dimitrios P.; Xanthopoulos-Sisinis, Spyros; Refenes, Apostolos P. |
Abstract: | In this paper, we assess the Value at Risk (VaR) prediction accuracy and efficiency of six ARCH-type models, six realized volatility models and two GARCH models augmented with realized volatility regressors. The α-quantile of the innovation distribution is estimated with the fully parametric method, using either the normal or the skewed Student's t distribution, and also with the Filtered Historical Simulation (FHS) and Extreme Value Theory (EVT) methods. Our analysis is based on two S&P 500 cash index out-of-sample forecasting periods, one of which covers exclusively the recent 2007-2009 financial crisis. Using an extensive array of statistical and regulatory risk management loss functions, we find that the realized volatility and the augmented GARCH models with the FHS or the EVT quantile estimation methods produce superior VaR forecasts and allow for more efficient regulatory capital allocations. The skewed Student's t distribution is also an attractive alternative, especially during periods of high market volatility. |
Keywords: | High frequency intraday data; Filtered Historical Simulation; Extreme Value Theory; Value-at-Risk forecasting; Financial crisis. |
JEL: | C13 C53 G32 G21 |
Date: | 2011–04–18 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:30364&r=for |
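A minimal sketch of the Filtered Historical Simulation step described in the abstract above, assuming the conditional volatilities and a one-step-ahead volatility forecast have already been obtained from some ARCH-type model (function and variable names are illustrative, not the authors'):

import numpy as np

def fhs_var(returns, cond_vol, vol_forecast, alpha=0.01):
    """Filtered Historical Simulation VaR forecast.

    returns      : in-sample returns
    cond_vol     : fitted conditional volatilities for the same sample
    vol_forecast : one-step-ahead volatility forecast from the ARCH-type model
    alpha        : tail probability (e.g. 0.01 for the 99% VaR)
    """
    # "Filter" the returns by standardizing them with the fitted conditional volatility.
    std_resid = np.asarray(returns) / np.asarray(cond_vol)
    # Empirical alpha-quantile of the standardized residuals, rescaled by the forecast
    # volatility; the result is a return quantile (negative in the left tail).
    return vol_forecast * np.quantile(std_resid, alpha)

# Toy usage with simulated data (illustrative only):
rng = np.random.default_rng(0)
vol = 0.01 + 0.005 * np.abs(rng.standard_normal(1000))
rets = vol * rng.standard_normal(1000)
print(fhs_var(rets, vol, vol_forecast=0.012, alpha=0.01))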
By: | Christophe Boucher (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, A.A.Advisors-QCG - ABN AMRO); Bertrand Maillet (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Panthéon-Sorbonne - Paris I, A.A.Advisors-QCG - ABN AMRO, EIF - Europlace Institute of Finance) |
Abstract: | Researchers in finance very often rely on highly persistent - nearly integrated - explanatory variables to predict returns. This paper proposes to address the well-known problem of persistent-regressor bias by detrending the highly autocorrelated predictors. We find that the statistical evidence of out-of-sample predictability of stock returns is stronger once predictors are adjusted for high persistence. |
Keywords: | Forecasting, persistence, detrending, expected returns. |
Date: | 2011–03 |
URL: | http://d.repec.org/n?u=RePEc:hal:cesptp:halshs-00587775&r=for |
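One simple way to detrend a highly persistent predictor, in the spirit of the abstract above, is to subtract a trailing moving average; this is only an illustrative transformation, and the paper's exact adjustment may differ:

import numpy as np

def detrend_persistent(x, window=12):
    """Subtract a trailing moving average from a highly persistent predictor.

    Only observations with a full trailing window are returned, so no future
    information enters the adjusted series.
    """
    x = np.asarray(x, dtype=float)
    # trailing_mean[i] is the mean of x[i : i + window].
    trailing_mean = np.convolve(x, np.ones(window) / window, mode="valid")
    # Pair each observation x[i + window - 1] with the mean of its own trailing window.
    return x[window - 1:] - trailing_mean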
By: | Francois-Éric Racicot (Département des sciences administratives, Université du Québec (Outaouais), LRSP et Chaire d'information financière et organisationnelle); Raymond Théoret (Département de stratégie des affaires, Université du Québec (Montréal), Université du Québec (Outaouais), et Chaire d'information financière et organisationnelle) |
Abstract: | In this paper, we aim to forecast the stochastic volatility of key financial market variables with the Kalman filter, using stochastic models developed by Taylor (1986, 1994) and Nelson (1990). First, we compare a stochastic volatility model relying on the Kalman filter to the conditional volatility estimated with the GARCH model. We apply our models to Canadian short-term interest rates. When comparing the profile of the interest rate stochastic volatility to the conditional one, we find that the omission of a constant term in the stochastic volatility model may have a perverse effect, leading to a scaling problem that is often overlooked in the literature. Stochastic volatility seems to be a better forecasting tool than GARCH(1,1) since it is less conditioned by autoregressive past information. Second, we filter the S&P 500 price-earnings (P/E) ratio in order to forecast its value. To make this forecast, we postulate a rational expectations process, but our method may accommodate other data generating processes. We find that our forecast is close to a GARCH(1,1) profile. |
Keywords: | Stochastic volatility; Kalman filter; P/E ratio forecast; Interest rate forecast. |
JEL: | C13 C19 C49 G12 G31 |
Date: | 2011–04–12 |
URL: | http://d.repec.org/n?u=RePEc:pqs:wpaper:032011&r=for |
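The Taylor-type stochastic volatility model referred to above, in the log-linearized state-space form to which the Kalman filter is typically applied (a generic quasi-maximum-likelihood formulation, not the paper's exact specification):

% Measurement: squared, logged returns are linear in the latent log-variance h_t = ln(sigma_t^2);
% the non-Gaussian error ln(epsilon_t^2) is treated as approximately Gaussian by the filter.
% Transition: the log-variance follows an AR(1); the constant alpha is the term whose omission
% the abstract flags as a potential source of a scaling problem.
\[
\begin{aligned}
  \ln y_t^2 &= h_t + \ln \varepsilon_t^2, \qquad \varepsilon_t \sim \text{i.i.d.}(0,1), \\
  h_t &= \alpha + \phi\, h_{t-1} + \eta_t, \qquad \eta_t \sim N(0, \sigma_\eta^2).
\end{aligned}
\]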
By: | Audrino, Francesco |
Abstract: | We empirically investigate the predictive power of the various components affecting correlations that have recently been introduced in the literature. We focus on models allowing for a flexible specification of both the short-run and the long-run component of correlations. Moreover, we also allow the correlation dynamics to be subject to regime shifts caused by threshold-based structural breaks of various kinds. Our results indicate that in some cases there may be a superimposition of the long- and short-term movements in correlations; care is therefore called for when interpreting the estimated components. Testing the forecasting accuracy of correlations during the late-2000s financial crisis yields mixed results. In general, component models allowing for a richer correlation specification possess (marginally) higher predictive accuracy. Economically speaking, no relevant gains are found from allowing for more flexibility in the correlation dynamics. |
Keywords: | Correlation forecasting; Component models; Threshold regime-switching models; Mixed data sampling; Performance evaluation |
JEL: | C32 C52 C53 |
Date: | 2011–04 |
URL: | http://d.repec.org/n?u=RePEc:usg:econwp:2011:12&r=for |
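A generic two-component correlation recursion of the kind discussed above, in which a slowly moving long-run correlation anchors short-run DCC-type dynamics (a schematic example of one such component model; the paper compares several specifications):

% epsilon_{i,t} are volatility-standardized returns, a + b < 1, and \bar{\rho}_{ij,t} is the
% long-run component (e.g. a MIDAS-weighted average of past realized correlations) toward
% which the short-run quasi-correlation q_{ij,t} mean-reverts.
\[
  q_{ij,t} = \bar{\rho}_{ij,t}\,(1 - a - b)
           + a\,\epsilon_{i,t-1}\epsilon_{j,t-1}
           + b\,q_{ij,t-1},
  \qquad
  \rho_{ij,t} = \frac{q_{ij,t}}{\sqrt{q_{ii,t}\,q_{jj,t}}} .
\]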
By: | Carlo A. Favero; Arie E. Gozluklu; Haoxi Yang |
Abstract: | In this paper we relate the very persistent component of interest rates to a specific demographic variable, MY_t, the proportion of middle-aged to young population. We first reconsider the results in Fama (2006) to document how MY_t captures the long-run component identified by Fama in his analysis of the one-year spot rate. Using MY_t to model this low frequency component of interest rates is particularly useful for forecasting the term structure, as the demographic variable is exogenous and highly predictable, even at very long horizons. We then study the forecasting performance of a no-arbitrage affine term structure model that allows for the presence of a persistent component driven by demographics. This performance is superior to that of a traditional affine term structure model with macroeconomic factors (e.g. Ang, Dong and Piazzesi, 2005). |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:igi:igierp:388&r=for |
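The class of no-arbitrage affine term structure models referred to above, with the demographic variable entering the state vector (schematic form only; the paper's exact factor structure and pricing restrictions are not reproduced here):

% Yields of maturity n are affine in the state vector X_t, which here would include MY_t
% alongside the usual latent or macro factors; the loadings A_n and B_n solve the standard
% no-arbitrage recursions implied by the state dynamics and the prices of risk.
\[
  y_t^{(n)} = A_n + B_n' X_t, \qquad
  X_t = \mu + \Phi X_{t-1} + \Sigma\, \varepsilon_t .
\]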
By: | Matteo Kalchschmidt |
Abstract: | While the literature on demand forecasting has examined best practices in the field, interpreting and defining best practices can be difficult because the literature has adopted different perspectives. First, a universalistic perspective holds that some specific practices are best regardless of the context, the forecasting problem, and so on. Other contributions take a contingent approach, under which best practices depend on the specific kind of company or forecasting scenario considered. A third perspective is the configurational one, which asserts that best practices depend on a coherent set of factors. In this work, we study which of these perspectives holds true and to what extent. The analysis is based on data collected from more than 500 companies in different countries via the GMRG IV questionnaire. The impact of forecasting on operational performance is studied by designing and testing sets of propositions that reflect the three aforementioned perspectives. |
Keywords: | forecasting, GMRG, universalistic theory, contingency theory, configuration theory |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:brh:wpaper:1102&r=for |
By: | Debernardi, Andrea; Grimaldi, Raffaele; Beria, Paolo |
Abstract: | The assessment of infrastructure investments is often affected by inaccuracy in traffic forecasting, optimism bias and overvaluation of expected benefits. In general, even when such misrepresentation is not strategically introduced by proponents to push their projects, evaluators and decision makers must cope with the risk that demand falls below expectations and the consequent problem of overinvestment. In this sense, the concept of option value suggests that flexible or reversible projects may have a higher economic net present value than rigid schemes characterised by sunk costs. However, conventional cost benefit analysis (CBA) is very seldom used to manage this problem, owing to the complexity of the issue (for example when introducing a complete risk analysis). Moreover, such CBAs are still conceived as a static tool for deciding ex ante about an investment. In this paper we develop a theoretical framework and a practical application of CBA to formally manage such uncertainty and help decision makers by postponing some decisions to the subsequent operating phase. The idea is to assess the project as split into smaller functional sections and to bind the construction of a further section to compliance with a pre-determined “switching rule”. In practical terms, we adapt a standard CBA procedure to also manage the time dimension of investments, so that risks are reallocated already in the early design stage of transport infrastructures. The purpose of the paper is twofold. First, we introduce a way to extend conventional CBA methodology to manage the phasing of projects. Second, we demonstrate both theoretically (with a simplified model) and practically (with a more complex case study) the positive effect of phasing under certain conditions (limited sunk costs due to phasing, predominance of capacity problems). By numerically developing the CBA of the Turin – Lyon high speed rail project, we show how phasing reduces the risk associated with traffic overestimation and has a positive effect on the NPV of the project: if forecasts are optimistic, only the most effective parts of the scheme will be built; if the traffic forecasts are correct, the whole infrastructure will be built in steps and will generate the highest net benefits. |
Keywords: | cost benefit analysis; option value; optimism bias; strategic misrepresentation; benefit shortfall; planning fallacy; forecasting |
JEL: | H54 L92 R42 D61 |
Date: | 2011–03–13 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:30327&r=for |
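A toy sketch of the switching-rule logic described above: a second functional section is built only if demand observed on the first section exceeds a pre-set threshold, so sunk costs are avoided when forecasts turn out to be optimistic (all numbers, names and the doubling-of-benefits assumption are illustrative, not taken from the Turin – Lyon case study):

def npv(cashflows, rate=0.03):
    """Discounted sum of (year, cashflow) pairs."""
    return sum(cf / (1.0 + rate) ** t for t, cf in cashflows)

def phased_project_npv(observed_demand, switch_threshold,
                       cost_phase1, cost_phase2,
                       benefit_per_unit, horizon=30, rate=0.03):
    """NPV of a project split into two sections with a demand-based switching rule.

    Phase 1 is built at year 0. Phase 2 is built at year 10 only if demand on the
    first section exceeds the switching threshold.
    """
    flows = [(0, -cost_phase1)]
    flows += [(t, benefit_per_unit * observed_demand) for t in range(1, horizon + 1)]
    if observed_demand > switch_threshold:
        flows.append((10, -cost_phase2))
        # Illustrative assumption: the second section doubles annual benefits thereafter.
        flows += [(t, benefit_per_unit * observed_demand) for t in range(11, horizon + 1)]
    return npv(flows, rate)

# Optimistic forecast vs. realized demand below the threshold (threshold = 50):
print(phased_project_npv(35, 50, cost_phase1=1000, cost_phase2=800, benefit_per_unit=2.0))
print(phased_project_npv(60, 50, cost_phase1=1000, cost_phase2=800, benefit_per_unit=2.0))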
By: | Grzegorz Grabek (National Bank of Poland, Economic Institute); Bohdan Klos (National Bank of Poland, Economic Institute); Grzegorz Koloch (National Bank of Poland, Economic Institute) |
Abstract: | The paper documents the work on the dynamic stochastic general equilibrium (DSGE) SOEPL model carried out in recent years at the National Bank of Poland, initially at the Bureau of Macroeconomic Research and lately at the Bureau of Applied Research of the Economic Institute. In 2009, a team consisting of the authors of this paper developed a new version of the model, called SOEPL−2009, which in 2010 is to be used to produce routine medium-term forecasts of inflation and economic activity, supporting and supplementing the traditional structural macroeconometric model and the experts' forecasts applied so far. In recent years many researchers have worked on a class of estimated macroeconomic (business cycle) models integrating the results of at least three important lines of economic and econometric research: • methods of macroeconomic modelling (a gradual departure from traditional structural models towards strongly microfounded models robust to the Lucas and Sims critiques); • micro- and macroeconomic theory (monetary policy issues, with emphasis on the consequences of imperfect competition, the role of nominal and real rigidities, and the anticipating and optimising behaviour of agents in an uncertain environment, with a strong shift of viewpoint towards general equilibrium); • estimation techniques (less reliance on calibration, a shift from classical to Bayesian techniques with Bayesian-specific risk quantification and the systematic, controlled introduction of experts' knowledge, and improved forecast accuracy). The merger of these three trends has produced a class of models — DSGE models — with high analytical and developmental potential. This potential seems to be the most important reason for central banks' interest in the area, since the research can be translated directly into the practice of monetary policy. Along with the development of numerical and econometric methods and of economic theory, a number of central banks supplement or even replace traditional structural macroeconometric models, whose forecasting applications are enhanced with experts' knowledge, with estimated DSGE models, namely models which attempt to describe economic processes in a more explicit and systematic manner and in which experts' knowledge is introduced through Bayesian methods. This happens although there is no formal reason why the ex post accuracy of forecasts from DSGE models should be higher than that of classical models. DSGE models do, however, offer structural (internally consistent and microfounded) explanations of the reasons for recently observed phenomena and of their consequences for the future. DSGE models present a different image of economic processes than classical macroeconometric models — they view the world from the perspective of structural disturbances. These disturbances set the economy in motion, and economic agents respond to them in an optimal way, which eliminates the consequences of the disturbances, i.e. restores the economy to equilibrium. The analytical knowledge and experience gathered in working with traditional structural models tends to interfere with, rather than help, the interpretation of the results of DSGE models.
In econometric terms, the results of DSGE models are nevertheless at least partially consistent with what can be achieved with VAR and SVAR models, so it is hard to speak of a revolution here. Following the events of 2008–2009 (the global financial crisis), and in the search for the reasons for these problems, the usefulness for macroeconomic policy of formalised tools built on a uniform, internally coherent (but also restrictive) paradigm tends to be questioned. The sources of the global economy's problems are sought in models that oversimplify the perception of the world and thereby burden economic policy decisions. We note that this critique refers more to the models as such (i.e. the tools) and less to the practice of applying them (i.e. the user). We therefore consider that the conclusions from a deeper analysis of the sources of the 2008–2009 crisis, a verification of the directions and methods of economic research that is likely to take place, and an analysis of current policy less influenced by its rationalisation will confirm the legitimacy of building and applying models, particularly DSGE-class models. The question of applications that exploit the strong sides of such models remains, however, open. In our opinion, the best we can do is to use our model, gather and exchange experience, develop new procedures and thoroughly verify the results. The model whose details we present below derives from the structure developed at the Riksbank, a DSGE model for the euro area (see Adolfson et al., 2005b). The euro area DSGE model, the know-how, and the estimation and application methods received through the Riksbank's technical support enabled us to conduct several experiments, build different versions of the DSGE model (a family of SOEPL models) and develop our own procedures for applying the model. Some of these experiments have been described in separate papers, e.g. Grabek et al. (2007), Grabek and Kłos (2009), Grabek and Utzig-Lenarczyk (2009). The variant we present in this paper summarises some of the experience gathered. We pass the DSGE SOEPL−2009 model on for use, with a view to considering and analysing an interpretation and understanding of economic processes different from that proposed by traditional models. Additionally, systematic work with the model (preparing forecasts and analysing their accuracy, simulation experiments and analytical work) may reveal issues and problems that will have to be solved. The resulting knowledge will enable a more thorough future modification of the model, taking into account the results of parallel research and the conclusions reached during its use. This paper consists of three basic parts. In the first part — relatively independent of the others — we attempt to outline the development of macroeconomic (macroeconometric) modelling methods and of the economic thought related to monetary policy that led to dynamic stochastic general equilibrium models, pushing aside other classes of models, at least in the academic world. The considerations are illustrated with simple real business cycle (RBC) models and a DSGE model based on the new Keynesian paradigm. The second chapter of the first part focuses on the technical aspects of the construction, estimation and application of DSGE models, drawing attention to the mathematical, statistical and numerical instruments.
Although it presents only the key ideas and outlines, the formalisation and precision of presentation required there make that fragment of the paper slightly hermetic — a reader less interested in the techniques may skip that chapter. The subsequent parts of the paper cover the specification, estimation results and properties of the DSGE SOEPL−2009 model. We present a general, non-technical outline of the basic features of the model, illustrating at the same time its relation to other DSGE models (Chapter 3). The next chapter sets out the decision problems of the optimising agents, their equilibrium conditions and the behaviour of the non-optimising agents. The description of the model specification is completed with economy-wide balance conditions. The SOEPL−2009 model has been estimated with Bayesian techniques. As in all estimated DSGE models we are aware of, the Bayesian estimation covers only some of the parameters (the rest have been calibrated). Although the use of Bayesian techniques has clearly reduced the number of calibrated parameters, being aware of the consequences of faulty calibration we conducted a form of sensitivity analysis (examining the influence of changes in the calibrated parameters on the characteristics of the model). The SOEPL−2009 version presented here takes into account the conclusions drawn from that analysis. For the purposes of this paper and the first forecasting experiments we use only point estimates of the parameters, corresponding to the mode of the posterior distribution; in other words, our reasoning omits — hopefully temporarily — the issue of parameter uncertainty. The estimation results and the assumptions made at the subsequent stages of the work (calibrated values, characteristics of the prior distributions) are presented in Chapter 6. A synthetic picture of the model's characteristics is given in Chapters 7–8, which describe the responses of the observable variables to the structural disturbances included in the model (impulse response functions); variance decompositions (formally, forecast error variance decompositions), which allow the structure (relative role) of the shocks affecting the observable variables to be assessed; the estimation (identification) of structural disturbances in the sample; examples of historical decompositions (counterfactual experiments); and information about the ex post accuracy of forecasts — a typical set of information for understanding the consequences of the assumptions made when constructing the decision-making problems (model specification) and choosing the parameters. The Appendix presents the structural-form equations, the equations used to determine the steady-state values and a list of the variables of the SOEPL−2009 model. |
Date: | 2011 |
URL: | http://d.repec.org/n?u=RePEc:nbp:nbpmis:83&r=for |
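The Bayesian estimation described above typically operates on the log-linearized solution of the DSGE model cast in state-space form, with the Kalman filter used to evaluate the likelihood (a generic representation, not the SOEPL−2009 equations themselves):

% Transition: the solved state dynamics s_t are driven by the structural disturbances epsilon_t.
% Measurement: the observable variables y_t used in estimation, possibly with measurement error v_t.
% The Kalman filter delivers the likelihood p(Y | theta), which is combined with the prior
% p(theta) and explored by MCMC to obtain the posterior of the parameters theta.
\[
\begin{aligned}
  s_t &= A(\theta)\, s_{t-1} + B(\theta)\, \epsilon_t, \\
  y_t &= C(\theta)\, s_t + v_t .
\end{aligned}
\]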