on Forecasting |
By: | Conflitti, Cristina; De Mol, Christine; Giannone, Domenico |
Abstract: | We consider the problem of optimally combining individual forecasts of gross domestic product (GDP) and inflation from the Survey of Professional Forecasters (SPF) dataset for the Euro Area. Contrary to the common practice of using equal combination weights, we compute optimal weights which minimize the mean square forecast error (MSFE) in the case of point forecasts and maximize a logarithmic score in the case of density forecasts. We show that this is a viable strategy even when the number of forecasts to combine gets large, provided we constrain these weights to be positive and to sum to one. Indeed, this enforces a form of shrinkage on the weights which ensures good out-of-sample performance of the combined forecasts. |
Keywords: | forecast combination; forecast evaluation; high-dimensional data; real-time data; shrinkage; Survey of Professional Forecasters |
JEL: | C22 C53 |
Date: | 2012–08 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:9096&r=for |
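The combination scheme in the abstract above can be sketched as a small quadratic program: minimize in-sample MSFE over weights constrained to be non-negative and to sum to one. This is only an illustrative sketch of that idea on simulated data, not the authors' estimator; all variable names and the solver choice (SLSQP) are my own.

```python
# Sketch of MSFE-minimizing forecast combination with weights
# constrained to the simplex (non-negative, summing to one).
import numpy as np
from scipy.optimize import minimize

def combination_weights(forecasts, target):
    """forecasts: (T, n) array of n individual forecasts; target: (T,)."""
    n = forecasts.shape[1]

    def msfe(w):
        return np.mean((target - forecasts @ w) ** 2)

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * n
    w0 = np.full(n, 1.0 / n)          # start from equal weights
    res = minimize(msfe, w0, bounds=bounds, constraints=cons, method="SLSQP")
    return res.x

rng = np.random.default_rng(0)
y = rng.standard_normal(200)
# three noisy forecasts of y, with different noise levels
F = np.column_stack([y + s * rng.standard_normal(200) for s in (0.5, 1.0, 2.0)])
w = combination_weights(F, y)
print(w)  # most weight goes to the least noisy forecaster
```

The simplex constraint is what delivers the shrinkage effect the abstract emphasizes: it caps every weight and rules out the extreme offsetting weights that unconstrained MSFE minimization produces when the number of forecasters is large.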
By: | Antonakakis, Nikolaos; Darby, Julia |
Abstract: | This paper identifies the best models for forecasting the volatility of daily exchange rate returns in developing countries. An emerging consensus in the recent literature has noted the superior performance of the FIGARCH model in the case of industrialised countries, a result that is reaffirmed here. However, we show that when dealing with developing countries' data the IGARCH model results in substantial gains in terms of both in-sample fit and out-of-sample forecasting performance. |
Keywords: | Exchange rate volatility; estimation; forecasting; developing countries |
JEL: | C32 E58 G15 F31 |
Date: | 2012–08–26 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:40875&r=for |
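For readers unfamiliar with the model favoured above, an IGARCH(1,1) restricts the ARCH and GARCH coefficients to sum to one, so the variance recursion is sigma2_t = omega + alpha*r_{t-1}^2 + (1 - alpha)*sigma2_{t-1}. A minimal filtering sketch, with purely illustrative parameter values (not estimates from the paper):

```python
# One-step-ahead volatility recursion for an illustrative IGARCH(1,1):
# sigma2_t = omega + alpha * r_{t-1}^2 + (1 - alpha) * sigma2_{t-1},
# i.e. the ARCH and GARCH coefficients are restricted to sum to one.
import numpy as np

def igarch_variance(returns, omega=0.01, alpha=0.1):
    """Filter the conditional variance series and return it together
    with the one-step-ahead variance forecast."""
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)            # initialise at the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = omega + alpha * returns[t - 1] ** 2 + (1 - alpha) * sigma2[t - 1]
    forecast = omega + alpha * returns[-1] ** 2 + (1 - alpha) * sigma2[-1]
    return sigma2, forecast

rng = np.random.default_rng(1)
r = rng.standard_normal(500) * 0.02        # simulated daily returns
sig2, f = igarch_variance(r)
print(f)
```

In practice the parameters would be estimated by maximum likelihood (e.g. with a dedicated GARCH library); the point here is only the unit-persistence restriction that distinguishes IGARCH from the unrestricted GARCH and the long-memory FIGARCH.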
By: | Tanya Molodtsova; David Papell |
Abstract: | This paper evaluates out-of-sample exchange rate predictability of Taylor rule models, where the central bank sets the interest rate in response to inflation and either the output or the unemployment gap, for the euro/dollar exchange rate with real-time data before, during, and after the financial crisis of 2008-2009. While all Taylor rule specifications outperform the random walk with forecasts ending between 2007:Q1 and 2008:Q2, only the specification with both estimated coefficients and the unemployment gap consistently outperforms the random walk from 2007:Q1 through 2012:Q1. Several Taylor rule models that are augmented with credit spreads or financial condition indexes outperform the original Taylor rule models. The performance of the Taylor rule models is superior to the interest rate differentials, monetary, and purchasing power parity models. |
JEL: | C22 F31 |
Date: | 2012–08 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:18330&r=for |
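The evaluation exercise described above boils down to rolling out-of-sample regressions of the exchange rate change on Taylor-rule fundamentals, scored against a driftless random walk (which forecasts no change). The sketch below reproduces only that mechanic on synthetic data; the differentials, coefficients, and window length are all made up for illustration.

```python
# Rolling out-of-sample comparison of a Taylor-rule-style regression
# against a driftless random walk, on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
T = 160
infl_diff = rng.standard_normal(T)        # inflation differential
ugap_diff = rng.standard_normal(T)        # unemployment-gap differential
ds = np.empty(T)                          # exchange rate change
ds[0] = rng.standard_normal()
# Δs_{t+1} responds to time-t fundamentals, plus noise
ds[1:] = 0.5 * infl_diff[:-1] - 0.3 * ugap_diff[:-1] + 0.5 * rng.standard_normal(T - 1)

X = np.column_stack([np.ones(T), infl_diff, ugap_diff])
window, errs_tr, errs_rw = 80, [], []
for t in range(window, T - 1):
    Xw, yw = X[t - window:t], ds[t - window + 1:t + 1]   # pairs (X_s, Δs_{s+1})
    beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
    errs_tr.append(ds[t + 1] - X[t] @ beta)              # model forecast error
    errs_rw.append(ds[t + 1])                            # RW forecasts zero change
ratio = np.sqrt(np.mean(np.square(errs_tr)) / np.mean(np.square(errs_rw)))
print(ratio)  # a ratio below one means the model beats the random walk
```

The out-of-sample RMSE ratio is the headline statistic in this literature; the paper additionally applies formal tests of equal predictive ability rather than comparing raw ratios alone.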
By: | Geoffroy de Clippel; Kareen Rozen |
Abstract: | Theories of bounded rationality are typically characterized over an exhaustive data set. This paper aims to operationalize some leading theories when the available data is limited, as is the case in most practical settings. How does one tell if observed choices are consistent with a theory of bounded rationality if the data is incomplete? What information can be identified about preferences? How can out-of-sample predictions be made? Our approach is contrasted with earlier attempts to examine bounded rationality theories on limited data, showing that their notion of consistency is inappropriate for identifiability and out-of-sample prediction. |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:bro:econwp:2012-7&r=for |
By: | Marcello Basili; Silvia Ferrini; Emanuele Montomoli |
Abstract: | Background: During the 2009 global H1N1 influenza pandemic, many governments signed contracts with vaccine producers for a universal influenza immunization program and bought hundreds of millions of vaccine doses. We argue that, because Health Ministers assumed the occurrence of the worst possible scenario (a generalized pandemic influenza) and followed the strong version of the Precautionary Principle, they undervalued the possibility of a mild or weak pandemic wave. Methodology: An alternative decision rule, based on the non-extensive entropy principle, is introduced and a different characterization of the Precautionary Principle is applied. This approach values extreme negative results (catastrophic events) differently from ordinary results (more plausible and mild events), and produces less pessimistic forecasts in the case of uncertain influenza pandemic outbreaks. A simplified application is presented through an example based on seasonal data on the morbidity and severity of influenza-like illness among Italian children for the period 2003-2010. Principal Findings: Compared to a pessimistic forecast by experts, who predict an average attack rate of 15% for the next pandemic influenza, we demonstrate that, using the non-extensive maximum entropy principle, a less pessimistic outcome is predicted, with a 20% saving in public funding for vaccine doses. Conclusions: The need for an effective influenza pandemic prevention program, coupled with an efficient use of public funding, calls for a rethinking of the Precautionary Principle. The non-extensive maximum entropy principle, which incorporates the vague and incomplete information available to decision makers, produces a more coherent forecast of a possible influenza pandemic and more conservative spending of public funds. |
JEL: | I15 I28 |
Date: | 2012–07 |
URL: | http://d.repec.org/n?u=RePEc:usi:wpaper:647&r=for |
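The non-extensive (Tsallis) entropy behind the decision rule above is S_q = (1 - Σ_i p_i^q) / (q - 1), which reduces to the Shannon entropy as q → 1; q > 1 downweights rare catastrophic scenarios relative to plausible mild ones, which is how the rule tempers worst-case reasoning. The scenario probabilities below are made up for illustration; the paper's full calibration to Italian influenza data is more involved.

```python
# Tsallis (non-extensive) entropy: S_q = (1 - sum_i p_i**q) / (q - 1),
# with the Shannon entropy recovered in the limit q -> 1.
import numpy as np

def tsallis_entropy(p, q):
    p = np.asarray(p, dtype=float)
    if abs(q - 1.0) < 1e-12:                      # Shannon limit
        return -np.sum(p[p > 0] * np.log(p[p > 0]))
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# three illustrative influenza scenarios: mild, moderate, severe
p = [0.6, 0.3, 0.1]
for q in (0.5, 1.0, 2.0):
    print(q, tsallis_entropy(p, q))
```

Note how the entropy falls as q rises: larger q makes the functional increasingly dominated by the high-probability (mild) scenarios, which is the formal sense in which the rule is "less pessimistic" than a worst-case Precautionary Principle.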
By: | John Bradley (EMDS - Economic Modelling and Development Strategies); Gerhard Untiedt (GEFRA - Gesellschaft fuer Finanz- und Regionalanalysen) |
Abstract: | How will Ireland emerge from recession, and what path will its recovery take? In our recently published paper on the future prospects of the Irish economy, we analyse and forecast using the HERMIN model. The key message is that the recovery will be painful and slow, and that it is mainly the production side of the economy that matters. |
Date: | 2012–08–20 |
URL: | http://d.repec.org/n?u=RePEc:hrm:wpaper:4-2012&r=for |
By: | Meltem Gulenay Chadwick; Gonul Sengul |
Abstract: | We use linear regression models and a Bayesian model averaging procedure to investigate whether Google search query data can improve the nowcast performance of the monthly nonagricultural unemployment rate for Turkey over the period from January 2005 to January 2012. We show that Google search query data are successful at nowcasting the monthly nonagricultural unemployment rate for Turkey both in-sample and out-of-sample. When compared with a benchmark model that uses only lagged values of the monthly unemployment rate, the best model contains Google search query data and is 47.8 percent more accurate in-sample and 38.3 percent more accurate for one-month-ahead nowcasts in terms of relative root mean square error (RMSE). We also show, via the Harvey, Leybourne, and Newbold (1997) modification of the Diebold-Mariano test, that models with Google search query data indeed perform statistically better than the benchmark. |
Keywords: | Google Insights, nowcasting, nonagricultural unemployment rate, Bayesian model averaging |
JEL: | C22 C53 E27 |
Date: | 2012 |
URL: | http://d.repec.org/n?u=RePEc:tcb:wpaper:1218&r=for |
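The Harvey-Leybourne-Newbold (1997) modification mentioned above rescales the Diebold-Mariano statistic by sqrt((n + 1 - 2h + h(h-1)/n)/n) and compares it to a t distribution with n-1 degrees of freedom. A minimal sketch for one-step-ahead forecasts (h = 1) under squared-error loss, on simulated forecast errors rather than the paper's data:

```python
# Diebold-Mariano test with the Harvey-Leybourne-Newbold (1997)
# small-sample correction, squared-error loss, one-step-ahead case.
import numpy as np
from scipy import stats

def dm_hln(e1, e2, h=1):
    """Return the HLN-adjusted DM statistic and its two-sided p-value."""
    d = e1 ** 2 - e2 ** 2                     # loss differential
    n = len(d)
    dbar = d.mean()
    # long-run variance of dbar; for h = 1 only the lag-0 term enters
    gamma = [np.mean((d[: n - k] - dbar) * (d[k:] - dbar)) for k in range(h)]
    var_dbar = (gamma[0] + 2 * sum(gamma[1:])) / n
    dm = dbar / np.sqrt(var_dbar)
    hln = dm * np.sqrt((n + 1 - 2 * h + h * (h - 1) / n) / n)   # HLN factor
    p = 2 * stats.t.sf(abs(hln), df=n - 1)    # t reference with n-1 d.o.f.
    return hln, p

rng = np.random.default_rng(3)
e_bench = 2.0 * rng.standard_normal(100)      # benchmark forecast errors
e_model = rng.standard_normal(100)            # competing model's errors
stat, pval = dm_hln(e_model, e_bench)
print(stat, pval)  # negative statistic: the model's losses are smaller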
By: | Morton, Rebecca; Piovesan, Marco; Tyran, Jean-Robert |
Abstract: | We experimentally investigate information aggregation through majority voting when some voters are biased. In such situations, majority voting can have a “dark side”, i.e. result in groups making choices inferior to those made by individuals acting alone. We develop a model to predict how two types of social information shape efficiency in the presence of biased voters and we test these predictions using a novel experimental design. In line with predictions, we find that information on the popularity of policy choices is beneficial when a minority of voters is biased, but harmful when a majority is biased. In theory, information on the success of policy choices elsewhere de-biases voters and alleviates the inefficiency. In the experiment, providing social information on success is ineffective. While voters with higher cognitive abilities are more likely to be de-biased by such information, most voters do not seem to interpret such information rationally. |
Keywords: | biased voters; information aggregation; majority voting |
JEL: | C92 D02 D03 D7 |
Date: | 2012–08 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:9098&r=for |