NEP: New Economics Papers on Econometrics
By: | Pötscher, Benedikt |
Abstract: | The finite-sample and asymptotic distributions of Leung and Barron's (2006) model averaging estimator are derived in the context of a linear regression model. An impossibility result regarding the estimation of the finite-sample distribution of the model averaging estimator is obtained. |
Keywords: | Model mixing; model aggregation; combination of estimators; model selection; finite sample distribution; asymptotic distribution; estimation of distribution |
JEL: | C51 C20 C13 C52 C12 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:73&r=ecm |
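The estimator studied above combines least-squares fits from candidate models with data-dependent weights. A minimal sketch of one such scheme, using exponential weights built from a Mallows-type unbiased risk estimate with a uniform prior over candidates -- a simplified stand-in for Leung and Barron's (2006) weighting, with the error variance `sigma2` assumed known:

```python
import numpy as np

def exp_weight_average(y, X, models, sigma2):
    """Average OLS fits from candidate models using exponential weights based
    on a Mallows-type unbiased risk estimate (simplified stand-in for Leung
    and Barron's scheme). models: list of column-index lists."""
    fits, risks = [], []
    for cols in models:
        Xm = X[:, cols]
        beta, *_ = np.linalg.lstsq(Xm, y, rcond=None)
        yhat = Xm @ beta
        rss = float(np.sum((y - yhat) ** 2))
        risks.append(rss + 2.0 * sigma2 * len(cols))  # risk estimate, up to a constant
        fits.append(yhat)
    risks = np.asarray(risks)
    w = np.exp(-(risks - risks.min()) / (2.0 * sigma2))  # shift by min for stability
    w /= w.sum()
    return w, sum(wi * f for wi, f in zip(w, fits))
```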
By: | Frank Windmeijer |
Abstract: | This chapter gives an account of the recent literature on estimating models for panel count data. Specifically, the treatment of unobserved individual heterogeneity that is correlated with the explanatory variables and the presence of explanatory variables that are not strictly exogenous are central. Moment conditions that enable GMM estimation of the parameters are discussed for these types of problems. As standard Wald tests based on efficient two-step GMM estimation results are known to have poor finite-sample behaviour, alternative test procedures that have recently been proposed in the literature are evaluated by means of a Monte Carlo study. |
Keywords: | GMM, Exponential Models, Hypothesis Testing |
JEL: | C12 C13 C23 |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:bri:uobdis:06/591&r=ecm |
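For the setting above, a common moment condition for an exponential panel count model with a multiplicative fixed effect uses quasi-differencing. A minimal one-step GMM sketch under assumed array shapes (`y`, `x`, `z` are hypothetical inputs; the chapter's exact moment conditions and two-step weighting are not reproduced):

```python
import numpy as np
from scipy.optimize import minimize

def quasi_diff_residuals(beta, y, x):
    """Quasi-differenced residuals for an exponential panel model with a
    multiplicative fixed effect:
        u_it = y_it - y_{i,t-1} * exp((x_it - x_{i,t-1})' beta).
    y: (N, T) counts; x: (N, T, K) regressors."""
    dx = x[:, 1:, :] - x[:, :-1, :]
    return y[:, 1:] - y[:, :-1] * np.exp(np.einsum("ntk,k->nt", dx, beta))

def gmm_objective(beta, y, x, z):
    """One-step GMM objective with identity weight matrix; z: (N, T-1, L)
    instruments dated t-1 or earlier, valid under weak exogeneity."""
    u = quasi_diff_residuals(beta, y, x)
    g = np.einsum("ntl,nt->l", z, u) / y.shape[0]  # sample moment vector
    return g @ g

# beta_hat = minimize(gmm_objective, np.zeros(x.shape[2]), args=(y, x, z)).x
```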
By: | Lucia Alessi; Matteo Barigozzi; Marco Capasso |
Abstract: | We propose a new method for multivariate forecasting which combines the Generalized Dynamic Factor Model (GDFM) and the multivariate Generalized Autoregressive Conditionally Heteroskedastic (GARCH) model. We assume that the dynamic common factors are conditionally heteroskedastic. The GDFM, applied to a large number of series, captures the multivariate information and disentangles the common and the idiosyncratic part of each series; it also provides a first identification and estimation of the dynamic factors governing the data set. A time-varying correlation GARCH model applied to the estimated dynamic factors recovers the parameters governing the evolution of their covariances. A modified version of the Kalman filter then yields more precise in-sample estimates of the levels and covariances of the static and dynamic factors. A method is suggested for predicting conditional out-of-sample variances and covariances of the original data series. Finally, we carry out an empirical application comparing the volatility forecasts of our Dynamic Factor GARCH model with those of univariate GARCH models. |
Keywords: | Dynamic Factors, Multivariate GARCH, Covolatility Forecasting |
Date: | 2006–10–02 |
URL: | http://d.repec.org/n?u=RePEc:ssa:lemwps:2006/25&r=ecm |
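A crude two-step sketch of the factor-plus-GARCH idea described above, using principal components in place of the GDFM and univariate GARCH(1,1) fits from the `arch` package in place of the multivariate specification (the Kalman-filter refinement step is omitted):

```python
import numpy as np
from arch import arch_model

def dynamic_factor_garch(X, n_factors):
    """Two-step sketch: (1) principal-component factors from the panel X
    (T x N), (2) univariate GARCH(1,1) on each factor, (3) conditional
    covariances rebuilt as Lambda H_t Lambda' + diag(idiosyncratic var)."""
    Xc = X - X.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov(Xc, rowvar=False))
    lam = eigvec[:, -n_factors:]          # loadings of the top factors (N x r)
    f = Xc @ lam                          # estimated factors (T x r)
    h = np.column_stack([
        arch_model(f[:, j], vol="GARCH", p=1, q=1)
        .fit(disp="off").conditional_volatility ** 2
        for j in range(n_factors)
    ])
    idio = np.var(Xc - f @ lam.T, axis=0)
    return [lam @ np.diag(h[t]) @ lam.T + np.diag(idio) for t in range(len(f))]
```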
By: | Jun Yu (School of Economics and Social Sciences, Singapore Management University); Renate Meyer (University of Auckland) |
Abstract: | In this paper we show that fully likelihood-based estimation and comparison of multivariate stochastic volatility (SV) models can be easily performed via a freely available Bayesian software called WinBUGS. Moreover, we introduce to the literature several new specifications which are natural extensions to certain existing models, one of which allows for time-varying correlation coefficients. Ideas are illustrated by fitting, to bivariate time series data on weekly exchange rates, nine multivariate SV models, including specifications with Granger causality in volatility, time-varying correlations, heavy-tailed error distributions, additive factor structure, and multiplicative factor structure. Empirical results suggest that the most adequate specifications are those that allow for time-varying correlation coefficients. |
Keywords: | Multivariate stochastic volatility; Granger causality in volatility; Heavy-tailed distributions; Time varying correlations; Factors; MCMC; DIC. |
JEL: | C11 C15 C30 G12 |
Date: | 2004–11 |
URL: | http://d.repec.org/n?u=RePEc:siu:wpaper:23-2004&r=ecm |
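To fix ideas, the following simulates a bivariate SV process with a time-varying correlation, in the spirit of the class of models compared above (an illustrative parameterisation, not the authors' exact specification; `mu`, `phi`, `sigma_eta` are length-2 arrays):

```python
import numpy as np

def simulate_bivariate_sv(T, mu, phi, sigma_eta, psi=0.98, seed=0):
    """Simulate a bivariate SV process with time-varying correlation:
        h_{j,t+1} = mu_j + phi_j (h_{j,t} - mu_j) + eta_{j,t}
        q_{t+1}   = psi q_t + u_t,  rho_t = tanh(q_t)  (keeps |rho_t| < 1)
        y_t ~ N(0, D_t R_t D_t),    D_t = diag(exp(h_t / 2))."""
    rng = np.random.default_rng(seed)
    h = np.tile(np.asarray(mu, float), (T, 1))
    q = np.zeros(T)
    y = np.empty((T, 2))
    for t in range(T - 1):
        h[t + 1] = mu + phi * (h[t] - mu) + rng.normal(0.0, sigma_eta)
        q[t + 1] = psi * q[t] + rng.normal(0.0, 0.1)
    for t in range(T):
        D = np.diag(np.exp(h[t] / 2))
        R = np.array([[1.0, np.tanh(q[t])], [np.tanh(q[t]), 1.0]])
        y[t] = rng.multivariate_normal(np.zeros(2), D @ R @ D)
    return y
```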
By: | Jan F. Kiviet (Universiteit van Amsterdam); Jerzy Niemczyk (Universiteit van Amsterdam) |
Abstract: | In practice structural equations are often estimated by least-squares, thus neglecting any simultaneity. This paper reveals why this may often be justifiable and when. Assuming data stationarity and existence of the first four moments of the disturbances we find the limiting distribution of the ordinary least-squares (OLS) estimator in a linear simultaneous equations model. In simple static and dynamic models we compare the asymptotic efficiency of this inconsistent estimator with that of consistent simple instrumental variable (IV) estimators and depict cases where -- due to relative weakness of the instruments or mildness of the simultaneity -- the inconsistent estimator is more precise. In addition, we examine by simulation to what extent these first-order asymptotic findings are reflected in finite samples, taking into account the non-existence of moments of the IV estimator. Dynamic visualization techniques enable us to appreciate differences in efficiency over a parameter space of much higher dimension than two, viz. in colored animated image sequences (which are not very effective in print, but much more so in live on-screen projection). |
Keywords: | efficiency of an inconsistent estimator; invalid instruments; simultaneity bias; weak instruments; 4D diagrams |
JEL: | C13 C15 C30 |
Date: | 2006–09–18 |
URL: | http://d.repec.org/n?u=RePEc:dgr:uvatin:20060078&r=ecm |
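The bias-precision trade-off described above is easy to reproduce. A minimal Monte Carlo sketch (illustrative parameterisation: `rho` controls the simultaneity, `pi` the instrument strength):

```python
import numpy as np

def mc_ols_vs_iv(n=100, reps=5000, beta=1.0, rho=0.2, pi=0.1, seed=0):
    """Compare inconsistent OLS with consistent just-identified IV in
        y = beta * x + u,   x = pi * z + v,   corr(u, v) = rho."""
    rng = np.random.default_rng(seed)
    ols, iv = np.empty(reps), np.empty(reps)
    for r in range(reps):
        z = rng.standard_normal(n)
        u = rng.standard_normal(n)
        v = rho * u + np.sqrt(1 - rho ** 2) * rng.standard_normal(n)
        x = pi * z + v
        y = beta * x + u
        ols[r] = (x @ y) / (x @ x)
        iv[r] = (z @ y) / (z @ x)
    for name, b in (("OLS", ols), ("IV", iv)):
        print(f"{name}: bias {b.mean() - beta:+.3f}, MSE {np.mean((b - beta) ** 2):.4f}")

mc_ols_vs_iv()  # with a weak instrument (pi small), biased OLS can beat IV on MSE
```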
By: | Richard G. Anderson; Hailong Qian; Robert H. Rasche |
Abstract: | In this paper, we examine the use of Box-Tiao's (1977) canonical correlation method as an alternative to likelihood-based inferences for vector error-correction models. It is now well-known that testing of cointegration ranks based on Johansen's (1995) ML-based method suffers from severe small-sample size distortions. Furthermore, the distributions of empirical economic and financial time series tend to display fat tails, heteroskedasticity and skewness that are inconsistent with the usual distributional assumptions of the likelihood-based approach. The test statistic based on Box-Tiao's canonical correlations shows promise as an alternative to Johansen's ML-based approach for testing the cointegration rank in VECM models. |
Keywords: | Econometric models; Panel analysis |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedlwp:2006-050&r=ecm |
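A simplified version of the canonical-correlation calculation underlying the Box-Tiao approach (the paper's statistic involves more structure; this just computes squared canonical correlations between the levels and their first lag):

```python
import numpy as np

def box_tiao_canonical_correlations(Y):
    """Squared canonical correlations between y_t and y_{t-1} for the (T, n)
    matrix of levels Y. Values near 1 indicate near-unit-root directions;
    small values indicate stationary (cointegrating) combinations."""
    y0 = Y[1:] - Y[1:].mean(axis=0)
    y1 = Y[:-1] - Y[:-1].mean(axis=0)
    S00, S11, S01 = y0.T @ y0, y1.T @ y1, y0.T @ y1
    M = np.linalg.solve(S00, S01) @ np.linalg.solve(S11, S01.T)
    return np.sort(np.linalg.eigvals(M).real)
```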
By: | Fabio C. Bagliano; Claudio Morana |
Abstract: | In this paper a new approach to factor vector autoregressive estimation, based on Stock and Watson (2005), is introduced. Relative to the Stock-Watson approach, the proposed method has the advantage of allowing for a more clear-cut interpretation of the global factors, as well as for the identification of all idiosyncratic shocks. Moreover, it shares with the Stock-Watson approach the advantage of using an iterated procedure in estimation, recovering full efficiency asymptotically, and also allowing the imposition of appropriate restrictions concerning the lack of Granger causality of the variables versus the factors. Finally, relative to other available methods, our modelling approach has the advantage of allowing for the joint modelling of all variables, without resorting to long-run forcing hypotheses. An application to large-scale macroeconometric modelling is also provided. |
Keywords: | dynamic factor models, vector autoregressions, principal components analysis. |
JEL: | C32 G1 G15 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:cca:wpaper:28&r=ecm |
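As a point of reference for the approach above, a bare-bones factor VAR: principal-component factors extracted from a standardized panel, then a VAR fit to them with statsmodels (the paper's iterated estimator and identification restrictions are not reproduced here):

```python
import numpy as np
from statsmodels.tsa.api import VAR

def factor_var(X, n_factors, lags=2):
    """Extract principal-component factors from the standardized panel X
    (T x N) and fit a VAR to them; returns the fitted statsmodels results."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    factors = Z @ Vt[:n_factors].T        # (T x r) estimated factors
    return VAR(factors).fit(lags)

# res = factor_var(panel, n_factors=3); print(res.summary())
```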
By: | A. Colin Cameron; Jonah B. Gelbach; Douglas L. Miller |
Abstract: | In this paper we propose a new variance estimator for OLS as well as for nonlinear estimators such as logit, probit and GMM, that provides cluster-robust inference when there is two-way or multi-way clustering that is non-nested. The variance estimator extends the standard cluster-robust variance estimator or sandwich estimator for one-way clustering (e.g. Liang and Zeger (1986), Arellano (1987)) and relies on similar, relatively weak distributional assumptions. Our method is easily implemented in statistical packages, such as Stata and SAS, that already offer cluster-robust standard errors when there is one-way clustering. The method is demonstrated by a Monte Carlo analysis for a two-way random effects model; a Monte Carlo analysis of a placebo law that extends the state-year effects example of Bertrand et al. (2004) to two dimensions; and by application to two studies in the empirical public/labor literature where two-way clustering is present. |
JEL: | C12 C21 C23 |
Date: | 2006–09 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberte:0327&r=ecm |
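For the OLS case, the two-way estimator has a simple add-and-subtract form: the one-way cluster sandwich by the first dimension, plus the one-way sandwich by the second, minus the sandwich clustered on their intersection. A minimal sketch (small-sample corrections omitted):

```python
import numpy as np

def cluster_sandwich(X, u, groups):
    """One-way cluster-robust variance for OLS: (X'X)^{-1} M (X'X)^{-1}."""
    bread = np.linalg.inv(X.T @ X)
    meat = np.zeros((X.shape[1], X.shape[1]))
    for g in np.unique(groups):
        s = (X[groups == g] * u[groups == g, None]).sum(axis=0)
        meat += np.outer(s, s)
    return bread @ meat @ bread

def twoway_cluster_var(X, y, g1, g2):
    """Two-way cluster-robust variance: V = V(g1) + V(g2) - V(g1 ∩ g2)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    u = y - X @ beta
    g12 = np.unique(np.column_stack([g1, g2]), axis=0, return_inverse=True)[1]
    return (cluster_sandwich(X, u, g1) + cluster_sandwich(X, u, g2)
            - cluster_sandwich(X, u, g12))
```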
By: | Clive Bowsher (Nuffield College, University of Oxford); Roland Meeks (Nuffield College, University of Oxford) |
Abstract: | Functional Signal plus Noise (FSN) models are proposed for analysing the dynamics of a large cross-section of yields or asset prices in which contemporaneous observations are functionally related. The FSN models are used to forecast high-dimensional yield curves for US Treasury bonds at the one-month-ahead horizon. The models achieve large reductions in mean square forecast errors relative to a random walk for yields and readily dominate both the Diebold and Li (2006) and random walk forecasts across all maturities studied. We show that the Expectations Theory (ET) of the term structure completely determines the conditional mean of any zero-coupon yield curve. This enables a novel evaluation of the ET in which its one-step-ahead forecasts are compared with those of rival methods such as the FSN models, with the results strongly supporting the growing body of empirical evidence against the ET. Yield spreads do provide important information for forecasting the yield curve, especially in the case of shorter maturities, but not in the manner prescribed by the Expectations Theory. |
Keywords: | Yield curve, term structure, expectations theory, FSN models, functional time series, forecasting, state space form, cubic spline. |
JEL: | C33 C51 C53 E47 G12 |
Date: | 2006–10–02 |
URL: | http://d.repec.org/n?u=RePEc:nuf:econwp:0612&r=ecm |
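The 'functional' view treats each date's yield curve as a smooth function of maturity observed with noise. A toy illustration with an interpolating cubic spline (the paper instead embeds a smoothing spline in a state-space form; the maturities and yields below are made-up numbers):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Made-up yields (percent) at a handful of maturities (months).
maturities = np.array([1.0, 3, 6, 12, 24, 60, 120])
obs_yields = np.array([4.9, 5.0, 5.1, 5.2, 5.3, 5.5, 5.6])

# Treat the curve as a smooth function of maturity and evaluate it on a
# fine grid -- the functional, cross-sectionally coherent view of the data.
curve = CubicSpline(maturities, obs_yields, bc_type="natural")
grid = np.linspace(1, 120, 200)
smooth_curve = curve(grid)
```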
By: | Jason Allen (Bank of Canada; Queen's University) |
Abstract: | The purpose of this paper is to investigate, using Monte Carlo methods, whether or not Hall's (2000) centered test of overidentifying restrictions for parameters estimated by Generalized Method of Moments (GMM) is more powerful, once the test is size-adjusted, than the standard test introduced by Hansen (1982). The Monte Carlo evidence shows that very little size-adjusted power is gained over the standard uncentered calculation. Empirical examples using Epstein and Zin (1991) preferences demonstrate that the centered and uncentered tests sometimes lead to different conclusions about model specification. |
Keywords: | Size, Power, GMM, Overidentifying restrictions |
JEL: | C15 C52 G12 |
Date: | 2005–08 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1091&r=ecm |
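The two tests differ only in the long-run covariance estimate plugged into the J statistic: Hansen's uses the uncentered second-moment matrix of the moment contributions, while Hall's centers them at their sample mean first. A minimal sketch for the i.i.d. case (no HAC weighting):

```python
import numpy as np

def j_statistics(G):
    """Uncentered (Hansen 1982) and centered (Hall 2000) J statistics from an
    n x q matrix G of moment contributions evaluated at the GMM estimate.
    Both are n * gbar' S^{-1} gbar; they differ only in whether S centers
    the moments at their sample mean."""
    n = G.shape[0]
    gbar = G.mean(axis=0)
    S_unc = G.T @ G / n                          # uncentered second-moment matrix
    S_cen = (G - gbar).T @ (G - gbar) / n        # centered covariance matrix
    J_unc = n * gbar @ np.linalg.solve(S_unc, gbar)
    J_cen = n * gbar @ np.linalg.solve(S_cen, gbar)
    return J_unc, J_cen   # both chi2(q - k) under correct specification
```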
By: | Patrick Marsh |
Abstract: | This paper generalizes the goodness-of-fit tests of Claeskens and Hjort (2004) and Marsh (2006) to the case where the hypothesis specifies only a family of distributions. Data-driven versions of these tests are based upon the Akaike and Bayesian selection criteria. The asymptotic distributions of these tests are shown to be standard, unlike those of tests based upon the empirical distribution function. Moreover, numerical evidence suggests that under the null hypothesis performance is very similar to that of tests such as the Kolmogorov-Smirnov or Anderson-Darling. However, in terms of power under the alternative, the proposed tests seem to have a consistent and significant advantage. |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:yor:yorken:06/20&r=ecm |
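To convey the flavor of a data-driven goodness-of-fit test with a standard limiting distribution, here is a Ledwina-style smooth test of uniformity with the expansion order chosen by a BIC-type criterion (a related construction, not the paper's exact statistic; `u` would be probability integral transforms under the hypothesised family):

```python
import numpy as np
from scipy.special import eval_legendre
from scipy.stats import chi2

def data_driven_smooth_test(u, max_order=10):
    """Smooth test of H0: u ~ U(0,1), with the number of orthonormal Legendre
    components chosen by a BIC-type criterion."""
    n = len(u)
    x = 2.0 * np.asarray(u) - 1.0                      # map to [-1, 1]
    stats = []
    for k in range(1, max_order + 1):
        b = [np.sqrt(2 * j + 1) * eval_legendre(j, x).mean()
             for j in range(1, k + 1)]
        stats.append(n * float(np.sum(np.square(b))))
    k_hat = 1 + int(np.argmax([s - k * np.log(n)
                               for k, s in enumerate(stats, start=1)]))
    return stats[k_hat - 1], chi2.sf(stats[k_hat - 1], k_hat)  # approx. p-value
```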
By: | da Silva Lopes, Artur C. B. |
Abstract: | In this paper it is demonstrated by simulation that, contrary to a widely held belief, pure seasonal mean shifts - i.e., seasonal structural breaks which affect only the deterministic seasonal cycle - really do matter for Dickey-Fuller long-run unit root tests. |
Keywords: | unit roots; seasonality; Dickey-Fuller tests; structural breaks |
JEL: | C5 C22 |
Date: | 2005–10–15 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:125&r=ecm |
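The paper's point is easily checked by simulation: generate a stationary quarterly series whose deterministic seasonal pattern shifts at mid-sample and see how the Dickey-Fuller test behaves. A minimal sketch with made-up magnitudes (illustrative of the design, not a replication):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def rejection_rate(T=200, reps=1000, seed=0):
    """Share of 5%-level ADF rejections for a stationary quarterly series
    whose deterministic seasonal pattern shifts at mid-sample."""
    rng = np.random.default_rng(seed)
    season = np.tile([1.0, -0.5, 0.5, -1.0], T // 4)
    new_season = np.tile([0.5, -1.0, 1.0, -0.5], T // 4)
    rej = 0
    for _ in range(reps):
        y = season.copy()
        y[T // 2:] = new_season[T // 2:]               # pure seasonal mean shift
        y += 0.5 * rng.standard_normal(T)              # stationary noise, no unit root
        rej += adfuller(y, regression="c")[1] < 0.05   # [1] is the p-value
    return rej / reps
```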
By: | Soiliou Namoro; Wayne-Roy Gayle |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:pit:wpaper:251&r=ecm |
By: | William T. Gavin; Kevin L. Kliesen |
Abstract: | Decision makers, both public and private, use forecasts of economic growth and inflation to make plans and implement policies. In many situations, reasonably good forecasts can be made with simple rules of thumb that are extrapolations of a single data series. In principle, information about other economic indicators should be useful in forecasting a particular series like inflation or output. Including too many variables makes a model unwieldy, and not including enough can increase forecast error. A key problem is deciding which other series to include. Recently, studies have shown that Dynamic Factor Models (DFMs) may provide a general solution to this problem. The key is that these models use a large data set to extract a few common factors (thus, the term 'data-rich'). This paper uses a monthly DFM to forecast inflation and output growth at horizons of 3, 12 and 24 months ahead. These forecasts are then compared to simple forecasting rules. |
Keywords: | Inflation (Finance); Forecasting |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedlwp:2006-054&r=ecm |
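A bare-bones diffusion-index forecast in the Stock-Watson tradition the paper builds on: principal-component factors from the predictor panel, then a direct h-step-ahead regression (array shapes and variable names are assumptions for illustration):

```python
import numpy as np

def diffusion_index_forecast(X, y, h, r):
    """Extract r principal components from the predictor panel X (T x N),
    regress y_{t+h} on the current factors and y_t, and forecast from the
    last observation."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    F = Z @ Vt[:r].T                              # factors (T x r)
    W = np.column_stack([np.ones(len(y)), F, y])  # regressors dated t
    coef = np.linalg.lstsq(W[:-h], y[h:], rcond=None)[0]  # y_{t+h} on W_t
    return W[-1] @ coef                           # h-step-ahead forecast
```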
By: | Woodcock, Simon; Benedetto, Gary |
Abstract: | One approach to limiting disclosure risk in public-use microdata is to release multiply-imputed, partially synthetic data sets. These are data on actual respondents, but with confidential data replaced by multiply-imputed synthetic values. A mis-specified imputation model can invalidate inferences because the distribution of synthetic data is completely determined by the model used to generate them. We present two practical methods of generating synthetic values when the imputer has only limited information about the true data generating process. One is applicable when the true likelihood is known up to a monotone transformation. The second requires only limited knowledge of the true likelihood, but nevertheless preserves the conditional distribution of the confidential data, up to sampling error, on arbitrary subdomains. Our method maximizes data utility and minimizes incremental disclosure risk up to posterior uncertainty in the imputation model and sampling error in the estimated transformation. We validate the approach with a simulation study and an application to a large linked employer-employee database. |
Keywords: | statistical disclosure limitation; confidentiality; privacy; multiple imputation; partially synthetic data |
JEL: | C4 C81 |
Date: | 2006–09 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:155&r=ecm |
By: | Mehmet Caner |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:pit:wpaper:212&r=ecm |
By: | Vargas, Gregorio A. |
Abstract: | The Block DCC model for determining dynamic correlations within and between groups of financial asset returns is extended to account for asymmetric effects. Simulation results show that the Asymmetric Block DCC model is competitive in in-sample forecasting and performs better than alternative DCC models in out-of-sample forecasting of conditional correlation in the presence of asymmetric effects between blocks of asset returns. Empirical results demonstrate that the model is able to capture the asymmetries in conditional correlations of some blocks of currencies in East Asia in the turbulent years of the late 1990s. |
Keywords: | asymmetric effect; block dynamic conditional correlation; multivariate GARCH |
JEL: | C32 G10 C5 |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:189&r=ecm |
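The asymmetric term enters the correlation recursion through the negative part of the standardized residuals. A sketch of the scalar asymmetric DCC updating step (Cappiello-Engle-Sheppard form; the Block DCC model above additionally restricts parameters within and between blocks, and positive-definiteness conditions are not enforced here):

```python
import numpy as np

def asymmetric_dcc_path(U, a, b, g):
    """Conditional correlation path from a scalar asymmetric DCC recursion:
      Q_t = (1-a-b) S - g N + a u_{t-1} u_{t-1}' + g n_{t-1} n_{t-1}' + b Q_{t-1},
    with n_t = min(u_t, 0) capturing the asymmetric (bad-news) term.
    U: (T, n) standardized residuals."""
    T, n = U.shape
    Nmat = np.where(U < 0, U, 0.0)
    S = U.T @ U / T
    Nbar = Nmat.T @ Nmat / T
    Q = S.copy()
    R = np.empty((T, n, n))
    for t in range(T):
        d = 1.0 / np.sqrt(np.diag(Q))
        R[t] = Q * np.outer(d, d)        # rescale Q to a correlation matrix
        u, m = U[t], Nmat[t]
        Q = ((1 - a - b) * S - g * Nbar
             + a * np.outer(u, u) + g * np.outer(m, m) + b * Q)
    return R
```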
By: | Mehmet Caner |
Date: | 2005–01 |
URL: | http://d.repec.org/n?u=RePEc:pit:wpaper:211&r=ecm |
By: | Patrick Marsh |
Abstract: | In the leading case of a unit root, the problem of testing on a lagged dependent variable is characterized by a nuisance parameter which is present only under the alternative (see Andrews and Ploberger (1994)). This has proven a barrier to the construction of optimal tests; moreover, in the absence of such tests it is impossible to objectively assess the absolute power properties of existing ones. Indeed, feasible tests based upon the optimality criteria used here are found to have numerically superior power properties to both the original Dickey and Fuller (1981) statistics and the efficient detrended versions suggested by Elliott, Rothenberg and Stock (1996) and analysed in Burridge and Taylor (2000). |
Keywords: | Nuisance parameter, invariant test, unit root |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:yor:yorken:06/19&r=ecm |
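For reference, one of the benchmark statistics against which the paper's feasible tests are compared is the standard (non-augmented) Dickey-Fuller t-statistic; a minimal implementation:

```python
import numpy as np

def df_tstat(y):
    """Dickey-Fuller t-statistic for the null of a unit root: regress the
    first difference of y on a constant and the lagged level."""
    dy = np.diff(y)
    X = np.column_stack([np.ones(len(dy)), y[:-1]])
    beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
    u = dy - X @ beta
    s2 = u @ u / (len(dy) - 2)
    se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1] / se   # compare to Dickey-Fuller, not standard normal, critical values
```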
By: | Mehmet Caner |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:pit:wpaper:210&r=ecm |
By: | Soiliou Namoro; Martin Burda; Wayne-Roy Gayle |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:pit:wpaper:252&r=ecm |
By: | Guglielmo Maria Caporale; Christoph Hanck |
Abstract: | We analyse whether tests of PPP exhibit erratic behaviour (as previously reported by Caporale et al., 2003) even when (possibly unwarranted) homogeneity and proportionality restrictions are not imposed, and trivariate cointegration (stage-three) tests between the nominal exchange rate, domestic and foreign price levels are carried out (instead of stationarity tests on the real exchange rate, as in stage-two tests). We examine the US dollar real exchange rate vis-à-vis 21 other currencies over a period of more than a century, and find that stage-three tests produce similar results to those for stage-two tests, namely the former also behave erratically. This confirms that neither of these traditional approaches to testing for PPP can settle the question of whether PPP holds. |
Keywords: | Purchasing Power Parity (PPP), real exchange rate, cointegration, stationarity, parameter instability |
JEL: | C12 C22 F31 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_1811&r=ecm |
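A stage-three test of the sort described above checks for cointegration among the log nominal exchange rate and the two log price levels without imposing proportionality. A minimal sketch using the Engle-Granger test from statsmodels (chosen for brevity; the paper uses system cointegration tests, and `s`, `p`, `pstar` are assumed inputs):

```python
import numpy as np
from statsmodels.tsa.stattools import coint

def ppp_stage_three(s, p, pstar):
    """Engle-Granger cointegration test of the log nominal exchange rate s on
    the log price levels (p, p*), leaving the cointegrating vector free rather
    than imposing s = p - p*. Small p-values indicate cointegration,
    consistent with a weak form of PPP."""
    tstat, pvalue, _ = coint(s, np.column_stack([p, pstar]))
    return pvalue
```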
By: | Stan du Plessis (Department of Economics, University of Stellenbosch) |
Abstract: | This paper argues that the sometimes-conflicting results of a modern revisionist literature on data mining in econometrics reflect different approaches to solving the central problem of model uncertainty in a science of non-experimental data. The literature has entered an exciting phase with theoretical development, methodological reflection, considerable technological strides on the computing front and interesting empirical applications providing momentum for this branch of econometrics. The organising principle for this discussion of data mining is a philosophical spectrum that sorts the various econometric traditions according to their epistemological assumptions about the underlying data-generating process (DGP), starting with nihilism at one end and reaching claims of encompassing the DGP at the other; call it the DGP-spectrum. In the course of exploring this spectrum the reader will encounter various Bayesian, specific-to-general (S-G) as well as general-to-specific (G-S) methods. To set the stage for this exploration the paper starts with a description of data mining, its potential risks, and a short section on potential institutional safeguards against these problems. |
Keywords: | Data mining, model selection, automated model selection, general to specific modelling, extreme bounds analysis, Bayesian model selection |
JEL: | C11 C50 C51 C52 C87 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:sza:wpaper:wpapers29&r=ecm |
By: | Mehmet Caner |
Date: | 2005–01 |
URL: | http://d.repec.org/n?u=RePEc:pit:wpaper:208&r=ecm |
By: | Mehmet Caner |
Date: | 2005–01 |
URL: | http://d.repec.org/n?u=RePEc:pit:wpaper:209&r=ecm |
By: | Patrick J. Kehoe |
Abstract: | The common approach to evaluating a model in the structural VAR literature is to compare the impulse responses from structural VARs run on the data to the theoretical impulse responses from the model. The Sims-Cogley-Nason approach instead compares the structural VARs run on the data to identical structural VARs run on model-generated data of the same length as the actual data. Chari, Kehoe, and McGrattan (2006) argue that the inappropriate comparison made by the common approach is the root of the problems in the SVAR literature. In practice, the problems can be solved simply. Switching from the common approach to the Sims-Cogley-Nason approach basically involves changing a few lines of computer code and a few lines of text. This switch will vastly increase the value of the structural VAR literature for economic theory. |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedmsr:379&r=ecm |
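The mechanics of the switch are exactly as described: apply the same SVAR to the actual data and to many model-simulated samples of the same length, and compare those. A sketch (Cholesky identification for concreteness; `simulate_model` is a user-supplied, hypothetical draw from the theoretical model):

```python
import numpy as np
from statsmodels.tsa.api import VAR

def svar_irf(data, lags=4, periods=20):
    """Cholesky-identified impulse responses from a VAR fit to data (T x n)."""
    return VAR(data).fit(lags).irf(periods).orth_irfs

def sims_cogley_nason(actual, simulate_model, reps=200, lags=4, periods=20):
    """Run the *same* SVAR on the actual data and on many model-simulated
    samples of the same length; compare the resulting impulse responses.
    simulate_model(T) must return a (T, n) draw from the theoretical model."""
    T = actual.shape[0]
    irf_actual = svar_irf(actual, lags, periods)
    irf_model = np.mean(
        [svar_irf(simulate_model(T), lags, periods) for _ in range(reps)], axis=0)
    return irf_actual, irf_model  # compare these, not data-SVAR vs. theory IRFs
```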
By: | Mehmet Caner; Dan Berkowitz; Ying Fang |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:pit:wpaper:207&r=ecm |