nep-ecm New Economics Papers
on Econometrics
Issue of 2008‒01‒12
twenty papers chosen by
Sune Karlsson
Orebro University

  1. Panel Unit Root Tests in the Presence of a Multifactor Error Structure By M. Hashem Pesaran; L. Vanessa Smith; Takashi Yamagata
  2. Testing for the presence of noise in long memory processes [in Japanese] By Keiko Yamaguchi
  3. Comparison of time series with unequal length By Caiado, Jorge; Crato, Nuno; Peña, Daniel
  4. A Consistent Nonparametric Test for Causality in Quantile By Kiho Jeong; Wolfgang Härdle
  5. On the Effect of Prior Assumptions in Bayesian Model Averaging with Applications to Growth Regression By Ley, Eduardo; Steel, Mark F.J.
  6. Exact Limit of the Expected Periodogram in the Unit-Root Case By Valle e Azevedo, João
  7. Value-at-Risk and Expected Shortfall when there is long range dependence. By Wolfgang Härdle; Julius Mungo
  8. Do we need time series econometrics? By Rao, B. Bhaskara; Singh, Rup; Kumar, Saten
  9. The Bayesian Additive Classification Tree Applied to Credit Risk Modelling By Junni L. Zhang; Wolfgang Härdle
  10. Sensitivity analysis and density estimation for finite-time ruin probabilities By Stéphane Loisel; Nicolas Privault
  11. Independent Component Analysis Via Copula Techniques By Ray-Bing Chen; Meihui Guo; Wolfgang Härdle; Shih-Feng Huang
  12. Nonparametric Estimation of the Costs of Non-Sequential Search By Jose Luis Moraga-Gonzalez; Zsolt Sandor; Matthijs R. Wildenbeest
  13. A Multivariate Band-Pass Filter By Valle e Azevedo, João
  14. Interpretation of the Effects of Filtering Integrated Time Series By Valle e Azevedo, João
  15. THRET: Threshold Regression with Endogenous Threshold Variables By Stengos, T.; Kourtellos, A.; Tan, C.M.
  16. Health Care Utilization and Self-Assessed Health: Specification of Bivariate Models Using Copulas By José M. R. Murteira; Óscar D. Lourenço
  17. Identifying common spectral and asymmetric features in stock returns By Caiado, Jorge; Crato, Nuno
  18. Joining Panel Data with Cross-Sections for Efficiency Gains: An Application to a Consumption Equation for Nicaragua By Randolph Bruno; Marco Stampini
  19. Identifying the evolution of stock markets stochastic structure after the euro By Caiado, Jorge; Crato, Nuno
  20. Parallelization of Matlab codes under Windows platform for Bayesian estimation: A Dynare application By Ivano Azzini; Riccardo Girardi; Marco Ratto

  1. By: M. Hashem Pesaran (CIMF, University of Cambridge, USC and IZA); L. Vanessa Smith (CFAP, University of Cambridge); Takashi Yamagata (University of York)
    Abstract: This paper extends the cross-sectionally augmented panel unit root test proposed by Pesaran (2007) to the case of a multifactor error structure. The basic idea is to exploit information regarding the unobserved factors that are shared by other time series in addition to the variable under consideration. Importantly, our test procedure only requires specification of the maximum number of factors, in contrast to other panel unit root tests based on principal components, which additionally require estimation of the number of factors as well as the factors themselves. Small sample properties of the proposed test are investigated by Monte Carlo experiments, which suggest that it controls size well in almost all cases, especially in the presence of serial correlation in the error term, in contrast to alternative test statistics. Empirical applications to Fisher’s inflation parity and real equity prices across different markets illustrate how the proposed test works in practice.
    Keywords: panel unit root tests, cross section dependence, multi-factor residual structure, Fisher inflation parity, real equity prices
    JEL: C12 C15 C22 C23
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp3254&r=ecm
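    A minimal sketch of the cross-section augmentation idea behind such tests (ours, in Python; not the authors' exact procedure, and note that critical values of CIPS-type statistics are nonstandard and tabulated in the paper):

      import numpy as np

      def cadf_tstat(y, ybar):
          # t-statistic on y_{t-1} in a CADF regression (Pesaran-2007 style):
          # dy_t = a + b*y_{t-1} + c*ybar_{t-1} + d*dybar_t + e_t,
          # where ybar proxies the unobserved common factor(s)
          dy, dybar = np.diff(y), np.diff(ybar)
          X = np.column_stack([np.ones(len(dy)), y[:-1], ybar[:-1], dybar])
          beta, *_ = np.linalg.lstsq(X, dy, rcond=None)
          resid = dy - X @ beta
          s2 = resid @ resid / (len(dy) - X.shape[1])
          se = np.sqrt(s2 * np.linalg.inv(X.T @ X)[1, 1])
          return beta[1] / se

      # toy panel under the null: N random walks loading on one common I(1) factor
      rng = np.random.default_rng(0)
      N, T = 20, 200
      f = np.cumsum(rng.normal(size=T))
      Y = np.cumsum(rng.normal(size=(N, T)), axis=1) + 0.5 * f
      ybar = Y.mean(axis=0)                   # cross-section average as factor proxy
      cips = np.mean([cadf_tstat(Y[i], ybar) for i in range(N)])
      print("CIPS-type statistic:", round(cips, 3))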
  2. By: Keiko Yamaguchi
    Abstract: In this paper, we propose a new test for the presence of noise in the long-memory signal plus white noise model. A similar test was proposed by Sun and Phillips (2003), so we conduct simulation experiments to examine and compare the finite sample properties of the two tests. It is well known that realized volatility (RV) follows a long memory process, so we apply the tests to RVs calculated from the 1- and 5-minute returns of the Nikkei 225 stock index.
    Keywords: long-term memory, realized volatility, observation error, semi-parametric, local Whittle model
    JEL: C22
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:hst:hstdps:d07-230&r=ecm
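    For readers unfamiliar with the semiparametric toolkit involved, here is a sketch of the plain local Whittle estimator of the memory parameter d (Robinson's Gaussian semiparametric estimator, the building block that noise-robust procedures modify; this is not the authors' test itself):

      import numpy as np
      from scipy.optimize import minimize_scalar

      def local_whittle_d(x, m):
          # local Whittle estimate of d from the first m periodogram ordinates
          n = len(x)
          w = 2 * np.pi * np.arange(1, m + 1) / n          # Fourier frequencies
          I = np.abs(np.fft.fft(x - x.mean())[1:m + 1]) ** 2 / (2 * np.pi * n)
          def objective(d):
              return np.log(np.mean(I * w ** (2 * d))) - 2 * d * np.mean(np.log(w))
          return minimize_scalar(objective, bounds=(-0.49, 0.99), method="bounded").x

      # toy ARFIMA(0,d,0): x = (1-L)^{-d} eps via truncated fractional summation
      rng = np.random.default_rng(1)
      n, d_true = 2048, 0.35
      eps = rng.normal(size=n)
      psi = np.ones(n)
      for k in range(1, n):
          psi[k] = psi[k - 1] * (k - 1 + d_true) / k       # (1-L)^{-d} weights
      x = np.convolve(eps, psi)[:n]
      print("estimated d:", round(local_whittle_d(x, m=n // 8), 3))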
  3. By: Caiado, Jorge; Crato, Nuno; Peña, Daniel
    Abstract: The comparison and classification of time series is an important issue in practical time series analysis. For these purposes, various methods have been proposed in the literature, but all have shortcomings, especially when the observed time series have different sample sizes. In this paper, we propose spectral domain methods for handling time series of unequal length. The methods make the spectral estimates comparable by producing statistics at the same frequencies. A first sensible approach consists of zero-padding the shorter time series in order to increase the corresponding number of periodogram ordinates. We show that this works well provided the sample sizes are not very different, but does not give good results when the time series lengths are very unbalanced. For this latter case, we study some periodogram-based comparison methods and construct a test. Both the methods and the test display reasonable properties for series of any length. Additionally, and for reference, we develop a parametric comparison method. The procedures are assessed by a Monte Carlo simulation study. As an illustrative example, a periodogram method is used to compare and cluster industrial production series of some developed countries.
    Keywords: Cluster analysis; Interpolated periodogram; Reduced periodogram; Spectral analysis; Time series; Zero-padding.
    JEL: C32 C0 C12
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:6605&r=ecm
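    A minimal sketch of the zero-padding device discussed in the abstract (our toy version, not the paper's exact statistics): padding the shorter series puts both periodograms on the same Fourier grid, after which any distance between them is well defined.

      import numpy as np

      def periodogram(x, n):
          # periodogram on the length-n Fourier grid (zero-padding if len(x) < n)
          x = np.asarray(x, float) - np.mean(x)
          return np.abs(np.fft.fft(x, n=n)[1:n // 2 + 1]) ** 2 / (2 * np.pi * len(x))

      def zero_padding_distance(x, y):
          n = max(len(x), len(y))
          px, py = periodogram(x, n), periodogram(y, n)
          px, py = px / px.sum(), py / py.sum()            # normalize scale away
          return np.sqrt(np.sum((px - py) ** 2))

      rng = np.random.default_rng(2)
      a = rng.normal(size=300).cumsum()                    # random walk, length 300
      b = rng.normal(size=200).cumsum()                    # random walk, length 200
      c = rng.normal(size=200)                             # white noise, length 200
      print(zero_padding_distance(a, b), zero_padding_distance(a, c))  # first is smaller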
  4. By: Kiho Jeong; Wolfgang Härdle
    Abstract: This paper proposes a nonparametric test of causality in quantile. Zheng (1998) proposed reducing the problem of testing a quantile restriction to one of testing a particular type of mean restriction in independent data. We extend Zheng’s approach to the case of dependent data, in particular to the test of Granger causality in quantile. The proposed test statistic is shown to have a second-order degenerate U-statistic as a leading term under the null hypothesis. Using the result of Fan and Li (1996) on the asymptotic normal distribution of general second-order degenerate U-statistics with weakly dependent data, we establish the asymptotic distribution of the test statistic for causality in quantile under a β-mixing (absolutely regular) process.
    Keywords: Granger Causality, Quantile, Nonparametric Test
    JEL: C14 C52
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008-007&r=ecm
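    The reduction at the heart of such tests (a sketch of the standard argument, not the paper's exact statement): let W_t collect the past of both series and Z_t the past of y alone. The null of no Granger causality from x to y at quantile \tau,

      H_0:\; Q_\tau(y_t \mid W_t) = Q_\tau(y_t \mid Z_t) \quad \text{a.s.},

    is, for a continuously distributed y_t, equivalent to a conditional mean restriction on an indicator,

      E\big[\,\mathbf{1}\{y_t \le q_\tau(Z_t)\} - \tau \mid W_t\,\big] = 0 \quad \text{a.s.},

    and a kernel-weighted sample analogue of this moment yields the degenerate U-statistic analysed in the paper.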
  5. By: Ley, Eduardo; Steel, Mark F.J.
    Abstract: We consider the problem of variable selection in linear regression models. Bayesian model averaging has become an important tool in empirical settings with large numbers of potential regressors and relatively limited numbers of observations. We examine the effect of a variety of prior assumptions on inference concerning model size, the posterior inclusion probabilities of regressors, and predictive performance. We illustrate these issues in the context of cross-country growth regressions using three datasets with 41 to 67 potential drivers of growth and 72 to 93 observations. Finally, we recommend priors for use in this and related contexts.
    Keywords: Model size; Model uncertainty; Posterior odds; Prediction; Prior odds; Robustness
    JEL: C11 O47
    Date: 2008–01–06
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:6637&r=ecm
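    To fix ideas, a compact sketch of BMA for variable selection under one common prior choice (Zellner's g-prior with g = n and a uniform model prior; the paper's point is precisely that results can be sensitive to such choices, and the toy data here are unrelated to the growth datasets):

      import numpy as np
      from itertools import combinations

      def log_ml(y, X, cols, g):
          # log marginal likelihood (up to a constant) under Zellner's g-prior:
          # 0.5*(n-1-k)*log(1+g) - 0.5*(n-1)*log(1 + g*(1 - R^2))
          n, k = len(y), len(cols)
          yc = y - y.mean()
          R2 = 0.0
          if k:
              Z = X[:, cols] - X[:, cols].mean(axis=0)
              b, *_ = np.linalg.lstsq(Z, yc, rcond=None)
              R2 = 1 - np.sum((yc - Z @ b) ** 2) / np.sum(yc ** 2)
          return 0.5 * (n - 1 - k) * np.log(1 + g) - 0.5 * (n - 1) * np.log(1 + g * (1 - R2))

      rng = np.random.default_rng(3)
      n, p = 80, 6
      X = rng.normal(size=(n, p))
      y = 1 + 2 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)
      models = [list(m) for r in range(p + 1) for m in combinations(range(p), r)]
      lml = np.array([log_ml(y, X, m, g=float(n)) for m in models])
      post = np.exp(lml - lml.max()); post /= post.sum()   # uniform model prior
      pip = [sum(post[j] for j, m in enumerate(models) if i in m) for i in range(p)]
      print("posterior inclusion probabilities:", np.round(pip, 2))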
  6. By: Valle e Azevedo, João
    Abstract: We derive the limit of the expected periodogram in the unit-root case under general conditions. This function is seen to be time-independent, thus sharing a fundamental property with its stationary-case counterpart. We discuss the consequences of this result for the frequency-domain interpretation of filtered integrated time series.
    Keywords: Periodogram; Unit root
    JEL: C22
    Date: 2007–09–21
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:6553&r=ecm
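    For reference, the objects involved (standard definitions only; the paper's limit result itself is not reproduced here): the periodogram of x_1, ..., x_T at the Fourier frequency \omega_j = 2\pi j / T is

      I_T(\omega_j) = \frac{1}{2\pi T}\,\Big|\sum_{t=1}^{T} x_t e^{-i\omega_j t}\Big|^2,

    and for an I(1) process the role of the spectral density is played by the pseudo-spectrum, e.g.

      f(\omega) = \frac{\sigma^2}{2\pi\,|1 - e^{-i\omega}|^2}

    for a random walk with innovation variance \sigma^2. The paper derives the exact limit of the expected periodogram in this setting and shows that it is time-independent.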
  7. By: Wolfgang Härdle; Julius Mungo
    Abstract: Empirical studies have shown that a large number of financial asset returns exhibit fat tails and are often characterized by volatility clustering and asymmetry. Another revealed stylized fact is long memory, or long-range dependence, in market volatility, with significant impact on the pricing and forecasting of market volatility. The implication is that models that accommodate long memory hold the promise of improved long-run volatility forecasts as well as accurate pricing of long-term contracts. Recent attention has also focused on whether long memory can affect the measurement of market risk in the context of Value-at-Risk (VaR). In this paper, we evaluate the Value-at-Risk (VaR) and Expected Shortfall (ESF) in financial markets under such conditions. We examine one equity portfolio, the British FTSE100, and three stocks of the German DAX index portfolio (Bayer, Siemens and Volkswagen). Classical VaR estimation methodology, such as the exponential moving average (EMA), as well as extensions to cases where long memory is an inherent characteristic of the system, are investigated. In particular, we estimate two long memory models, the Fractionally Integrated Asymmetric Power ARCH and the Hyperbolic GARCH, under different error distribution assumptions. Our results show that models that account for asymmetries in the volatility specification as well as a fractionally integrated parametrization of the volatility process perform better in predicting the one-step and five-step ahead VaR and ESF for short and long positions than short memory models. This suggests that for proper risk valuation of options, the degree of persistence should be investigated and appropriate models that incorporate such a characteristic be taken into account.
    Keywords: Backtesting, Value-at-Risk, Expected Shortfall, Long Memory, Fractional Integrated Volatility Models
    JEL: C14 C32 C52 C53 G12
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008-006&r=ecm
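    The last step of such an exercise, mapping a one-step-ahead volatility forecast into VaR and Expected Shortfall and counting violations, is easy to sketch (our illustration with Gaussian errors; the paper's point is that the volatility forecast itself should come from a long memory model such as FIAPARCH or HYGARCH):

      import numpy as np
      from scipy.stats import norm

      def var_es_normal(sigma, alpha=0.01):
          # VaR and ES for a long position with zero-mean Gaussian errors:
          # VaR = -sigma*q_alpha, ES = sigma*phi(q_alpha)/alpha (positive losses)
          q = norm.ppf(alpha)
          return -sigma * q, sigma * norm.pdf(q) / alpha

      def violation_rate(returns, var_series):
          # share of days whose loss exceeded VaR; ~alpha if well calibrated
          return np.mean(-returns > var_series)

      rng = np.random.default_rng(4)
      sigma = 0.02
      r = sigma * rng.normal(size=1000)          # toy returns with known volatility
      var, es = var_es_normal(sigma)
      print("VaR:", round(var, 4), "ES:", round(es, 4))
      print("violation rate:", violation_rate(r, np.full(r.size, var)))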
  8. By: Rao, B. Bhaskara; Singh, Rup; Kumar, Saten
    Abstract: It is argued that whether or not there is a need for unit-root and cointegration based econometric methods is a methodological issue. An alternative is the econometrics of the London School of Economics (LSE) and Hendry approach, based on simpler classical methods of estimation and known as the general-to-specific method (GETS). Like all other methodological issues this one is also difficult to resolve, but we think that GETS is very useful.
    Keywords: GETS; Cointegration; Box-Jenkins equations; Hendry; Granger
    JEL: C0 C1
    Date: 2008–01–09
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:6627&r=ecm
  9. By: Junni L. Zhang; Wolfgang Härdle
    Abstract: We propose a new nonlinear classification method based on a Bayesian "sum-of-trees" model, the Bayesian Additive Classification Tree (BACT), which extends the Bayesian Additive Regression Tree (BART) method to the classification context. Like BART, the BACT is a Bayesian nonparametric additive model specified by a prior and a likelihood in which the additive components are trees, and it is fitted by an iterative MCMC algorithm. Each of the trees learns a different part of the underlying function relating the dependent variable to the input variables, but the sum of the trees offers a flexible and robust model. Through several benchmark examples, we show that the BACT has excellent performance. We apply the BACT technique to classify whether firms would be insolvent. This practical example is very important for banks to construct their risk profiles and operate successfully. We use the German Creditreform database and classify the solvency status of German firms based on financial statement information. We show that the BACT outperforms the logit model, CART and the Support Vector Machine in identifying insolvent firms.
    Keywords: Classification and Regression Tree, Financial Ratio, Misclassification Rate, Accuracy Ratio
    JEL: C14 C11 C45 C01
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008-003&r=ecm
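    BACT itself requires a full prior-plus-MCMC implementation, but the "sum of small trees" idea has a familiar non-Bayesian cousin, gradient boosting, which also fits an additive model whose components are trees. A sketch of that stand-in on synthetic data (this is not BACT; there is no prior and no posterior sampling):

      from sklearn.datasets import make_classification
      from sklearn.ensemble import GradientBoostingClassifier
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split

      X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
      Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
      clf = GradientBoostingClassifier(n_estimators=200, max_depth=3).fit(Xtr, ytr)
      print("accuracy:", accuracy_score(yte, clf.predict(Xte)))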
  10. By: Stéphane Loisel (SAF - EA2429 - Laboratoire de Science Actuarielle et Financière - Université Claude Bernard - Lyon I); Nicolas Privault (Department of Mathematics - City University of Hong Kong)
    Abstract: The goal of this paper is to obtain probabilistic representation formulas that are suitable for the numerical computation of the (possibly non-continuous) density functions of infima of reserve processes commonly used in insurance. In particular we show, using Monte Carlo simulations, that these representation formulas perform better than standard finite difference methods. Our approach differs from standard Malliavin probabilistic representation techniques which generally require more smoothness on random variables, entailing the continuity of their density functions.
    Keywords: Ruin probability; Malliavin calculus; insurance; integration by parts
    Date: 2007–11
    URL: http://d.repec.org/n?u=RePEc:hal:papers:hal-00201347_v2&r=ecm
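    The baseline quantity under study, the finite-time ruin probability of a compound-Poisson reserve process, is straightforward to approximate by plain Monte Carlo (a sketch with illustrative parameters; the paper's contribution is the probabilistic representation of the density and its sensitivities, not this brute-force estimate):

      import numpy as np

      def ruin_prob_mc(u, c, lam, mean_claim, T, n_sim=20000, seed=0):
          # reserve R_t = u + c*t - S_t, with S_t a compound Poisson sum of
          # exponential claims arriving at rate lam; ruin iff R_t < 0 before T
          rng = np.random.default_rng(seed)
          ruined = 0
          for _ in range(n_sim):
              n = rng.poisson(lam * T)
              times = np.sort(rng.uniform(0, T, size=n))   # arrivals given N(T)=n
              claims = rng.exponential(mean_claim, size=n)
              reserve = u + c * times - np.cumsum(claims)  # level just after each claim
              ruined += bool(n) and reserve.min() < 0
          return ruined / n_sim

      print(ruin_prob_mc(u=10.0, c=1.2, lam=1.0, mean_claim=1.0, T=50.0))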
  11. By: Ray-Bing Chen; Meihui Guo; Wolfgang Härdle; Shih-Feng Huang
    Abstract: Independent component analysis (ICA) is a modern factor analysis tool developed in the last two decades. Given p-dimensional data, we search for the linear combination of the data that creates (almost) independent components. Here copulae are used to model the p-dimensional data, and independent components are then found by optimizing the copula parameters. Based on this idea, we propose the COPICA method for finding independent components. We illustrate the method using several blind source separation examples, which are mathematically equivalent to ICA problems. Finally, the performances of our method and of FastICA are compared to explore the advantages of this method.
    Keywords: Blind source separation, Canonical maximum likelihood method, Givens rotation matrix, Signal/noise ratio, Simulated annealing algorithm
    JEL: C01 C13 C14 C63
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2008-004&r=ecm
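    A two-dimensional toy version of the rotation idea (ours: after whitening, the remaining demixing freedom is a Givens rotation, and we grid-search the angle; as the contrast we use a classic kurtosis-based non-Gaussianity score standing in for COPICA's copula likelihood, which the authors optimize by simulated annealing):

      import numpy as np
      from scipy.stats import kurtosis

      def whiten(X):
          # center and whiten so the covariance becomes the identity
          X = X - X.mean(axis=0)
          vals, vecs = np.linalg.eigh(np.cov(X.T))
          return X @ vecs @ np.diag(vals ** -0.5) @ vecs.T

      def demix_2d(X, n_grid=180):
          # pick the Givens rotation angle maximizing total |excess kurtosis|
          Z = whiten(X)
          best = (-np.inf, 0.0)
          for theta in np.linspace(0, np.pi / 2, n_grid, endpoint=False):
              c, s = np.cos(theta), np.sin(theta)
              S = Z @ np.array([[c, -s], [s, c]])
              best = max(best, (np.sum(np.abs(kurtosis(S, axis=0))), theta))
          c, s = np.cos(best[1]), np.sin(best[1])
          return Z @ np.array([[c, -s], [s, c]])

      rng = np.random.default_rng(5)
      sources = np.column_stack([rng.uniform(-1, 1, 5000), rng.laplace(size=5000)])
      X = sources @ np.array([[1.0, 0.4], [0.6, 1.0]])   # unknown mixing
      S_hat = demix_2d(X)                                # recovered up to sign/order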
  12. By: Jose Luis Moraga-Gonzalez (University of Groningen and CESifo); Zsolt Sandor (Universidad Carlos III de Madrid); Matthijs R. Wildenbeest (Department of Business Economics and Public Policy, Indiana University Kelley School of Business)
    Abstract: We study a consumer non-sequential search oligopoly model with search cost heterogeneity. We first prove that an equilibrium in mixed strategies always exists. We then examine the nonparametric identification and estimation of search costs. We find that the sequence of points on the support of the search cost distribution that can be identified converges to zero as the number of firms increases. As a result, when the econometrician has price data from only one market, the search cost distribution cannot be identified accurately at quantiles other than the lowest. To overcome this pitfall, we propose a richer framework in which the researcher has price data from many markets with the same underlying search cost distribution. We provide conditions under which pooling the data allows for identification of the search cost distribution at all points of its support. We estimate the search cost density function directly by a semi-nonparametric density estimator whose parameters are chosen to maximize the joint likelihood corresponding to all the markets. A Monte Carlo study shows the advantages of the new approach, and an application using a data set of online prices for memory chips is presented.
    Keywords: consumer search, oligopoly, search costs, semi-nonparametric estimation
    JEL: C14 D43 D83 L13
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:iuk:wpaper:2007-20&r=ecm
  13. By: Valle e Azevedo, João
    Abstract: We develop a multivariate filter which is an optimal (in the mean squared error sense) approximation to the ideal filter that isolates a specified range of fluctuations in a time series, e.g., business cycle fluctuations in macroeconomic time series. This requires knowledge of the true second-order moments of the data; otherwise, these can be estimated, and we show empirically that the method still leads to relevant improvements of the extracted signal, especially at the endpoints of the sample. Our filter is an extension of the univariate filter developed by Christiano and Fitzgerald (2003). Specifically, we allow an arbitrary number of covariates to be employed in the estimation of the signal. We illustrate the application of the filter by constructing a business cycle indicator for the U.S. economy. The filter can additionally be used in any similar signal extraction problem demanding accurate real-time estimates.
    JEL: E32 C14 C22
    Date: 2008–01–02
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:6555&r=ecm
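    The univariate building block that the paper extends is readily available; a sketch of the Christiano-Fitzgerald approximation on a toy series (the multivariate version would, in addition, project on leads and lags of covariates):

      import numpy as np
      from statsmodels.tsa.filters.cf_filter import cffilter

      rng = np.random.default_rng(6)
      y = rng.normal(size=300).cumsum()                       # toy I(1) "macro" series
      cycle, trend = cffilter(y, low=6, high=32, drift=True)  # 6-32 period band
      print(cycle[:5])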
  14. By: Valle e Azevedo, João
    Abstract: We resort to a rigorous definition of the spectrum of an integrated time series in order to characterise the implications of applying linear filters to such series. We conclude that in the presence of integrated series the transfer function of the filters has exactly the same interpretation as in the covariance stationary case, contrary to what many authors suggest. This disagreement leads to different conclusions regarding the link between the original and the transformed fluctuations in the time series data, embodied in various unjustified criticisms of the application of detrending filters. Despite this, and given the frequency domain characteristics of filtered integrated macroeconomic series, we acknowledge that the choice of a particular detrending filter is far from a neutral task.
    Keywords: Unit roots; Band-pass filters; Pseudo-spectrum
    JEL: E32 C22
    Date: 2007–09–21
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:6574&r=ecm
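    The object whose interpretation is at issue (a standard relation, stated here for reference): a linear filter y_t = B(L) x_t with transfer function B(e^{-i\omega}) reweights the spectrum, or in the integrated case the pseudo-spectrum, as

      f_y(\omega) = \big|B(e^{-i\omega})\big|^2 f_x(\omega),

    so the squared gain |B(e^{-i\omega})|^2 indicates which frequencies of the original series are amplified or attenuated. The paper argues this reading carries over unchanged when f_x is a pseudo-spectrum.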
  15. By: Stengos, T.; Kourtellos, A.; Tan, C.M.
    Date: 2008
    URL: http://d.repec.org/n?u=RePEc:gue:guelph:2008-1&r=ecm
  16. By: José M. R. Murteira; Óscar D. Lourenço
    Abstract: The discernment of relevant factors driving health care utilization constitutes one important research topic in Health Economics. This issue is frequently addressed through specification of regression models for health care use (y, often measured by the number of doctor visits) including, among other covariates, a measure of self-assessed health (sah). However, the exogeneity of sah has been questioned, due to the possible presence of unobservables influencing both y and sah, and because individuals’ health assessments may depend on the quantity of medical care received. This paper circumvents the potential endogeneity of sah and its associated consequences within conventional regression models (namely the need to find valid instruments) by adopting a full information approach, with specification of bivariate regression models for the discrete variables (sah, y). The approach is implemented with copula functions, which enable separate consideration of each variable's margin and of their dependence structure. The models are estimated by maximum likelihood, with cross-section data from the Portuguese National Health Survey of 1998/99. Results indicate that estimates of regression parameters do not vary much across the different copula models. The dependence parameter estimate is negative across joint models, which suggests evidence of simultaneity of (sah, y).
    Keywords: health care utilization; self-assessed health; endogeneity; discrete data; copulas.
    JEL: I10 C16 C51
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:yor:hectdg:07/27&r=ecm
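    A minimal sketch of the key likelihood ingredient for discrete margins (ours, with an illustrative Frank copula and Poisson margins; the paper compares several copula families on the survey data): for count variables, the joint probability mass is obtained from the copula by inclusion-exclusion over the unit rectangle.

      import numpy as np
      from scipy.stats import poisson

      def frank_cdf(u, v, theta):
          # Frank copula: C(u,v) = -log(1 + (e^{-t*u}-1)(e^{-t*v}-1)/(e^{-t}-1)) / t
          return -np.log1p(np.expm1(-theta * u) * np.expm1(-theta * v)
                           / np.expm1(-theta)) / theta

      def bivariate_pmf(y1, y2, mu1, mu2, theta):
          # P(Y1=y1, Y2=y2) with Poisson margins coupled by a Frank copula,
          # via the rectangle formula: C(F1,F2) - C(F1-,F2) - C(F1,F2-) + C(F1-,F2-)
          C = lambda a, b: frank_cdf(poisson.cdf(a, mu1), poisson.cdf(b, mu2), theta)
          return C(y1, y2) - C(y1 - 1, y2) - C(y1, y2 - 1) + C(y1 - 1, y2 - 1)

      # negative theta gives negative dependence, the sign the paper reports for (sah, y)
      print(bivariate_pmf(2, 1, mu1=2.0, mu2=1.5, theta=-2.0))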
  17. By: Caiado, Jorge; Crato, Nuno
    Abstract: This paper proposes spectral and asymmetric-volatility based methods for cluster analysis of stock returns. Using the information in both the periodogram of the squared returns and the estimated parameters of the TARCH equation, we compute a distance matrix for the stock returns. Clusters are formed by inspecting the hierarchical structure tree (or dendrogram) and the computed principal coordinates. We employ these techniques to investigate the similarities and dissimilarities between the "blue-chip" stocks used to compute the Dow Jones Industrial Average (DJIA) index. For reference, we also investigate the similarities among stock returns by mean and squared correlation methods.
    Keywords: Asymmetric effects; Cluster analysis; DJIA stock returns; Periodogram; Threshold ARCH model; Volatility
    JEL: C32 G10
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:6607&r=ecm
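    A compact sketch of the clustering pipeline on synthetic "returns" (ours; the paper's distance also uses estimated TARCH parameters, which are omitted here):

      import numpy as np
      from scipy.cluster.hierarchy import fcluster, linkage
      from scipy.spatial.distance import squareform

      def norm_pgram(x):
          # normalized periodogram (scale-free spectral shape)
          x = x - x.mean()
          p = np.abs(np.fft.fft(x)[1:len(x) // 2 + 1]) ** 2
          return p / p.sum()

      rng = np.random.default_rng(7)
      t = np.arange(500)
      series = [rng.standard_t(3, 500) * (1 + (i < 3) * np.abs(np.sin(t / 25)))
                for i in range(6)]            # first three share a volatility pattern
      P = np.array([norm_pgram(s ** 2) for s in series])   # spectra of squared returns
      D = np.sqrt(((P[:, None, :] - P[None, :, :]) ** 2).sum(-1))
      Z = linkage(squareform(D, checks=False), method="average")
      print(fcluster(Z, t=2, criterion="maxclust"))        # expected: two groups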
  18. By: Randolph Bruno (DARRT, University of Bologna and IZA); Marco Stampini (Sant’Anna School of Advanced Studies)
    Abstract: This paper explores how cross-sectional data can be exploited jointly with longitudinal data in order to increase estimation efficiency while properly tackling the potential bias due to unobserved individual characteristics. We propose an innovative procedure and illustrate its implementation by analysing the determinants of consumption in Nicaragua, based on data from three Living Standard Measurement Study surveys from 1993, 1998 and 2001. The last two rounds constitute an unbalanced longitudinal data set, while the first is a cross-section of different households. Under the assumption that the relationship between observed and unobserved characteristics is homogeneous across time, information from the longitudinal data is used to clean the bias in the unpaired sample. In a second step, the corrected unpaired observations are used jointly with the panel data. This reduces the standard errors of the estimated coefficients and may also increase their significance, which would otherwise be compromised by the limited variation in the short longitudinal data.
    Keywords: panel data, estimation efficiency, pseudo-panel, consumption model, Nicaragua
    JEL: C33 C42 I38
    Date: 2007–12
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp3231&r=ecm
  19. By: Caiado, Jorge; Crato, Nuno
    Abstract: Previous studies have investigated the comovements of international equity markets by using correlation, cointegration, common factor analysis, and other approaches. In this paper, we investigate the stochastic structure of major euro and non-euro area stock market series from 1994 to 2006, using cluster analysis techniques for time series. We use an interpolated-periodogram based metric for levels and squared returns in order to compute distances between the stock markets. This method captures the stochastic dependence structure of the time series and overcomes the shortcoming of unequal sample sizes across countries. The clusters of countries are formed from the dendrogram and the principal coordinates associated with the sample spectra of both the return and volatility series. The empirical results suggest that the cross-country groups have become considerably more homogeneous with the introduction of the euro as an electronic currency. For reference, we also explore the pairwise correlations among the series.
    Keywords: Cluster analysis; Euro area; International stock markets; Periodogram; Stock returns; Volatility
    JEL: C32 G15
    Date: 2008–01
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:6609&r=ecm
  20. By: Ivano Azzini; Riccardo Girardi; Marco Ratto (Euro-area Economy Modelling Centre)
    Abstract: In the Bayesian estimation of DSGE models with DYNARE (specifically the Matlab version for Windows), most of the computing time is devoted to the posterior estimation with the Metropolis algorithm. Usually, the Metropolis is run using multiple parallel chains, to allow more careful convergence testing. In this work we describe a way to parallelize the multiple-chain Metropolis algorithm within the DYNARE framework, running parallel chains on different processors to reduce computational time. To do so, we aimed at the easiest and cheapest possible strategy, i.e. the one which requires the lowest level of modification of the basic DYNARE routines and does not need any licensed toolbox. Although parallelizing the Metropolis-Hastings algorithm is intrinsically easy (the different chains are completely independent of each other and require no communication between them), Matlab does not allow concurrent programming, i.e. it does not support multithreading, without the Matlab Distributed Computing Toolbox. The general idea is that when the execution of the Metropolis is about to start, instead of running it inside the Matlab session, control of the execution is passed to the operating system, which does allow multithreading, and concurrent threads are launched on different processors. When the Metropolis computations are concluded, control is given back to the original Matlab session for post-processing of the Markov chain results.
    Keywords: North-South, DSGE models, DYNARE, Matlab, Windows, Parallel Computing
    JEL: C63
    Date: 2007
    URL: http://d.repec.org/n?u=RePEc:eem:wpaper:1&r=ecm
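    The same pattern is easy to show in Python rather than Matlab (a generic sketch of OS-level parallelism over independent chains, not the authors' DYNARE implementation; the target density here is a stand-in standard normal):

      import numpy as np
      from multiprocessing import Pool

      def log_post(theta):
          return -0.5 * theta ** 2                 # stand-in log posterior

      def run_chain(seed, n=50000, scale=1.0):
          # one random-walk Metropolis chain with its own RNG seed
          rng = np.random.default_rng(seed)
          theta, lp = 0.0, log_post(0.0)
          draws = np.empty(n)
          for i in range(n):
              prop = theta + scale * rng.normal()
              lp_prop = log_post(prop)
              if np.log(rng.uniform()) < lp_prop - lp:
                  theta, lp = prop, lp_prop
              draws[i] = theta
          return draws

      if __name__ == "__main__":
          with Pool(4) as pool:                    # four chains, four processes
              chains = pool.map(run_chain, [1, 2, 3, 4])
          print([round(float(c.mean()), 3) for c in chains])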

This nep-ecm issue is ©2008 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.