NEP: New Economics Papers on Econometrics
By: | Nickl, Richard; Pötscher, Benedikt M. |
Abstract: | Given a random sample from a parametric model, we show how indirect inference estimators based on appropriate nonparametric density estimators (i.e., simulation-based minimum distance estimators) can be constructed that, under mild assumptions, are asymptotically normal with variance-covariance matrix equal to the Cramér-Rao bound.
Keywords: | Simulation-based minimum distance estimation; indirect inference |
JEL: | C01 |
Date: | 2009–03 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:16608&r=ecm |
By: | Emma M. Iglesias; Oliver Linton |
Abstract: | We propose a method of estimating the Pareto tail thickness parameter of the unconditional distribution of a financial time series by exploiting the implications of a GJR-GARCH volatility model. The method is based on some recent work on the extremes of GARCH-type processes and extends the method proposed by Berkes, Horváth and Kokoszka (2003). We show that the estimator of tail thickness is consistent and converges at rate √T to a normal distribution (where T is the sample size), provided the model for the conditional variance is correctly specified as a GJR-GARCH. This is much faster than the convergence rate of the Hill estimator, since that procedure uses only a vanishing fraction of the sample. We also develop new specification tests based on this method and propose new alternative estimates of unconditional value at risk. We show in Monte Carlo simulations the advantages of our procedure in finite samples; finally, an application concludes the paper.
Keywords: | Pareto tail thickness parameter, GARCH-type models, Value-at-Risk, Extreme value theory, Heavy tails |
JEL: | C12 C13 C22 G11 G32 |
Date: | 2009–06 |
URL: | http://d.repec.org/n?u=RePEc:cte:werepe:we094726&r=ecm |
By: | Christian Kascha (Norges Bank (Central Bank of Norway)); Carsten Trenkler (University of Mannheim) |
Abstract: | We investigate the small-sample size and power properties of bootstrapped likelihood ratio systems cointegration tests via Monte Carlo simulations when the true lag order of the data generating process is unknown. A recursive bootstrap scheme is employed, and the lag order is estimated by minimizing different information criteria. In comparison to the standard asymptotic likelihood ratio test based on an estimated lag order, we find that the recursive bootstrap procedure can lead to improvements in small samples, even when the true lag order is unknown, while the power loss is moderate.
Keywords: | Cointegration tests, Bootstrapping, Information criteria |
JEL: | C15 C32 |
Date: | 2009–08–04 |
URL: | http://d.repec.org/n?u=RePEc:bno:worpap:2009_12&r=ecm |
By: | Gordon Anderson; Oliver Linton; Yoon-Jae Whang
Abstract: | This paper develops a methodology for nonparametric estimation of a polarization measure due to Anderson, Ge and Leo (2006) based on kernel estimation techniques. We give the asymptotic theory of our estimator, which in some cases is nonstandard due to boundary value problems. We also propose a method for conducting inference based on estimation of unknown quantities in the limiting distribution, and show that our method yields consistent inference in all cases we consider. We investigate the finite sample properties of our estimator by simulation methods. We give an application to the study of polarization in China in recent years.
Keywords: | Kernel estimation, Inequality, Overlap coefficient, Poissonization
JEL: | C12 C13 C14 |
Date: | 2009–07–30 |
URL: | http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-363&r=ecm |
By: | Qian Chen (School of Public Finance & Public Policy, Central University of Finance & Economics, People's Republic of China); David E. Giles (Department of Economics, University of Victoria) |
Abstract: | We examine the finite sample properties of the maximum likelihood estimator for the binary logit model with random covariates. Analytic expressions for the first-order bias and second-order mean squared error function for the maximum likelihood estimator in this model are derived, and we undertake some numerical evaluations to analyze and illustrate these analytic results for the single covariate case. For various data distributions, the bias of the estimator is signed the same as the covariate’s coefficient, and both the absolute bias and the mean squared errors increase symmetrically with the absolute value of that parameter. The behaviour of a bias-adjusted maximum likelihood estimator, constructed by subtracting the (maximum likelihood) estimator of the first-order bias from the original estimator, is examined in a Monte Carlo experiment. This bias-correction is effective in all of the cases considered, and is recommended when the logit model is estimated by maximum likelihood with small samples. |
Keywords: | Logit model, bias, mean squared error, bias correction, random covariates |
JEL: | C01 C13 C25 |
Date: | 2009–08–05 |
URL: | http://d.repec.org/n?u=RePEc:vic:vicewp:0906&r=ecm |
By: | Alessandro De Gregorio; Stefano Iacus (Department of Economics, Business and Statistics, University of Milan, IT) |
Abstract: | We consider parametric hypothesis testing for multidimensional Itô processes, possibly with jumps, observed at discrete times. To this end, we propose the whole class of pseudo $\phi$-divergence test statistics, which includes as a special case the well-known likelihood ratio test, along with many other test statistics as well as new ones. Although the final goal is to apply these test procedures to multidimensional Itô processes, we formulate the problem in the very general setting of regular statistical experiments and then particularize the results to our model of interest. In this general framework we prove that, contrary to what happens for true $\phi$-divergence test statistics, the limiting distribution of the pseudo $\phi$-divergence test statistic is characterized by the function $\phi$ which defines the divergence itself. In the case of contiguous alternatives, it is also possible to study the power function of the test in detail. Although all tests in this class are asymptotically equivalent, we show by Monte Carlo analysis that, in the small-sample case, the performance of the test depends strictly on the choice of the function $\phi$. In particular, we see that even in the i.i.d. case, the power function of the generalized likelihood ratio test ($\phi=\log$) is strictly dominated by those of other pseudo $\phi$-divergence test statistics.
Keywords: | diffusion processes with jumps, Itô processes, power of the test, parametric hypothesis testing, phi-divergences, generalized likelihood ratio test
Date: | 2009–05–21 |
URL: | http://d.repec.org/n?u=RePEc:bep:unimip:1083&r=ecm |
By: | Francq, Christian; Zakoian, Jean-Michel |
Abstract: | This article is concerned with testing the nullity of coefficients in GARCH models. The problem is nonstandard because the quasi-maximum likelihood estimator is subject to positivity constraints. The paper establishes the asymptotic null and local alternative distributions of Wald, score, and quasi-likelihood ratio tests. Efficiency comparisons under fixed alternatives are also considered. Two cases of special interest are: (i) tests of the null hypothesis that one coefficient equals zero and (ii) tests of the null hypothesis of no conditional heteroscedasticity. Finally, the proposed approach is used in the analysis of a set of financial data and leads us to reconsider the preeminence of GARCH(1,1) among GARCH models.
Keywords: | Asymptotic efficiency of tests; Boundary; Chi-bar distribution; GARCH model; Quasi Maximum Likelihood Estimation; Local alternatives |
JEL: | C12 C22 C01 |
Date: | 2008 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:16672&r=ecm |
By: | Tatsuya Kubokawa (Faculty of Economics, University of Tokyo) |
Abstract: | In small area estimation, the empirical best linear unbiased predictor (EBLUP) or the empirical Bayes estimator (EB) in the linear mixed model is recognized as useful because it gives a stable and reliable estimate of the mean of a small area. In practical situations where EBLUP is applied to real data, it is important to evaluate how reliable EBLUP is. One method for this purpose is to construct a confidence interval based on EBLUP. In this paper, we obtain an asymptotically corrected empirical Bayes confidence interval in a nested error regression model with unbalanced sample sizes and unknown variance components. The coverage probability is shown to satisfy the confidence level in second-order asymptotics. It is numerically demonstrated that the corrected confidence interval is superior to the conventional confidence interval based on the sample mean, in terms of both the coverage probability and the expected width of the interval. Finally, the method is applied to the posted land price data in Tokyo and the neighboring prefecture.
Date: | 2009–08 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2009cf632&r=ecm |
By: | Eduardo Rossi (Dipartimento di economia politica e metodi quantitativi, University of Pavia, Italy.); Paolo Santucci de Magistris (Dipartimento di economia politica e metodi quantitativi, University of Pavia, Italy) |
Abstract: | The no-arbitrage relation between futures and spot prices implies an analogous relation between futures and spot volatilities, as measured by the daily range. Long memory features of the range-based volatility estimators of the two series are analyzed, and their joint dynamics are modeled via a fractional vector error correction model (FVECM) in order to explicitly account for the no-arbitrage constraints. We introduce a two-step estimation procedure for the FVECM parameters and study its properties via Monte Carlo simulation. The out-of-sample forecasting superiority of FVECM with respect to competing models is documented. The results highlight the importance of fully accounting for long-run equilibria in volatilities in order to obtain better forecasts.
Keywords: | Range-based volatility estimator, Long memory, Fractional cointegration, Fractional VECM, Stock Index Futures |
JEL: | C32 C13 G13 |
Date: | 2009–07–15 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2009-31&r=ecm |
By: | Steven T. Berry (Cowles Foundation, Yale University); Philip A. Haile (Cowles Foundation, Yale University) |
Abstract: | We consider identification of nonparametric random utility models of multinomial choice using "micro data," i.e., observation of the characteristics and choices of individual consumers. Our model of preferences nests random coefficients discrete choice models widely used in practice with parametric functional form and distributional assumptions. However, the model is nonparametric and distribution free. It allows choice-specific unobservables, endogenous choice characteristics, unknown heteroskedasticity, and high-dimensional correlated taste shocks. Under standard "large support" and instrumental variables assumptions, we show identifiability of the random utility model. We demonstrate robustness of these results to relaxation of the large support condition and show that when it is replaced with a weaker "common choice probability" condition, the demand structure is still identified. We show that key maintained hypotheses are testable. |
Keywords: | Nonparametric identification, Discrete choice demand, Differentiated products |
JEL: | C35 |
Date: | 2009–09 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:1718&r=ecm |
By: | Koji Miyawaki (Nihon University, Population Research Institute); Yasuhiro Omori (Faculty of Economics, University of Tokyo); Akira Hibiki (Department of Social Engineering, Tokyo Institute of Technology) |
Abstract: | This article proposes a Bayesian method for estimating demand functions under block rate pricing, focusing on the increasing case; we explicitly consider the separability condition, which has been ignored in the previous literature. Under this pricing structure, the price changes when consumption exceeds a certain threshold, and the consumer faces a utility maximization problem subject to a piecewise-linear budget constraint. Solving this maximization problem leads to a statistical model that includes many inequalities, such as the so-called separability condition. Because of these inequalities, it is virtually impractical to maximize the likelihood function numerically. Thus, taking a hierarchical Bayesian approach, we implement a Markov chain Monte Carlo simulation to properly estimate the demand function. We find, however, that the convergence of the distribution of simulated samples to the posterior distribution is slow, and that an additional scale transformation step for the parameters must be added to the Gibbs sampler. The proposed methods are applied to estimate the Japanese residential water demand function.
Date: | 2009–08 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2009cf631&r=ecm |
By: | Anisha Ghosh; Oliver Linton |
Abstract: | Prominent asset pricing models imply a linear, time-invariant relation between the equity premium and its conditional variance. We propose an approach to estimating this relation that overcomes some of the limitations of the existing literature. First, we do not require any functional form assumptions about the conditional moments. Second, the GMM approach is used to overcome the endogeneity problem inherent in the regression. Third, we correct for the measurement error arising from the use of a proxy for the latent variance. The empirical findings reveal significant time-variation in the relation that coincides with structural break dates in the market-wide price-dividend ratio.
Keywords: | Bias-Correction, Measurement Error, Nonparametric Volatility, Return, Risk |
JEL: | C14 G12 |
Date: | 2009–07 |
URL: | http://d.repec.org/n?u=RePEc:cte:werepe:we094928&r=ecm |
By: | Paul Clarke; Frank Windmeijer |
Abstract: | Structural mean models (SMMs) are used to estimate causal effects among those selecting treatment in randomised controlled trials affected by non-ignorable non-compliance. These causal effects can be identified by assuming that there is no effect modification, namely, that the causal effect is equal for the treated subgroups randomised to treatment and to control. By analysing simple structural models for binary outcomes, we argue that the no effect modification assumption does not hold in general, and so SMMs do not estimate causal effects for the treated. An exception is for designs in which those randomised to control can be completely excluded from receiving the treatment. However, when there is non-compliance in the control arm, local (or complier) causal effects can be identified provided that the further assumption of monotonic selection into treatment holds. We demonstrate these issues using numerical examples. |
Keywords: | structural mean models, identification, local average treatment effects, complier average treatment effects |
JEL: | C13 C14 |
Date: | 2009–06 |
URL: | http://d.repec.org/n?u=RePEc:bri:cmpowp:09/217&r=ecm |
By: | Richard Harris; Victoria Kravtsova |
Abstract: | This paper provides a survey and critique of how spatial links are taken into account in empirical analysis by applied economists/regional scientists. Spatial spillovers and spatial interrelationships between economic variables (e.g. unemployment, GDP, etc.) are likely to be important, especially because of the role of local knowledge diffusion and how trade (inter-regional exports and imports) can potentially act to diffuse technology. Since most empirical economic studies ignore spatial autocorrelation, they are thus potentially mis-specified. This has led to various approaches to taking account of spatial spillovers, including econometric models that depend on specifying (correctly) the spatial weights matrix, W. The paper discusses the standard approaches (e.g., contiguity and distance measures) to constructing W, and the implications of using such approaches in terms of the potential mis-specification of W. We then look at more recent attempts in the literature to measure W, including: Bayesian approaches (searching for 'best fit'); non-parametric techniques; the use of spatial correlation to estimate W; and other iteration techniques. The paper then considers alternative approaches for including spatial spillovers in econometric models, such as: constructing (weighted) spillover variables which enter the model directly; allowing non-contiguous spatial variables to enter the model; and the use of spatial VAR models. Lastly, we discuss the likely form of spatial spillovers and therefore whether the standard approach to measuring W is likely to be sufficient.
Keywords: | Spatial weights, spatial dependence, spatial models
JEL: | C31 R11 |
Date: | 2009–03 |
URL: | http://d.repec.org/n?u=RePEc:cep:sercdp:0017&r=ecm |
By: | John M Maheu; Thomas H McCurdy; Yong Song |
Abstract: | Traditional methods used to partition the market index into bull and bear regimes often sort returns ex post based on a deterministic rule. We model the entire return distribution; two states govern the bull regime and two govern the bear regime, allowing for rich and heterogeneous intra-regime dynamics. Our model can capture bear market rallies and bull market corrections. A Bayesian estimation approach accounts for parameter and regime uncertainty and provides probability statements regarding future regimes and returns. Applied to 123 years of data, our model provides superior identification of trends in stock prices.
Keywords: | Markov switching, bear market rallies, bull market corrections, Gibbs sampling |
JEL: | C11 C22 C50 G10 |
Date: | 2009–08–06 |
URL: | http://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-369&r=ecm |
By: | Jennifer Mason |
Abstract: | This paper is written as a practical and accessible guide to some key issues in mixed methods research. It explores six broad strategies that can underpin the mixing of methods and linking of different forms of data, be they qualitative, quantitative, or spanning this divide. The paper outlines challenges and opportunities that each of the six strategies brings for mixed methods practice and analysis, giving each a verdict. [NCRM WP]. |
Keywords: | mixed methods research, research, mixed methods, data, social science research, social science, qualitative, quantitative |
Date: | 2009 |
URL: | http://d.repec.org/n?u=RePEc:ess:wpaper:id:2168&r=ecm |