New Economics Papers on Econometrics
By: | Joseph P. Romano; Michael Wolf |
Abstract: | Linear regression models form the cornerstone of applied research in economics and other scientific disciplines. When conditional heteroskedasticity is present, or at least suspected, the practice of reweighting the data has long been abandoned in favor of estimating model parameters by ordinary least squares (OLS), in conjunction with using heteroskedasticity consistent (HC) standard errors. However, we argue for reintroducing the practice of reweighting the data, since doing so can lead to large efficiency gains of the resulting weighted least squares (WLS) estimator over OLS even when the model for reweighting the data is misspecified. Efficiency gains manifest in a first-order asymptotic sense and thus should be considered in current empirical practice. Crucially, we also derive how asymptotically valid inference based on the WLS estimator can be obtained even when the model for reweighting the data is misspecified. The idea is that, just like the OLS estimator, the WLS estimator can also be accompanied by HC standard errors without knowledge of the functional form of conditional heteroskedasticity. A Monte Carlo study demonstrates attractive finite-sample properties of our proposals compared to the status quo, both in terms of estimation and inference. |
Keywords: | Conditional heteroskedasticity, HC standard errors, weighted least squares |
JEL: | C12 C13 C21 |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:zur:econwp:172&r=ecm |
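As a concrete illustration of the workflow this abstract describes — WLS with a possibly misspecified skedastic model, protected by HC standard errors — here is a minimal sketch in Python with statsmodels. The exponential skedastic specification and all numbers are illustrative assumptions, not the authors' choices.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(1, 5, n)
y = 1.0 + 2.0 * x + np.sqrt(x) * rng.standard_normal(n)  # true Var(u|x) = x
X = sm.add_constant(x)

# OLS with HC3 standard errors: the current default practice.
ols = sm.OLS(y, X).fit(cov_type="HC3")

# Step 1: fit a parametric skedastic model to the OLS log-squared residuals.
log_u2 = np.log(ols.resid ** 2)
skedastic = sm.OLS(log_u2, sm.add_constant(np.log(x))).fit()
w = np.exp(skedastic.fittedvalues)  # fitted conditional variances

# Step 2: WLS with the fitted weights, still using HC3 standard errors so that
# inference stays valid even if the skedastic model is misspecified.
wls = sm.WLS(y, X, weights=1.0 / w).fit(cov_type="HC3")

print("OLS slope %.3f (se %.3f)" % (ols.params[1], ols.bse[1]))
print("WLS slope %.3f (se %.3f)" % (wls.params[1], wls.bse[1]))
```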
By: | Minxian Yang (School of Economics, Australian School of Business, the University of New South Wales) |
Abstract: | The idea of identifying structural parameters via heteroskedasticity is explored in the context of binary choice models with an endogenous regressor. Sufficient conditions for parameter identification are derived for probit models without relying on instruments or additional restrictions. The results are extendable to other parametric binary choice models. The semiparametric model of Manski (1975, 1985), with endogeneity, is also shown to be identifiable in the presence of heteroskedasticity. The role of heteroskedasticity in identifying and estimating structural parameters is demonstrated by Monte Carlo experiments. |
Keywords: | Qualitative response, Probit, Logit, Linear median regression, Endogeneity, Identification, Heteroskedasticity |
JEL: | C25 C35 C13 |
URL: | http://d.repec.org/n?u=RePEc:swe:wpaper:2014-34&r=ecm |
By: | Javier Gómez Biscarri; Javier Hualde |
Abstract: | We propose a residual-based augmented Dickey-Fuller (ADF) test statistic that allows for detection of stationary cointegration within a system that may contain both I(2) and I(1) observables. The test is also consistent under the alternative of multicointegration, where first differences of the I(2) observables enter the cointegrating relationships. We find the null limiting distribution of this statistic and justify why our proposal improves over related approaches. Critical values are computed for a variety of situations. Additionally, building on this ADF test statistic, we propose a procedure to test the null of no stationary cointegration which overcomes the drawback, suffered by any residual-based method, of the lack of power with respect to some relevant alternatives. Finally, a Monte Carlo experiment is carried out and an empirical application is provided as an illustrative example. |
Keywords: | I(2) systems; stationary cointegration; multicointegration; residual-based tests. |
JEL: | C12 C22 C32 |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:upf:upfgen:1439&r=ecm |
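The I(2) machinery of this paper has no off-the-shelf implementation, but the residual-based ADF logic it builds on is easy to sketch in the simpler I(1) case. A minimal Engle-Granger-style illustration with statsmodels, on simulated data with illustrative parameters:

```python
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(1)
n = 400
common = np.cumsum(rng.standard_normal(n))   # shared I(1) trend
x = common + rng.standard_normal(n)
y = 0.5 * common + rng.standard_normal(n)    # y and x are cointegrated

# Residual-based test: regress y on x, run an ADF test on the residuals,
# with critical values adjusted for the fact that residuals are estimated.
t_stat, p_value, crit = coint(y, x)
print("ADF t-stat %.2f, p-value %.3f" % (t_stat, p_value))
print("critical values (1%, 5%, 10%):", crit)
```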
By: | Aman Ullah (Department of Economics, University of California Riverside); Yong Bao (Purdue University); Ru Zhang (University of California, Riverside) |
Abstract: | Phillips (1977a, 1977b) made seminal contributions to time series finite-sample theory, and he was among the first to develop the distributions of estimators and forecasts in stationary time series models; see Phillips (1978, 1979), among others. From the mid-eighties, Phillips (1987a, 1987b), through his fundamental papers, opened the path of asymptotic (large-sample) theory for unit root type non-stationary models. This has certainly created a large literature of important papers, including many of Phillips' own papers. However, not much is known about the analytical finite-sample properties of estimators under the unit root, although see Kiviet and Phillips (2005) for the case when the errors are normally distributed. An objective of this paper is to analyze the finite-sample behavior of the estimator in the first-order autoregressive model with unit root and nonnormal errors. In particular, we derive analytical approximations for the first two moments in terms of model parameters and the distribution parameters. Through Monte Carlo simulations, we find that our approximate formulas perform quite well across different distribution specifications in small samples. However, when the noise-to-signal ratio is large, the bias distortion can be quite substantial and our approximations do not fare well. |
Keywords: | unit root, nonnormal, moment approximation. |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:201401&r=ecm |
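The paper's contribution is analytical approximations for these moments; the sketch below only reproduces the simulated finite-sample moments that such approximations are checked against, with an illustrative skewed error distribution standing in for "nonnormal".

```python
import numpy as np

rng = np.random.default_rng(2)
T, reps = 50, 20000

def ols_rho(u):
    y = np.cumsum(u)  # random walk: true rho = 1
    return (y[:-1] @ y[1:]) / (y[:-1] @ y[:-1])

normal = [ols_rho(rng.standard_normal(T)) for _ in range(reps)]
skewed = [ols_rho(rng.chisquare(1, T) - 1.0) for _ in range(reps)]  # nonnormal

for name, draws in [("normal", normal), ("chi2(1)-1", skewed)]:
    d = np.asarray(draws)
    print("%-10s mean bias %.4f, variance %.5f" % (name, d.mean() - 1, d.var()))
```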
By: | Søren Johansen (Dept of Economics, University of Copenhagen and CREATES, Dept of Economics and Business, Aarhus University); Bent Nielsen (Nuffield College and Dept of Economics) |
Abstract: | We review recent asymptotic results on some robust methods for multiple regression. The regressors include stationary and non-stationary time series as well as polynomial terms. The methods include the Huber-skip M-estimator, 1-step Huber-skip M-estimators, in particular the Impulse Indicator Saturation, iterated 1-step Huber-skip M-estimators and the Forward Search. These methods classify observations as outliers or not. From the asymptotic results we establish a new asymptotic theory for the gauge of these methods, which is the expected frequency of falsely detected outliers. The asymptotic theory involves normal distribution results and Poisson distribution results. The theory is applied to a time series data set. |
Keywords: | Huber-skip M-estimators, 1-step Huber-skip M-estimators, iteration, Forward Search, Impulse Indicator Saturation, Robustified Least Squares, weighted and marked empirical processes, iterated martingale inequality, gauge. |
Date: | 2014–09–08 |
URL: | http://d.repec.org/n?u=RePEc:nuf:econwp:1404&r=ecm |
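The "gauge" is the expected share of observations falsely flagged as outliers when there are none. A minimal sketch for robustified least squares: fit OLS, flag observations with |standardized residual| > c, and check that the empirical gauge in a clean sample is near the nominal 2(1 - Φ(c)) implied by normal errors. All settings are illustrative.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n, reps, c = 200, 2000, 2.5
gauges = []
for _ in range(reps):
    x = rng.standard_normal(n)
    y = 1.0 + 0.5 * x + rng.standard_normal(n)  # no outliers by construction
    X = np.column_stack([np.ones(n), x])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma = resid.std(ddof=2)
    gauges.append(np.mean(np.abs(resid / sigma) > c))  # false-outlier share

print("empirical gauge %.4f, nominal 2*(1-Phi(c)) = %.4f"
      % (np.mean(gauges), 2 * (1 - stats.norm.cdf(c))))
```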
By: | Chen, Cathy W.S.; Gerlach, Richard; Lin, Edward M.H. |
Abstract: | Methods for Bayesian testing and assessment of dynamic quantile forecasts are proposed. Specifically, Bayes factor analogues of popular frequentist tests for the independence of violations and for the correct coverage of a time series of quantile forecasts are developed. To evaluate the relevant marginal likelihoods involved, analytic integration methods are utilised when possible; otherwise, multivariate adaptive quadrature methods are employed to estimate the required quantities. The usual Bayesian interval estimate for a proportion is also examined in this context. The size and power properties of the proposed methods are examined via a simulation study, illustrating favourable comparisons both overall and with their frequentist counterparts. An empirical study employs the proposed methods, in comparison with standard tests, to assess the adequacy of a range of forecasting models for Value at Risk (VaR) in several financial market data series. |
Keywords: | quantile regression; Value-at-Risk; asymmetric Laplace distribution; Bayes factor; Bayesian hypothesis testing |
Date: | 2014–09–10 |
URL: | http://d.repec.org/n?u=RePEc:syb:wpbsba:2123/11816&r=ecm |
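A minimal sketch of a Bayes-factor analogue of the unconditional coverage test: given x violations in n VaR forecasts at level p0, compare the point null p = p0 against a Beta(a, b) alternative via the marginal likelihoods. The Beta(1, 19) prior centred near p0 = 0.05 is an illustrative assumption, not the authors' choice.

```python
import numpy as np
from scipy.special import betaln, gammaln

def log_binom(n, x):
    return gammaln(n + 1) - gammaln(x + 1) - gammaln(n - x + 1)

def bayes_factor_coverage(x, n, p0=0.05, a=1.0, b=19.0):
    # Marginal likelihood under the point null p = p0.
    log_m0 = log_binom(n, x) + x * np.log(p0) + (n - x) * np.log(1 - p0)
    # Beta-binomial marginal likelihood under the Beta(a, b) alternative.
    log_m1 = log_binom(n, x) + betaln(a + x, b + n - x) - betaln(a, b)
    return np.exp(log_m0 - log_m1)  # BF01: support for correct coverage

print(bayes_factor_coverage(x=12, n=250))  # about 5% violations: BF01 > 1
print(bayes_factor_coverage(x=30, n=250))  # too many violations: BF01 << 1
```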
By: | Roberto Leon-Gonzalez (National Graduate Institute for Policy Studies) |
Abstract: | This paper develops a novel and efficient algorithm for Bayesian inference in inverse Gamma stochastic volatility models. It is shown that, by conditioning on auxiliary variables, it is possible to sample all the volatilities jointly, directly from their posterior conditional density, using simple distributions that are easy to draw from. Furthermore, the paper develops a generalized inverse Gamma process with more flexible tails in the distribution of volatilities, which still allows for simple and efficient calculations. Using several macroeconomic and financial datasets, it is shown that the inverse Gamma and generalized inverse Gamma processes can greatly outperform the commonly used log-normal volatility processes with Student-t errors. |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:ngi:dpaper:14-12&r=ecm |
By: | Bloechl, Andreas |
Abstract: | Penalized splines are widespread tools for the estimation of trend and cycle, since they allow a data-driven estimation of the penalization parameter through their incorporation into a linear mixed model. Based on the equivalence of penalized splines and the Hodrick-Prescott filter, this paper connects the mixed-model framework of penalized splines to the Wiener-Kolmogorov filter. In the case that trend and cycle are described by ARIMA processes, this filter yields the mean squared error minimizing estimates of both components. It is shown that for certain settings of the parameters, a penalized spline within the mixed-model framework is equal to the Wiener-Kolmogorov filter for a twice-integrated random walk as the trend and a stationary ARMA process as the cyclical component. |
Keywords: | Hodrick-Prescott filter; mixed models; penalized splines; trend estimation; Wiener-Kolmogorov filter |
JEL: | C22 C52 |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:lmu:muenec:21406&r=ecm |
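The common building block of the equivalences discussed above is the Hodrick-Prescott filter as a penalized least-squares smoother, min Σ(y_t − τ_t)² + λ Σ(Δ²τ_t)². A minimal sketch with statsmodels, on a simulated I(2)-type trend plus stationary cycle (illustrative parameters):

```python
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(4)
T = 200
trend = np.cumsum(np.cumsum(0.02 * rng.standard_normal(T)))  # I(2)-type trend
cycle = np.convolve(rng.standard_normal(T), [1.0, 0.7], mode="same")
y = trend + cycle

# HP filter = penalized least squares on the second differences of the trend.
cycle_hat, trend_hat = hpfilter(y, lamb=1600)  # quarterly-data convention
print("correlation of fitted and true trend: %.3f"
      % np.corrcoef(trend_hat, trend)[0, 1])
```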
By: | Bertrand Hassani (CES - Centre d'économie de la Sorbonne - CNRS : UMR8174 - Université Paris I - Panthéon-Sorbonne, Santander UK); Alexis Renaudin (Aon GRC - Aon Global Risk Consulting) |
Abstract: | According to the latest proposals of the Basel Committee on Banking Supervision, banks under the Advanced Measurement Approach (AMA) must use four different sources of information to assess their Operational Risk capital requirement. Since the fourth source comprises "business environment and internal control factors", i.e. qualitative criteria, the three main quantitative sources available to banks to build the loss distribution are Internal Loss Data, External Loss Data, and Scenario Analysis. This paper proposes an innovative methodology to bring together these three different sources in the Loss Distribution Approach (LDA) framework through a Bayesian strategy. The integration of the different elements is performed in two steps to ensure that an internal data driven model is obtained. In the first step, scenarios are used to inform the prior distributions, and external data inform the likelihood component of the posterior function. In the second step, the initial posterior function is used as the prior distribution, and the internal loss data inform the likelihood component of the second posterior. This latter posterior function enables the estimation of the parameters of the severity distribution selected to represent the Operational Risk event types. |
Keywords: | Operational Risk; Loss Distribution Approach; Bayesian inference; Markov Chain Monte Carlo; Extreme Value Theory; non-parametric statistics; risk measures |
Date: | 2013–02 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:halshs-00795046&r=ecm |
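A minimal sketch of the two-step Bayesian cascade, for the simplest conjugate case: a normal model for log-severity with known variance. All numbers are purely illustrative. Step 1 updates a scenario-based prior with external losses; step 2 uses that posterior as the prior for the internal data.

```python
import numpy as np

def normal_update(mu0, tau0_sq, data, sigma_sq):
    """Conjugate update of a N(mu0, tau0_sq) prior for a normal mean."""
    n = len(data)
    tau_n_sq = 1.0 / (1.0 / tau0_sq + n / sigma_sq)
    mu_n = tau_n_sq * (mu0 / tau0_sq + data.sum() / sigma_sq)
    return mu_n, tau_n_sq

rng = np.random.default_rng(5)
sigma_sq = 1.0                          # assumed known log-severity variance
mu0, tau0_sq = 10.0, 4.0                # prior elicited from scenario analysis
external = rng.normal(10.5, 1.0, 200)   # external log losses
internal = rng.normal(9.5, 1.0, 50)     # internal log losses

mu1, tau1_sq = normal_update(mu0, tau0_sq, external, sigma_sq)  # step 1
mu2, tau2_sq = normal_update(mu1, tau1_sq, internal, sigma_sq)  # step 2
print("posterior mean after both steps: %.2f (sd %.2f)"
      % (mu2, np.sqrt(tau2_sq)))
```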
By: | Tae-Hwy Lee (Department of Economics, University of California Riverside); Yundong Tu (Peking University, Beijing, China); Aman Ullah (University of California, Riverside) |
Abstract: | The equity premium, the return on equity minus the return on the risk-free asset, is expected to be positive. We consider imposing such a positivity constraint on the local historical average (LHA) in a nonparametric kernel regression framework. The approach is also extended to the semiparametric single index model when multiple predictors are used. We construct the constrained LHA estimator via an indicator function that operates as `model selection' between the unconstrained LHA and the bound of the constraint (zero for the positivity constraint). We smooth the indicator function by bagging (Breiman 1996a), which operates as `model averaging' and yields a combined forecast of the unconstrained LHA forecasts and the bound of the constraint. The local combining weights are determined by the probability that the constraint is binding. Asymptotic properties of the constrained LHA estimators, without and with bagging, are established, showing how the positivity constraint and bagging can reduce the asymptotic variance and mean squared error. Monte Carlo simulations illustrate the finite-sample behavior implied by the asymptotic properties. In predicting the U.S. equity premium, we show that substantial nonlinearity can be captured by LHA and that the local positivity constraint can improve out-of-sample prediction of the equity premium. |
Keywords: | Equity premium; Nonparametric local historical average model; Positivity constraint; Bagging; Model averaging; Semiparametric single index model. |
JEL: | C14 C50 C53 G17 |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:201405&r=ecm |
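A minimal sketch of the constraint-plus-bagging idea: the constrained forecast max(estimate, 0) is a hard indicator-based switch between the unconstrained estimate and the bound, and bagging replaces that hard switch with an average over bootstrap resamples, i.e. a smooth weight. A plain historical average stands in for the paper's local (kernel) version, and the data are simulated for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)
y = rng.normal(0.02, 0.15, 120)        # toy excess-return history

def ha(sample):
    return sample.mean()               # plain historical average

unconstrained = ha(y)
constrained = max(unconstrained, 0.0)  # hard positivity constraint

# Bagging: average the constrained forecast over bootstrap resamples.
B = 500
bagged = np.mean([max(ha(rng.choice(y, size=len(y), replace=True)), 0.0)
                  for _ in range(B)])

print("unconstrained %.4f | constrained %.4f | bagged %.4f"
      % (unconstrained, constrained, bagged))
```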
By: | Tae-Hwy Lee (Department of Economics, University of California Riverside); Huiyu Huang (Grantham, Mayo, Van Otterloo and Company LLC) |
Abstract: | When the observed price process is the true underlying price process plus microstructure noise, it is known that realized volatility (RV) estimates will be overwhelmed by the noise as the sampling frequency approaches infinity. Therefore, it may be optimal to sample less frequently, and averaging over less frequently sampled subsamples can improve the estimation of quadratic variation. In this paper, we extend this idea to forecasting daily realized volatility. While subsample averaging has been proposed and used in estimating RV, this paper is the first to use subsample averaging for forecasting RV. The subsample-averaging method we examine incorporates the high-frequency data at different levels of systematic sampling: it first pools the high-frequency data into several subsamples, then generates forecasts from each subsample, and finally combines these forecasts. We find that, in forecasting the daily S&P 500 return RV, subsample averaging generates better forecasts than those using only one subsample. |
Keywords: | Subsample averaging; Forecast combination; High-frequency data; Realized volatility; ARFIMA model; HAR model. |
JEL: | C53 C58 G17 |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:201410&r=ecm |
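A minimal sketch of subsample-averaged RV forecasting: split intraday returns into systematic subsamples, build a daily RV series per subsample, fit a simple AR(1) forecaster on each (a deliberately crude stand-in for the paper's ARFIMA/HAR models), and average the one-day-ahead forecasts. Data and settings are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)
days, per_day, m = 300, 390, 5                    # m systematic subsamples
r = 0.001 * rng.standard_normal((days, per_day))  # toy 1-minute returns

def ar1_forecast(series):
    b, a = np.polyfit(series[:-1], series[1:], 1)  # slope, intercept
    return a + b * series[-1]

forecasts = []
for k in range(m):
    rv_k = (r[:, k::m] ** 2).sum(axis=1) * m  # scaled subsample RV series
    forecasts.append(ar1_forecast(rv_k))

print("combined forecast: %.6f" % np.mean(forecasts))
```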
By: | Raffaella Giacomini; Barbara Rossi |
Abstract: | The goal of this paper is to develop formal tests to evaluate the relative in-sample performance of two competing, misspecified, non-nested models in the presence of possible data instability. Compared to previous approaches to model selection, which are based on measures of global performance, we focus on the local relative performance of the models. We propose three tests that are based on different measures of local performance and that correspond to different null and alternative hypotheses. The empirical application provides insights into the time variation in the performance of a representative DSGE model of the European economy relative to that of VARs. |
Keywords: | Model Selection Tests, Misspecification, Structural Change, Kullback-Leibler Information Criterion |
JEL: | C22 C52 C53 |
Date: | 2014–08 |
URL: | http://d.repec.org/n?u=RePEc:upf:upfgen:1437&r=ecm |
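A minimal sketch of the local relative-performance idea: compare two models' losses over a moving window instead of once over the full sample, so that time variation in relative fit becomes visible. The standardization below is schematic; the paper derives the proper critical values for its fluctuation-type tests, and the loss differentials here are simulated.

```python
import numpy as np

rng = np.random.default_rng(8)
T, window = 400, 60
# Toy loss differentials: model A better early, model B better late.
d = np.where(np.arange(T) < T // 2, 0.1, -0.1) + rng.standard_normal(T)

local_mean = np.convolve(d, np.ones(window) / window, mode="valid")
sigma = d.std(ddof=1)
local_stat = np.sqrt(window) * local_mean / sigma  # rolling t-type statistic

print("local statistic ranges from %.2f to %.2f"
      % (local_stat.min(), local_stat.max()))
```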
By: | Massimiliano Caporin (University of Padova); Eduardo Rossi (University of Pavia); Paolo Santucci de Magistris (Aarhus University and CREATES) |
Abstract: | The realized volatility of financial returns is characterized by persistence and occurrence of unpredictable large increments. To capture those features, we introduce the Multiplicative Error Model with jumps (MEM-J). When a jump component is included in the multiplicative specification, the conditional density of the realized measure is shown to be a countably infinite mixture of Gamma and K distributions. Strict stationarity conditions are derived. A Monte Carlo simulation experiment shows that maximum likelihood estimates of the model parameters are reliable even when jumps are rare events. We estimate alternative specifications of the model using a set of daily bipower measures for 7 stock indexes and 16 individual NYSE stocks. The estimates of the jump component confirm that the probability of jumps dramatically increases during the financial crises. Compared to other realized volatility models, the introduction of the jump component provides a sensible improvement in the fit, as well as for in-sample and out-of-sample volatility tail forecasts. |
Keywords: | Multiplicative Error Model with Jumps, Jumps in volatility, Realized measures, Volatility-at-Risk |
JEL: | C22 C58 G01 |
Date: | 2014–08–29 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2014-29&r=ecm |
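The mixture-density results of the MEM-J are beyond a short sketch, but the multiplicative structure with rare jumps is easy to simulate. The parameterization below is purely illustrative and not the authors' exact specification: a unit-mean Gamma error multiplies a persistent conditional mean, with an occasional jump multiplier.

```python
import numpy as np

rng = np.random.default_rng(14)
T = 1000
omega, alpha, beta = 0.05, 0.30, 0.65  # persistence of the conditional mean
a, p, jump_size = 5.0, 0.02, 4.0       # Gamma shape, jump prob., jump size

x = np.empty(T)
mu = np.empty(T)
mu[0] = omega / (1 - alpha - beta)     # unconditional mean as starting value
x[0] = mu[0]
for t in range(1, T):
    mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
    eta = rng.gamma(a, 1.0 / a)        # unit-mean Gamma error
    jump = jump_size if rng.random() < p else 1.0
    x[t] = mu[t] * eta * jump          # multiplicative error with rare jumps

print("mean %.3f, max/mean ratio %.1f" % (x.mean(), x.max() / x.mean()))
```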
By: | Andrew J. Buck (Department of Economics, Temple University); George M. Lady (Department of Economics, Temple University) |
Abstract: | This paper shows that a qualitative analysis can always be used in evaluating a model's validity, both in general and compared to other hypothesized models. The analysis relates the sign patterns, and possibly other information, of hypothesized structural arrays to the sign pattern of the estimated reduced form. It is demonstrated that such an analysis can always potentially falsify the hypothesized structural sign patterns or, if neither is falsified, support an analysis of the relative likelihoods of alternative structural hypotheses. It is also noted that a partially specified structural hypothesis can sometimes be falsified by estimating as few as one reduced-form equation. Additionally, zero restrictions in the structure can themselves be falsified; when they are, current practice proposes estimated structural arrays that are impossible. It is further shown how the information content of the hypothesized structural sign patterns can be measured using Shannon's (1948) concept of entropy. In general, the lower the hypothesized structural sign pattern's entropy, the more a priori information it proposes about the sign pattern of the estimated reduced form. As a hypothesized structural sign pattern has a lower entropy, it is more subject to Type I error and less subject to Type II error. |
Keywords: | Model Verification, Falsification, structural arrays, sign pattern, entropy |
JEL: | C15 C18 C51 C52 |
Date: | 2014–08 |
URL: | http://d.repec.org/n?u=RePEc:tem:wpaper:1403&r=ecm |
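A minimal sketch of the qualitative-falsification idea: fix the signs of the structural arrays, sample admissible magnitudes, and record which reduced-form sign patterns π = β⁻¹γ can occur. A hypothesized structure is falsified if the estimated reduced-form sign pattern never occurs; the entropy of the induced distribution over patterns measures how much the hypothesis restricts the reduced form. The 2×2 sign patterns below are illustrative assumptions.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(9)
sgn_beta = np.array([[1, -1], [1, 1]])   # hypothesized structural signs
sgn_gamma = np.array([[1, 0], [0, 1]])   # with a zero restriction

draws = 20000
patterns = Counter()
for _ in range(draws):
    beta = sgn_beta * rng.uniform(0.1, 2.0, (2, 2))
    gamma = sgn_gamma * rng.uniform(0.1, 2.0, (2, 2))
    pi = np.linalg.solve(beta, gamma)    # reduced form
    patterns[tuple(np.sign(pi).astype(int).ravel())] += 1

freqs = np.array(list(patterns.values())) / draws
entropy = -(freqs * np.log2(freqs)).sum()
print("admissible reduced-form sign patterns:", len(patterns))
print("entropy: %.2f bits" % entropy)
```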
By: | Rohini Grover (Indira Gandhi Institute of Development Research); Ajay Shah (National Institute of Public Finance and Policy) |
Abstract: | Concerns about sampling noise arise when a VIX estimator is computed by aggregating several imprecise implied volatility estimates. We propose a bootstrap strategy to measure the imprecision of a model-based VIX estimator. We find that the imprecision of VIX is economically significant. We propose a model selection strategy, where alternative statistical estimators of VIX are evaluated based on this imprecision. |
Keywords: | Implied volatility, volatility index, imprecision |
JEL: | G12 G13 G17 |
Date: | 2014–08 |
URL: | http://d.repec.org/n?u=RePEc:ind:igiwpp:2014-031&r=ecm |
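A minimal sketch of the bootstrap strategy: treat the day's implied volatilities as the sample, recompute a VIX-style aggregate on each resample, and report the spread as the estimator's imprecision. The aggregate used here (root mean implied variance) is a simplified stand-in for the model-based estimator in the paper, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(10)
iv = rng.normal(0.20, 0.03, 40)  # toy implied vols from one day's options

def vix_proxy(vols):
    return np.sqrt(np.mean(vols ** 2)) * 100

boot = np.array([vix_proxy(rng.choice(iv, size=len(iv), replace=True))
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print("VIX proxy %.2f, 95%% bootstrap interval [%.2f, %.2f]"
      % (vix_proxy(iv), lo, hi))
```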
By: | Frédéric Jouneau-Sion (GATE Lyon Saint-Étienne - Groupe d'analyse et de théorie économique - CNRS : UMR5824 - Université Lumière - Lyon II - École Normale Supérieure (ENS) - Lyon - PRES Université de Lyon - Université Jean Monnet - Saint-Etienne - Université Claude Bernard - Lyon I (UCBL)); Olivier Torrès (EQUIPPE - ECONOMIE QUANTITATIVE, INTEGRATION, POLITIQUES PUBLIQUES ET ECONOMETRIE - Université Lille I - Sciences et technologies - Université Lille II - Droit et santé - Université Lille III - Sciences humaines et sociales - PRES Université Lille Nord de France) |
Abstract: | We consider testing hypotheses about the slope parameter β when Y - Xβ is assumed to be an exchangeable process conditionally on X. This framework encompasses the semiparametric linear regression model. We show that the usual Fisher procedure has a non-trivial exact rejection bound under the null hypothesis Rβ = ϒ. This bound derives from the Markov inequality and a close inspection of the multivariate moments of self-normalized, self-centered, exchangeable processes. Improvements based on higher-order versions of the Markov inequality are also presented. The bounds do not require the existence of any moment, so they remain valid even when the central limit theorem does not apply. We generalize the framework to multivariate and first-order autoregressive models with exogenous variables. |
Keywords: | Exact testing; linear model; AR(1) model; multivariate model; exchangeable process |
Date: | 2014 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:halshs-01062623&r=ecm |
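The flavor of the exact bound is easy to demonstrate: for i.i.d. sign-symmetric errors, the self-normalized statistic T = (Σuᵢ)²/Σuᵢ² has E[T] = 1 exactly, so the Markov inequality gives P(T ≥ c) ≤ 1/c with no moment conditions on the errors. A minimal check with Cauchy errors, which have no moments at all (this is a simplified illustration, not the paper's exact construction):

```python
import numpy as np

rng = np.random.default_rng(13)
n, reps, c = 50, 100_000, 10.0
u = rng.standard_cauchy((reps, n))  # i.i.d. symmetric errors, no moments
T = u.sum(axis=1) ** 2 / (u ** 2).sum(axis=1)  # self-normalized statistic
# E[T] = 1 under sign-symmetry, so Markov's inequality bounds the tail by 1/c.
print("empirical P(T >= c) = %.4f, Markov bound 1/c = %.4f"
      % ((T >= c).mean(), 1 / c))
```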
By: | Takeshi Kimura (Bank of Japan); Jouchi Nakajima (Bank of Japan) |
Abstract: | This paper proposes a new estimation framework for identifying monetary policy shocks in both conventional and unconventional policy regimes using a structural VAR model. Exploiting a latent threshold modeling strategy that induces time-varying shrinkage of the parameters, we explore recursive identification that switches with time-varying overidentification at the interest rate zero lower bound. We empirically analyze Japan's monetary policy to illustrate the proposed approach for modeling regime switching between conventional and unconventional monetary policy periods, and find that the proposed model is preferred over a nested standard time-varying parameter VAR model. The estimation results show that increasing bank reserves lowers long-term interest rates in the unconventional policy periods, and that the impulse responses of inflation and the output gap to a bank reserve shock appear to be positive but highly uncertain. |
Keywords: | Identification; Latent threshold models; Monetary policy; Time-varying parameter VAR; Zero lower bound |
JEL: | C32 E52 |
Date: | 2013–05–02 |
URL: | http://d.repec.org/n?u=RePEc:boj:bojwps:13-e-7&r=ecm |
By: | Takashi Isogai (Bank of Japan) |
Abstract: | This paper analyzes Value at Risk (VaR) and Expected Shortfall (ES) calculation methods in terms of bias and dispersion against benchmarks computed from a fat-tailed parametric distribution. The daily log returns of the Nikkei-225 stock index are modeled by a truncated stable distribution. The VaR and ES values of the fitted distribution are regarded as benchmarks. The fitted distribution is also used as a sampling distribution; sample returns of different sizes are generated for the simulations of the VaR and ES calculations. Two parametric methods (the normal distribution and the generalized Pareto distribution) and two non-parametric methods (historical simulation and kernel smoothing) are selected as the targets of this analysis. A comparison of the simulated VaR, ES, and the ES/VaR ratio with the benchmarks at multiple confidence levels reveals that the normal distribution approximation has a significant downward bias, especially in the ES calculation. The estimates by the other three methods are much closer to the benchmarks on average, although some of them become unstable with smaller sample sizes and/or at higher confidence levels. Specifically, ES tends to be more biased and unstable than VaR at higher confidence levels. |
Keywords: | Value at Risk; Expected Shortfall; Fat-Tailed Distribution; Truncated Stable Distribution; Numerical Simulation |
Date: | 2014–01–17 |
URL: | http://d.repec.org/n?u=RePEc:boj:bojwps:wp14e01&r=ecm |
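A minimal sketch of the simulation design: draw samples from a fat-tailed benchmark (a Student-t here, as a stand-in for the paper's truncated stable distribution), compute VaR and ES by the normal approximation and by historical simulation, and compare against the benchmark's true values at the 99% level. VaR is expressed as the lower-tail return quantile.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
nu, alpha, n, reps = 4.0, 0.99, 500, 2000
bench_var = stats.t.ppf(1 - alpha, nu)  # true left-tail quantile
bench_es = stats.t.expect(lambda x: x, args=(nu,), ub=bench_var,
                          conditional=True)  # true E[X | X < VaR]

normal_var, hist_var, normal_es, hist_es = [], [], [], []
q = stats.norm.ppf(1 - alpha)
for _ in range(reps):
    x = stats.t.rvs(nu, size=n, random_state=rng)
    mu, sd = x.mean(), x.std(ddof=1)
    normal_var.append(mu + sd * q)
    normal_es.append(mu - sd * stats.norm.pdf(q) / (1 - alpha))
    v = np.quantile(x, 1 - alpha)
    hist_var.append(v)
    hist_es.append(x[x <= v].mean())

for name, est, bench in [("normal VaR", normal_var, bench_var),
                         ("hist.  VaR", hist_var, bench_var),
                         ("normal ES ", normal_es, bench_es),
                         ("hist.  ES ", hist_es, bench_es)]:
    print("%s bias %+.3f (benchmark %.3f)"
          % (name, np.mean(est) - bench, bench))
```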
By: | Nien-Lin Liu; Hoang-Long Ngo |
Abstract: | In order to study the geometry of interest rate market dynamics, Malliavin, Mancino and Recchioni [A non-parametric calibration of the HJM geometry: an application of Itô calculus to financial statistics, Japanese Journal of Mathematics, 2, pp. 55-77, 2007] introduced a scheme, based on the Fourier series method, to estimate the eigenvalues of a spot cross-volatility matrix. In this paper, we present another estimation scheme, based on the quadratic variation method. We first establish limit theorems for each scheme, and then use a stochastic volatility model of Heston type to compare the effectiveness of the two schemes. |
Date: | 2014–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1409.2214&r=ecm |
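A minimal sketch of the quadratic-variation approach: estimate the integrated covariance matrix from high-frequency returns of two assets and take its eigenvalues, the quantities whose estimation the paper studies. Constant volatilities are assumed here purely for brevity (the paper's comparison uses a Heston-type stochastic volatility model).

```python
import numpy as np

rng = np.random.default_rng(12)
n, dt = 23400, 1.0 / 23400  # one trading day of 1-second observations
sigma = np.array([[0.04, 0.018], [0.018, 0.09]])  # true spot covariance
chol = np.linalg.cholesky(sigma)
dW = rng.standard_normal((n, 2)) * np.sqrt(dt)
returns = dW @ chol.T  # correlated diffusion increments

qv = returns.T @ returns  # realized (quadratic) covariation over the day
print("estimated eigenvalues:", np.round(np.linalg.eigvalsh(qv), 4))
print("true eigenvalues:     ", np.round(np.linalg.eigvalsh(sigma), 4))
```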