on Econometrics
By: | David I. Harvey; Stephen J. Leybourne; Emily J. Whitehouse |
Abstract: | In this paper, we show that when computing standard Diebold-Mariano-type tests for equal forecast accuracy and forecast encompassing, the long-run variance can frequently be negative when dealing with multi-step-ahead predictions in small, but empirically relevant, sample sizes. We subsequently consider a number of alternative approaches to dealing with this problem, including direct inference in the problem cases and use of long-run variance estimators that guarantee positivity. The finite sample size and power of the different approaches are evaluated using extensive Monte Carlo simulation exercises. Overall, for multi-step-ahead forecasts, we find that the recently proposed Coroneo and Iacone (2016) test, which is based on a weighted periodogram long-run variance estimator, offers the best finite sample size and power performance. |
Keywords: | Forecast evaluation; Long-run variance estimation; Simulation; Diebold-Mariano test; Forecasting
JEL: | C2
URL: | http://d.repec.org/n?u=RePEc:not:notgts:17/03&r=ecm |
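A minimal sketch (not the authors' code) of a Diebold-Mariano-type statistic for the paper above, with the long-run variance estimated from a weighted periodogram so that it cannot be negative. The flat Daniell weights over the first m Fourier frequencies and the t(2m) reference distribution follow our reading of the Coroneo and Iacone (2016) fixed-smoothing approach; the function name and the choice of m are illustrative assumptions.

import numpy as np
from scipy import stats

def dm_weighted_periodogram(loss_diff, m):
    """t-test of H0: E[loss differential] = 0 with a periodogram-based long-run variance."""
    d = np.asarray(loss_diff, dtype=float)
    T = d.size
    dbar = d.mean()
    t_idx = np.arange(1, T + 1)
    # Periodogram of the demeaned loss differential at the first m Fourier frequencies.
    I = np.empty(m)
    for j in range(1, m + 1):
        lam = 2.0 * np.pi * j / T
        w = np.sum((d - dbar) * np.exp(-1j * lam * t_idx))
        I[j - 1] = np.abs(w) ** 2 / (2.0 * np.pi * T)
    lrv = 2.0 * np.pi * I.mean()              # nonnegative by construction
    tstat = np.sqrt(T) * dbar / np.sqrt(lrv)
    pval = 2.0 * stats.t.sf(abs(tstat), df=2 * m)
    return tstat, pval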
By: | Tommaso Proietti (DEF and CEIS, Università di Roma "Tor Vergata"); Alessandro Giovannelli (Department of Economics and Finance, University of Rome "Tor Vergata") |
Abstract: | We consider the problem of estimating the high-dimensional autocovariance matrix of a stationary random process, for the purposes of out-of-sample prediction and feature extraction. Several solutions to this problem have been proposed. In the nonparametric framework, the literature has concentrated on banding and tapering the sample autocovariance matrix. This paper proposes and evaluates an alternative approach, based on regularizing the sample partial autocorrelation function via a modified Durbin-Levinson algorithm that takes as input the banded and tapered partial autocorrelations and returns a sample autocovariance sequence that is positive definite. We show that the regularized estimator of the autocovariance matrix is consistent and establish its convergence rate. We then focus on constructing the optimal linear predictor and assess its properties. The computational complexity of the estimator is of the order of the square of the banding parameter, which renders our method scalable for high-dimensional time series. The performance of the autocovariance estimator and the corresponding linear predictor is evaluated by simulation and empirical applications.
Keywords: | Toeplitz systems; Optimal linear prediction; Partial autocorrelation function |
JEL: | C22 C53 C55 |
Date: | 2017–07–17 |
URL: | http://d.repec.org/n?u=RePEc:rtv:ceisrp:410&r=ecm |
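As an illustration of the regularization idea in the entry above, here is a minimal sketch of the inverse Durbin-Levinson step: partial autocorrelations that have been banded and tapered are mapped back to an autocovariance sequence, which is positive definite whenever all partial autocorrelations lie strictly inside (-1, 1). The trapezoidal taper is an assumed, illustrative choice and the function names are ours.

import numpy as np

def pacf_to_acov(gamma0, kappa):
    """Inverse Durbin-Levinson: variance plus K partial autocorrelations -> K+1 autocovariances."""
    K = len(kappa)
    gamma = np.empty(K + 1)
    gamma[0] = gamma0
    phi = np.zeros(K)            # AR coefficients of the current order
    v = gamma0                   # innovation variance of the current order
    for k in range(1, K + 1):
        kk = kappa[k - 1]
        gamma[k] = np.dot(phi[:k - 1], gamma[k - 1:0:-1]) + kk * v
        phi_new = phi.copy()
        phi_new[k - 1] = kk
        phi_new[:k - 1] = phi[:k - 1] - kk * phi[:k - 1][::-1]
        phi = phi_new
        v *= 1.0 - kk ** 2
    return gamma

def trapezoidal_taper(K, band):
    """Weight 1 up to the banding parameter, linear decay to 0 at 2*band, 0 beyond."""
    lags = np.arange(1, K + 1)
    return np.clip(2.0 - lags / band, 0.0, 1.0)

A regularized estimate is then pacf_to_acov(gamma0_hat, trapezoidal_taper(K, band) * kappa_hat), where gamma0_hat and kappa_hat denote the sample variance and sample partial autocorrelations.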
By: | Arkadiusz Szydłowski
Abstract: | Despite an abundance of semiparametric estimators of the transformation model, no procedure has yet been proposed to test the hypothesis that the transformation function belongs to a finite-dimensional parametric family against a nonparametric alternative. In this paper we introduce a bootstrap test based on the integrated squared distance between a nonparametric estimator and a parametric null. As a special case, our procedure can be used to test the parametric specification of the integrated baseline hazard in a semiparametric mixed proportional hazard (MPH) model. We investigate the finite sample performance of our test in a Monte Carlo study. Finally, we apply the proposed test to Kennan's strike duration data.
Keywords: | Specification testing, Transformation model, Duration model, Bootstrap, Rank estimation |
JEL: | C12 C14 C41 |
Date: | 2017–07 |
URL: | http://d.repec.org/n?u=RePEc:lec:leecon:17/15&r=ecm |
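A skeleton of the test structure described above, under the assumption that the user supplies the two estimators: fit_nonparametric and fit_parametric are hypothetical hooks standing in for the rank-based nonparametric estimator and the parametric null fit, and the simple nonparametric resampling shown here need not coincide with the bootstrap scheme developed in the paper.

import numpy as np

def isd_bootstrap_test(data, fit_nonparametric, fit_parametric, grid, n_boot=199, seed=0):
    """Integrated-squared-distance test of a parametric transformation against a nonparametric fit."""
    rng = np.random.default_rng(seed)
    def stat(d):
        h_np = fit_nonparametric(d)(grid)    # nonparametric transformation evaluated on the grid
        h_p = fit_parametric(d)(grid)        # parametric null, evaluated on the same grid
        return np.trapz((h_np - h_p) ** 2, grid)
    t_obs = stat(data)
    n = len(data)
    t_boot = np.array([stat(data[rng.integers(0, n, size=n)]) for _ in range(n_boot)])
    return t_obs, float(np.mean(t_boot >= t_obs))    # statistic and bootstrap p-value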
By: | Luciano de Castro (University of Iowa); Antonio F. Galvao (University of Arizona); David M. Kaplan (University of Missouri) |
Abstract: | This paper develops theory for feasible estimation and testing of finite-dimensional parameters identified by general conditional quantile restrictions. This includes instrumental variables nonlinear quantile regression as a special case, under much weaker assumptions than previously seen in the literature. More specifically, we consider a set of unconditional moments implied by the conditional quantile restrictions and provide conditions for local identification. Since estimators based on the sample moments are generally impossible to compute numerically in practice, we study a feasible estimator based on smoothed sample moments. We establish consistency and asymptotic normality under general conditions that allow for weakly dependent data and nonlinear structural models, and we explore options for testing general nonlinear hypotheses. Simulations with iid and time series data illustrate the finite-sample properties of the estimators and tests. Our in-depth empirical application concerns the consumption Euler equation derived from quantile utility maximization. Advantages of the quantile Euler equation include robustness to fat tails, decoupling of risk attitude from the elasticity of intertemporal substitution, and log-linearization without any approximation error. For the four countries we examine, the quantile estimates of discount factor and elasticity of intertemporal substitution are economically reasonable for a range of quantiles just above the median, even when two-stage least squares estimates are not reasonable. Code is provided for all methods, simulations, and applications at the third author's website.
Keywords: | instrumental variables, nonlinear quantile regression, quantile utility maximization |
JEL: | C31 C32 C36 |
Date: | 2017–07–10 |
URL: | http://d.repec.org/n?u=RePEc:umc:wpaper:1710&r=ecm |
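A minimal sketch of the smoothing idea described above: the indicator in the conditional-quantile moment E[Z(1{Y <= X'b} - tau)] = 0 is replaced by a smooth CDF so that the sample moments, and hence the GMM criterion, are differentiable. The Gaussian smoother, the fixed bandwidth and the identity weighting matrix are illustrative assumptions rather than the paper's choices.

import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def smoothed_iv_quantile(y, X, Z, tau, h, b0):
    """GMM with smoothed sample moments for a conditional quantile restriction with instruments Z."""
    def moments(b):
        u = (X @ b - y) / h
        return Z.T @ (norm.cdf(u) - tau) / len(y)    # smoothed version of Z'(1{y <= X b} - tau)/n
    def objective(b):
        g = moments(b)
        return g @ g                                  # identity-weighted GMM criterion
    return minimize(objective, x0=np.asarray(b0, dtype=float), method="BFGS").x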
By: | Davy Paindaveine; Germain Van Bever |
Abstract: | In many problems from multivariate analysis (principal component analysis, testing for sphericity, etc.), the parameter of interest is a shape matrix, that is, a normalised version of the corresponding scatter or dispersion matrix. In this paper, we propose a depth concept for shape matrices which is of a sign nature, in the sense that it involves data points only through their directions from the center of the distribution. We use the terminology Tyler shape depth since the resulting estimator of shape — namely, the deepest shape matrix — is the depth-based counterpart of the celebrated M-estimator of shape from Tyler (1987). We investigate the invariance, quasi-concavity and continuity properties of Tyler shape depth, as well as the topological and boundedness properties of the corresponding depth regions. We study existence of a deepest shape matrix and prove Fisher consistency in the elliptical case. We derive a Glivenko-Cantelli-type result and establish the almost sure consistency of the deepest shape matrix estimator. We also consider depth-based tests for shape and investigate their finite-sample performances through simulations. Finally, we illustrate the practical relevance of the proposed depth concept on a real data example.
Keywords: | Elliptical distribution; Robustness; Shape matrix; Statistical depth; Test for sphericity |
Date: | 2017–07 |
URL: | http://d.repec.org/n?u=RePEc:eca:wpaper:2013/255000&r=ecm |
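For context on the estimator referenced above, a minimal sketch of Tyler's (1987) fixed-point iteration for the shape matrix, of which the deepest shape matrix is the depth-based counterpart. Centering at a supplied location mu and normalising the trace to the dimension are illustrative choices.

import numpy as np

def tyler_shape(X, mu, n_iter=200, tol=1e-9):
    """Tyler's M-estimator of shape for an (n, p) data matrix X centered at mu."""
    n, p = X.shape
    Xc = X - mu
    V = np.eye(p)
    for _ in range(n_iter):
        Vinv = np.linalg.inv(V)
        d = np.einsum("ij,jk,ik->i", Xc, Vinv, Xc)     # quadratic forms x_i' V^{-1} x_i
        V_new = (p / n) * (Xc / d[:, None]).T @ Xc     # average of p * x_i x_i' / (x_i' V^{-1} x_i)
        V_new *= p / np.trace(V_new)                   # normalise so that trace(V) = p
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V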
By: | Nalan Basturk (Maastricht University & RCEA); Lennart Hoogerheide (Vrije Universiteit Amsterdam & Tinbergen Institute); Herman K. van Dijk (Erasmus University Rotterdam, Norges Bank (Central Bank of Norway) & Tinbergen Institute & RCEA) |
Abstract: | Weak empirical evidence near and at the boundary of the parameter region is a predominant feature in econometric models. Examples are macroeconometric models with weak information on the number of stable relations, microeconometric models measuring connectivity between variables with weak instruments, financial econometric models like the random walk with weak evidence on the efficient market hypothesis, and factor models for investment policies with weak information on the number of unobserved factors. A Bayesian analysis is presented of the issue common to these models: reduced rank. Reduced rank is a boundary issue, and its effect on the shape of the posteriors of the equation-system parameters with a reduced rank is explored systematically. These shapes exhibit ridges due to weak identification, fat tails and multimodality. Discussing several alternative routes to constructing regularization priors, we show that flat posterior surfaces are integrable even though the marginal posterior tends to infinity as the parameters tend to the values corresponding to local non-identification. We introduce a lasso-type shrinkage prior combined with orthogonal normalization which restricts the range of the parameters in a plausible way. This can be combined with other shrinkage, smoothness and data-based priors using training samples or dummy observations. Using such classes of priors, it is shown how conditional probabilities of evidence near and at the boundary can be evaluated effectively. These results allow for Bayesian inference using mixtures of posteriors under the boundary state and the near-boundary state. The approach is applied to the estimation of the education-income effect in all states of the US economy. The empirical results indicate that there exist substantial differences in this effect across almost all states. This may affect important national and state-level policies on the required length of education. The use of the proposed approach may, in general, lead to more accurate forecasting and decision analysis in other problems in economics, finance and marketing.
Date: | 2017–06–27 |
URL: | http://d.repec.org/n?u=RePEc:bno:worpap:2017_11&r=ecm |
By: | Magne Mogstad; Andres Santos; Alexander Torgovitsky |
Abstract: | We propose a method for using instrumental variables (IV) to draw inference about causal effects for individuals other than those affected by the instrument at hand. Policy relevance and external validity turn on the ability to do this reliably. Our method exploits the insight that both the IV estimand and many treatment parameters can be expressed as weighted averages of the same underlying marginal treatment effects. Since the weights are known or identified, knowledge of the IV estimand generally places some restrictions on the unknown marginal treatment effects, and hence on the values of the treatment parameters of interest. We show how to extract information about the average effect of interest from the IV estimand, and, more generally, from a class of IV-like estimands that includes the two-stage least squares and ordinary least squares estimands, among many others. Our method has several applications. First, it can be used to construct nonparametric bounds on the average causal effect of a hypothetical policy change. Second, our method allows the researcher to flexibly incorporate shape restrictions and parametric assumptions, thereby enabling extrapolation of the average effects for compliers to the average effects for different or larger populations. Third, our method can be used to test model specification and hypotheses about behavior, such as no selection bias and/or no selection on gains. To accommodate these diverse applications, we devise a novel inference procedure that is designed to exploit the convexity of our setting. We develop uniformly valid tests that allow for an infinite number of IV-like estimands and a general convex parameter space. We apply our method to analyze the effects of price subsidies on the adoption and usage of an antimalarial bed net in Kenya.
JEL: | C21 C36 |
Date: | 2017–07 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:23568&r=ecm |
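A stylised sketch of the bounding idea described above: once the marginal treatment effect (MTE) is discretised on a grid, the IV estimand and the target parameter are both known weighted sums of the MTE values, so bounds on the target come from a pair of linear programs. The uniform grid, the hypothetical weights and the [-1, 1] bounds on the MTE are assumptions made purely for illustration.

import numpy as np
from scipy.optimize import linprog

def mte_bounds(w_iv, iv_estimand, w_target, mte_lb=-1.0, mte_ub=1.0):
    """Bounds on w_target'm subject to w_iv'm = iv_estimand and elementwise bounds on m."""
    k = len(w_target)
    bounds = [(mte_lb, mte_ub)] * k
    lo = linprog(c=np.asarray(w_target), A_eq=np.atleast_2d(w_iv), b_eq=[iv_estimand], bounds=bounds)
    hi = linprog(c=-np.asarray(w_target), A_eq=np.atleast_2d(w_iv), b_eq=[iv_estimand], bounds=bounds)
    return lo.fun, -hi.fun

# Example: hypothetical weights over a grid of the unobserved resistance u in [0, 1].
u = np.linspace(0, 1, 101)
w_iv = np.full(101, 1 / 101)               # hypothetical IV weights (uniform, normalised)
w_target = (u > 0.5) / (u > 0.5).sum()     # hypothetical target: average MTE for u > 0.5
print(mte_bounds(w_iv, iv_estimand=0.2, w_target=w_target))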
By: | James Morley; Benjamin Wong |
Abstract: | We demonstrate how Bayesian shrinkage can address problems with utilizing large information sets to calculate trend and cycle via a multivariate Beveridge-Nelson (BN) decomposition. We illustrate our approach by estimating the U.S. output gap with large Bayesian vector autoregressions that include up to 138 variables. Because the BN trend and cycle are linear functions of historical forecast errors, we are also able to account for the estimated output gap in terms of different sources of information, as well as particular underlying structural shocks given identification restrictions. Our empirical analysis suggests that, in addition to output growth, the unemployment rate, CPI inflation, and, to a lesser extent, housing starts, consumption, stock prices, real M1, and the federal funds rate are important conditioning variables for estimating the U.S. output gap, with estimates largely robust to incorporating additional variables. Using standard identification restrictions, we find that the role of monetary policy shocks in driving the output gap is small, while oil price shocks explain about 10% of the variance over different horizons. |
Keywords: | Beveridge-Nelson decomposition, output gap, Bayesian estimation, multivariate information |
JEL: | C18 E17 E32 |
Date: | 2017–07 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2017-46&r=ecm |
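A minimal sketch of the multivariate Beveridge-Nelson cycle underlying the approach above: with a VAR(p) fitted to demeaned stationary transformations (e.g., output growth and the conditioning variables), the cycle of the target variable is minus the sum of its expected future changes, computable in closed form from the companion matrix. Estimation of the (Bayesian, shrinkage) VAR itself is omitted; A is a list of lag coefficient matrices and is an input here.

import numpy as np

def bn_cycle(dy_demeaned, A, target=0):
    """dy_demeaned: (T, n) demeaned stationary data; A: list of p (n, n) VAR lag matrices."""
    T, n = dy_demeaned.shape
    p = len(A)
    # Companion matrix F of the VAR(p).
    F = np.zeros((n * p, n * p))
    F[:n, :] = np.hstack(A)
    F[n:, :-n] = np.eye(n * (p - 1))
    M = F @ np.linalg.inv(np.eye(n * p) - F)           # F (I - F)^{-1}
    cycle = np.full(T, np.nan)
    for t in range(p - 1, T):
        X_t = dy_demeaned[t - np.arange(p)].reshape(-1)  # stacked [dy_t, dy_{t-1}, ..., dy_{t-p+1}]
        cycle[t] = -(M @ X_t)[target]                    # BN cycle of the target variable
    return cycle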
By: | Bisio, Laura; Moauro, Filippo |
Abstract: | In this paper we discuss the most recent developments of temporal disaggregation techniques carried out at ISTAT. They concern the extension from static to dynamic autoregressive distributed lag (ADL) regressions and the change to a state-space framework for the statistical treatment of temporal disaggregation. Beyond developing a unified procedure for both static and dynamic methods on the one hand, and treating the logarithmic transformation on the other, we provide short guidelines for model selection. On the empirical side, we evaluate the new dynamic methods by implementing a large-scale temporal disaggregation exercise using ISTAT annual value added data jointly with quarterly industrial production by branch of economic activity over the period 1995-2013. The main finding of this application is that ADL models, in either levels or logarithms, can reduce the errors due to extrapolating disaggregated data in the last quarters before the annual benchmarks become available. When attention moves to the correlations with the high-frequency indicators, the ADL disaggregations are also generally in line with those produced by the static Chow-Lin variants, with problematic outcomes limited to a few cases.
Keywords: | temporal disaggregation; state-space form; Kalman filter; ADL models; linear Gaussian approximating model; quarterly national accounts. |
JEL: | C15 C22 C53 C61 |
Date: | 2017–07–14 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:80211&r=ecm |
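For reference, a minimal sketch of the static Chow-Lin benchmark mentioned in the abstract above: annual benchmarks are regressed by GLS on temporally aggregated quarterly indicators with AR(1) quarterly errors, and the annual residuals are distributed back to the quarters. The AR(1) coefficient is fixed here for brevity (in practice it is estimated), and the paper's ADL and state-space extensions replace this static setup altogether.

import numpy as np

def chow_lin(y_annual, X_quarterly, rho=0.8):
    """Distribute annual totals y_annual over quarters using indicators X_quarterly (n_q = 4 * n_a)."""
    n_a = len(y_annual)
    n_q = X_quarterly.shape[0]
    C = np.kron(np.eye(n_a), np.ones((1, 4)))                 # annual sums of quarters
    lags = np.abs(np.subtract.outer(np.arange(n_q), np.arange(n_q)))
    V = rho ** lags / (1.0 - rho ** 2)                        # AR(1) covariance of quarterly errors
    Xa = C @ X_quarterly
    CVC_inv = np.linalg.inv(C @ V @ C.T)
    beta = np.linalg.solve(Xa.T @ CVC_inv @ Xa, Xa.T @ CVC_inv @ y_annual)
    resid_a = y_annual - Xa @ beta
    y_quarterly = X_quarterly @ beta + V @ C.T @ CVC_inv @ resid_a
    return y_quarterly, beta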
By: | Joachim Freyberger; Bradley J. Larsen |
Abstract: | This study provides new identification and estimation results for ascending (traditional English or online) auctions with unobserved auction-level heterogeneity and an unknown number of bidders. When the seller's reserve price and two order statistics of bids are observed, we derive conditions under which the distributions of buyer valuations, unobserved heterogeneity, and number of participants are point identified. We also derive conditions for point identification in cases where reserve prices are binding (in which case bids may be unobserved in some auctions) and present general conditions for partial identification. We propose a nonparametric maximum likelihood approach for estimation and inference. We apply our approach to the online market for used iPhones and analyze the effects of recent regulatory changes banning consumers from circumventing digital rights management technologies used to lock phones to service providers. We find that buyer valuations for unlocked phones dropped after the unlocking ban took effect. |
JEL: | C1 C57 D44 L0 L96 O3 |
Date: | 2017–07 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:23569&r=ecm |
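A small building block for the likelihood-based approach above: the joint density of the two highest order statistics among n i.i.d. valuations, written here for a lognormal specification. The paper's estimator additionally handles unobserved auction-level heterogeneity, an unknown number of bidders and binding reserve prices, all of which this sketch deliberately ignores.

import numpy as np
from scipy.stats import lognorm

def top_two_logdensity(y1, y2, n, s=1.0, scale=1.0):
    """log f(y1, y2 | n) for the highest y1 and second-highest y2 of n lognormal draws, y1 >= y2."""
    F2 = lognorm.cdf(y2, s, scale=scale)
    return (np.log(n) + np.log(n - 1)
            + lognorm.logpdf(y1, s, scale=scale)
            + lognorm.logpdf(y2, s, scale=scale)
            + (n - 2) * np.log(F2))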
By: | David I. Harvey; Stephen J. Leybourne; Emily J. Whitehouse |
Abstract: | In this paper we examine the local power of unit root tests against globally stationary exponential smooth transition autoregressive [ESTAR] alternatives under two sources of uncertainty: the degree of nonlinearity in the ESTAR model, and the presence of a linear deterministic trend. First, we show that the Kapetanios, Shin and Snell (2003, Journal of Econometrics, 112, 359-379) [KSS] test for nonlinear stationarity has local asymptotic power gains over standard Dickey-Fuller [DF] tests for certain degrees of nonlinearity in the ESTAR model, but that for other degrees of nonlinearity, the linear DF test has superior power. Second, we derive limiting distributions of demeaned, and demeaned and detrended KSS and DF tests under a local ESTAR alternative when a local trend is present in the DGP. We show that the power of the demeaned tests outperforms that of the detrended tests when no trend is present in the DGP, but deteriorates as the magnitude of the trend increases. We propose a union of rejections testing procedure that combines all four individual tests and show that this captures most of the power available from the individual tests across different degrees of nonlinearity and trend magnitudes. We also show that incorporating a trend detection procedure into this union testing strategy can result in higher power when a large trend is present in the DGP.
Keywords: | Nonlinearity, Trend uncertainty, Union of rejections
JEL: | C12 C22 C53
URL: | http://d.repec.org/n?u=RePEc:not:notgts:17/02&r=ecm |
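A minimal sketch of the two statistics compared above and of the union-of-rejections rule: the DF statistic is the t-ratio on the lagged level in a regression of the first difference, the KSS statistic is the t-ratio on the cubed lagged level in the auxiliary regression of Kapetanios, Shin and Snell (2003), and the union rejects if either scaled statistic falls below its critical value. The data are assumed to be already demeaned or detrended, and the scaling constant and critical values must be taken from the relevant tables; they are placeholders here.

import numpy as np

def t_ratio(dy, regressor):
    """t-ratio on a single regressor in a no-intercept OLS of dy on that regressor."""
    b = regressor @ dy / (regressor @ regressor)
    resid = dy - b * regressor
    s2 = resid @ resid / (len(dy) - 1)
    return b / np.sqrt(s2 / (regressor @ regressor))

def df_and_kss(y_detrended):
    """DF and KSS statistics from demeaned or detrended data."""
    dy, ylag = np.diff(y_detrended), y_detrended[:-1]
    return t_ratio(dy, ylag), t_ratio(dy, ylag ** 3)

def union_of_rejections(df_stat, kss_stat, cv_df, cv_kss, lam):
    """Reject the unit root if either scaled left-tailed test rejects."""
    return (df_stat < lam * cv_df) or (kss_stat < lam * cv_kss)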
By: | Andrew J. Patton; Johanna F. Ziegel; Rui Chen |
Abstract: | Expected Shortfall (ES) is the average return on a risky asset conditional on the return being below some quantile of its distribution, namely its Value-at-Risk (VaR). The Basel III Accord, which will be implemented in the years leading up to 2019, places new attention on ES, but unlike VaR, there is little existing work on modeling ES. We use recent results from statistical decision theory to overcome the problem of "elicitability" for ES by jointly modelling ES and VaR, and propose new dynamic models for these risk measures. We provide estimation and inference methods for the proposed models, and confirm via simulation studies that the methods have good finite-sample properties. We apply these models to daily returns on four international equity indices, and find the proposed new ES-VaR models outperform forecasts based on GARCH or rolling window models. |
Date: | 2017–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1707.05108&r=ecm |
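For context on the elicitability point above, here is one zero-degrees-of-freedom member of the Fissler-Ziegel class of joint loss functions for (VaR, ES), under which the pair is jointly elicitable even though ES alone is not; whether this is the exact loss used in the paper is an assumption on our part. The constant-risk example only illustrates M-estimation by minimising average loss, for lower-tail returns with negative VaR and ES, and is not one of the dynamic models proposed in the paper.

import numpy as np
from scipy.optimize import minimize

def fz0_loss(y, v, e, alpha):
    """Joint (VaR, ES) loss for the lower alpha tail; valid for e < v < 0."""
    hit = (y <= v).astype(float)
    return -hit * (v - y) / (alpha * e) + v / e + np.log(-e) - 1.0

def fit_constant_var_es(y, alpha=0.05):
    """Estimate constant VaR and ES by minimising the average joint loss."""
    def objective(theta):
        v, e = theta
        if not (e < v < 0):
            return np.inf
        return fz0_loss(y, v, e, alpha).mean()
    v0 = np.quantile(y, alpha)          # starting values from the empirical tail
    e0 = y[y <= v0].mean()
    return minimize(objective, x0=[v0, e0], method="Nelder-Mead").x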
By: | Sylvain Barde; Sander van der Hoog |
Abstract: | Despite recent advances in bringing agent-based models (ABMs) to the data, the estimation or calibration of model parameters remains a challenge, especially when it comes to large-scale agent-based macroeconomic models. Most methods, such as the method of simulated moments (MSM), require in-the-loop simulation of new data, which may not be feasible for such computationally heavy simulation models. The purpose of this paper is to provide a proof-of-concept of a generic empirical validation methodology for such large-scale simulation models. We introduce an alternative 'large-scale' empirical validation approach, and apply it to the Eurace@Unibi macroeconomic simulation model (Dawid et al., 2016). This model was selected because it displays strong emergent behaviour and is able to generate a wide variety of nonlinear economic dynamics, including endogenous business and financial cycles. In addition, it is a computationally heavy simulation model, so it fits our targeted use case. The validation protocol consists of three stages. At the first stage we use Nearly-Orthogonal Latin Hypercube (NOLH) sampling in order to generate a set of 513 parameter combinations with good space-filling properties. At the second stage we use the recently developed Markov Information Criterion (MIC) to score the simulated data against empirical data. Finally, at the third stage we use stochastic kriging to construct a surrogate model of the MIC response surface, resulting in an interpolation of the response surface as a function of the parameters. The parameter combinations providing the best fit to the data are then identified as the local minima of the interpolated MIC response surface. The Model Confidence Set (MCS) procedure of Hansen et al. (2011) is used to restrict the set of model calibrations to those models that cannot be rejected to have equal predictive ability, at a given confidence level. Validation of the surrogate model is carried out by re-running the second stage of the analysis on the optima identified in this way and cross-checking that the realised MIC scores equal the MIC scores predicted by the surrogate model. The results we obtain so far look promising as a first proof-of-concept for the empirical validation methodology, since we are able to validate the model using empirical data series for 30 OECD countries and the euro area. The internal validation procedure of the surrogate model also suggests that the combination of NOLH sampling, MIC measurement and stochastic kriging yields reliable predictions of the MIC scores for samples not included in the original NOLH sample set. In our opinion, this is a strong indication that the method we propose could provide a viable statistical machine learning technique for the empirical validation of (large-scale) ABMs.
Keywords: | Statistical machine learning; surrogate modelling; empirical validation |
Date: | 2017–07 |
URL: | http://d.repec.org/n?u=RePEc:ukc:ukcedp:1712&r=ecm |
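A generic sketch of the three-stage protocol described above, with off-the-shelf stand-ins: an ordinary Latin hypercube instead of the NOLH design, a user-supplied scoring function in place of the Markov Information Criterion, and a Gaussian-process surrogate (ordinary kriging) instead of stochastic kriging. score_simulation is a hypothetical, computationally expensive function that runs the ABM at a parameter vector and scores the simulated output against the data.

import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def surrogate_calibration(score_simulation, lower, upper, n_design=513, n_grid=20000, seed=0):
    d = len(lower)
    # Stage 1: space-filling design over the parameter box.
    design = qmc.scale(qmc.LatinHypercube(d, seed=seed).random(n_design), lower, upper)
    # Stage 2: run the expensive simulation/scoring once per design point.
    scores = np.array([score_simulation(theta) for theta in design])
    # Stage 3: fit a surrogate of the score surface and minimise it on a dense candidate set.
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(design, scores)
    grid = qmc.scale(qmc.LatinHypercube(d, seed=seed + 1).random(n_grid), lower, upper)
    pred = gp.predict(grid)
    return grid[np.argmin(pred)], gp       # best parameter combination and the fitted surrogate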