NEP: New Economics Papers on Econometrics |
By: | Sokbae Lee; Yuan Liao; Myung Hwan Seo; Youngki Shin |
Abstract: | We develop a new method of online inference for a vector of parameters estimated by the Polyak-Ruppert averaging procedure of stochastic gradient descent (SGD) algorithms. We leverage insights from time series regression in econometrics and construct asymptotically pivotal statistics via random scaling. Our approach is fully operational with online data and is rigorously underpinned by a functional central limit theorem. Our proposed inference method has two key advantages over existing methods. First, the test statistic is computed in an online fashion with only SGD iterates and the critical values can be obtained without any resampling methods, thereby allowing for efficient implementation suitable for massive online data. Second, there is no need to estimate the asymptotic variance, and our inference method is shown to be robust to changes in the tuning parameters for SGD algorithms in simulation experiments with synthetic data. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.03156&r= |
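A minimal Python sketch of the random-scaling idea described in the abstract above, for a linear regression estimated by averaged SGD. The step-size rule and the 6.747 critical value (commonly cited as the 97.5% quantile of the pivotal mixed-normal limit) are assumptions for illustration rather than the paper's exact prescriptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated streaming linear-regression data: y = x'theta0 + noise
d, n = 3, 100_000
theta0 = np.array([1.0, -0.5, 0.25])

theta = np.zeros(d)          # current SGD iterate
theta_bar = np.zeros(d)      # Polyak-Ruppert average
# running sums for the random-scaling matrix
A = np.zeros((d, d))         # sum_s s^2 * bar_s bar_s'
b = np.zeros(d)              # sum_s s^2 * bar_s
c = 0.0                      # sum_s s^2

for t in range(1, n + 1):
    x = rng.normal(size=d)
    y = x @ theta0 + rng.normal()
    eta = 0.2 * t ** (-0.505)            # assumed Robbins-Monro step size
    theta -= eta * (x @ theta - y) * x   # SGD step on the squared loss
    theta_bar += (theta - theta_bar) / t # running average of iterates
    A += t**2 * np.outer(theta_bar, theta_bar)
    b += t**2 * theta_bar
    c += t**2

# Random-scaling matrix: V = n^{-2} sum_s s^2 (bar_s - bar_n)(bar_s - bar_n)'
V = (A - np.outer(b, theta_bar) - np.outer(theta_bar, b)
     + c * np.outer(theta_bar, theta_bar)) / n**2

# t-statistic for H0: theta_j = theta0_j, compared with the pivotal
# critical value ~6.747 (assumed 97.5% quantile of the mixed-normal limit).
for j in range(d):
    t_stat = np.sqrt(n) * (theta_bar[j] - theta0[j]) / np.sqrt(V[j, j])
    print(f"coef {j}: t = {t_stat: .3f}, reject at 5%: {abs(t_stat) > 6.747}")
```

Because only a handful of running sums are stored, the statistic can be updated one observation at a time without revisiting past data, which is the point of the online construction.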
By: | Taras Bodnar; Nestor Parolya; Erik Thorsen |
Abstract: | In this paper, new results in random matrix theory are derived which allow us to construct a shrinkage estimator of the global minimum variance (GMV) portfolio when the shrinkage target is a random object. More specifically, the shrinkage target is the holding portfolio estimated from previous data. The theoretical findings are applied to develop a theory for dynamic estimation of the GMV portfolio, where the new estimator of its weights is shrunk toward the holding portfolio at each reconstruction date. Both cases with and without overlapping samples are considered. In the non-overlapping case, different asset-return data are used to construct the traditional estimator of the GMV portfolio weights and to determine the target portfolio, while the overlapping case allows intersections between the samples. The theoretical results are derived under weak assumptions on the data-generating process: no specific distribution is assumed for the asset returns beyond the existence of finite $4+\varepsilon$, $\varepsilon>0$, moments, and the population covariance matrix may have an unbounded spectrum. The performance of the new trading strategies is investigated via an extensive simulation. Finally, the theoretical findings are implemented in an empirical illustration based on the returns on stocks included in the S&P 500 index. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.02131&r= |
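A minimal sketch of the mechanics: estimate the GMV weights $\Sigma^{-1}\mathbf{1}/(\mathbf{1}'\Sigma^{-1}\mathbf{1})$ from recent data and shrink them toward the currently held portfolio. The plug-in shrinkage intensity below simply minimizes the estimated variance of the convex combination; the paper's estimator uses random-matrix-theory corrections instead, so treat this as illustrative only.

```python
import numpy as np

def gmv_weights(sigma):
    """Global minimum variance weights: Sigma^{-1} 1 / (1' Sigma^{-1} 1)."""
    ones = np.ones(sigma.shape[0])
    w = np.linalg.solve(sigma, ones)
    return w / w.sum()

def shrunk_gmv(returns, w_hold):
    """Shrink the sample GMV portfolio toward the holding portfolio w_hold."""
    sigma_hat = np.cov(returns, rowvar=False)
    w_gmv = gmv_weights(sigma_hat)
    d = w_gmv - w_hold
    # Plug-in intensity minimizing the estimated variance of
    # alpha * w_gmv + (1 - alpha) * w_hold, clipped to [0, 1].
    num = w_hold @ sigma_hat @ w_hold - w_hold @ sigma_hat @ w_gmv
    den = d @ sigma_hat @ d
    alpha = float(np.clip(num / den, 0.0, 1.0)) if den > 0 else 0.0
    return alpha * w_gmv + (1 - alpha) * w_hold, alpha

rng = np.random.default_rng(1)
p, n = 50, 250
returns = rng.normal(scale=0.01, size=(n, p))   # synthetic asset returns
w_hold = np.full(p, 1.0 / p)                    # current holdings: equal weights
w_new, alpha = shrunk_gmv(returns, w_hold)
print("shrinkage intensity:", round(alpha, 3),
      " sum of new weights:", round(w_new.sum(), 6))
```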
By: | Andreï Kostyrka (Department of Economics and Management, Université du Luxembourg); Dmitry Igorevich Malakhov (HSE University, Moscow, Russia)
Abstract: | We propose a novel univariate conditional density model and decompose asset returns into a sum of copula-connected unobserved ‘good’ and ‘bad’ shocks. The novelty of this approach comes from two factors: we explicitly model the correlation between the unobserved shocks and allow for the presence of copula-connected discrete jumps. The proposed framework is very flexible and subsumes other models, such as ‘bad environments, good environments’. Our model reveals certain hidden characteristics of returns, explains investors’ behaviour in greater detail, and yields better forecasts of risk measures. The in-sample and out-of-sample performance of our model is better than that of 40 popular GARCH variants. A Monte-Carlo simulation shows that the proposed model recovers the structural parameters of the unobserved dynamics. We estimate the model on S&P 500 data and find that time-dependent non-negative covariance between ‘good’ and ‘bad’ shocks with a leverage-like effect is an important component of total variance. An asymmetric reaction to shocks is present in almost all characteristics of returns. The conditional distribution of returns appears to be highly time-dependent, with skewness both in the centre and in the tails. We conclude that continuous shocks are more important than discrete jumps, at least at the daily frequency. |
Keywords: | GARCH, conditional density, leverage effect, jumps, bad volatility, good volatility. |
JEL: | C53 C58 C63 G17 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:luc:wpaper:21-09&r= |
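A minimal simulation of the decomposition idea above: returns as the difference of positive ‘good’ and ‘bad’ shocks whose dependence enters through a Gaussian copula, plus an occasional discrete jump. The gamma marginals, the copula correlation, and the jump mechanism are illustrative assumptions, not the authors' specification.

```python
import numpy as np
from scipy.stats import norm, gamma

rng = np.random.default_rng(2)
n = 10_000
rho = 0.4          # copula correlation between good and bad shocks (assumed)
jump_prob = 0.01   # probability of an extra discrete negative jump (assumed)

# Gaussian copula: correlated normals -> uniforms -> gamma marginals
z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
u = norm.cdf(z)
good = gamma.ppf(u[:, 0], a=2.0, scale=0.003)   # positive 'good' shock
bad = gamma.ppf(u[:, 1], a=2.0, scale=0.003)    # positive 'bad' shock
jump = -0.05 * rng.binomial(1, jump_prob, size=n)

returns = good - bad + jump
print("mean return:", returns.mean())
print("sample corr(good, bad):", np.corrcoef(good, bad)[0, 1].round(3))
print("skewness:", ((returns - returns.mean())**3).mean() / returns.std()**3)
```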
By: | Yu-Chin Hsu; Martin Huber; Ying-Ying Lee; Chu-An Liu |
Abstract: | While most treatment evaluations focus on binary interventions, a growing literature also considers continuously distributed treatments, e.g. hours spent in a training program to assess its effect on labor market outcomes. In this paper, we propose a Cramér-von Mises-type test for testing whether the mean potential outcome given a specific treatment has a weakly monotonic relationship with the treatment dose under a weak unconfoundedness assumption. This appears interesting for testing shape restrictions, e.g. whether increasing the treatment dose always has a non-negative effect, no matter what the baseline level of treatment is. We formally show that the proposed test controls asymptotic size and is consistent against any fixed alternative. These theoretical findings are supported by the method's finite sample behavior in our Monte-Carlo simulations. As an empirical illustration, we apply our test to the Job Corps study and reject a weakly monotonic relationship between the treatment (hours in academic and vocational training) and labor market outcomes like earnings or employment. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.04237&r= |
By: | Paul Goldsmith-Pinkham; Peter Hull; Michal Kolesár
Abstract: | We study the causal interpretation of regressions on multiple dependent treatments and flexible controls. Such regressions are often used to analyze randomized control trials with multiple intervention arms, and to estimate institutional quality (e.g. teacher value-added) with observational data. We show that, unlike with a single binary treatment, these regressions do not generally estimate convex averages of causal effects, even when the treatments are conditionally randomly assigned and the controls fully address omitted variables bias. We discuss different solutions to this issue, and propose as a solution a new class of efficient estimators of weighted average treatment effects. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.05024&r= |
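A small simulation in the spirit of the result above, under an assumed data-generating process with two mutually exclusive arms and a single binary control: by the Frisch-Waugh-Lovell theorem, the coefficient on one arm places nonzero "contamination" weights on units assigned to the other arm, so heterogeneous effects in that arm can leak into the estimate even though assignment is conditionally random.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000

# Two mutually exclusive arms with assignment probabilities depending on a
# binary covariate g (so g must be controlled for).
g = rng.binomial(1, 0.5, size=n)
p1 = np.where(g == 1, 0.6, 0.2)
p2 = np.where(g == 1, 0.2, 0.6)
u = rng.uniform(size=n)
d1 = (u < p1).astype(float)
d2 = ((u >= p1) & (u < p1 + p2)).astype(float)

# Heterogeneous effects: arm-2 effects vary with g; arm-1 effect is 1.0.
y = 1.0 * d1 + (0.5 + 1.5 * g) * d2 + 0.3 * g + rng.normal(size=n)

# Frisch-Waugh-Lovell: residualize d1 on the intercept, the other arm, and g.
X_other = np.column_stack([np.ones(n), d2, g])
resid = d1 - X_other @ np.linalg.lstsq(X_other, d1, rcond=None)[0]
beta1 = resid @ y / (resid @ resid)       # OLS coefficient on arm 1

# Implicit weights that beta1 places on arm-2 units ("contamination" weights):
# they sum to ~0 but are not identically 0, so heterogeneous arm-2 effects
# can leak into the arm-1 coefficient.
lam2 = resid[d2 == 1] / (resid @ resid)
print("OLS coefficient on arm 1 (true arm-1 effect is 1.0):", round(beta1, 3))
print("sum of contamination weights:", round(lam2.sum(), 6),
      " spread (sd):", round(lam2.std(), 6))
```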
By: | Vivek F. Farias; Andrew A. Li; Tianyi Peng |
Abstract: | The problem of causal inference with panel data is a central econometric question. The following is a fundamental version of this problem: Let $M^*$ be a low rank matrix and $E$ be a zero-mean noise matrix. For a 'treatment' matrix $Z$ with entries in $\{0,1\}$ we observe the matrix $O$ with entries $O_{ij} := M^*_{ij} + E_{ij} + \mathcal{T}_{ij} Z_{ij}$, where $\mathcal{T}_{ij}$ are unknown, heterogeneous treatment effects. The problem requires us to estimate the average treatment effect $\tau^* := \sum_{ij} \mathcal{T}_{ij} Z_{ij} / \sum_{ij} Z_{ij}$. The synthetic control paradigm provides an approach to estimating $\tau^*$ when $Z$ places support on a single row. This paper extends that framework to allow rate-optimal recovery of $\tau^*$ for general $Z$, thus broadly expanding its applicability. Our guarantees are the first of their type in this general setting. Computational experiments on synthetic and real-world data show a substantial advantage over competing estimators. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.02780&r= |
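A minimal sketch of the estimand in the display above: recover the low-rank baseline from the untreated entries by soft-impute (iterative SVD soft-thresholding), then average $O - \hat M$ over the treated entries. The penalty level and iteration count are ad hoc assumptions; the paper's estimator and its guarantees are more refined.

```python
import numpy as np

def soft_impute(O, Z, lam=8.0, n_iter=200):
    """Estimate the low-rank baseline M* from untreated entries (Z == 0)
    via iterative SVD soft-thresholding (soft-impute)."""
    M = np.zeros_like(O)
    mask = (Z == 0)
    for _ in range(n_iter):
        filled = np.where(mask, O, M)               # keep observed control entries
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)                # nuclear-norm shrinkage
        M = (U * s) @ Vt
    return M

rng = np.random.default_rng(4)
n_units, n_periods, rank = 60, 40, 3
M_star = rng.normal(size=(n_units, rank)) @ rng.normal(size=(rank, n_periods))
Z = (rng.uniform(size=(n_units, n_periods)) < 0.15).astype(int)   # treated entries
tau_ij = 2.0 + 0.5 * rng.normal(size=(n_units, n_periods))        # heterogeneous effects
O = M_star + 0.5 * rng.normal(size=(n_units, n_periods)) + tau_ij * Z

M_hat = soft_impute(O, Z, lam=8.0)
tau_hat = ((O - M_hat) * Z).sum() / Z.sum()   # average effect over treated entries
tau_true = (tau_ij * Z).sum() / Z.sum()
print("tau_hat =", round(tau_hat, 3), " tau* =", round(tau_true, 3))
```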
By: | Breen, Richard; Ermisch, John |
Abstract: | Heterogeneity of treatment effects on an outcome is a plausible assumption to make about the vast majority of causal relationships studied in the social sciences. In these circumstances the IV estimator is often interpreted as yielding an estimate of a Local Average Treatment Effect (LATE): a marginal change in the outcome for those whose treatment is changed by the variation of the particular instrument in the study. Our aim is to explain the relationship between the LATE parameter and its IV estimator by using a simple model which is easily accessible to applied researchers, and by relating the model to examples from the demographic literature. A focus of the paper is how additional heterogeneity in the instrument–treatment relationship affects the properties and interpretation of the IV estimator. We show that if the two kinds of heterogeneity are correlated, then the LATE parameter combines both the underlying treatment effects and the parameters of the instrument–treatment relationship. It is then a more complicated concept than many researchers realise. |
Date: | 2021–06–03 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:vx9m7&r= |
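A short simulation of the setting discussed above, with assumed compliance types and heterogeneous effects: the Wald/IV estimate recovers the average effect among compliers (the LATE), which can be far from the population average effect.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000

z = rng.binomial(1, 0.5, size=n)                  # binary instrument
# Compliance types: 60% compliers, 30% never-takers, 10% always-takers (assumed).
ctype = rng.choice(["complier", "never", "always"], size=n, p=[0.6, 0.3, 0.1])
d = np.where(ctype == "always", 1,
     np.where(ctype == "never", 0, z))            # treatment take-up

# Heterogeneous effects by compliance type (assumed).
effect = np.where(ctype == "complier", 1.0,
          np.where(ctype == "always", 4.0, 3.0))
y = 0.2 + effect * d + rng.normal(size=n)

wald = (y[z == 1].mean() - y[z == 0].mean()) / (d[z == 1].mean() - d[z == 0].mean())
print("IV (Wald) estimate:         ", round(wald, 3))
print("mean effect among compliers:", round(effect[ctype == "complier"].mean(), 3))
print("population mean effect:     ", round(effect.mean(), 3))
```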
By: | Christophe Gaillac (TSE - Toulouse School of Economics - UT1 - Université Toulouse 1 Capitole - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Eric Gautier (TSE - Toulouse School of Economics - UT1 - Université Toulouse 1 Capitole - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement, UT1 - Université Toulouse 1 Capitole) |
Abstract: | This paper studies point identification of the distribution of the coefficients in some random coefficients models with exogenous regressors when their support is a proper subset, possibly discrete but countable. We exhibit trade-offs between restrictions on the distribution of the random coefficients and the support of the regressors. We consider linear models including those with nonlinear transforms of a baseline regressor, with an infinite number of regressors and deconvolution, the binary choice model, and panel data models such as single-index panel data models and an extension of the Kotlarski lemma. |
Keywords: | Random Coefficients, Deconvolution, Quasi-analyticity, Identification. AMS 2010 Subject Classification: Primary 62P20; Secondary 42A99, 62G07, 62G08 |
Date: | 2021–05–21 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-03231392&r= |
By: | Alessandro Casini; Pierre Perron |
Abstract: | This paper develops change-point methods for the time-varying spectrum of a time series. We focus on time series with a bounded spectral density that changes smoothly under the null hypothesis but exhibits change-points or becomes less smooth under the alternative. We provide a general theory for inference about the degree of smoothness of the spectral density over time. We address two local problems. The first is the detection of discontinuities (or breaks) in the spectrum at unknown dates and frequencies. The second involves abrupt yet continuous changes in the spectrum over a short time period at an unknown frequency, which do not constitute a break. We consider estimation and minimax-optimal testing. We determine the optimal rate for the minimax distinguishable boundary, i.e., the minimum break magnitude such that we are still able to uniformly control type I and type II errors. We propose a novel procedure for the estimation of the change-points based on a wild sequential top-down algorithm and show its consistency under shrinking shifts and a possibly growing number of change-points. Our method can be used across many fields, and a companion program is made available in popular software packages. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.02031&r= |
By: | Zhang, Han (The Hong Kong University of Science and Technology) |
Abstract: | Social scientists have increasingly been applying machine learning algorithms to "big data" to measure theoretical concepts that they could not easily measure before, and then using these machine-predicted variables in regressions. This article first demonstrates that directly inserting binary predictions (i.e., classification) without accounting for prediction error will generally lead to attenuation bias in either the slope coefficients or the marginal effect estimates. We then propose several estimators that obtain consistent estimates of the coefficients. The estimators require validation data for which researchers observe both the machine predictions and the true values. Such validation data are either automatically available when training the algorithms or can be easily obtained. Monte Carlo simulations demonstrate the effectiveness of the proposed estimators. Finally, we summarize the usage pattern of machine learning predictions in 18 recent publications in top social science journals, apply our proposed estimators to two of them, and offer some practical recommendations. |
Date: | 2021–05–29 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:453jk&r= |
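A small sketch of the problem and of one simple moment-based fix, under the assumption that misclassification is unrelated to the outcome noise: regressing on the machine-predicted binary variable attenuates the slope by Cov(x, x̂)/Var(x̂), and a validation sample with both the true label and the prediction lets us estimate and undo that factor. The authors' estimators are more general; this only illustrates the mechanism.

```python
import numpy as np

rng = np.random.default_rng(6)
n, n_val = 50_000, 2_000
beta = 2.0

# True binary variable and a noisy machine prediction (misclassification
# unrelated to the outcome noise; error rates are assumed).
x = rng.binomial(1, 0.4, size=n)
flip = rng.uniform(size=n) < np.where(x == 1, 0.15, 0.10)
x_hat = np.where(flip, 1 - x, x)
y = 1.0 + beta * x + rng.normal(size=n)

def slope(a, b):
    """OLS slope of a on b."""
    return np.cov(a, b)[0, 1] / np.var(b, ddof=1)

naive = slope(y, x_hat)                      # attenuated estimate

# Validation subsample: both the true label and the prediction are observed.
idx = rng.choice(n, size=n_val, replace=False)
calib = np.cov(x[idx], x_hat[idx])[0, 1] / np.var(x_hat[idx], ddof=1)
corrected = naive / calib

print("naive slope:    ", round(naive, 3))
print("corrected slope:", round(corrected, 3), " (true beta = 2.0)")
```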
By: | Javed, Farrukh (Örebro University School of Business); Mazur, Stepan (Örebro University School of Business); Thorsén, Erik (Stockholm University)
Abstract: | In this paper, we investigate the distributional properties of the estimated tangency portfolio (TP) weights assuming that the asset returns follow a matrix variate closed skew-normal distribution. We establish a stochastic representation of a linear combination of the estimated TP weights that fully characterizes its distribution. Using the stochastic representation, we derive the mean and variance of the estimated TP weights, which are of key importance in portfolio analysis. Furthermore, we provide the asymptotic distribution of the linear combination of the estimated TP weights under the high-dimensional asymptotic regime, i.e. the dimension of the portfolio p and the sample size n tend to infinity such that p/n → c ∈ (0, 1). The good performance of the theoretical findings is documented in a simulation study. In the empirical study, we apply the theoretical results to real data on the stocks included in the S&P 500 index. |
Keywords: | Asset allocation; high-dimensional asymptotics; matrix variate skew-normal distribution; stochastic representation; tangency portfolio |
JEL: | C13 G11 |
Date: | 2021–06–10 |
URL: | http://d.repec.org/n?u=RePEc:hhs:oruesi:2021_013&r= |
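A minimal sketch of the object under study: the estimated tangency-portfolio weights $\hat w \propto S^{-1}(\bar x - r_f \mathbf{1})$ and the Monte Carlo sampling variability of a linear combination $l'\hat w$. The Gaussian data-generating process, the fixed low dimension, and the choice of $l$ are assumptions; the paper works with a matrix variate closed skew-normal model and a high-dimensional regime.

```python
import numpy as np

rng = np.random.default_rng(7)
p, n, rf = 10, 250, 0.001
mu = rng.normal(0.01, 0.002, size=p)      # population means (assumed)
A = rng.normal(scale=0.01, size=(p, p))
sigma = A @ A.T + 0.001 * np.eye(p)       # population covariance (assumed)

def tp_weights(returns, rf):
    """Estimated tangency-portfolio weights, normalized to sum to one."""
    xbar = returns.mean(axis=0)
    S = np.cov(returns, rowvar=False)
    w = np.linalg.solve(S, xbar - rf)
    return w / w.sum()

l = np.zeros(p); l[0] = 1.0               # linear combination: first weight
draws = []
for _ in range(2_000):
    R = rng.multivariate_normal(mu, sigma, size=n)
    draws.append(l @ tp_weights(R, rf))
draws = np.array(draws)
print("Monte Carlo mean and sd of l'w_hat:",
      round(draws.mean(), 4), round(draws.std(), 4))
```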
By: | Knut Are Aastveit (BI Norwegian Business School); Jamie Cross (BI Norwegian Business School); Herman K. van Dijk (Erasmus University Rotterdam) |
Abstract: | We propose a novel and numerically efficient approach to quantifying forecast uncertainty about the real price of oil, using a combination of probabilistic individual model forecasts. Our combination method extends earlier approaches applied to oil price forecasting by allowing for sequential updating of time-varying combination weights, estimation of time-varying forecast biases and facets of miscalibration of individual forecast densities, and time-varying inter-dependencies among models. To illustrate the usefulness of the method, we present an extensive set of empirical results about time-varying forecast uncertainty and risk for the real price of oil over the period 1974-2018. We show that the combination approach systematically outperforms commonly used benchmark models and combination approaches, both in terms of point and density forecasts. The estimated individual model weights are highly time-varying, reflecting a large time variation in the relative performance of the various individual models. The combination approach has built-in diagnostic information measures about forecast inaccuracy and/or model set incompleteness, which provide clear signals of model incompleteness during three crisis periods. To highlight that our approach can also be useful for policy analysis, we present a basic analysis of profit-loss and of hedging against price risk. |
Keywords: | Oil price, Forecast density combination, Bayesian forecasting, Instabilities, Model uncertainty |
Date: | 2021–06–13 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20210053&r= |
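A stripped-down sketch of density forecast combination with time-varying weights, here updated recursively from past log scores with a forgetting factor. The two toy predictive models, the forgetting factor, and the weighting rule are assumptions; the paper's combination scheme additionally tracks biases, miscalibration, and cross-model dependence.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(8)
T = 300
y = np.cumsum(rng.normal(size=T)) * 0.1 + rng.normal(size=T)   # synthetic price series

# Two toy predictive densities for y[t]: a random-walk model and a rolling-mean model.
def density_rw(t):
    return norm(loc=y[t - 1], scale=1.0)

def density_mean(t):
    return norm(loc=y[max(0, t - 20):t].mean(), scale=1.5)

models = [density_rw, density_mean]
log_w = np.zeros(len(models))      # log combination weights
delta = 0.95                       # forgetting factor (assumed)
combined_logscore = 0.0

for t in range(21, T):
    w = np.exp(log_w - log_w.max()); w /= w.sum()
    dens = [m(t) for m in models]
    # combined predictive density evaluated at the realized value
    p_mix = sum(wi * d.pdf(y[t]) for wi, d in zip(w, dens))
    combined_logscore += np.log(p_mix)
    # recursive weight update from each model's log score
    log_w = delta * log_w + np.array([d.logpdf(y[t]) for d in dens])

w = np.exp(log_w - log_w.max()); w /= w.sum()
print("final weights (random walk, rolling mean):", np.round(w, 3))
print("average combined log score:", round(combined_logscore / (T - 21), 3))
```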
By: | Jason Poulos; Andrea Albanese; Andrea Mercatanti; Fan Li |
Abstract: | We propose a method of retrospective counterfactual imputation in panel data settings with later-treated and always-treated units, but no never-treated units. We use the observed outcomes to impute the counterfactual outcomes of the later-treated units with a matrix completion estimator. We propose a novel propensity-score and elapsed-time weighting of the estimator's objective function to correct for differences between groups in the distributions of observed covariates and unobserved fixed effects, and in the time elapsed since treatment. Our methodology is motivated by studying the effect of two milestones of European integration, the Free Movement of Persons and the Schengen Agreement, on the share of cross-border workers in sending border regions. We apply the proposed method to European Labour Force Survey (ELFS) data and provide evidence that opening the border almost doubled the probability of working beyond the border in Eastern European regions. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.00788&r= |
By: | Shosei Sakaguchi |
Abstract: | This paper studies statistical decisions for dynamic treatment assignment problems. Many policies involve dynamics in their treatment assignments where treatments are sequentially assigned to individuals across multiple stages and the effect of treatment at each stage is usually heterogeneous with respect to the prior treatments, past outcomes, and observed covariates. We consider estimating an optimal dynamic treatment rule that guides the optimal treatment assignment for each individual at each stage based on the individual's history. This paper proposes an empirical welfare maximization approach in a dynamic framework. The approach estimates the optimal dynamic treatment rule from panel data taken from an experimental or quasi-experimental study. The paper proposes two estimation methods: one solves the treatment assignment problem at each stage through backward induction, and the other solves the whole dynamic treatment assignment problem simultaneously across all stages. We derive finite-sample upper bounds on the worst-case average welfare-regrets for the proposed methods and show $n^{-1/2}$-minimax convergence rates. We also modify the simultaneous estimation method to incorporate intertemporal budget/capacity constraints. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.05031&r= |
By: | Victor Chernozhukov; Whitney K. Newey; Rahul Singh |
Abstract: | Debiased machine learning is a meta-algorithm based on bias correction and sample splitting to calculate confidence intervals for functionals (i.e. scalar summaries) of machine learning algorithms. For example, an analyst may desire the confidence interval for a treatment effect estimated with a neural network. We provide a nonasymptotic debiased machine learning theorem that encompasses any global or local functional of any machine learning algorithm that satisfies a few simple, interpretable conditions. Formally, we prove consistency, Gaussian approximation, and semiparametric efficiency by finite-sample arguments. The rate of convergence is root-n for global functionals, and it degrades gracefully for local functionals. Our results culminate in a simple set of conditions that an analyst can use to translate modern learning theory rates into traditional statistical inference. The conditions reveal a new double robustness property for ill-posed inverse problems. |
Date: | 2021–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2105.15197&r= |
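A compact sketch of the debiased/double machine learning recipe referred to above, for the leading example of an average treatment effect: cross-fit the nuisance estimates with any ML learner, plug them into the doubly robust (AIPW) score, and average. The random-forest learners and the trimming constant are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(9)
n, p = 4_000, 5
X = rng.normal(size=(n, p))
propensity = 1 / (1 + np.exp(-X[:, 0]))
D = rng.binomial(1, propensity)
Y = 1.0 * D + X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(size=n)   # true ATE = 1.0

scores = np.zeros(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # nuisance functions fit on the training fold only (cross-fitting)
    m1 = RandomForestRegressor(n_estimators=200, random_state=0).fit(
        X[train][D[train] == 1], Y[train][D[train] == 1])
    m0 = RandomForestRegressor(n_estimators=200, random_state=0).fit(
        X[train][D[train] == 0], Y[train][D[train] == 0])
    e = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[train], D[train])
    ehat = np.clip(e.predict_proba(X[test])[:, 1], 0.01, 0.99)     # trimming (assumed)
    mu1, mu0 = m1.predict(X[test]), m0.predict(X[test])
    # doubly robust (AIPW) score evaluated on the held-out fold
    scores[test] = (mu1 - mu0
                    + D[test] * (Y[test] - mu1) / ehat
                    - (1 - D[test]) * (Y[test] - mu0) / (1 - ehat))

ate = scores.mean()
se = scores.std(ddof=1) / np.sqrt(n)
print(f"ATE estimate: {ate:.3f}  (95% CI: {ate - 1.96*se:.3f}, {ate + 1.96*se:.3f})")
```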
By: | Andrew Chia |
Abstract: | We show how the random coefficient logistic demand (BLP) model can be phrased as an automatically differentiable moment function, including the incorporation of numerical safeguards proposed in the literature. This allows gradient-based frequentist and quasi-Bayesian estimation using the Continuously Updating Estimator (CUE). Drawing from the machine learning literature, we outline hitherto under-utilized best practices in both frequentist and Bayesian estimation techniques. Our Monte Carlo experiments compare the performance of CUE, 2S-GMM, and LTE estimation. Preliminary findings indicate that the CUE estimated using LTE and frequentist optimization has a lower bias but higher MAE compared to the traditional 2-Stage GMM (2S-GMM) approach. We also find that using credible intervals from MCMC sampling for the non-linear parameters together with frequentist analytical standard errors for the concentrated out linear parameters provides empirical coverage closest to the nominal level. The accompanying admest Python package provides a platform for replication and extensibility. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2106.04636&r= |
By: | Eli Ben-Michael; Avi Feller; Jesse Rothstein |
Abstract: | Staggered adoption of policies by different units at different times creates promising opportunities for observational causal inference. Estimation remains challenging, however, and common regression methods can give misleading results. A promising alternative is the synthetic control method (SCM), which finds a weighted average of control units that closely balances the treated unit’s pre-treatment outcomes. In this paper, we generalize SCM, originally designed to study a single treated unit, to the staggered adoption setting. We first bound the error for the average effect and show that it depends on both the imbalance for each treated unit separately and the imbalance for the average of the treated units. We then propose "partially pooled" SCM weights to minimize a weighted combination of these measures; approaches that focus only on balancing one of the two components can lead to bias. We extend this approach to incorporate unit-level intercept shifts and auxiliary covariates. We assess the performance of the proposed method via extensive simulations and apply our results to the question of whether teacher collective bargaining leads to higher school spending, finding minimal impacts. We implement the proposed method in the augsynth R package. |
JEL: | C21 C23 I21 J5 |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:28886&r= |
By: | Schneider, Eric |
Abstract: | Economic historians have long been aware that sample-selection bias and other forms of bias could lead to spurious causal inferences. However, our approach to these biases has been muddled at times by dealing with each bias separately and by confusion about the sources of bias and how to mitigate them. This paper shows how the methodology of directed acyclical graphs (DAGs) formulated by Pearl (2009) and particularly the concept of collider bias can provide economic historians with a unified approach to managing a wide range of biases that can distort causal inference. I present ten examples of collider bias drawn from economic history research, focussing mainly on examples where the authors were able to overcome or mitigate the bias. Thus, the paper highlights how to diagnose collider bias and also strategies for managing it. The paper also shows that quasi-random experimental designs are rarely able to overcome collider bias. Although all of these biases were understood by economic historians before, conceptualising them as collider bias will improve economic historians' understanding of the limitations of particular sources and help us develop better research designs in the future. |
Keywords: | collider bias; directed acyclical graphs; sample-selection bias |
JEL: | N01 N30 |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:14940&r= |
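A short illustration of the collider mechanism discussed above, with an assumed data-generating process: two independent causes of selection into a sample become negatively correlated once we condition on being selected.

```python
import numpy as np

rng = np.random.default_rng(10)
n = 100_000
talent = rng.normal(size=n)
luck = rng.normal(size=n)
admitted = (talent + luck > 1.5)          # collider: caused by both variables

print("corr(talent, luck), full population:   ",
      round(np.corrcoef(talent, luck)[0, 1], 3))
print("corr(talent, luck), admitted subsample:",
      round(np.corrcoef(talent[admitted], luck[admitted])[0, 1], 3))
```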
By: | Ruijun Bu; Rodrigo Hizmeri; Marwan Izzeldin; Anthony Murphy; Mike G. Tsionas |
Abstract: | This paper proposes a novel approach to decomposing realized jump measures by type of activity (infinite/finite) and by sign. It also provides noise-robust versions of the ABD jump test (Andersen et al., 2007b) and of realized semivariance measures for use at high-frequency sampling intervals. The volatility forecasting exercise involves the use of different types of jumps, forecast horizons, sampling frequencies, calendar and transaction time-based sampling schemes, as well as standard and noise-robust volatility measures. We find that infinite (finite) jumps improve the forecasts at shorter (longer) horizons, but the contribution of signed jumps is limited. Noise-robust estimators, which identify jumps in the presence of microstructure noise, deliver substantial forecast improvements at higher sampling frequencies. However, standard volatility measures at the 300-second frequency generate the smallest MSPEs. Since no single model dominates across sampling frequencies and forecast horizons, we show that model-averaged volatility forecasts, using time-varying weights and models from the model confidence set, generally outperform forecasts from both the benchmark and the single best extended HAR model. |
Keywords: | Realized volatility, Signed Jumps, Finite Jumps, Infinite Jumps, Volatility Forecasts, Noise-Robust Volatility, Model Averaging |
JEL: | C22 C51 C53 C58 |
URL: | http://d.repec.org/n?u=RePEc:liv:livedp:202109&r= |
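A minimal sketch of the building blocks named above, on simulated intraday returns: realized variance, upside/downside realized semivariances, and the signed jump variation as their difference. The jump tests, noise-robust adjustments, and HAR-type forecasting models in the paper go well beyond this, and the simulated return process is an assumption.

```python
import numpy as np

rng = np.random.default_rng(11)
m = 78                          # e.g. 5-minute returns in a 6.5-hour session
days = 250

# Simulated intraday returns with occasional jumps (assumed DGP)
sigma_day = 0.01 * np.exp(0.3 * rng.normal(size=days))
r = sigma_day[:, None] / np.sqrt(m) * rng.normal(size=(days, m))
jumps = rng.binomial(1, 0.02, size=(days, m)) * rng.normal(scale=0.01, size=(days, m))
r = r + jumps

RV = (r ** 2).sum(axis=1)                            # realized variance
RS_pos = np.where(r > 0, r ** 2, 0.0).sum(axis=1)    # upside semivariance
RS_neg = np.where(r < 0, r ** 2, 0.0).sum(axis=1)    # downside semivariance
SJ = RS_pos - RS_neg                                 # signed jump variation

print("mean RV:", RV.mean().round(6))
print("mean signed jump variation:", SJ.mean().round(6))
print("corr(SJ, next-day RV):", np.corrcoef(SJ[:-1], RV[1:])[0, 1].round(3))
```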
By: | Luisa Corrado (University of Rome Tor Vergata); Stefano Grassi (University of Rome Tor Vergata and CREATES); Aldo Paolillo (University of Rome Tor Vergata) |
Abstract: | This paper proposes and estimates a new Two-Sector One-Agent model that features large shocks. The resulting medium-scale New Keynesian model includes the standard real and nominal frictions used in the empirical literature and allows for heterogeneous COVID-19 pandemic exposure across sectors. We solve the model nonlinearly and propose a new nonlinear, non-Gaussian filter designed to handle large pandemic shocks and make inference feasible. Monte Carlo experiments show that it correctly identifies the source and time location of shocks with a massively reduced running time, making the estimation of macro-models with disaster shocks feasible. The estimation is carried out using the Sequential Monte Carlo sampler recently proposed by Herbst and Schorfheide (2014). Our empirical results show that the pandemic-induced economic downturn can be reconciled with a combination of large demand and supply shocks. More precisely, starting from the second quarter of 2020, the model detects a large negative shock to the demand for all kinds of goods, together with a large negative shock to the demand for contact-intensive products. On the supply side, our proposed method detects a large labor supply shock in the general sector and a large labor productivity shock in the pandemic-sensitive sector. |
Keywords: | COVID-19, Nonlinear, Non-Gaussian, Large shocks, DSGE |
JEL: | C11 C51 E30 |
Date: | 2021–06–15 |
URL: | http://d.repec.org/n?u=RePEc:aah:create:2021-08&r= |