on Econometrics |
By: | Ziwei Mei; Liugang Sheng; Zhentao Shi |
Abstract: | Local projection (LP) is a widely used method in empirical macroeconomics for estimating the impulse response function (IRF) through a series of time series ordinary least squares (OLS) regressions. To apply LP to panel data, researchers usually replace OLS with the fixed effect (FE) estimator to control for time-invariant individual heterogeneity. However, we find an implicit Nickell bias present in this seemingly natural extension, even when no lagged dependent variables appear in the panel data regression specification. This bias invalidates standard asymptotic statistical inference when the cross-sectional dimension N and the time-series dimension T are of the same order, which is a common scenario in practice. We recommend using the half-panel jackknife method to remove the implicit Nickell bias and restore the validity of standard statistical inference when T is at least moderate. Our theoretical results are supported by Monte Carlo simulations, and we present three panel data applications of macro finance that illustrate the differences between these estimators. |
Date: | 2023–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2302.13455&r=ecm |
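The half-panel jackknife recommended in the abstract above can be illustrated with a short sketch. The snippet below is not the authors' code: it assumes the usual split-sample correction, in which the time dimension is cut in half and the bias-corrected estimate is twice the full-sample fixed-effect (FE) estimate minus the average of the two half-sample estimates; the function names and the toy data-generating process are ours.

```python
# A minimal sketch (not the authors' code) of the half-panel jackknife applied to a
# fixed-effects slope, assuming the split-sample correction
#   theta_HPJ = 2 * theta_full - (theta_half1 + theta_half2) / 2.
import numpy as np

def fe_slope(y, x):
    """Within (fixed-effects) estimator of the slope in y_it = a_i + b * x_it + e_it."""
    yd = y - y.mean(axis=1, keepdims=True)   # demean within each unit i
    xd = x - x.mean(axis=1, keepdims=True)
    return (xd * yd).sum() / (xd ** 2).sum()

def half_panel_jackknife(y, x):
    """Half-panel jackknife bias correction: split the time dimension into two halves."""
    T = y.shape[1]
    h = T // 2
    b_full = fe_slope(y, x)
    b_1 = fe_slope(y[:, :h], x[:, :h])
    b_2 = fe_slope(y[:, h:], x[:, h:])
    return 2.0 * b_full - 0.5 * (b_1 + b_2)

# Toy panel with N = T = 50 (mechanics only; not calibrated to produce Nickell bias).
rng = np.random.default_rng(0)
N, T, b_true = 50, 50, 0.5
alpha = rng.normal(size=(N, 1))                      # unit fixed effects
x = 0.1 * rng.normal(size=(N, T)).cumsum(axis=1) + alpha
y = alpha + b_true * x + rng.normal(size=(N, T))
print(fe_slope(y, x), half_panel_jackknife(y, x))
```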
By: | Han, Jinyue; Wang, Jun; Gao, Wei; Tang, Man-Lai |
Abstract: | Semiparametric models are useful in econometrics, the social sciences and medical applications. In this paper, a new estimator based on least squares methods is proposed to estimate the direction of the unknown parameters in semi-parametric models. The proposed estimator is consistent and has an asymptotic distribution under mild conditions, without knowledge of the form of the link function. Simulations show that the proposed estimator is significantly superior to the maximum score estimator of Manski (1975) for binary response variables. When the error term follows a long-tailed distribution or a distribution with no moments, the proposed estimator performs well. Its application is illustrated with data on the export participation of manufacturers in Guangdong.
Keywords: | Binary model, direction, least squares estimator, maximum score, semi-parametric models, single index model. |
JEL: | C2 C25 C4 C51 |
Date: | 2023–02–13 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:116365&r=ecm |
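One simple least-squares route to the direction of the index parameters, sketched below, relies on the classical result that with (for example) elliptically distributed regressors the OLS coefficient in a single-index model is proportional to the true index vector, so its normalized version estimates the direction without knowledge of the link function. This is an illustrative reading of the least-squares idea, not necessarily the paper's exact estimator.

```python
# Illustrative sketch (not necessarily the paper's estimator): with a binary outcome
# y = 1{x'beta + e > 0}, the OLS coefficient of y on x is proportional to beta under
# elliptical designs, so its normalized version estimates the direction of beta.
import numpy as np

rng = np.random.default_rng(1)
n, beta = 5000, np.array([1.0, -2.0, 0.5])
X = rng.standard_normal((n, 3))
e = rng.standard_cauchy(n)              # heavy-tailed error: no moments required
y = (X @ beta + e > 0).astype(float)

Xc = np.column_stack([np.ones(n), X])
b_ols = np.linalg.lstsq(Xc, y, rcond=None)[0][1:]   # drop the intercept
direction_hat = b_ols / np.linalg.norm(b_ols)
print(direction_hat, beta / np.linalg.norm(beta))   # compare with the true direction
```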
By: | Zhenhong Huang; Chen Wang; Jianfeng Yao |
Abstract: | This paper develops a new specification test for instrument weakness when the number of instruments $K_n$ is large, with a magnitude comparable to the sample size $n$. The test relies on the fact that the difference between the two-stage least squares (2SLS) estimator and the ordinary least squares (OLS) estimator asymptotically disappears when there are many weak instruments, but otherwise converges to a non-zero limit. We establish the limiting distribution of the difference under these two specifications, and introduce a delete-$d$ jackknife procedure to consistently estimate the asymptotic variance/covariance of the difference. Monte Carlo experiments demonstrate the good performance of the test procedure in both the single and multiple endogenous variable cases. Additionally, we re-examine the analysis of the returns-to-education data in Angrist and Krueger (1991) using our proposed test. Both the simulation results and the empirical analysis indicate the reliability of the test.
Date: | 2023–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2302.14396&r=ecm |
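The ingredients of the test can be sketched as follows; the statistic and the jackknife below are simplified stand-ins, not the paper's exact formulas. In particular, the delete-$d$ jackknife is approximated by drawing random size-$(n-d)$ subsamples rather than enumerating all of them.

```python
# Sketch of the test's ingredients under stated assumptions (not the paper's exact
# statistic): the 2SLS-OLS difference, with its variance approximated by a delete-d
# jackknife over randomly drawn subsamples.
import numpy as np

def ols(y, X):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def tsls(y, X, Z):
    Xhat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first-stage fitted values
    return np.linalg.lstsq(Xhat, y, rcond=None)[0]

def delete_d_jackknife_var(y, X, Z, d, n_subsets, rng):
    """Approximate delete-d jackknife variance of the 2SLS-OLS difference."""
    n = len(y)
    diffs = []
    for _ in range(n_subsets):
        keep = rng.choice(n, size=n - d, replace=False)
        diffs.append(tsls(y[keep], X[keep], Z[keep]) - ols(y[keep], X[keep]))
    diffs = np.array(diffs)
    scale = (n - d) / d                                # delete-d jackknife scaling
    return scale * np.cov(diffs.T, ddof=1)

# Toy example: one endogenous regressor, many weak-ish instruments.
rng = np.random.default_rng(2)
n, K = 2000, 50
Z = rng.standard_normal((n, K))
u, v = rng.standard_normal(n), rng.standard_normal(n)
x = Z @ np.full(K, 0.05) + v
y = 1.0 * x + 0.8 * v + u                              # endogeneity through v
X = x.reshape(-1, 1)
diff = tsls(y, X, Z) - ols(y, X)
var = delete_d_jackknife_var(y, X, Z, d=n // 5, n_subsets=200, rng=rng)
print(diff, diff / np.sqrt(np.atleast_2d(var)[0, 0]))  # difference and a t-type ratio
```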
By: | Taoufik Bouezmarni (Universite de Sherbrooke); Mohamed Doukali (School of Economics, University of East Anglia); Abderrahim Taamouti (University of Liverpool) |
Abstract: | This paper aims to use copulas to derive alternative estimators of the Health Concentration Curve, hereafter CH, and the Gini coefficient for health distribution. We motivate the importance of expressing health inequality measures in terms of copulas, which we in turn use to build copula-based semi- and non-parametric estimators of the above measures. Thereafter, we study the asymptotic properties of these estimators. In particular, we establish their consistency and asymptotic normality. We provide expressions for their variances, which can be used to construct confidence intervals and build tests for the health concentration curve and the Gini health coefficient. A Monte Carlo simulation exercise shows that the semiparametric estimator outperforms the smoothed nonparametric estimator, and that the latter does better than the empirical estimator in terms of mean squared error. We also run an extensive empirical study where we apply our CH and Gini health coefficient estimators to show that inequalities across U.S. states in socioeconomic variables such as income/poverty and race/ethnicity explain the observed inequalities in U.S. COVID-19 infections and deaths.
Keywords: | Health concentration curve, Gini health coefficient, inequality, copula, semi- and non-parametric estimators, COVID-19 infections and deaths
JEL: | C13 C14 I14 |
Date: | 2023–01 |
URL: | http://d.repec.org/n?u=RePEc:uea:ueaeco:2023-01&r=ecm |
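The copula-based estimators themselves are not spelled out in the abstract, so the sketch below only shows the empirical benchmark they are compared against: the health concentration curve (cumulative share of health plotted against the population ranked by income) and the associated concentration index, computed with the standard covariance formula.

```python
# A minimal sketch of the empirical benchmark the copula-based estimators are compared
# against: the health concentration curve and its concentration index.
import numpy as np

def concentration_curve(health, income):
    order = np.argsort(income)                    # rank individuals by income
    h = health[order]
    p = np.arange(1, len(h) + 1) / len(h)         # cumulative population share
    ch = np.cumsum(h) / h.sum()                   # cumulative health share
    return p, ch

def concentration_index(health, income):
    n = len(health)
    frac_rank = (np.argsort(np.argsort(income)) + 0.5) / n   # fractional income rank
    return 2.0 * np.cov(health, frac_rank, ddof=0)[0, 1] / health.mean()

rng = np.random.default_rng(3)
income = rng.lognormal(size=10_000)
health = np.exp(0.3 * np.log(income) + rng.normal(scale=0.5, size=10_000))
print(concentration_index(health, income))        # positive: health concentrated among the rich
```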
By: | Achille Nazaret; Claudia Shi; David M. Blei |
Abstract: | The synthetic control (SC) method is a popular approach for estimating treatment effects from observational panel data. It rests on a crucial assumption that we can write the treated unit as a linear combination of the untreated units. This linearity assumption, however, may fail to hold in practice and, when violated, the resulting SC estimates are incorrect. In this paper we examine two questions: (1) How large can the misspecification error be? (2) How can we limit it? First, we provide theoretical bounds to quantify the misspecification error. The bounds are comforting: small misspecifications induce small errors. With these bounds in hand, we then develop new SC estimators that are specially designed to minimize misspecification error. The estimators are based on additional data about each unit, which is used to produce the SC weights. (For example, if the units are countries then the additional data might be demographic information about each.) We study our estimators on synthetic data; we find they produce more accurate causal estimates than standard synthetic controls. We then re-analyze the California tobacco-program data of the original SC paper, now including additional data from the US census about per-state demographics. Our estimators show that the observations in the pre-treatment period lie within the bounds of misspecification error, and that the observations post-treatment lie outside of those bounds. This is evidence that our SC methods have uncovered a true effect.
Date: | 2023–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2302.12777&r=ecm |
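The new estimators use additional unit-level data that the abstract does not specify, so the sketch below only illustrates the standard synthetic-control step they build on: nonnegative donor weights summing to one, chosen so that the weighted donor pool tracks the treated unit over the pre-treatment period.

```python
# A minimal sketch of the standard synthetic-control step the paper builds on:
# nonnegative donor weights that sum to one, fitted to pre-treatment outcomes.
import numpy as np
from scipy.optimize import minimize

def sc_weights(Y0_pre, y1_pre):
    """Y0_pre: T_pre x J donor outcomes; y1_pre: length-T_pre treated-unit outcomes."""
    J = Y0_pre.shape[1]
    loss = lambda w: np.sum((y1_pre - Y0_pre @ w) ** 2)
    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(loss, np.full(J, 1.0 / J), bounds=[(0.0, 1.0)] * J,
                   constraints=cons, method="SLSQP")
    return res.x

rng = np.random.default_rng(4)
T_pre, J = 20, 10
Y0_pre = rng.normal(size=(T_pre, J)).cumsum(axis=0)        # donor trajectories
true_w = np.array([0.6, 0.4] + [0.0] * (J - 2))
y1_pre = Y0_pre @ true_w + rng.normal(scale=0.1, size=T_pre)
print(np.round(sc_weights(Y0_pre, y1_pre), 2))             # recovers weights near (0.6, 0.4)
```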
By: | Bonsoo Koo; Benjamin Wong; Ze-Yu Zhong |
Abstract: | We disentangle structural breaks in dynamic factor models by establishing a projection-based equivalent representation theorem which decomposes any break into a rotational change and an orthogonal shift. Our decomposition leads to the natural interpretation of these changes as changes in the factor variance and in the loadings, respectively, which allows us to formulate two separate tests to differentiate between these two cases, in contrast to much of the existing literature. We derive the asymptotic distributions of the two tests, and demonstrate their good finite sample performance. We apply the tests to the FRED-MD dataset focusing on the Great Moderation and the Global Financial Crisis as candidate breaks, and find evidence that the Great Moderation may be better characterised as a break in the factor variance as opposed to a break in the loadings, whereas the Global Financial Crisis is a break in both. Our empirical results highlight how distinguishing between the breaks can nuance the interpretation attributed to them by existing methods.
Keywords: | factor space, structural instability, breaks, principal components, dynamic factor models |
JEL: | C12 C38 C55 E37 |
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2023-15&r=ecm |
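The two test statistics are not given in the abstract; the snippet below is only an illustrative diagnostic of the distinction they target. It estimates the factor structure by principal components on each subsample and compares (i) the leading eigenvalues, which move with the factor variance, and (ii) the principal angles between the two estimated loading spaces, which move only when the loadings themselves shift.

```python
# Illustrative diagnostic only (not the paper's test statistics): compare leading
# eigenvalues and the principal angles between loading spaces across two subsamples.
import numpy as np

def top_eig(X, r):
    """Leading r eigenvalues/eigenvectors of the sample covariance of X (T x N)."""
    S = X.T @ X / X.shape[0]
    vals, vecs = np.linalg.eigh(S)
    return vals[-r:][::-1], vecs[:, -r:][:, ::-1]

def principal_angle_cosines(A, B):
    """Cosines of the principal angles between the column spaces of A and B."""
    Qa, _ = np.linalg.qr(A)
    Qb, _ = np.linalg.qr(B)
    return np.linalg.svd(Qa.T @ Qb, compute_uv=False)

rng = np.random.default_rng(5)
T, N, r = 400, 100, 2
L = rng.normal(size=(N, r))                      # loadings, common to both subsamples
F1 = rng.normal(size=(T // 2, r))                # pre-break factors
F2 = 2.0 * rng.normal(size=(T // 2, r))          # post-break: factor variance quadruples
X1 = F1 @ L.T + rng.normal(size=(T // 2, N))
X2 = F2 @ L.T + rng.normal(size=(T // 2, N))
vals1, vecs1 = top_eig(X1, r)
vals2, vecs2 = top_eig(X2, r)
print("leading eigenvalues:", vals1.round(1), vals2.round(1))
print("principal angle cosines:", principal_angle_cosines(vecs1, vecs2).round(3))
```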
By: | Jadidzadeh, Ali |
Abstract: | This article discusses the limitations of linear models in explaining certain aspects of homelessness-related data and proposes the use of nonlinear models to allow for state-dependent or regime-switching behavior. The threshold autoregressive (TAR) model and its smooth transition autoregressive (STAR) extensions are introduced as a popular class of nonlinear models. The article explains how these models can be applied to univariate time series data to investigate how variations in weather conditions affect flows into homeless shelters over time. The objective is to identify the sensitivity of publicly funded emergency shelter use to changes in weather conditions and to better inform social agencies and government funders of predictable and unpredictable changes in demand for shelter beds. The smooth transition regression (STR) model is proposed as a useful tool for investigating nonlinearities in non-autoregressive contexts using both time series and panel data. The article concludes by highlighting the advantages of STR models and their three-stage modeling procedure: model specification, estimation, and evaluation.
Keywords: | Homelessness; nonlinear models; smooth transition regression (STR) model. |
JEL: | C01 C53 I32 |
Date: | 2022–12 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:116356&r=ecm |
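A minimal version of the STR specification described above, assuming the common logistic transition y_t = x_t'b1 + G(gamma, c; s_t) x_t'b2 + e_t with a weather variable s_t driving the transition, can be fitted by nonlinear least squares as sketched below; the variable names and simulated data are ours.

```python
# A minimal sketch of a logistic smooth transition regression (STR), assuming the
# common specification y_t = x_t'b1 + G(gamma, c; s_t) * x_t'b2 + e_t, with the
# transition variable s_t playing the role of a weather series.
import numpy as np
from scipy.optimize import least_squares

def logistic_G(s, gamma, c):
    return 1.0 / (1.0 + np.exp(-gamma * (s - c)))

def str_residuals(theta, y, X, s):
    k = X.shape[1]
    b1, b2 = theta[:k], theta[k:2 * k]
    gamma, c = theta[-2], theta[-1]
    return y - (X @ b1 + logistic_G(s, gamma, c) * (X @ b2))

rng = np.random.default_rng(6)
T = 500
s = rng.normal(size=T)                               # transition variable (e.g. temperature)
X = np.column_stack([np.ones(T), rng.normal(size=T)])
G = logistic_G(s, gamma=4.0, c=0.0)
y = X @ np.array([1.0, 0.5]) + G * (X @ np.array([2.0, -1.0])) + rng.normal(scale=0.3, size=T)

theta0 = np.concatenate([np.zeros(4), [1.0, 0.0]])   # crude starting values
fit = least_squares(str_residuals, theta0, args=(y, X, s))
print(fit.x.round(2))                                # regime coefficients, gamma, c
```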
By: | Dovern, Jonas; Glas, Alexander; Kenny, Geoff |
Abstract: | We propose to treat survey-based density expectations as compositional data when testing either for heterogeneity in density forecasts across different groups of agents or for changes over time. Monte Carlo simulations show that the proposed test has more power relative to both a bootstrap approach based on the KLIC and an approach which involves multiple testing for differences of individual parts of the density. In addition, the test is computationally much faster than the KLIC-based one, which relies on simulations, and allows for comparisons across multiple groups. Using density expectations from the ECB Survey of Professional Forecasters and the U.S. Survey of Consumer Expectations, we show the usefulness of the test in detecting possible changes in density expectations over time and across different types of forecasters.
Keywords: | compositional data, density forecasts, survey forecasts, disagreement |
JEL: | C12 D84 E27 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:zbw:pp1859:39&r=ecm |
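The proposed test itself is not reproduced in the abstract; the sketch below only illustrates the compositional-data idea with a simple stand-in: each reported histogram is mapped to additive log-ratio coordinates and two groups of forecasters are compared with a Hotelling T-squared statistic.

```python
# Illustration of the compositional-data idea (not the paper's exact test): map each
# reported histogram to additive log-ratio coordinates and compare two groups of
# forecasters with a Hotelling T^2 statistic.
import numpy as np
from scipy import stats

def alr(p, eps=1e-6):
    """Additive log-ratio transform of histogram probabilities (last bin as reference)."""
    p = np.clip(p, eps, None)
    p = p / p.sum(axis=1, keepdims=True)
    return np.log(p[:, :-1]) - np.log(p[:, [-1]])

def hotelling_t2(X1, X2):
    n1, n2, k = len(X1), len(X2), X1.shape[1]
    d = X1.mean(axis=0) - X2.mean(axis=0)
    S = ((n1 - 1) * np.cov(X1.T) + (n2 - 1) * np.cov(X2.T)) / (n1 + n2 - 2)
    t2 = n1 * n2 / (n1 + n2) * d @ np.linalg.solve(S, d)
    f = (n1 + n2 - k - 1) / ((n1 + n2 - 2) * k) * t2   # F-distributed under the null
    return f, stats.f.sf(f, k, n1 + n2 - k - 1)

rng = np.random.default_rng(7)
g1 = rng.dirichlet([5, 10, 10, 5], size=60)      # 4-bin density forecasts, group 1
g2 = rng.dirichlet([4, 9, 11, 6], size=60)       # slightly shifted group 2
print(hotelling_t2(alr(g1), alr(g2)))            # statistic and p-value
```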
By: | Caio Waisman; Brett R. Gordon |
Abstract: | Experiments are an important tool to measure the impacts of interventions. However, in experimental settings with one-sided noncompliance, extant empirical approaches may not produce the estimates a decision-maker needs to solve their problem. For example, these experimental designs are common in digital advertising settings, but they are uninformative of decisions regarding the intensive margin -- how much should be spent or how many consumers should be reached with a campaign. We propose a solution that combines a novel multi-cell experimental design with modern estimation techniques that enables decision-makers to recover enough information to solve problems with an intensive margin. Our design is straightforward to implement. Using data from advertising experiments at Facebook, we demonstrate that our approach outperforms standard techniques in recovering treatment effect parameters. Through a simple advertising reach decision problem, we show that our approach generates better decisions relative to standard techniques. |
Date: | 2023–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2302.13857&r=ecm |
By: | Celia Gil-Bermejo Lazo (Instituto Complutense de Estudios Internacionales (ICEI), Universidad Complutense de Madrid.); Jorge Onrubia Fernández (Instituto Complutense de Estudios Internacionales (ICEI), Universidad Complutense de Madrid.); Antonio Jesús Sánchez Fuentes (Instituto Complutense de Estudios Internacionales (ICEI), Universidad Complutense de Madrid.) |
Abstract: | In this paper, we propose a new approach both to test for Granger causality in a multivariate panel data environment and to determine one ultimate “causality path”, excluding relationships that are redundant. For the sake of concreteness, we combine recent developments in meta-analysis-based Granger causality testing for heterogeneous mixed panels (Emirmahmutoglu and Kose, 2011; Dumitrescu and Hurlin, 2012) with the graphical models proposed in a growing literature (Spirtes et al., 2000; Demiralp and Hoover, 2003; Eicher, 2007, 2012), searching iteratively for dependencies within a multivariate information set. Finally, we illustrate our proposal by revisiting existing studies that apply panel vector autoregressive (VAR) models to the analysis of the fiscal policy-growth nexus.
Keywords: | Granger causality; Panel data; Causal maps |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:ucm:wpaper:2202&r=ecm |
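The Dumitrescu and Hurlin (2012) building block that the authors combine with graphical search can be sketched as below: a unit-by-unit Wald statistic for Granger non-causality is averaged across units and standardized. This uses the asymptotic (large-T) standardization, which is only an approximation to the finite-sample version.

```python
# Sketch of the Dumitrescu-Hurlin (2012) building block: unit-by-unit Wald statistics
# for Granger non-causality of x on y with K lags, averaged into a Z-bar statistic
# (asymptotic standardization; an approximation for finite T).
import numpy as np
from scipy import stats

def wald_granger(y, x, K):
    """Wald statistic for H0: x does not Granger-cause y, in a K-lag model for y."""
    T = len(y)
    Y = y[K:]
    lags_y = np.column_stack([y[K - j - 1:T - j - 1] for j in range(K)])
    lags_x = np.column_stack([x[K - j - 1:T - j - 1] for j in range(K)])
    Xu = np.column_stack([np.ones(T - K), lags_y, lags_x])    # unrestricted
    Xr = np.column_stack([np.ones(T - K), lags_y])            # restricted (no x lags)
    ssr_u = np.sum((Y - Xu @ np.linalg.lstsq(Xu, Y, rcond=None)[0]) ** 2)
    ssr_r = np.sum((Y - Xr @ np.linalg.lstsq(Xr, Y, rcond=None)[0]) ** 2)
    return (T - K) * (ssr_r - ssr_u) / ssr_u

def dh_zbar(panel_y, panel_x, K):
    W = np.array([wald_granger(y, x, K) for y, x in zip(panel_y, panel_x)])
    N = len(W)
    z = np.sqrt(N / (2.0 * K)) * (W.mean() - K)               # Z-bar statistic
    return z, 2 * stats.norm.sf(abs(z))

rng = np.random.default_rng(8)
N, T, K = 30, 100, 1
panel_x = [rng.standard_normal(T) for _ in range(N)]
panel_y = []
for x in panel_x:
    y = np.zeros(T)
    for t in range(1, T):
        y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 1] + rng.standard_normal()
    panel_y.append(y)
print(dh_zbar(panel_y, panel_x, K))                           # strongly rejects non-causality
```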
By: | Dongwoo Kim; Pallavi Pal |
Abstract: | This paper presents an empirical model of sponsored search auctions in which advertisers are ranked by bid and ad quality. We introduce a new nonparametric estimator for the advertiser’s ad value and its distribution under the ‘incomplete information’ assumption. The ad value is characterized by a tractable analytical solution given observed auction parameters. Using Yahoo! search auction data, we estimate value distributions and study the bidding behavior across product categories. We find that advertisers shade their bids more when facing less competition. We also conduct counterfactual analysis to evaluate the impact of score squashing (ad quality raised to the power θ).
Date: | 2023–03–06 |
URL: | http://d.repec.org/n?u=RePEc:azt:cemmap:05/23&r=ecm |
By: | De Bacco, Caterina; Contisciani, Martina; Cardoso Silva, Jon; Safdari, Hadiseh; Borges, Gabriela Lima; Baptista, Diego; Sweet, Tracy; Young, Jean-Gabriel; Koster, Jeremy; Ross, Cody T; McElreath, Richard; Redhead, Daniel; Power, Eleanor
Abstract: | Social network data are often constructed by incorporating reports from multiple individuals. However, it is not obvious how to reconcile discordant responses from individuals. There may be particular risks with multiply reported data if people’s responses reflect normative expectations—such as an expectation of balanced, reciprocal relationships. Here, we propose a probabilistic model that incorporates ties reported by multiple individuals to estimate the unobserved network structure. In addition to estimating a parameter for each reporter that is related to their tendency to over- or under-report relationships, the model explicitly incorporates a term for ‘mutuality’, the tendency to report ties in both directions involving the same alter. Our model’s algorithmic implementation is based on variational inference, which makes it efficient and scalable to large systems. We apply our model to data from a Nicaraguan community collected with a roster-based design and from 75 Indian villages collected with a name-generator design. We observe strong evidence of ‘mutuality’ in both datasets, and find that this value varies by relationship type. Consequently, our model estimates networks with reciprocity values that are substantially different from those resulting from standard deterministic aggregation approaches, demonstrating the need to consider such issues when gathering, constructing, and analysing survey-based network data.
Keywords: | social network data; mutuality; reliability; variational inference; latent network; network measurement; OUP deal |
JEL: | C1 |
Date: | 2023–02–08 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:117271&r=ecm |
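The probabilistic model and its variational inference are not reproduced here; the snippet below only illustrates the deterministic aggregation the abstract contrasts against, combining the two reports of each directed tie by union or intersection and comparing the reciprocity each aggregate implies on simulated data.

```python
# Small illustration of deterministic aggregation (not the authors' probabilistic
# model): combine double-reported ties by union or intersection and compare the
# implied reciprocity with that of the true simulated network.
import numpy as np

def reciprocity(A):
    """Share of directed ties i->j that are reciprocated by j->i."""
    ties = A.sum()
    return (A * A.T).sum() / ties if ties else np.nan

rng = np.random.default_rng(9)
n = 50
true_net = (rng.random((n, n)) < 0.1).astype(int)
np.fill_diagonal(true_net, 0)
# Each directed tie i->j is reported twice (by i and by j), with false negatives
# and a few false positives in each report.
report_out = true_net * (rng.random((n, n)) < 0.8) | (rng.random((n, n)) < 0.02)
report_in = true_net * (rng.random((n, n)) < 0.8) | (rng.random((n, n)) < 0.02)
union = (report_out | report_in).astype(int)
intersection = (report_out & report_in).astype(int)
np.fill_diagonal(union, 0)
np.fill_diagonal(intersection, 0)
print("reciprocity:", reciprocity(true_net), reciprocity(union), reciprocity(intersection))
```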
By: | Raluca Ursu; Stephan Seiler; Elisabeth Honka |
Abstract: | We provide a detailed overview of the empirical implementation of the sequential search model proposed by Weitzman (1979). We discuss the assumptions underlying the model, the identification of search cost and preference parameters, the necessary normalizations of utility parameters, counterfactuals that require a search model framework, and different estimation approaches. The goal of this paper is to consolidate knowledge and provide a unified treatment of various aspects of sequential search models that are relevant for empirical work.
Keywords: | sequential search model |
JEL: | D43 D83 L13 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_10264&r=ecm |
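The Weitzman (1979) rule surveyed in the paper can be written down compactly; the sketch below assumes Gaussian pre-search utility beliefs u_j ~ N(mu_j, sigma_j^2) and search costs c_j. Reservation values z_j solve c_j = E[max(u_j - z_j, 0)]; options are searched in descending order of z_j, and search stops once the best realized utility exceeds the largest remaining reservation value.

```python
# A minimal implementation of the Weitzman (1979) selection/stopping rule, assuming
# Gaussian pre-search beliefs u_j ~ N(mu_j, sigma_j^2) and per-option search costs c_j.
import numpy as np
from scipy import stats, optimize

def reservation_value(mu, sigma, c):
    def gain(z):                          # E[max(u - z, 0)] for u ~ N(mu, sigma^2)
        a = (z - mu) / sigma
        return sigma * (stats.norm.pdf(a) - a * stats.norm.sf(a))
    return optimize.brentq(lambda z: gain(z) - c, mu - 10 * sigma, mu + 10 * sigma)

def weitzman_search(mu, sigma, c, u_realized, outside=0.0):
    z = np.array([reservation_value(m, s, k) for m, s, k in zip(mu, sigma, c)])
    order = np.argsort(-z)                # selection rule: descending reservation values
    best, searched = outside, []
    for j in order:
        if best >= z[j]:                  # stopping rule
            break
        searched.append(j)
        best = max(best, u_realized[j])   # choice rule: best inspected option so far
    return searched, best

rng = np.random.default_rng(10)
J = 5
mu, sigma, c = rng.normal(1.0, 0.5, J), np.full(J, 1.0), np.full(J, 0.1)
u_realized = rng.normal(mu, sigma)
print(weitzman_search(mu, sigma, c, u_realized))
```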
By: | Emir Malikov; Jingfang Zhang; Shunan Zhao; Subal C. Kumbhakar |
Abstract: | Motivated by the long-standing interest in understanding the role of location for firm performance, this paper provides a semiparametric methodology to accommodate locational heterogeneity in production analysis. Our approach is novel in that we explicitly model spatial variation in parameters in the production-function estimation. We accomplish this by allowing both the input-elasticity and productivity parameters to be unknown functions of the firm's geographic location and estimate them via local kernel methods. This allows the production technology to vary across space, thereby accommodating neighborhood influences on firm production. In doing so, we are also able to examine the role of cross-location differences in explaining the variation in operational productivity among firms. Our model is superior to the alternative spatial production-function formulations because it (i) explicitly estimates the cross-locational variation in production functions, (ii) is readily reconcilable with the conventional production axioms and, more importantly, (iii) can be identified from the data by building on the popular proxy-variable methods, which we extend to incorporate locational heterogeneity. Using our methodology, we study China's chemicals manufacturing industry and find that differences in technology (as opposed to in idiosyncratic firm heterogeneity) are the main source of the cross-location differential in total productivity in this industry. |
Date: | 2023–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2302.13430&r=ecm |
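The proxy-variable identification step is not reproduced here; the sketch below only illustrates the local-kernel idea of letting the input elasticities vary with geographic location, estimating them at a target point by kernel-weighted least squares of log output on log inputs, with weights that decline in distance.

```python
# Sketch of the local-kernel idea only (the paper's proxy-variable identification is
# not reproduced): location-specific input elasticities via kernel-weighted least squares.
import numpy as np

def local_elasticities(logY, logX, coords, target, bandwidth):
    d = np.linalg.norm(coords - target, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)            # Gaussian kernel weights
    Xc = np.column_stack([np.ones(len(logY)), logX])
    sw = np.sqrt(w)
    beta = np.linalg.lstsq(sw[:, None] * Xc, sw * logY, rcond=None)[0]
    return beta[1:]                                    # local input elasticities

rng = np.random.default_rng(11)
n = 2000
coords = rng.uniform(0, 10, size=(n, 2))               # firm locations
logK, logL = rng.normal(size=n), rng.normal(size=n)
alpha_K = 0.3 + 0.02 * coords[:, 0]                    # capital elasticity varies over space
logY = alpha_K * logK + 0.6 * logL + rng.normal(scale=0.2, size=n)
logX = np.column_stack([logK, logL])
print(local_elasticities(logY, logX, coords, target=np.array([2.0, 5.0]), bandwidth=1.0))
print(local_elasticities(logY, logX, coords, target=np.array([8.0, 5.0]), bandwidth=1.0))
```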
By: | Manganelli, Simone |
Abstract: | Bayesian decisions are observationally identical to decisions with judgment. Decisions with judgment test whether a judgmental decision is optimal and, in case of rejection, move to the closest boundary of the confidence interval, for a given confidence level. The resulting decisions condition on sample realizations, which are used to construct the confidence interval itself. Bayesian decisions condition on sample realizations twice, with the tested hypothesis and with the choice of the confidence level. The second conditioning reveals that Bayesian decision makers have an ex ante confidence level equal to one, which is equivalent to assuming uncertainty-neutral behavior. Robust Bayesian decisions are characterized by an ex ante confidence level strictly lower than one and are therefore uncertainty averse.
Keywords: | ambiguity aversion, confidence intervals, hypothesis testing, statistical decision theory
JEL: | C1 C11 C12 C13
Date: | 2023–02 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20232786&r=ecm |
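The rule described in the abstract has a simple numerical counterpart for a Gaussian location problem, sketched below under assumed names: keep the judgmental decision if it lies inside the confidence interval for the optimal action, otherwise move to the closest interval boundary.

```python
# Small numerical sketch of the rule described in the abstract, for a Gaussian
# location problem; the function and variable names are ours.
import numpy as np
from scipy import stats

def decision_with_judgment(sample, judgmental_action, confidence=0.95):
    n = len(sample)
    m, se = sample.mean(), sample.std(ddof=1) / np.sqrt(n)
    z = stats.norm.ppf(0.5 + confidence / 2)
    lo, hi = m - z * se, m + z * se                    # confidence interval for the mean
    if lo <= judgmental_action <= hi:
        return judgmental_action                       # judgment not rejected: keep it
    return lo if judgmental_action < lo else hi        # otherwise: closest boundary

rng = np.random.default_rng(12)
sample = rng.normal(loc=1.0, scale=2.0, size=100)
print(decision_with_judgment(sample, judgmental_action=0.0))   # rejected: moved to boundary
print(decision_with_judgment(sample, judgmental_action=0.9))   # retained
```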
By: | Michail Tsagris; Omar Alzeley |
Abstract: | Two new distributions are proposed: the circular projected and the spherical projected Cauchy distributions. A special case of the circular projected Cauchy coincides with the wrapped Cauchy distribution, and for this, a generalization is suggested that offers better fit via the inclusion of an extra parameter. For the spherical case, by imposing two conditions on the scatter matrix we end up with an elliptically symmetric distribution. |
Keywords: | Directional data, Cauchy distribution, projected distribution |
JEL: | C13 |
Date: | 2023–02–08 |
URL: | http://d.repec.org/n?u=RePEc:crt:wpaper:2305&r=ecm |
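A projected distribution is obtained by normalizing a random vector to unit length, so draws from the circular and spherical projected Cauchy can be generated as below by projecting multivariate Cauchy vectors (multivariate t with one degree of freedom) onto the circle or sphere; the parameterization in the paper may differ.

```python
# Sketch of the projection construction only (the paper's parameterization may differ):
# a projected Cauchy draw is a multivariate Cauchy vector normalized to unit length.
import numpy as np

def sample_projected_cauchy(mu, Sigma, size, rng):
    """Draw X ~ Cauchy(mu, Sigma) (multivariate t with 1 d.o.f.) and return X / ||X||."""
    d = len(mu)
    z = rng.multivariate_normal(np.zeros(d), Sigma, size=size)
    g = rng.chisquare(df=1, size=(size, 1))
    x = mu + z / np.sqrt(g)
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(13)
circle = sample_projected_cauchy(np.array([2.0, 0.0]), np.eye(2), 5, rng)      # d = 2
sphere = sample_projected_cauchy(np.array([1.0, 1.0, 0.0]), np.eye(3), 5, rng) # d = 3
print(circle)
print(np.linalg.norm(sphere, axis=1))    # all ones: the draws lie on the unit sphere
```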