on Econometrics |
By: | Ye Yang; Osman Dogan; Suleyman Taspinar; Fei Jin |
Abstract: | The matrix exponential spatial models exhibit similarities to the conventional spatial autoregressive model in spatial econometrics but offer analytical, computational, and interpretive advantages. This paper provides a comprehensive review of the literature on the estimation, inference, and model selection approaches for the cross-sectional matrix exponential spatial models. We discuss summary measures for the marginal effects of regressors and detail the matrix-vector product method for efficient estimation. Our aim is not only to summarize the main findings from the spatial econometric literature but also to make them more accessible to applied researchers. Additionally, we contribute to the literature by introducing some new results. We propose an M-estimation approach for models with heteroskedastic error terms and demonstrate that the resulting M-estimator is consistent and has an asymptotic normal distribution. We also consider some new results for model selection exercises. In a Monte Carlo study, we examine the finite sample properties of various estimators from the literature alongside the M-estimator. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.14813&r=ecm |
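The matrix-vector product method mentioned in this abstract rests on a simple observation: to estimate a matrix exponential spatial model one only ever needs the action e^{alpha W} v on a vector, which a truncated Taylor series delivers using nothing but products W @ v, without forming e^{alpha W} itself. A minimal numpy sketch (the toy weight matrix, the coefficient 0.3, and the truncation length are illustrative assumptions, not the paper's specification):

```python
import numpy as np

def expm_action(alpha, W, v, terms=30):
    """Approximate e^{alpha W} @ v with a truncated Taylor series,
    using only matrix-vector products (the matrix exponential is never built)."""
    result = v.copy()
    term = v.copy()
    for k in range(1, terms):
        term = (alpha / k) * (W @ term)  # next Taylor term: (alpha W)^k v / k!
        result = result + term
    return result

# Toy row-normalized spatial weight matrix for 4 units on a line
W = np.array([[0.0, 1.0, 0.0, 0.0],
              [0.5, 0.0, 0.5, 0.0],
              [0.0, 0.5, 0.0, 0.5],
              [0.0, 0.0, 1.0, 0.0]])
v = np.ones(4)
approx = expm_action(0.3, W, v)
```

Because each row of this toy W sums to one, W @ v = v here, so the result collapses to e^{0.3} v, which makes the sketch easy to sanity-check by hand.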
By: | Pedro Picchetti |
Abstract: | This paper develops a novel nonparametric identification method for treatment effects in settings where individuals self-select into treatment sequences. I propose an identification strategy which relies on a dynamic version of standard Instrumental Variables (IV) assumptions and builds on a dynamic version of the Marginal Treatment Effects (MTE) as the fundamental building block for treatment effects. The main contribution of the paper is to relax assumptions on the support of the observed variables and on unobservable gains of treatment that are present in the dynamic treatment effects literature. Monte Carlo simulation studies illustrate the desirable finite-sample performance of a sieve estimator for MTEs and Average Treatment Effects (ATEs) on a close-to-application simulation study. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.18555&r=ecm |
By: | Yiwei Sun |
Abstract: | Canonical RD designs yield credible local estimates of the treatment effect at the cutoff under mild continuity assumptions, but they fail to identify treatment effects away from the cutoff without additional assumptions. The fundamental challenge of identifying treatment effects away from the cutoff is that the counterfactual outcome under the alternative treatment status is never observed. This paper aims to provide a methodological blueprint to identify treatment effects away from the cutoff in various empirical settings by offering a non-exhaustive list of assumptions on the counterfactual outcome. Instead of assuming the exact evolution of the counterfactual outcome, this paper bounds its variation using the data and sensitivity parameters. The proposed assumptions are weaker than those introduced previously in the literature, resulting in partially identified treatment effects that are less susceptible to assumption violations. This approach accommodates both single cutoff and multi-cutoff designs. The specific choice of the extrapolation assumption depends on the institutional background of each empirical application. Additionally, researchers are recommended to conduct sensitivity analysis on the chosen parameter and assess resulting shifts in conclusions. The paper compares the proposed identification results with results using previous methods via an empirical application and simulated data. It demonstrates that set identification yields a more credible conclusion about the sign of the treatment effect. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.18136&r=ecm |
By: | Jungjun Choi; Hyukjun Kwon; Yuan Liao |
Abstract: | This paper studies the inference about linear functionals of high-dimensional low-rank matrices. While most existing inference methods would require consistent estimation of the true rank, our procedure is robust to rank misspecification, making it a promising approach in applications where rank estimation can be unreliable. We estimate the low-rank spaces using pre-specified weighting matrices, known as diversified projections. A novel statistical insight is that, unlike the usual statistical wisdom that overfitting mainly introduces additional variances, the over-estimated low-rank space also gives rise to a non-negligible bias due to an implicit ridge-type regularization. We develop a new inference procedure and show that the central limit theorem holds as long as the pre-specified rank is no smaller than the true rank. Empirically, we apply our method to the U.S. federal grants allocation data and test the existence of pork-barrel politics. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.16440&r=ecm |
By: | Wenzheng Gao; Zhenting Sun |
Abstract: | The partially linear binary choice model can be used for estimating structural equations where nonlinearity may appear due to diminishing marginal returns, different life cycle regimes, or hectic physical phenomena. The inference procedure for this model based on the analytic asymptotic approximation could be unreliable in finite samples if the sample size is not sufficiently large. This paper proposes a bootstrap inference approach for the model. Monte Carlo simulations show that the proposed inference method performs well in finite samples compared to the procedure based on the asymptotic approximation. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.18759&r=ecm |
By: | Manu Navjeevan |
Abstract: | I propose a new identification-robust test for the structural parameter in a heteroskedastic linear instrumental variables model. The proposed test statistic is similar in spirit to a jackknife version of the K-statistic and the resulting test has exact asymptotic size so long as an auxiliary parameter can be consistently estimated. This is possible under approximate sparsity even when the number of instruments is much larger than the sample size. As the number of instruments is allowed, but not required, to be large, the limiting behavior of the test statistic is difficult to examine via existing central limit theorems. Instead, I derive the asymptotic chi-squared distribution of the test statistic using a direct Gaussian approximation technique. To improve power against certain alternatives, I propose a simple combination with the sup-score statistic of Belloni et al. (2012) based on a thresholding rule. I demonstrate favorable size control and power properties in a simulation study and apply the new methods to revisit the effect of social spillovers in movie consumption. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.14892&r=ecm |
By: | Dmitry Arkhangelsky; Guido Imbens |
Abstract: | This survey discusses the recent causal panel data literature. This recent literature has focused on credibly estimating causal effects of binary interventions in settings with longitudinal data, with an emphasis on practical advice for empirical researchers. It pays particular attention to heterogeneity in the causal effects, often in situations where few units are treated. The literature has extended earlier work on difference-in-differences or two-way-fixed-effect estimators and more generally incorporated factor models or interactive fixed effects. It has also developed novel methods using synthetic control approaches. |
JEL: | C23 |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:31942&r=ecm |
By: | Thomas Wiemann |
Abstract: | This paper discusses estimation with a categorical instrumental variable in settings with potentially few observations per category. The proposed categorical instrumental variable estimator (CIV) leverages a regularization assumption that implies existence of a latent categorical variable with fixed finite support achieving the same first stage fit as the observed instrument. In asymptotic regimes that allow the number of observations per category to grow at arbitrary small polynomial rate with the sample size, I show that when the cardinality of the support of the optimal instrument is known, CIV is root-n asymptotically normal, achieves the same asymptotic variance as the oracle IV estimator that presumes knowledge of the optimal instrument, and is semiparametrically efficient under homoskedasticity. Under-specifying the number of support points reduces efficiency but maintains asymptotic normality. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.17021&r=ecm |
By: | David S. Lee; Justin McCrary; Marcelo J. Moreira; Jack Porter; Luther Yap |
Abstract: | For the over-identified linear instrumental variables model, researchers commonly report the 2SLS estimate along with the robust standard error and seek to conduct inference with these quantities. If errors are homoskedastic, one can control the degree of inferential distortion using the first-stage F critical values from Stock and Yogo (2005), or use the robust-to-weak instruments Conditional Wald critical values of Moreira (2003). If errors are non-homoskedastic, these methods do not apply. We derive the generalization of Conditional Wald critical values that is robust to non-homoskedastic errors (e.g., heteroskedasticity or clustered variance structures), which can also be applied to nonlinear weakly-identified models (e.g. weakly-identified GMM). |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.15952&r=ecm |
By: | Karim M Abadir (American University in Cairo and Imperial College London); Michel Lubrano (Aix-Marseille Univ., CNRS, AMSE, Marseille, France) |
Abstract: | We show that least squares cross-validation (CV) methods share a common structure which has an explicit asymptotic solution, when the chosen kernel is asymptotically separable in bandwidth and data. For density estimation with a multivariate Student t(ν) kernel, the CV criterion becomes asymptotically equivalent to a polynomial of only three terms. Our bandwidth formulae are simple and non-iterative (leading to very fast computations), their integrated squared-error dominates traditional CV implementations, they alleviate the notorious sample variability of CV, and overcome its breakdown in the case of repeated observations. We illustrate with univariate and bivariate applications, of density estimation and nonparametric regressions, to a large dataset of Michigan State University academic wages and experience. |
Keywords: | Bandwidth Choice, Cross Validation, Explicit Analytical Solution, Nonparametric Density Estimation, Academic Wages |
JEL: | C14 J31 |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:aim:wpaimx:2336&r=ecm |
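For context on what the explicit bandwidth formulae above replace, the traditional least-squares CV criterion for a Gaussian kernel can be written in closed form and minimized by grid search. A rough numpy sketch of that conventional iterative implementation (the sample, the grid, and the Gaussian kernel are illustrative assumptions; the paper's contribution is precisely to avoid this search):

```python
import numpy as np

def lscv(h, x):
    """Least-squares cross-validation criterion for a Gaussian-kernel density
    estimate: integral of fhat^2 minus twice the mean leave-one-out density."""
    n = x.size
    d = x[:, None] - x[None, :]
    # integral of fhat^2: convolving two N(., h^2) kernels gives an N(., 2h^2) density
    term1 = np.exp(-d**2 / (4 * h**2)).sum() / (n**2 * 2 * h * np.sqrt(np.pi))
    # leave-one-out sum: drop the n diagonal terms (each equals 1/sqrt(2*pi))
    K = np.exp(-d**2 / (2 * h**2)) / np.sqrt(2 * np.pi)
    loo = K.sum() - n / np.sqrt(2 * np.pi)
    return term1 - 2 * loo / (n * (n - 1) * h)

rng = np.random.default_rng(0)
x = rng.normal(size=400)
grid = np.linspace(0.05, 1.5, 60)
vals = np.array([lscv(h, x) for h in grid])
h_opt = grid[np.argmin(vals)]  # grid-search CV bandwidth
```

Each criterion evaluation is O(n^2), which is why a non-iterative explicit solution brings a real computational gain.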
By: | Sukjin Han; Haiqing Xu |
Abstract: | This paper investigates how a certain relationship between observed and counterfactual distributions serves as an identifying condition for treatment effects when the treatment is endogenous, and shows that this condition holds in a range of nonparametric models for treatment effects. To this end, we first provide a novel characterization of the prevalent assumption restricting treatment heterogeneity in the literature, namely rank similarity. Our characterization demonstrates the stringency of this assumption and allows us to relax it in an economically meaningful way, resulting in our identifying condition. It also justifies the quest for richer exogenous variation in the data (e.g., multi-valued or multiple instrumental variables) in exchange for weaker identifying conditions. The primary goal of this investigation is to provide empirical researchers with tools that are robust and easy to implement but still yield tight policy evaluations. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.15871&r=ecm |
By: | Delhelle, Morine (Université catholique de Louvain, LIDAM/ISBA, Belgium); Van Keilegom, Ingrid (Université catholique de Louvain, LIDAM/ISBA, Belgium) |
Abstract: | In this paper we consider a time-to-event variable T that is subject to random right censoring, and we assume that the censoring time C is stochastically dependent on T and that there is a positive probability of not observing the event. There are various situations in practice where this happens, and appropriate models and methods need to be considered to avoid biased estimators of the survival function or incorrect conclusions in clinical trials. We consider a fully parametric model for the bivariate distribution of (T, C) that takes these features into account. The model depends on a parametric copula (with unknown association parameter) and on parametric marginal distributions for T and C. Sufficient conditions are developed under which the model is identified, and an estimation procedure is proposed. In particular, our model allows us to identify and estimate the association between T and C, even though only the smallest of these variables is observable. The asymptotic behaviour of the estimated parameters is studied, and their finite sample performance is illustrated by means of a thorough simulation study and the analysis of breast cancer data. |
Keywords: | Copulas ; Cure models ; Dependent censoring ; Identifiability ; Inference ; Survival analysis |
Date: | 2023–12–01 |
URL: | http://d.repec.org/n?u=RePEc:aiz:louvad:2023036&r=ecm |
By: | Clarke, Damian (University of Chile); Torres, Nicolás Paris (University of Chile); Villena-Roldan, Benjamin (Diego Portales University) |
Abstract: | We demonstrate that regression models can be estimated by working independently in a row-wise fashion. We document a simple procedure which allows for a wide class of econometric estimators to be implemented cumulatively, where, in the limit, estimators can be produced without ever storing more than a single line of data in a computer's memory. This result is useful in understanding the mechanics of many common regression models. These procedures can be used to speed up the computation of estimates computed via OLS, IV, Ridge regression, LASSO, Elastic Net, and Non-linear models including probit and logit, with all common modes of inference. This has implications for estimation and inference with 'big data', where memory constraints may imply that working with all data at once is particularly costly. We additionally show that even with moderately sized datasets, this method can reduce computation time compared with traditional estimation routines. |
Keywords: | big data, estimation, regression, matrix inversion |
JEL: | C55 C61 C87 |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp16630&r=ecm |
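The row-wise idea in this abstract is easiest to see for OLS: the normal equations depend on the data only through the accumulators X'X and X'y, which can be updated one observation at a time, so only a single row is ever held in memory. A minimal sketch under the assumption of a dense, well-conditioned design (the paper covers a much wider class of estimators and inference):

```python
import numpy as np

def streaming_ols(rows):
    """OLS computed one observation at a time: accumulate X'X and X'y,
    never holding more than one row of data in memory."""
    XtX = None
    Xty = None
    for x, y in rows:
        x = np.asarray(x, dtype=float)
        if XtX is None:                       # size accumulators on first row
            XtX = np.zeros((x.size, x.size))
            Xty = np.zeros(x.size)
        XtX += np.outer(x, x)
        Xty += y * x
    return np.linalg.solve(XtX, Xty)          # solve the normal equations

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=200)
beta_stream = streaming_ols(zip(X, y))        # rows could equally come from a file
beta_full = np.linalg.lstsq(X, y, rcond=None)[0]
```

The two estimates agree to numerical precision; the streaming version simply reorders the same arithmetic.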
By: | Nadja van 't Hoff |
Abstract: | This paper addresses the challenge of identifying causal effects of nonbinary, ordered treatments with multiple binary instruments. Next to presenting novel insights into the widely-applied two-stage least squares estimand, I show that a weighted average of local average treatment effects for combined complier populations is identified under the limited monotonicity assumption. This novel causal parameter has an intuitive interpretation, offering an appealing alternative to two-stage least squares. I employ recent advances in causal machine learning for estimation. I further demonstrate how causal forests can be used to detect local violations of the underlying limited monotonicity assumption. The methodology is applied to study the impact of community nurseries on child health outcomes. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.17575&r=ecm |
By: | James J. Heckman (The University of Chicago); Rodrigo Pinto (University of California, Los Angeles); Azeem Shaikh (The University of Chicago) |
Abstract: | This paper considers the problem of making inferences about the effects of a program on multiple outcomes when the assignment of treatment status is imperfectly randomized. By imperfect randomization we mean that treatment status is reassigned after an initial randomization on the basis of characteristics that may be observed or unobserved by the analyst. We develop a partial identification approach to this problem that makes use of information limiting the extent to which randomization is imperfect to show that it is still possible to make nontrivial inferences about the effects of the program in such settings. We consider a family of null hypotheses in which each null hypothesis specifies that the program has no effect on one of many outcomes of interest. Under weak assumptions, we construct a procedure for testing this family of null hypotheses in a way that controls the familywise error rate – the probability of even one false rejection – in finite samples. We develop our methodology in the context of a reanalysis of the HighScope Perry Preschool program. We find statistically significant effects of the program on a number of different outcomes of interest, including outcomes related to criminal activity for males and females, even after accounting for imperfections in the randomization and the multiplicity of null hypotheses. |
Keywords: | exact inference, experiment, familywise error rate, multiple testing, multiple outcomes, permutation testing, program evaluation |
JEL: | C31 I21 J13 |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:hka:wpaper:2023-031&r=ecm |
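To fix ideas on familywise error control over multiple outcomes, here is a generic Westfall-Young-style max-statistic permutation sketch. This is an illustrative baseline under full exchangeability, not the paper's partial-identification procedure for imperfect randomization (the sample sizes, effect size, and two-outcome design are assumptions):

```python
import numpy as np

def maxT_permutation_test(treat, outcomes, n_perm=2000, seed=0):
    """Adjusted p-values from the permutation distribution of the maximum
    absolute mean difference across outcomes; controls the familywise
    error rate when treatment labels are exchangeable."""
    rng = np.random.default_rng(seed)
    obs = np.abs(outcomes[treat == 1].mean(0) - outcomes[treat == 0].mean(0))
    max_null = np.empty(n_perm)
    for b in range(n_perm):
        perm = rng.permutation(treat)                        # relabel treatment
        diff = outcomes[perm == 1].mean(0) - outcomes[perm == 0].mean(0)
        max_null[b] = np.abs(diff).max()                     # max over outcomes
    # adjusted p-value per outcome: share of permuted maxima beating its statistic
    return np.array([(max_null >= t).mean() for t in obs])

rng = np.random.default_rng(4)
n = 200
treat = np.zeros(n, dtype=int)
treat[:100] = 1
rng.shuffle(treat)
outcomes = rng.normal(size=(n, 2))
outcomes[treat == 1, 0] += 1.0   # real effect on outcome 0 only
pvals = maxT_permutation_test(treat, outcomes)
```

Because every hypothesis is compared against the same max-statistic null distribution, a rejection for any single outcome already accounts for the multiplicity.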
By: | Chakraborty, Somnath (Ruhr-Universität Bochum); Lederer, Johannes (Universität Hamburg); von Sachs, Rainer (Université catholique de Louvain, LIDAM/ISBA, Belgium) |
Abstract: | We develop a finite-sample theory for estimating the coefficients and for the prediction of multiple stable autoregressive processes that (i) share an unknown lag order but (ii) can differ in their individual sample sizes. Our technique is based on penalisation similar to hierarchical, overlapping group-Lasso but requires a new mathematical set-up to accommodate (i) and (ii). The set-up differs from existing work considerably, for example, in that we estimate the common lag order directly from the data rather than using extrinsic criteria. We prove that the estimated autoregressive processes enjoy stability, and we establish rates for both the estimation and prediction error that can outmatch the known rates in our setting. Our insights on the lag selection and the stability are also of interest for the case of individual autoregressive processes. |
Keywords: | Autoregressive process ; effective noise ; statistical guarantees ; tuning parameter ; dual norm ; hierarchical-group norm ; group LASSO ; lag selection ; false discoveries ; regularised least square ; sample complexity ; stable AR process ; gaussian innovation ; restricted eigenvalue property |
Date: | 2023–12–01 |
URL: | http://d.repec.org/n?u=RePEc:aiz:louvad:2023037&r=ecm |
By: | David M. Kaplan (University of Missouri); Qian Wu (Southwestern University of Finance and Economics) |
Abstract: | We develop methodology for testing stochastic monotonicity when the outcome variable is ordinal. Rather than testing a single null hypothesis, we use multiple testing to evaluate where the ordinal outcome is stochastically increasing in the covariate. By inverting our multiple testing procedure that controls the familywise error rate, we construct "inner" and "outer" confidence sets for the true set of points consistent with stochastic increasingness. Simulations show reasonable finite-sample properties. Empirically, we apply our methodology to the relationship between mental health and education. Practically, we provide code implementing our multiple testing procedure and replicating our empirical results. |
Keywords: | familywise error rate, confidence set, mental health |
JEL: | C25 I10 |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:umc:wpaper:2313&r=ecm |
By: | David M. Ritzwoller; Joseph P. Romano |
Abstract: | Statistical inference is often simplified by sample-splitting. This simplification comes at the cost of the introduction of randomness that is not native to the data. We propose a simple procedure for sequentially aggregating statistics constructed with multiple splits of the same sample. The user specifies a bound and a nominal error rate. If the procedure is implemented twice on the same data, the nominal error rate approximates the chance that the results differ by more than the bound. We provide a non-asymptotic analysis of the accuracy of the nominal error rate and illustrate the application of the procedure to several widely applied statistical methods. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.14204&r=ecm |
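The basic mechanism behind aggregating over sample splits can be illustrated in a few lines: averaging a split-dependent statistic over many random splits shrinks the randomness that any single split injects, so two independent runs land close together. A toy sketch (the statistic, split rule, and number of splits are illustrative assumptions; the paper's contribution is the sequential stopping rule and its non-asymptotic guarantee):

```python
import numpy as np

def aggregate_over_splits(data, statistic, n_splits, seed):
    """Average a statistic computed on random half-splits of the same sample."""
    rng = np.random.default_rng(seed)
    n = len(data)
    vals = np.empty(n_splits)
    for b in range(n_splits):
        perm = rng.permutation(n)
        train, hold = data[perm[: n // 2]], data[perm[n // 2 :]]
        vals[b] = statistic(train, hold)
    return vals.mean()

rng = np.random.default_rng(5)
data = rng.normal(size=1000)
stat = lambda train, hold: train.mean()   # toy split-dependent statistic
run_a = aggregate_over_splits(data, stat, n_splits=200, seed=10)
run_b = aggregate_over_splits(data, stat, n_splits=200, seed=11)
```

With 200 splits the two runs, which use entirely different random splits, differ by an order of magnitude less than two single-split statistics would.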
By: | Yifan Cui; Sukjin Han |
Abstract: | In this paper, we explore optimal treatment allocation policies that target distributional welfare. Most literature on treatment choice has considered utilitarian welfare based on the conditional average treatment effect (ATE). While average welfare is intuitive, it may yield undesirable allocations especially when individuals are heterogeneous (e.g., with outliers), the very reason individualized treatments were introduced in the first place. This observation motivates us to propose an optimal policy that allocates the treatment based on the conditional quantile of individual treatment effects (QoTE). Depending on the choice of the quantile probability, this criterion can accommodate a policymaker who is either prudent or negligent. The challenge of identifying the QoTE lies in its requirement for knowledge of the joint distribution of the counterfactual outcomes, which is generally hard to recover even with experimental data. Therefore, we introduce minimax optimal policies that are robust to model uncertainty. We then propose a range of identifying assumptions under which we can point or partially identify the QoTE. We establish the asymptotic bound on the regret of implementing the proposed policies. We consider both stochastic and deterministic rules. In simulations and two empirical applications, we compare optimal decisions based on the QoTE with decisions based on other criteria. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.15878&r=ecm |
By: | Andrea Teruzzi |
Abstract: | The issue related to the quantification of the tail risk of cryptocurrencies is considered in this paper. The statistical methods used in the study are those concerning recent developments in Extreme Value Theory (EVT) for weakly dependent data. This research proposes an expectile-based approach for assessing the tail risk of dependent data. Expectile is a summary statistic that generalizes the concept of mean, as the quantile generalizes the concept of the median. We present the empirical findings for a dataset of cryptocurrencies. We propose a method for dynamically evaluating the level of the expectiles by estimating the level of the expectiles of the residuals of a heteroscedastic regression, such as a GARCH model. Finally, we introduce the Marginal Expected Shortfall (MES) as a tool for measuring the marginal impact of single assets on systemic shortfalls. In our case of interest, we are focused on the impact of a single cryptocurrency on the systemic risk of the whole cryptocurrency market. In particular, we present an expectile-based MES for dependent data. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.17239&r=ecm |
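For readers unfamiliar with expectiles, the tau-expectile is the minimizer of an asymmetrically weighted squared loss and can be computed by iteratively reweighted means. A minimal sketch on simulated heavy-tailed data (the Student-t toy returns stand in for the cryptocurrency series; this is not the paper's dynamic GARCH-based estimator):

```python
import numpy as np

def expectile(x, tau, tol=1e-10, max_iter=1000):
    """tau-expectile via iteratively reweighted means (asymmetric least squares).
    tau = 0.5 recovers the ordinary mean, as the 0.5-quantile recovers the median."""
    x = np.asarray(x, dtype=float)
    e = x.mean()
    for _ in range(max_iter):
        w = np.where(x > e, tau, 1.0 - tau)   # asymmetric weights around e
        e_new = np.sum(w * x) / np.sum(w)
        if abs(e_new - e) < tol:
            return e_new
        e = e_new
    return e

rng = np.random.default_rng(1)
returns = rng.standard_t(df=4, size=5000) * 0.02   # heavy-tailed toy "returns"
tail_expectile = expectile(returns, 0.95)
```

Unlike a quantile, the expectile depends on the magnitude of all tail observations, which is what makes it attractive as a tail-risk summary.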
By: | Yiyi Huo; Yingying Fan; Fang Han |
Abstract: | Researchers often hold the belief that random forests are "the cure to the world's ills" (Bickel, 2010). But how exactly do they achieve this? Focused on the recently introduced causal forests (Athey and Imbens, 2016; Wager and Athey, 2018), this manuscript aims to contribute to an ongoing research trend towards answering this question, proving that causal forests can adapt to the unknown covariate manifold structure. In particular, our analysis shows that a causal forest estimator can achieve the optimal rate of convergence for estimating the conditional average treatment effect, with the covariate dimension automatically replaced by the manifold dimension. These findings align with analogous observations in the realm of deep learning and resonate with the insights presented in Peter Bickel's 2004 Rietz lecture. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.16486&r=ecm |
By: | Quintaba Pablo Aníbal; Herrera Gómez Marcos |
Abstract: | The spatial weighting matrix plays a pivotal role in spatial econometrics and remains an active area of research. In this study, we apply recent advancements in machine learning to estimate the spatial weights matrix in econometric models. By employing LASSO strategies and incorporating geographical restrictions, we directly derive the spatial weighting matrix from the available data. This approach removes the necessity for arbitrary criteria set by researchers. As an empirical example, we explore the relationship among the salaries of registered workers across Argentine provinces. Using monthly information between 2014 and 2022, we identify breakpoints in the wage time series and determine whether the breaks occur due to movements within each province or due to neighboring provinces. |
JEL: | C21 C23 |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:aep:anales:4688&r=ecm |
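The core of a LASSO-based weights matrix is a sparse regression of each unit's series on the series of all other units: the surviving coefficients identify the neighbors. A self-contained sketch using a plain ISTA (proximal-gradient) LASSO solver; the five-province setup, the penalty level, and the solver itself are illustrative assumptions, not the authors' specification:

```python
import numpy as np

def lasso_ista(X, y, lam, n_iter=500):
    """Minimal LASSO via ISTA: gradient step on the least-squares loss,
    then soft-thresholding to enforce sparsity."""
    n, p = X.shape
    L = np.linalg.norm(X, 2) ** 2 / n          # Lipschitz constant of the gradient
    beta = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ beta - y) / n
        z = beta - grad / L
        beta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return beta

rng = np.random.default_rng(2)
others = rng.normal(size=(300, 4))                          # series of provinces 1..4
target = 0.8 * others[:, 0] + 0.1 * rng.normal(size=300)    # province 0 follows province 1
w_hat = lasso_ista(others, target, lam=0.05)                # one row of the weights matrix
```

Stacking one such regression per unit (with geographical restrictions imposed as zero constraints) yields the full data-driven weights matrix.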
By: | Felix Holzmeister; Magnus Johannesson; Robert Böhm; Anna Dreber; Jürgen Huber; Michael Kirchler |
Abstract: | A typical empirical study involves choosing a sample, a research design, and an analysis path. Variation in such choices across studies leads to heterogeneity in results that introduces an additional layer of uncertainty not accounted for in reported standard errors and confidence intervals. We provide a framework for studying heterogeneity in the social sciences and divide heterogeneity into population heterogeneity, design heterogeneity, and analytical heterogeneity. We estimate each type's heterogeneity from multi-lab replication studies, prospective meta-analyses of studies varying experimental designs, and multi-analyst studies. Our results suggest that population heterogeneity tends to be relatively small, whereas design and analytical heterogeneity are large. A conservative interpretation of the estimates suggests that incorporating the uncertainty due to heterogeneity would approximately double sample standard errors and confidence intervals. We illustrate that heterogeneity of this magnitude, unless properly accounted for, has severe implications for statistical inference with strongly increased rates of false scientific claims. |
Keywords: | Conflict, contest, conflict resolution, group decision-making, group identity, alliance, experiment |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:inn:wpaper:2023-17&r=ecm |
By: | Robert Stok; Paul Bilokon |
Abstract: | Calculating true volatility is an essential task for option pricing and risk management. However, it is made difficult by market microstructure noise. Particle filtering has been proposed to solve this problem as it has favorable statistical properties, but it relies on assumptions about underlying market dynamics. Machine learning methods have also been proposed but lack interpretability and often lag in performance. In this paper we implement the SV-PF-RNN: a hybrid neural network and particle filter architecture. Our SV-PF-RNN is designed specifically with stochastic volatility estimation in mind. We then show that it can improve on the performance of a basic particle filter. |
Date: | 2023–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.06256&r=ecm |
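The "basic particle filter" baseline referred to above can be sketched concretely: a bootstrap particle filter for a standard stochastic-volatility model propagates latent log-variance particles, weights them by the observation likelihood, and resamples. The model form, parameter values, and particle count below are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

def sv_particle_filter(y, mu, phi, sigma, n_part=2000, seed=0):
    """Bootstrap particle filter for the basic SV model:
    h_t = mu + phi*(h_{t-1} - mu) + sigma*eta_t,  y_t = exp(h_t/2)*eps_t."""
    rng = np.random.default_rng(seed)
    h = mu + sigma / np.sqrt(1 - phi**2) * rng.normal(size=n_part)  # stationary init
    est = []
    for yt in y:
        h = mu + phi * (h - mu) + sigma * rng.normal(size=n_part)   # propagate
        var = np.exp(h)
        logw = -0.5 * (np.log(2 * np.pi * var) + yt**2 / var)       # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        est.append(np.sum(w * h))                 # filtered mean of log-variance
        h = h[rng.choice(n_part, size=n_part, p=w)]                 # resample
    return np.array(est)

rng = np.random.default_rng(1)
T, mu, phi, sigma = 200, -1.0, 0.97, 0.2
h_true = np.empty(T)
h = mu
for t in range(T):
    h = mu + phi * (h - mu) + sigma * rng.normal()
    h_true[t] = h
y = np.exp(h_true / 2) * rng.normal(size=T)
h_filt = sv_particle_filter(y, mu, phi, sigma)
```

The filtered path tracks the simulated log-volatility; the hybrid SV-PF-RNN architecture aims to improve on exactly this kind of baseline.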
By: | Mourahib, Anas (Université catholique de Louvain, LIDAM/ISBA, Belgium); Kiriliouk, Anna (UNamur); Segers, Johan (Université catholique de Louvain, LIDAM/ISBA, Belgium) |
Abstract: | When modeling a vector of risk variables, extreme scenarios are often of special interest. The peaks-over-thresholds method hinges on the notion that, asymptotically, the excesses over a vector of high thresholds follow a multivariate generalized Pareto distribution. However, existing literature has primarily concentrated on the setting when all risk variables are always large simultaneously. In reality, this assumption is often not met, especially in high dimensions. In response to this limitation, we study scenarios where distinct groups of risk variables may exhibit joint extremes while others do not. These discernible groups are derived from the angular measure inherent in the corresponding max-stable distribution, whence the term extreme direction. We explore such extreme directions within the framework of multivariate generalized Pareto distributions, with a focus on their probability density functions in relation to an appropriate dominating measure. Furthermore, we provide a stochastic construction that allows any prespecified set of risk groups to constitute the distribution’s extreme directions. This construction takes the form of a smoothed max-linear model and accommodates the full spectrum of conceivable max-stable dependence structures. Additionally, we introduce a generic simulation algorithm tailored for multivariate generalized Pareto distributions, offering specific implementations for extensions of the logistic and Hüsler–Reiss families capable of carrying arbitrary extreme directions. |
Date: | 2023–11–09 |
URL: | http://d.repec.org/n?u=RePEc:aiz:louvad:2023034&r=ecm |
By: | Sommervoll, Dag Einar (Centre for Land Tenure Studies, Norwegian University of Life Sciences); Holden, Stein T. (Centre for Land Tenure Studies, Norwegian University of Life Sciences); Tilahun, Mesfin (Centre for Land Tenure Studies, Norwegian University of Life Sciences) |
Abstract: | The experiments designed to estimate real-life discount rates in intertemporal choice often rely on ordered choice lists, where the list by design aims to capture a switch point between near- and far-future alternatives. Structural models like a Samuelson discounted utility model are often fitted to the data using maximum likelihood estimation. We show that dominated tasks, that is, choices that do not define the switch point, may bias ML estimates profoundly and predictably. More (less) dominated near-future tasks give higher (lower) discount rates. Simulation analysis indicates estimates may remain largely unbiased using switch point-defining tasks only. |
Keywords: | Choice lists; time discounting; maximum likelihood estimation |
JEL: | C13 C81 C93 D91 |
Date: | 2023–12–19 |
URL: | http://d.repec.org/n?u=RePEc:hhs:nlsclt:2023_009&r=ecm |
By: | Ruslan Tepelyan; Achintya Gopal |
Abstract: | The use of machine learning to generate synthetic data has grown in popularity with the proliferation of text-to-image models and especially large language models. The core methodology these models use is to learn the distribution of the underlying data, similar to the classical methods common in finance of fitting statistical models to data. In this work, we explore the efficacy of using modern machine learning methods, specifically conditional importance weighted autoencoders (a variant of variational autoencoders) and conditional normalizing flows, for the task of modeling the returns of equities. The main problem we work to address is modeling the joint distribution of all the members of the S&P 500, or, in other words, learning a 500-dimensional joint distribution. We show that this generative model has a broad range of applications in finance, including generating realistic synthetic data, volatility and correlation estimation, risk analysis (e.g., value at risk, or VaR, of portfolios), and portfolio optimization. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.14735&r=ecm |
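One of the downstream uses named in the abstract, portfolio VaR, reduces to a quantile of simulated returns once a generative model can produce draws. A minimal sketch in which plain Gaussian simulations stand in for draws from the paper's deep generative model (the return distribution and its parameters are illustrative assumptions):

```python
import numpy as np

def var_from_samples(samples, level=0.95):
    """Value at Risk from simulated portfolio returns: the loss threshold
    exceeded with probability 1 - level."""
    return -np.quantile(samples, 1 - level)

rng = np.random.default_rng(3)
# stand-in for draws from a fitted generative model of portfolio returns
sims = rng.normal(loc=0.0005, scale=0.01, size=100_000)
var95 = var_from_samples(sims, 0.95)
```

With a joint generative model over all 500 constituents, the same recipe applies to any portfolio weighting: simulate constituent returns, aggregate to portfolio returns, then read off the quantile.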
By: | Santiago Acerenza; Vitor Possebom; Pedro H. C. Sant'Anna |
Abstract: | This paper presents new econometric tools to unpack the treatment effect heterogeneity of punishing misdemeanor offenses on time-to-recidivism. We show how one can identify, estimate, and make inferences on the distributional, quantile, and average marginal treatment effects in setups where the treatment selection is endogenous and the outcome of interest, usually a duration variable, is potentially right-censored. We explore our proposed econometric methodology to evaluate the effect of fines and community service sentences as a form of punishment on time-to-recidivism in the State of São Paulo, Brazil, between 2010 and 2019, leveraging the as-if random assignment of judges to cases. Our results highlight substantial treatment effect heterogeneity that other tools are not meant to capture. For instance, we find that people whom most judges would punish take longer to recidivate as a consequence of the punishment, while people who would be punished only by strict judges recidivate at an earlier date than if they were not punished. This result suggests that designing sentencing guidelines that encourage strict judges to become more lenient could reduce recidivism. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.13969&r=ecm |
By: | Müller, Ulrich; Watson, Mark |
JEL: | C12 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:zbw:vfsc23:277567&r=ecm |