on Econometrics |
By: | Giuseppe De Luca (University of Palermo); Jan Magnus (Vrije Universiteit Amsterdam); Franco Peracchi (University of Rome Tor Vergata) |
Abstract: | We investigate the asymptotic behavior of the WALS estimator, a model-averaging estimator with attractive finite-sample and computational properties. WALS is closely related to the normal location model, and hence much of the paper concerns the asymptotic behavior of the estimator of the unknown mean in the normal location model. Since we adopt a frequentist-Bayesian approach, this specializes to the asymptotic behavior of the posterior mean as a frequentist estimator of the normal location parameter. We emphasize two challenging issues. First, our definition of ignorance in the Bayesian step involves a prior on the t-ratio rather than on the parameter itself. Second, instead of assuming a local misspecification framework, we consider a standard asymptotic setup with fixed parameters. We show that, under suitable conditions on the prior, the WALS estimator is sqrt(n)-consistent and its asymptotic distribution essentially coincides with that of the unrestricted least-squares estimator. Monte Carlo simulations confirm our theoretical results. |
Keywords: | Model averaging, normal location model, consistency, asymptotic normality, WALS |
JEL: | C11 C13 C51 C52 |
Date: | 2022–02–24 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20220022&r= |
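The Bayesian step of WALS reduces to estimating the mean of a normal location model by a posterior mean. As a minimal, self-contained illustration — using a conjugate normal prior rather than the t-ratio prior the paper actually studies — the sketch below shows the shrinkage form of the posterior mean and how the shrinkage washes out as n grows; all names and parameter values are illustrative.

```python
import numpy as np

# Normal location model: x_i ~ N(theta, sigma^2) with sigma known.
# Under a conjugate N(0, tau^2) prior, the posterior mean shrinks the
# sample mean toward zero; the shrinkage factor tends to 1 as n grows,
# so the posterior mean behaves like the sample mean in large samples
# (the flavor of the paper's comparison with unrestricted least squares).
def posterior_mean(x, sigma=1.0, tau=1.0):
    n = len(x)
    w = tau**2 / (tau**2 + sigma**2 / n)   # shrinkage factor in [0, 1)
    return w * np.mean(x)

rng = np.random.default_rng(0)
theta = 2.0
est_small = posterior_mean(rng.normal(theta, 1.0, size=20))
est_large = posterior_mean(rng.normal(theta, 1.0, size=20000))
```

With n = 20 the estimate is visibly shrunk toward the prior mean of zero; with n = 20000 it is numerically indistinguishable from the sample mean.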
By: | Christian Gourieroux; Joann Jasiak |
Abstract: | This paper introduces a local-to-unity/small sigma process for a stationary time series with strong persistence and non-negligible long-run risk. This process represents the stationary long-run component in an unobserved short- and long-run components model involving different time scales. More specifically, the short-run component evolves in calendar time and the long-run component evolves on an ultra-long time scale. We develop methods of estimation and long-run prediction for univariate and multivariate Structural VAR (SVAR) models with unobserved components and reveal the impossibility of consistently estimating some of the long-run parameters. The approach is illustrated by a Monte Carlo study and an application to macroeconomic data. |
Date: | 2022–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2202.09473&r= |
By: | Minkah, Richard; de Wet, Tertius; Ghosh, Abhik |
Abstract: | The estimation of extreme quantiles is one of the main objectives of statistics of extremes (which deals with the estimation of rare events). In this paper, a robust estimator of extreme quantiles of a heavy-tailed distribution is considered. The estimator is obtained through the minimum density power divergence criterion applied to an exponential regression model. The proposed estimator is compared with two existing extreme quantile estimators in a simulation study. The results show that the proposed estimator is stable with respect to the choice of the number of top order statistics, and has smaller bias and mean squared error. Practical application of the proposed estimator is illustrated with data from the pedochemical and insurance industries. |
Date: | 2022–03–25 |
URL: | http://d.repec.org/n?u=RePEc:osf:africa:hf7vk&r= |
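For context, the classical non-robust benchmark for the same target is the Hill/Weissman construction: estimate the tail index from the top order statistics, then extrapolate beyond the sample. The sketch below is that benchmark on simulated Pareto data — not the paper's minimum density power divergence estimator — and the choices of k and p are illustrative.

```python
import numpy as np

# Hill estimator of the tail index gamma from the top k order statistics.
def hill(x, k):
    xs = np.sort(x)
    return np.mean(np.log(xs[-k:]) - np.log(xs[-k - 1]))

# Weissman extrapolation for the extreme (1 - p)-quantile, p << k / n.
def weissman_quantile(x, k, p):
    n = len(x)
    xs = np.sort(x)
    return xs[-k - 1] * (k / (n * p)) ** hill(x, k)

rng = np.random.default_rng(1)
# Classical Pareto with tail index gamma = 0.5: true quantile is p**(-0.5).
x = rng.pareto(2.0, size=100_000) + 1.0
q_hat = weissman_quantile(x, k=2000, p=1e-4)   # true value is 100.0
```

The robustness issue the paper addresses shows up here as the well-known sensitivity of `hill` to the choice of k under contamination.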
By: | Perepolkin, Dmytro; Goodrich, Benjamin; Sahlin, Ullrika |
Abstract: | This paper extends the application of indirect Bayesian inference to probability distributions defined in terms of quantiles of the observable quantities. Quantile-parameterized distributions are characterized by high shape flexibility and the interpretability of their parameters, and are therefore useful for elicitation on observables. To encode uncertainty in the quantiles elicited from experts, we propose a Bayesian model based on the metalog distribution and a version of the Dirichlet prior. The resulting “hybrid” expert elicitation protocol for characterizing uncertainty in parameters using questions about the observable quantities is discussed and contrasted with parametric and predictive elicitation. |
Date: | 2021–09–28 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:paby6&r= |
By: | Gery Andrés Díaz Rubio; Simone Giannerini; Greta Goracci |
Abstract: | The Misspecification-Resistant Information Criterion (MRIC) proposed in [H.-L. Hsu, C.-K. Ing, H. Tong: On model selection from a finite family of possibly misspecified time series models. The Annals of Statistics. 47 (2), 1061--1087 (2019)] is a model selection criterion for univariate parametric time series that enjoys both consistency and asymptotic efficiency. In this article we extend the MRIC to the case where the response is a multivariate time series and the predictor is univariate. The extension requires novel derivations based upon random matrix theory. We obtain an asymptotic expression for the mean squared prediction error matrix, define the vectorial MRIC, and prove the consistency of its method-of-moments estimator. Moreover, we prove its asymptotic efficiency. Finally, we show with an example that, in the presence of misspecification, the vectorial MRIC identifies the best predictive model whereas traditional information criteria like AIC or BIC fail to do so. |
Date: | 2022–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2202.09225&r= |
By: | Taras Bodnar; Nestor Parolya; Erik Thorsén |
Abstract: | In this paper we construct a shrinkage estimator of the global minimum variance (GMV) portfolio by combining two techniques: Tikhonov regularization and direct shrinkage of portfolio weights. More specifically, we employ a double shrinkage approach, where the covariance matrix and the portfolio weights are shrunk simultaneously. The ridge parameter controls the stability of the covariance matrix, while the portfolio shrinkage intensity shrinks the regularized portfolio weights to a predefined target. Both parameters are chosen to minimize, with probability one, the out-of-sample variance as the number of assets $p$ and the sample size $n$ tend to infinity, while their ratio $p/n$ tends to a constant $c>0$. This method can also be seen as the optimal combination of the well-established linear shrinkage approach of Ledoit and Wolf (2004, JMVA) and the shrinkage of the portfolio weights by Bodnar et al. (2018, EJOR). No specific distribution is assumed for the asset returns except for the existence of finite $4+\varepsilon$ moments. The performance of the double shrinkage estimator is investigated via extensive simulation and empirical studies. The suggested method significantly outperforms its predecessor (without regularization) and the nonlinear shrinkage approach in terms of out-of-sample variance, Sharpe ratio and other empirical measures in the majority of scenarios. Moreover, it delivers the most stable portfolio weights, with uniformly smallest turnover. |
Date: | 2022–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2202.06666&r= |
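A minimal sketch of the two ingredients being combined — ridge regularization of the sample covariance plus linear shrinkage of the resulting GMV weights toward a target — can clarify the mechanics. The intensities `lam` and `alpha` below are illustrative placeholders; the paper's contribution is choosing them optimally.

```python
import numpy as np

# GMV weights for a given covariance matrix: w = S^{-1} 1 / (1' S^{-1} 1).
def gmv_weights(cov):
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def double_shrinkage_gmv(returns, lam=0.1, alpha=0.5):
    p = returns.shape[1]
    S = np.cov(returns, rowvar=False)
    S_ridge = S + lam * np.eye(p)        # (i) Tikhonov-regularized covariance
    w_gmv = gmv_weights(S_ridge)
    target = np.ones(p) / p              # equal-weight shrinkage target
    return alpha * w_gmv + (1 - alpha) * target   # (ii) weight shrinkage

rng = np.random.default_rng(2)
R = rng.normal(0.0, 0.02, size=(120, 10))   # 120 return observations, 10 assets
w = double_shrinkage_gmv(R)
```

Both steps preserve the budget constraint, so the combined weights still sum to one.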
By: | Javier Hidalgo; Heejun Lee; Jungyoon Lee; Myung Hwan Seo |
Abstract: | We derive a risk lower bound in estimating the threshold parameter without knowing whether the threshold regression model is continuous or not. The bound goes to zero as the sample size n grows only at the cube root rate. Motivated by this finding, we develop a continuity test for the threshold regression model and a bootstrap to compute its p-values. The validity of the bootstrap is established, and its finite sample property is explored through Monte Carlo simulations. |
Keywords: | Continuity Test, Kink, Risk lower bound, Unknown Threshold |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:622&r= |
By: | Damir Filipović (Ecole Polytechnique Fédérale de Lausanne; Swiss Finance Institute); Markus Pelger (Stanford University - Department of Management Science & Engineering); Ye Ye (Stanford University) |
Abstract: | We introduce a robust, flexible and easy-to-implement method for estimating the yield curve from Treasury securities. This method is non-parametric and optimally learns basis functions in reproducing kernel Hilbert spaces with an economically motivated smoothness reward. We provide a closed-form solution of our machine learning estimator as a simple kernel ridge regression, which is straightforward and fast to implement. We show, in an extensive empirical study on U.S. Treasury securities, that our method strongly dominates all parametric and non-parametric benchmarks. Our method achieves substantially smaller out-of-sample yield and pricing errors, while being robust to outliers and data selection choices. We attribute the superior performance to the optimal trade-off between flexibility and smoothness, which positions our method as the new standard for yield curve estimation. |
Keywords: | yield curve estimation, U.S. Treasury securities, term structure of interest rates, nonparametric method, machine learning in finance, reproducing kernel Hilbert space |
JEL: | C14 C38 C55 E43 G12 |
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp2224&r= |
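A generic kernel ridge regression conveys the mechanics of the estimator. The RBF kernel and all data below are toy stand-ins: the paper derives an economically motivated kernel with a closed form, which this sketch does not reproduce.

```python
import numpy as np

# Gaussian (RBF) kernel between two vectors of maturities.
def rbf_kernel(a, b, length_scale=5.0):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

# Kernel ridge regression: solve (K + ridge I) alpha = y, predict K_* alpha.
def kernel_ridge_fit(x, y, ridge=1e-3):
    K = rbf_kernel(x, x)
    alpha = np.linalg.solve(K + ridge * np.eye(len(x)), y)
    return lambda x_new: rbf_kernel(x_new, x) @ alpha

# Toy "yield curve": maturities in years vs. yields in percent.
maturities = np.array([0.25, 0.5, 1, 2, 3, 5, 7, 10, 20, 30], dtype=float)
yields = 3.0 - 2.0 * np.exp(-maturities / 4.0)   # smooth upward-sloping curve
curve = kernel_ridge_fit(maturities, yields)
fitted = curve(maturities)
```

The ridge penalty plays the role of the smoothness reward: it trades exact interpolation of noisy prices for a stable, smooth curve.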
By: | Ryan Thompson; Catherine S. Forbes; Steven N. MacEachern; Mario Peruggia |
Abstract: | Statistical hypotheses are translations of scientific hypotheses into statements about one or more distributions, often concerning their center. Tests that assess statistical hypotheses of center implicitly assume a specific center, e.g., the mean or median. Yet, scientific hypotheses do not always specify a particular center. This ambiguity leaves the possibility for a gap between scientific theory and statistical practice that can lead to rejection of a true null. In the face of replicability crises in many scientific disciplines, "significant results" of this kind are concerning. Rather than testing a single center, this paper proposes testing a family of plausible centers, such as that induced by the Huber loss function (the "Huber family"). Each center in the family generates a testing problem, and the resulting family of hypotheses constitutes a familial hypothesis. A Bayesian nonparametric procedure is devised to test familial hypotheses, enabled by a pathwise optimization routine to fit the Huber family. The favorable properties of the new test are verified through numerical simulation in one- and two-sample settings. Two experiments from psychology serve as real-world case studies. |
Keywords: | Bayesian bootstrap, Dirichlet process, Huber loss, hypothesis testing, pathwise optimization |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2022-2&r= |
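The "Huber family" of centers can be traced by solving the Huber location problem over a grid of thresholds k. A simple IRLS routine — a crude stand-in for the paper's pathwise optimization — suffices to illustrate the endpoints: median-like behavior for small k and the mean for large k.

```python
import numpy as np

# Huber center: minimizes sum_i rho_k(x_i - theta) by iteratively
# reweighted least squares with weights min(1, k / |residual|).
def huber_center(x, k, n_iter=100):
    theta = np.median(x)
    for _ in range(n_iter):
        r = x - theta
        w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))
        theta = np.sum(w * x) / np.sum(w)
    return theta

x = np.array([0.0, 1.0, 2.0, 3.0, 50.0])    # one gross outlier
c_small = huber_center(x, k=0.1)            # near the median (2.0)
c_large = huber_center(x, k=1e6)            # equals the mean (11.2)
```

Sweeping k between these extremes traces the family of plausible centers that the familial hypothesis tests jointly.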
By: | Charles Beach |
Abstract: | This paper provides the tools and procedures for empirically implementing several dominance criteria for social welfare comparisons and broad income inequality comparisons. Dominance criteria are expressed in terms of vectors of quantile ordinates based on income shares or quantile means. Statistical properties of these sample ordinates are established that provide a framework for statistical inference on these vectors. And practical empirical criteria are put forward for using formal statistical inference tests to reach conclusions about ranking social welfare and inequality between distributions. Examples include rank dominance, generalized Lorenz dominance, dominance with crossing Lorenz curves, and distributional distance dominance between income groups. |
Keywords: | welfare testing, inequality dominance, dominance testing |
JEL: | C12 C46 D31 D63 |
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1484&r= |
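The generalized Lorenz ordinates underlying one of the dominance criteria are easy to compute from a sample. A toy comparison, assuming equal sample sizes and ignoring the paper's inference step (which is its actual contribution):

```python
import numpy as np

# Generalized Lorenz ordinates: cumulative means of sorted incomes,
# evaluated at the sample shares i/n. Distribution A GL-dominates B when
# A's ordinates are at least B's at every share, ranking A above B for
# all increasing, concave social welfare functions.
def gl_ordinates(income):
    x = np.sort(income)
    return np.cumsum(x) / len(x)

a = np.array([2.0, 4, 6, 8, 10, 12, 14, 16, 18, 20])
b = a - 1.0                                  # uniformly poorer distribution
gl_a = gl_ordinates(a)
gl_b = gl_ordinates(b)
dominates = bool(np.all(gl_a >= gl_b))
```

The last ordinate is the sample mean, so GL dominance nests the mean-income comparison as a special case.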
By: | Shi, Chengchun; Wang, Xiaoyu; Luo, Shikai; Zhu, Hongtu; Ye, Jieping; Song, Rui |
Abstract: | A/B testing, or online experimentation, is a standard business strategy for comparing a new product with an old one in the pharmaceutical, technological, and traditional industries. Major challenges arise in online experiments on two-sided marketplace platforms (e.g., Uber), where there is only one unit that receives a sequence of treatments over time. In those experiments, the treatment at a given time affects the current outcome as well as future outcomes. The aim of this paper is to introduce a reinforcement learning framework for carrying out A/B testing in these experiments, while characterizing the long-term treatment effects. Our proposed testing procedure allows for sequential monitoring and online updating. It is generally applicable to a variety of treatment designs in different industries. In addition, we systematically investigate the theoretical properties (e.g., size and power) of our testing procedure. Finally, we apply our framework to both simulated data and a real-world data example obtained from a technological company to illustrate its advantage over current practice. A Python implementation of our test is available at https://github.com/callmespring/CausalRL . |
Keywords: | A/B testing; online experiment; reinforcement learning; causal inference; sequential testing; online updating |
JEL: | C1 |
Date: | 2022–01–20 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:113310&r= |
By: | Gary Koop; Stuart McIntyre; James Mitchell; Aubrey Poon |
Abstract: | Recent decades have seen advances in using econometric methods to produce more timely and higher-frequency estimates of economic activity at the national level, enabling better tracking of the economy in real time. These advances have not generally been replicated at the sub-national level, likely because of the empirical challenges that nowcasting at a regional level presents, notably, the short time series of available data, changes in data frequency over time, and the hierarchical structure of the data. This paper develops a mixed-frequency Bayesian VAR model to address common features of the regional nowcasting context, using an application to regional productivity in the UK. We evaluate the contribution that different features of our model provide to the accuracy of point and density nowcasts, in particular the role of hierarchical aggregation constraints. We show that these aggregation constraints, imposed in stochastic form, play a key role in delivering improved regional nowcasts; they prove to be more important than adding region-specific predictors when the equivalent national data are known, but not when this aggregate is unknown. |
Keywords: | Regional data; Mixed frequency; Nowcasting; Bayesian methods; Real-time data; Vector autoregressions |
JEL: | C32 C53 E37 |
Date: | 2022–03–03 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedcwq:93793&r= |
By: | Chao Zhang; Yihuang Zhang; Mihai Cucuringu; Zhongmin Qian |
Abstract: | We apply machine learning models to forecast intraday realized volatility (RV), by exploiting commonality in intraday volatility via pooling stock data together, and by incorporating a proxy for the market volatility. Neural networks dominate linear regressions and tree models in terms of performance, due to their ability to uncover and model complex latent interactions among variables. Our findings remain robust when we apply trained models to new stocks that have not been included in the training set, thus providing new empirical evidence for a universal volatility mechanism among stocks. Finally, we propose a new approach to forecasting one-day-ahead RVs using past intraday RVs as predictors, and highlight interesting diurnal effects that aid the forecasting mechanism. The results demonstrate that the proposed methodology yields superior out-of-sample forecasts over a strong set of traditional baselines that only rely on past daily RVs. |
Date: | 2022–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2202.08962&r= |
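A standard baseline of the kind the paper compares against — one that "only relies on past daily RVs" — is the HAR regression of Corsi (2009), which predicts tomorrow's RV from daily, weekly, and monthly averages of past RVs. A self-contained sketch on simulated data (the paper's datasets and exact baseline set are not reproduced here):

```python
import numpy as np

# HAR feature matrix: one row per day t = 21, ..., len(rv) - 1, built
# only from data up to day t, meant to predict rv[t + 1].
def har_features(rv):
    d = rv[21:]                                                        # daily lag
    w = np.array([rv[t - 4:t + 1].mean() for t in range(21, len(rv))])   # weekly avg
    m = np.array([rv[t - 21:t + 1].mean() for t in range(21, len(rv))])  # monthly avg
    return np.column_stack([np.ones_like(d), d, w, m])

# Simulate a persistent, positive RV-like series (illustrative AR(1)).
rng = np.random.default_rng(3)
rv = np.empty(500)
rv[0] = 0.25
for t in range(1, 500):
    rv[t] = max(0.05 + 0.8 * rv[t - 1] + rng.normal(0, 0.05), 1e-3)

X = har_features(rv[:-1])          # row for day t predicts rv[t + 1]
y = rv[22:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit of the HAR model
next_rv = har_features(rv)[-1] @ beta          # one-day-ahead forecast
```

The neural models in the paper replace this linear map with a nonlinear one, pooled across stocks and augmented with intraday and market-level predictors.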
By: | Todd E. Clark; Florian Huber; Gary Koop; Massimiliano Marcellino |
Abstract: | The relationship between inflation and predictors such as unemployment is potentially nonlinear with a strength that varies over time, and prediction errors may be subject to large, asymmetric shocks. Inspired by these concerns, we develop a model for inflation forecasting that is nonparametric both in the conditional mean and in the error, using Gaussian and Dirichlet processes, respectively. We discuss how both these features may be important in producing accurate forecasts of inflation. In a forecasting exercise involving CPI inflation, we find that our approach has substantial benefits, both overall and in the left tail, with nonparametric modeling of the conditional mean being of particular importance. |
Keywords: | nonparametric regression; Gaussian process; Dirichlet process mixture; inflation forecasting |
JEL: | C11 C32 C53 |
Date: | 2022–03–02 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedcwq:93787&r= |
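The nonparametric conditional mean can be illustrated with plain Gaussian-process regression. The sketch below omits the paper's Dirichlet process error mixture, uses synthetic data with a nonlinear Phillips-type relation, and picks kernel and noise settings purely for illustration.

```python
import numpy as np

# GP regression posterior mean with an RBF kernel and Gaussian noise:
# m(x*) = K(x*, X) (K(X, X) + noise^2 I)^{-1} y.
def gp_posterior_mean(x_train, y_train, x_test, ls=1.0, noise=0.1):
    def k(a, b):
        return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)
    K = k(x_train, x_train) + noise**2 * np.eye(len(x_train))
    return k(x_test, x_train) @ np.linalg.solve(K, y_train)

rng = np.random.default_rng(4)
u = rng.uniform(3.0, 10.0, size=200)                   # unemployment rate
pi = 2.0 + 4.0 * np.exp(-0.5 * u) + rng.normal(0, 0.1, size=200)  # inflation
pi_hat = gp_posterior_mean(u, pi, np.array([4.0, 8.0]))
```

The GP recovers the decreasing, convex relation without a parametric functional form, which is the role it plays in the paper's conditional mean.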
By: | Eckert, C.; J. Hohberger (Jan); Franses, Ph.H.B.F. |
Abstract: | Park and Gupta’s (2012) introduction of the Gaussian Copula (GC) approach to deal with endogeneity has made a significant impact on empirical marketing research, with many papers using this approach. Recent studies, however, have started to explore and examine the approach and its underlying assumptions more closely, painting a more critical picture of it. A particular challenge is the non-testable assumption that the dependency structure between the endogenous regressor and the error term is described by a Gaussian copula. In general, there is a limited understanding of what this assumption implies, what causes its violation, and what the potential remedies are. Our study addresses this explicitly. We provide a detailed discussion of the dependency structure assumption and how thresholds in the data can lead to its violation and biased results. We use real and simulated data to show how threshold detection before applying the GC approach can overcome this problem, thereby providing researchers with a useful tool to increase the likelihood of the GC approach’s assumptions being met. |
Keywords: | Gaussian Copula, Endogeneity, Threshold Regression, Research Methods |
JEL: | C13 C24 |
Date: | 2022–02–01 |
URL: | http://d.repec.org/n?u=RePEc:ems:eureir:137107&r= |
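The GC approach adds a generated regressor — the normal score of the endogenous regressor's empirical CDF — to the outcome equation. A minimal sketch on simulated data where the Gaussian-dependence assumption holds by construction, so the correction recovers the true coefficient; all data-generating choices are illustrative.

```python
import numpy as np
from scipy.stats import norm

# Park and Gupta's (2012) copula correction: augment OLS with
# p_star = Phi^{-1}(H(p)), where H is the empirical CDF of the
# endogenous regressor p. Valid only under the Gaussian-dependence
# assumption that the paper scrutinizes.
def copula_correction_ols(y, p):
    n = len(p)
    ranks = p.argsort().argsort() + 1
    h = ranks / (n + 1)                     # empirical CDF, kept inside (0, 1)
    p_star = norm.ppf(h)                    # generated copula regressor
    X = np.column_stack([np.ones(n), p, p_star])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta                             # [intercept, slope, copula term]

rng = np.random.default_rng(5)
e = rng.normal(size=1000)
p = np.exp(0.5 * e + rng.normal(size=1000))  # regressor correlated with error
y = 1.0 + 2.0 * p + e                        # true slope = 2, endogenous p
beta = copula_correction_ols(y, p)
```

Plain OLS of y on p is biased upward here; the copula term absorbs the Gaussian dependence between p and the error. A threshold in p of the kind the paper studies would distort `p_star` and break this recovery.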
By: | Rendtel, Ulrich; Alho, Juha M. |
Abstract: | We propose a novel view of selection bias in longitudinal surveys. Such bias may arise from initial nonresponse in a probability sample, or it may be caused by self-selection in an internet survey. A contraction theorem from mathematical demography is used to show that an initial bias can "fade away" in later panel waves, if the transition laws in the observed sample and the population are identical. Panel attrition is incorporated into the Markovian framework. Extensions to Markov chains of higher order are given, and the limitations of our approach under population heterogeneity are discussed. We use empirical data from a German Labour Market Panel to demonstrate the extent and speed of the fade-away effect. The implications of the new approach for the treatment of nonresponse and attrition weighting are discussed. |
Keywords: | longitudinal survey, panel survey, internet recruitment, panel attrition, nonresponse bias, self-selection bias, Markov chain, Mover-Stayer model, weak ergodicity |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:zbw:fubsbe:20224&r= |
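The fade-away mechanism can be seen in a two-state Markov chain: a biased initial distribution and the population distribution converge geometrically under a common transition law. The transition probabilities and initial shares below are illustrative, not estimates from the German panel.

```python
import numpy as np

# Common transition law for sample and population (rows sum to one),
# e.g. employed/unemployed with persistence.
P = np.array([[0.9, 0.1],
              [0.3, 0.7]])
pop = np.array([0.75, 0.25])     # population distribution at wave 1
sample = np.array([0.40, 0.60])  # initially biased (self-selected) sample

# Track the total-variation-style gap across panel waves; with a shared
# transition law it contracts by |second eigenvalue| = 0.6 each wave.
gaps = []
for wave in range(12):
    gaps.append(np.abs(pop - sample).sum())
    pop = pop @ P
    sample = sample @ P
```

After a dozen waves the initial bias is numerically negligible, which is the "fade-away" the paper establishes in general via weak ergodicity.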
By: | César Garcia-Gomez (University of Valladolid); Ana Pérez (University of Valladolid); Mercedes Prieto-Alaiz (University of Valladolid) |
Abstract: | This paper proposes a graphical tool based on the copula function, namely the multivariate tail concentration function, to represent the dependence structure in the tails of a multivariate joint distribution. We illustrate the use of this function to measure dependence between poverty dimensions. In particular, we analyse how multivariate tail dependence between the dimensions of the AROPE rate evolved in the EU-28 between 2008 and 2018. We find evidence of lower tail dependence in all EU-28 countries, although this dependence is time-varying over the period analysed and the effect of the Great Recession on this dependence is not homogeneous across countries. |
Keywords: | Multivariate tail dependence; Copula; Poverty; AROPE rate; Europe. |
JEL: | D63 I32 O52 |
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:inq:inqwps:ecineq2022-605&r= |
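The bivariate version of the tail concentration function is straightforward to estimate from ranks; the paper's multivariate extension applies the same idea jointly to all AROPE dimensions. A sketch under an assumed Gaussian dependence structure (the AROPE data are not reproduced):

```python
import numpy as np

# Tail concentration at level q, from pseudo-observations u, v in (0, 1):
# lower tail  L(q) = C(q, q) / q            for q <= 1/2,
# upper tail  R(q) = (1 - 2q + C(q, q)) / (1 - q)  for q > 1/2,
# where C is the empirical copula. Under independence L(q) = q.
def tail_concentration(u, v, q):
    C_qq = np.mean((u <= q) & (v <= q))
    if q <= 0.5:
        return C_qq / q
    return (1 - 2 * q + C_qq) / (1 - q)

rng = np.random.default_rng(6)
z = rng.normal(size=5000)
x = z + 0.2 * rng.normal(size=5000)   # two strongly, positively dependent series
y = z + 0.2 * rng.normal(size=5000)
n = len(x)
u = (x.argsort().argsort() + 1) / (n + 1)   # pseudo-observations in (0, 1)
v = (y.argsort().argsort() + 1) / (n + 1)
low = tail_concentration(u, v, 0.1)         # well above 0.1 = independence value
up = tail_concentration(u, v, 0.9)
```

Plotting `tail_concentration` over a grid of q is exactly the kind of graphical diagnostic the paper proposes, generalized to more than two dimensions.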