New Economics Papers on Econometrics
By: | Benjamin Poignard (Graduate School of Economics, Osaka University); Manabu Asai (Faculty of Economics, Soka University) |
Abstract: | Although multivariate stochastic volatility (MSV) models usually produce more accurate forecasts than multivariate GARCH models, their estimation techniques, such as Monte Carlo likelihood or Bayesian Markov Chain Monte Carlo, are computationally demanding and thus suffer from the so-called "curse of dimensionality": with such methods, applications are typically restricted to low-dimensional vectors. In this paper, we propose a fast estimation approach for MSV models based on a penalised ordinary least squares framework. Specifying the MSV model as a multivariate state-space model, we propose a two-step penalised procedure for estimating it using a broad range of potentially non-convex penalty functions. In the first step, we approximate an EGARCH-type dynamic using a penalised AR process with a sufficiently large number of lags, providing a sparse estimator. Conditionally on this first-step estimator, we estimate the state vector based on an AR-type dynamic. This two-step procedure relies on OLS-based loss functions and thus easily accommodates high-dimensional vectors. We provide the large sample properties of the two-step estimator together with the so-called support recovery of the first-step estimator. The empirical performance of our method is illustrated through in-sample simulations and out-of-sample variance-covariance matrix forecasts, where we consider commonly used MGARCH models as competitors. |
Keywords: | Forecasting; Multivariate Stochastic Volatility; Oracle Property; Penalised M-estimation |
JEL: | C13 C32 |
URL: | http://d.repec.org/n?u=RePEc:osk:wpaper:2002&r=all |
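The first step above lends itself to a compact illustration. Below is a minimal sketch, assuming a log-squared-return proxy for log-volatility and a lasso penalty; the paper's exact loss, penalty family (possibly non-convex), and tuning are not reproduced, and the names (`penalised_ar_step`, `p`, `lam`) are illustrative.

```python
# Hedged sketch of the first-step idea in the abstract above: approximate a
# volatility dynamic by a long, sparsely penalised AR regression, yielding a
# sparse lag structure. Proxy, penalty, and tuning values are assumptions.
import numpy as np
from sklearn.linear_model import Lasso

def penalised_ar_step(returns, p=20, lam=0.05):
    """Fit a penalised AR(p) to a crude log-volatility proxy."""
    x = np.log(returns**2 + 1e-12)               # log squared returns as proxy
    T = len(x)
    X = np.column_stack([x[p - k - 1:T - k - 1] for k in range(p)])  # lags 1..p
    model = Lasso(alpha=lam).fit(X, x[p:])
    return model.intercept_, model.coef_         # sparse AR coefficients

rng = np.random.default_rng(0)
returns = rng.standard_normal(500) * 0.01
const, phi = penalised_ar_step(returns)
print("selected lags:", np.flatnonzero(phi) + 1)
```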
By: | Bonsoo Koo; Davide La Vecchia; Oliver Linton |
Abstract: | We develop estimation methodology for an additive nonparametric panel model that is suitable for capturing the pricing of coupon-paying government bonds followed over many time periods. We use our model to estimate the discount function and yield curve of nominally riskless government bonds. The novelty of our approach is the combination of two different techniques: cross-sectional nonparametric methods and kernel estimation for time-varying dynamics in the time series context. The resulting estimator is used for predicting individual bond prices given the full schedule of their future payments. In addition, it is able to capture the yield curve shapes and dynamics commonly observed in the fixed income markets. We establish the consistency, the rate of convergence, and the asymptotic normality of the proposed estimator. A Monte Carlo exercise illustrates the good performance of the method under different scenarios. We apply our methodology to the daily CRSP bond market dataset, and compare it with the popular Diebold and Li (2006) method. |
Keywords: | nonparametric inference, panel data, time varying, yield curve dynamics |
JEL: | C13 C14 C22 G12 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2020-4&r=all |
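The pricing relation behind this estimator is simple to illustrate: a coupon bond's price is its payment schedule weighted by the discount function, P_i = sum_j c_ij d(tau_ij). The sketch below, which assumes synthetic bonds and a plain polynomial basis in place of the paper's kernel and time-varying machinery, shows how a basis expansion of d(.) reduces its recovery to linear regression.

```python
# Hedged sketch: recover the discount function d(.) from cross-sectional bond
# prices via a basis expansion, so P_i = sum_j c_ij d(tau_ij) becomes a linear
# regression. Bonds, basis, and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)

def basis(tau, K=6):
    """Plain polynomial basis; the paper's nonparametric machinery differs."""
    return np.stack([tau**k for k in range(K)], axis=-1)

true_d = lambda tau: np.exp(-0.03 * tau)        # "true" discount function
n_bonds, n_pay = 300, 10
taus = np.sort(rng.uniform(0.1, 10.0, (n_bonds, n_pay)), axis=1)  # payment dates
c = np.full((n_bonds, n_pay), 2.0)
c[:, -1] += 100.0                               # coupons plus final principal
prices = (c * true_d(taus)).sum(axis=1) + 0.1 * rng.standard_normal(n_bonds)

# P_i = sum_j c_ij * basis(tau_ij) @ beta  ->  one regressor row per bond
X = np.einsum('ij,ijk->ik', c, basis(taus))
beta, *_ = np.linalg.lstsq(X, prices, rcond=None)
est = (basis(np.array([5.0])) @ beta).item()
print(f"d(5) estimate {est:.4f} vs truth {true_d(5.0):.4f}")
```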
By: | Edvard Bakhitov |
Abstract: | This paper shows how to shrink extremum estimators towards inequality constraints motivated by economic theory. We propose an Inequality Constrained Shrinkage Estimator (ICSE) which takes the form of a weighted average between the unconstrained and inequality constrained estimators with the data dependent weight. The weight drives both the direction and degree of shrinkage. We use a local asymptotic framework to derive the asymptotic distribution and risk of the ICSE. We provide conditions under which the asymptotic risk of the ICSE is strictly less than that of the unrestricted extremum estimator. The degree of shrinkage cannot be consistently estimated under the local asymptotic framework. To address this issue, we propose a feasible plug-in estimator and investigate its finite sample behavior. We also apply our framework to gasoline demand estimation under the Slutsky restriction. |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2001.10586&r=all |
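The ICSE's form admits a short sketch. The weight rule below, a Stein-type function of the distance between the two estimators, is an assumption made for illustration; the paper derives the weight from local asymptotic risk.

```python
# Hedged sketch of an inequality-constrained shrinkage estimator: a
# data-dependent convex combination of the unconstrained and constrained
# estimators. The weight rule (tau over a weighted distance) is illustrative.
import numpy as np

def icse(theta_u, theta_c, avar_u, tau=1.0):
    """Shrink the unconstrained theta_u towards the constrained theta_c."""
    d = theta_u - theta_c
    dist = float(d @ np.linalg.solve(avar_u, d))      # variance-weighted distance
    w = 1.0 if dist == 0.0 else min(1.0, tau / dist)  # closer => more shrinkage
    return w * theta_c + (1.0 - w) * theta_u

theta_u = np.array([0.8, -0.3])   # unrestricted extremum estimate
theta_c = np.array([0.8, 0.0])    # e.g. a sign restriction binding at zero
avar_u = 0.04 * np.eye(2)         # estimated asymptotic variance
print(icse(theta_u, theta_c, avar_u))
```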
By: | Zongwu Cai (Department of Economics, The University of Kansas); Seong Yeon Chang (The Wang Yanan Institute for Studies in Economics, Xiamen University, Xiamen, China) |
Abstract: | This paper considers predictive regressions in which a structural break is allowed at some unknown date. We establish novel procedures for testing predictability via empirical likelihood methods based on weighted score equations. The theoretical results are useful in practice because we adopt a unified framework under which it is unnecessary to distinguish whether the predictor variables are stationary or nonstationary. In particular, nonstationary predictor variables are allowed to be (nearly) integrated or to exhibit a structural change at some unknown date. Simulations show that the empirical likelihood-based tests perform well in terms of size and power in finite samples. As an empirical analysis, we test asset return predictability using various predictor variables. |
Keywords: | Autoregressive process; Empirical likelihood; Structural change; Unit root; Weighted estimation |
JEL: | C12 C14 C32 |
Date: | 2018–12 |
URL: | http://d.repec.org/n?u=RePEc:kan:wpaper:201811&r=all |
By: | Arturas Juodis; Vasilis Sarafidis |
Abstract: | A novel method-of-moments approach is proposed for the estimation of factor-augmented panel data models with endogenous regressors when T is fixed. The underlying methodology involves approximating the unobserved common factors using observed factor proxies. The resulting moment conditions are linear in the parameters. The proposed approach addresses several issues which arise with existing nonlinear estimators that are available in fixed T panels, such as local minima-related problems, a sensitivity to particular normalisation schemes, and a potential lack of global identification. We apply our approach to a large panel of households and estimate the price elasticity of urban water demand. A simulation study confirms that our approach performs well in finite samples. |
Keywords: | panel data, common factors, fixed T consistency, moment conditions, urban water management. |
JEL: | C13 C15 C23 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2020-5&r=all |
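Since the moment conditions are linear in the parameters, the estimator has a familiar closed form. The sketch below uses generic instruments standing in for the observed factor proxies; the paper's construction of those proxies and the panel structure are not reproduced.

```python
# Hedged sketch of GMM with moment conditions linear in the parameters:
# E[Z'(y - X b)] = 0 solved with weight W = (Z'Z)^{-1} (i.e. 2SLS). The
# instruments z here merely stand in for the paper's factor proxies.
import numpy as np

def linear_gmm(y, X, Z, W=None):
    """Closed-form linear GMM; W defaults to the 2SLS weight (Z'Z)^{-1}."""
    if W is None:
        W = np.linalg.inv(Z.T @ Z)
    A = X.T @ Z @ W @ Z.T
    return np.linalg.solve(A @ X, A @ y)

rng = np.random.default_rng(2)
n = 2_000
z = rng.standard_normal((n, 2))                  # exogenous instruments
u = rng.standard_normal(n)                       # structural error
x = z @ np.array([1.0, -0.5]) + 0.8 * u + rng.standard_normal(n)  # endogenous
y = 2.0 * x + u
print(linear_gmm(y, x[:, None], z))              # ~[2.0]
```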
By: | Matteo Pelagatti; Giacomo Sbrana |
Abstract: | This paper proposes three main results that enable the estimation of high-dimensional multivariate stochastic volatility models. The first result is the closed-form steady-state Kalman filter for the multivariate AR(1)-plus-noise model. The second result is an accelerated EM algorithm for parameter estimation. The third result is an estimator of the correlation of two elliptical random variables with time-varying variances that is consistent and asymptotically normal regardless of how the variances evolve. The speed and precision of our methodology are evaluated in a simulation experiment. Finally, we implement our method and compare its performance with other approaches in a minimum variance portfolio composed of the constituents of the CAC40 and S&P100 indexes. |
Keywords: | Riccati equation, EM algorithm, Kalman filter, Correlation estimation, Large covariance matrix, Multivariate stochastic volatility |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:mib:wpaper:428&r=all |
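The first result admits a compact scalar illustration. The sketch below solves the scalar algebraic Riccati equation for the AR(1)-plus-noise model and filters with the resulting fixed gain; the paper's closed form covers the multivariate case, which is not reproduced here.

```python
# Hedged scalar sketch of a steady-state Kalman filter for AR(1) plus noise:
# x_t = phi*x_{t-1} + w_t (Var q), y_t = x_t + v_t (Var r). The steady
# prediction variance P solves P^2 + P*(r*(1 - phi^2) - q) - q*r = 0.
import numpy as np

def steady_state_filter(y, phi, q, r):
    b = r * (1.0 - phi**2) - q
    P = 0.5 * (-b + np.sqrt(b**2 + 4.0 * q * r))  # positive Riccati root
    K = P / (P + r)                               # fixed Kalman gain
    x, pred = np.empty(len(y)), 0.0
    for t, obs in enumerate(y):
        x[t] = pred + K * (obs - pred)            # update with fixed gain
        pred = phi * x[t]                         # one-step prediction
    return x, K

y = 0.1 * np.cumsum(np.random.default_rng(3).standard_normal(200))
states, K = steady_state_filter(y, phi=0.95, q=0.05, r=0.2)
print(f"steady-state gain K = {K:.3f}")
```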
By: | Morten Ørregaard Nielsen (Queen's University and CREATES); Antoine Noël |
Abstract: | This paper provides an exact algorithm for efficient computation of the time series of conditional variances, and hence the likelihood function, of models that have an ARCH(∞) representation. This class of models includes, e.g., the fractionally integrated generalized autoregressive conditional heteroskedasticity (FIGARCH) model. Our algorithm is a variation of the fast fractional difference algorithm of Jensen and Nielsen (2014). It takes advantage of the fast Fourier transform (FFT) to achieve an order of magnitude improvement in computational speed. The efficiency of the algorithm allows estimation (and simulation/bootstrapping) of ARCH(∞) models, even with very large data sets and without the truncation of the filter commonly applied in the literature. We also show that the elimination of the truncation of the filter substantially reduces the bias of the quasi-maximum-likelihood estimators. Our results are illustrated in two empirical examples. |
Keywords: | Circular convolution theorem, Conditional heteroskedasticity, Fast Fourier transform, FIGARCH, Truncation |
JEL: | C22 C58 C63 C87 |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:qed:wpaper:1425&r=all |
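The computational core is easy to convey: the ARCH(∞) recursion sigma_t^2 = omega + sum_j psi_j * eps_{t-j}^2 is a linear convolution of the filter weights with the squared residuals, so all T variances can be obtained at once by FFT convolution in O(T log T) rather than O(T^2). The sketch below assumes the simplest FIGARCH(0,d,0) weights from the expansion of 1-(1-L)^d; the paper's exact algorithm is a refinement of this scheme.

```python
# Hedged sketch: compute all ARCH(inf) conditional variances at once with an
# FFT-based convolution. Weights follow the FIGARCH(0,d,0) expansion of
# 1 - (1-L)^d; omega and d are illustrative values.
import numpy as np
from scipy.signal import fftconvolve

def figarch_weights(d, T):
    """psi_1 = d, then psi_{j+1} = psi_j * (j - d) / (j + 1)."""
    psi = np.empty(T)
    psi[0] = d
    for j in range(1, T):
        psi[j] = psi[j - 1] * (j - d) / (j + 1)
    return psi

def arch_inf_variances(eps, omega, psi):
    conv = fftconvolve(psi, eps**2)[:len(eps)]          # sum_j psi_j eps_{t-j}^2
    return omega + np.concatenate(([0.0], conv[:-1]))   # shift: uses eps before t

eps = 0.01 * np.random.default_rng(4).standard_normal(100_000)
psi = figarch_weights(d=0.4, T=len(eps))                # no truncation of filter
sigma2 = arch_inf_variances(eps, omega=1e-6, psi=psi)
print(sigma2[:3])
```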
By: | Huber, Martin; Wüthrich, Kaspar |
Abstract: | This paper provides a review of methodological advancements in the evaluation of heterogeneous treatment effect models based on instrumental variable (IV) methods. We focus on models that achieve identification by assuming monotonicity of the treatment in the IV, and analyze local average and quantile treatment effects for the subpopulation of compliers. We start with a comprehensive discussion of the binary treatment and binary IV case, as is relevant, for instance, in randomized experiments with imperfect compliance. We then review extensions to identification and estimation with covariates, multi-valued and multiple treatments and instruments, outcome attrition and measurement error, and the identification of direct and indirect treatment effects, among others. We also discuss testable implications and possible relaxations of the IV assumptions, approaches to extrapolate from local to global treatment effects, and the relationship to other IV approaches. |
Date: | 2019–01–28 |
URL: | http://d.repec.org/n?u=RePEc:cdl:ucsdec:qt4j29d8sc&r=all |
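The canonical binary-treatment, binary-IV case that the review starts from fits in a few lines: under monotonicity, the Wald ratio identifies the LATE for compliers. The simulated data below (60% compliers, true complier effect of 2) is purely illustrative.

```python
# Hedged sketch of the Wald/LATE estimator for a binary treatment d and
# binary instrument z: (E[Y|Z=1]-E[Y|Z=0]) / (E[D|Z=1]-E[D|Z=0]).
import numpy as np

def late_wald(y, d, z):
    itt = y[z == 1].mean() - y[z == 0].mean()          # intention-to-treat
    first_stage = d[z == 1].mean() - d[z == 0].mean()  # complier share
    return itt / first_stage

rng = np.random.default_rng(5)
n = 5_000
z = rng.integers(0, 2, n)               # randomized instrument
compliers = rng.random(n) < 0.6         # monotonicity: no defiers
d = np.where(compliers, z, 0)           # compliers and never-takers only
y = 2.0 * d + rng.standard_normal(n)    # true complier effect = 2
print(late_wald(y, d, z))               # ~2
```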
By: | Zongwu Cai (Department of Economics, The University of Kansas, Lawrence, KS 66045, USA); Haiqiang Chen (The Wang Yanan Institute for Studies in Economics, Xiamen University, Xiamen, Fujian 361005, China); Xiaosai Liao (Southwestern University of Finance and Economics, Chengdu, Sichuan 611130, China) |
Abstract: | This paper studies asset return predictability via quantile regression for all types of persistent regressors. We propose estimating a quantile regression with an auxiliary regressor and constructing a weighted estimator from the estimated coefficients of the original predictor and the auxiliary regressor, together with a novel test procedure. We show that the test attains local power at the optimal rates for nonstationary and stationary predictors, respectively. Our approach can be easily implemented to test the joint predictive ability of financial variables in multiple regression. The heterogeneous predictability of US stock returns at different quantile levels is reexamined. |
Keywords: | Auxiliary regressor; Highly persistent predictor; Multiple regression; Predictive quantile regression; Robust inference; Weighted estimator |
JEL: | C22 C32 C58 G12 G14 |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:kan:wpaper:202002&r=all |
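The basic object of study, a predictive quantile regression of next-period returns on a persistent predictor, can be sketched directly; the paper's auxiliary-regressor and weighting construction is omitted here, and the data-generating values are assumptions.

```python
# Hedged sketch of a predictive quantile regression: the tau-quantile of
# r_{t+1} modeled on a highly persistent predictor x_t. The paper's auxiliary
# regressor and weighted estimator are not reproduced.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
T = 1_000
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.98 * x[t - 1] + rng.standard_normal()   # nearly integrated
r = 0.05 * x[:-1] + rng.standard_normal(T - 1)       # next-period returns

X = sm.add_constant(x[:-1])
for tau in (0.1, 0.5, 0.9):
    fit = sm.QuantReg(r, X).fit(q=tau)
    print(f"tau={tau}: slope = {fit.params[1]:.3f}")  # truth 0.05 at all tau
```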
By: | Marleen Marra (Department of Economics) |
Abstract: | This paper presents, solves, and estimates the first structural auction model with seller selection. This allows me to quantify network effects arising from endogenous bidder and seller entry into auction platforms, facilitating the estimation of theoretically ambiguous fee impacts by tracing them through the game. The relevant model primitives are identified from variation in second-highest bids and reserve prices. My estimator builds on the discrete choice literature to address the double nested fixed point characterization of the entry equilibrium. Using new wine auction data, I estimate that this platform's revenues increase by up to 60% when introducing a bidder discount and simultaneously increasing seller fees. More bidders enter when the platform is populated with sellers setting lower reserves, driving up prices. Moreover, I show that meaningful antitrust damages can be estimated in a platform setting despite this two-sidedness. |
Keywords: | Auctions with entry; Two-sided markets; Nonparametric identification; Estimation; Nested fixed point |
JEL: | D44 C52 C57 L81 |
Date: | 2019–12 |
URL: | http://d.repec.org/n?u=RePEc:spo:wpecon:info:hdl:2441/5kht5rc22p99sq5tol4efe4ssb&r=all |
By: | Masahiro Kato; Takuya Ishihara; Junya Honda; Yusuke Narita |
Abstract: | Many scientific experiments are concerned with estimating the average treatment effect (ATE), defined as the difference between the expected outcomes of two or more treatments. In this paper, we consider a setting called adaptive experimental design, in which research subjects arrive sequentially and the researcher assigns each one a treatment. To estimate the ATE efficiently, we allow the probability of assigning a treatment in each period to depend on the information obtained up to that period. In this approach, however, standard statistical methods cannot be applied directly to construct an estimator because the observations are not independent and identically distributed. To construct an efficient estimator, we overcome this problem by using a multi-armed bandit algorithm and martingale theory. In the proposed method, we assign treatments with the probability that minimizes the asymptotic variance of the ATE estimator. We also elucidate the theoretical properties of the resulting estimator for both infinite and finite samples. Finally, we show experimentally that the proposed algorithm outperforms a standard RCT in some cases. |
Date: | 2020–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2002.05308&r=all |
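The allocation idea can be sketched in a few lines: assign each arriving subject to treatment with the current estimate of the variance-minimizing (Neyman) probability sigma_1/(sigma_1 + sigma_0), then estimate the ATE by inverse-propensity weighting, which stays valid under this sequential dependence by martingale arguments. The clipping bounds, outcome distributions, and update rule below are illustrative, not the paper's algorithm.

```python
# Hedged sketch of adaptive (Neyman) treatment assignment with an IPW
# estimator of the ATE. Arm outcome distributions and clipping bounds are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(7)
n = 10_000
stats = {0: [0, 0.0, 0.0], 1: [0, 0.0, 0.0]}   # per arm: count, sum, sum sq
rows = []

for t in range(n):
    sd = {}
    for a in (0, 1):                            # running std-dev estimates
        cnt, s, ss = stats[a]
        sd[a] = np.sqrt(max(ss / cnt - (s / cnt) ** 2, 1e-6)) if cnt > 1 else 1.0
    p = float(np.clip(sd[1] / (sd[1] + sd[0]), 0.1, 0.9))  # Neyman probability
    arm = int(rng.random() < p)
    y = rng.normal(1.0, 3.0) if arm else rng.normal(0.0, 1.0)
    stats[arm][0] += 1; stats[arm][1] += y; stats[arm][2] += y * y
    rows.append((y, arm, p))

y, d, p = (np.array(v) for v in zip(*rows))
ate = np.mean(d * y / p - (1 - d) * y / (1 - p))  # IPW, valid by martingale CLT
print(f"IPW ATE estimate: {ate:.3f} (truth 1.0)")
```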
By: | Geoffroy de Clippel; Kareen Rozen |
Abstract: | Bounded rationality theories are typically characterized over exhaustive data sets. We develop a methodology to understand the empirical content of such theories with limited data, adapting the classic, revealed-preference approach to new forms of revealed information. We apply our approach to an array of theories, illustrating its versatility. We identify theories and datasets testable in the same elegant way as Rationality, and theories and datasets where testing is more challenging. We show that previous attempts to test consistency of limited data with bounded rationality theories are subject to a conceptual pitfall that can yield false positives and empty out-of-sample predictions. |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:bro:econwp:2020-08&r=all |
By: | Nassim Nicholas Taleb |
Abstract: | The book investigates the misapplication of conventional statistical techniques to fat-tailed distributions and looks for remedies, where possible. Switching from thin-tailed to fat-tailed distributions requires more than "changing the color of the dress". Traditional asymptotics deal mainly with either n=1 or n=∞, and the real world is in between, under the "law of medium numbers", which varies widely across specific distributions. Both the law of large numbers and the generalized central limit mechanisms operate in highly idiosyncratic ways outside the standard Gaussian or Levy-Stable basins of convergence. A few examples: + The sample mean is rarely in line with the population mean, with consequences for "naive empiricism", but can sometimes be estimated via parametric methods. + The "empirical distribution" is rarely empirical. + Parameter uncertainty has compounding effects on statistical metrics. + Dimension reduction (principal components) fails. + Inequality estimators (Gini or quantile contributions) are not additive and produce wrong results. + Many "biases" found in psychology become entirely rational under more sophisticated probability distributions. + Most of the failures of financial economics, econometrics, and behavioral economics can be attributed to using the wrong distributions. This book, the first volume of the Technical Incerto, weaves a narrative around published journal articles. |
Date: | 2020–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2001.10488&r=all |
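The first example above, that the sample mean is rarely in line with the population mean under fat tails, is easy to demonstrate by simulation; the tail index and sample size below are arbitrary choices.

```python
# Hedged illustration: with a Pareto tail (finite mean, infinite variance),
# most samples understate the population mean, offset by rare huge overshoots.
import numpy as np

rng = np.random.default_rng(8)
alpha = 1.2                          # tail index; mean exists, variance doesn't
pop_mean = alpha / (alpha - 1.0)     # Pareto (minimum 1) mean = 6.0
samples = 1.0 + rng.pareto(alpha, size=(10_000, 1_000))  # 10k samples of n=1000
sample_means = samples.mean(axis=1)
short = (sample_means < pop_mean).mean()
print(f"population mean {pop_mean:.1f}; share of samples falling short: {short:.1%}")
```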
By: | André Lucas (Vrije Universiteit Amsterdam); Julia Schaumburg (Vrije Universiteit Amsterdam); Bernd Schwaab (European Central Bank, Financial Research) |
Abstract: | We propose a dynamic clustering model for studying time-varying group structures in multivariate panel data. The model is dynamic in three ways: First, the cluster means and covariance matrices are time-varying to track gradual changes in cluster characteristics over time. Second, the units of interest can transition between clusters over time based on a Hidden Markov model (HMM). Finally, the HMM’s transition matrix can depend on lagged cluster distances as well as economic covariates. Monte Carlo experiments suggest that the units can be classified reliably in a variety of settings. An empirical study of 299 European banks between 2008Q1 and 2018Q2 suggests that banks have become less diverse over time in key characteristics. On average, approximately 3% of banks transition each quarter. Transitions across clusters are related to cluster dissimilarity and differences in bank profitability. |
Keywords: | dynamic clustering, panel data, Hidden Markov Model, score-driven dynamics, bank business models |
JEL: | G21 C33 |
Date: | 2020–02–04 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20200009&r=all |
By: | Vasilis Sarafidis; Tom Wansbeek |
Abstract: | The present special issue features a collection of papers presented at the 2017 International Panel Data Conference, hosted by the University of Macedonia in Thessaloniki, Greece. The conference marked the 40th anniversary of the inaugural International Panel Data Conference, which was held in 1977 at INSEE in Paris, under the auspices of the French National Centre for Scientific Research. As a collection, the papers appearing in this special issue of the Journal of Econometrics continue to advance the analysis of panel data, and paint a state-of-the-art picture of the field. |
Keywords: | Panel data analysis, unobserved heterogeneity, omitted variables, cross-sectional dependence, dynamic relationships, temporal effects, aggregation bias, nonlinear models, incidental parameter problem, common factor models, multidimensional data, multi-level data |
JEL: | C13 C15 C23 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2020-6&r=all |
By: | Verme, Paolo |
Abstract: | OLS models are the predominant choice for poverty predictions in a variety of contexts, such as proxy-means tests, poverty mapping, or cross-survey imputations. This paper compares the performance of econometric and machine learning models in predicting poverty using alternative objective functions and stochastic dominance analysis based on coverage curves. It finds that the choice of an optimal model largely depends on the distribution of incomes and the poverty line. Comparing the performance of different econometric and machine learning models is therefore an important step in the process of optimizing poverty predictions and targeting ratios. |
Keywords: | Welfare Modelling, Income Distributions, Poverty Predictions, Imputations |
JEL: | D31 D63 E64 O15 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:glodps:468&r=all |
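The comparison the paper runs can be mimicked on synthetic data: fit an econometric and a machine-learning model, then score them not on mean squared error but on a poverty-relevant objective such as targeting accuracy below a line. Everything below (data, covariates, the 25th-percentile line) is an assumption for illustration.

```python
# Hedged sketch: compare OLS and a random forest on poverty-targeting accuracy
# at an illustrative poverty line, rather than on generic fit.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(9)
n = 5_000
X = rng.standard_normal((n, 5))                    # proxy-means covariates
log_inc = (1.0 + X @ np.array([0.5, -0.3, 0.2, 0.0, 0.4])
           + 0.3 * X[:, 0] * X[:, 1]               # nonlinearity OLS misses
           + 0.5 * rng.standard_normal(n))
train, test = np.arange(n) < 4_000, np.arange(n) >= 4_000
line = np.quantile(log_inc, 0.25)                  # illustrative poverty line

for model in (LinearRegression(), RandomForestRegressor(n_estimators=200)):
    pred = model.fit(X[train], log_inc[train]).predict(X[test])
    hit = ((log_inc[test] < line) == (pred < line)).mean()
    print(f"{type(model).__name__}: targeting accuracy {hit:.1%}")
```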
By: | Yang Lu (AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - ECM - Ecole Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | We propose a flexible regression model that is suitable for mixed count-continuous panel data. The model is based on a compound Poisson representation of the continuous variable, with a bivariate random effect following a polynomial-expansion-based joint density. Besides the distributional flexibility it offers, the model allows for closed-form forecast updating formulae. This property is especially important for insurance applications, in which the future individual insurance premium should be regularly updated according to one's own past claim history. An application to vehicle insurance claims is provided. |
Keywords: | Sequential forecasting and pricing, Mixed data, Polynomial expansion, Random effect |
Date: | 2019 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02419024&r=all |
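The compound Poisson representation mentioned above mixes a point mass at zero (no claims) with a continuous part, which is what makes it suitable for mixed count-continuous data. A minimal simulation, with assumed Poisson frequency and gamma severities rather than the paper's specification:

```python
# Hedged sketch of a compound Poisson variable: aggregate claims
# S = sum of N severities with N ~ Poisson, giving a zero point mass plus a
# continuous part. Frequency and severity choices are illustrative.
import numpy as np

rng = np.random.default_rng(10)

def compound_poisson(lam, n_policies):
    counts = rng.poisson(lam, n_policies)                  # claim counts N_i
    return np.array([rng.gamma(2.0, 500.0, k).sum() for k in counts])

claims = compound_poisson(lam=0.3, n_policies=10_000)
print(f"zero-claim share: {(claims == 0).mean():.1%}")     # ~exp(-0.3) = 74%
print(f"mean aggregate claim: {claims.mean():.0f}")        # ~0.3*2*500 = 300
```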