New Economics Papers on Econometrics
By: | Judith Anne Clarke (Department of Economics, University of Victoria) |
Abstract: | Applied and theoretical researchers are increasingly examining model averaging as a tool for estimating regression models. Weighted-average least squares (WALS), originally proposed by Magnus and Durbin (1999, Econometrica) in the context of estimating some of the parameters of a linear regression model when the remaining coefficients are of no interest, is one such model averaging method; their approach is a Bayesian combination of frequentist ordinary least squares and restricted least squares estimators. We generalize their work, along with that of other researchers, to consider averaging ordinary least squares (OLS) and two stage least squares (2SLS) estimators when one or more regressors are possibly endogenous. We derive asymptotic properties of our weighted OLS and 2SLS estimator under a local misspecification framework, showing that results from the existing WALS literature apply equally well to our case. In particular, determining the optimal weight function reduces to the problem of estimating the mean of a normally distributed random variate, which is unrelated to the details of the regression model of interest, including the extent of correlation between the explanatory variable(s) and the error term. We illustrate our findings with two examples. The first, from a commonly adopted econometrics textbook, considers returns to schooling; the second is a growth regression application, which examines whether religion helps explain disparities in cross-country economic growth. |
Keywords: | Model averaging; least squares; two stage least squares; priors; instrumental variables. |
JEL: | C11 C13 C26 C51 C52 |
Date: | 2017–06–21 |
URL: | http://d.repec.org/n?u=RePEc:vic:vicewp:1701&r=ecm |
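As a rough numerical illustration of the averaging idea in the entry above, the sketch below combines OLS and just-identified 2SLS slope estimates with a data-driven weight. The simulated data, the variable names, and the Hausman-type weight 1/(1+h) are assumptions made for the example; the paper's optimal WALS-type weight under local misspecification is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                         # instrument
u = rng.normal(size=n)                         # confounder
x = 0.8 * z + 0.5 * u + rng.normal(size=n)     # endogenous regressor
y = 1.0 + 2.0 * x + u + rng.normal(size=n)     # outcome

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

b_ols = np.linalg.solve(X.T @ X, X.T @ y)      # biased but low-variance
b_iv = np.linalg.solve(Z.T @ X, Z.T @ y)       # consistent but noisier (just identified)

# Hausman-type contrast on the slope, used here only to form an ad hoc weight.
s2_iv = np.mean((y - X @ b_iv) ** 2)
s2_ols = np.mean((y - X @ b_ols) ** 2)
v_iv = s2_iv * np.linalg.inv(Z.T @ X) @ (Z.T @ Z) @ np.linalg.inv(X.T @ Z)
v_ols = s2_ols * np.linalg.inv(X.T @ X)
v_diff = max(v_iv[1, 1] - v_ols[1, 1], 1e-12)
h = (b_iv[1] - b_ols[1]) ** 2 / v_diff         # large h => endogeneity appears to matter

w_ols = 1.0 / (1.0 + h)                        # weight on OLS shrinks as h grows
b_avg = w_ols * b_ols + (1.0 - w_ols) * b_iv
print(b_ols[1], b_iv[1], b_avg[1])
```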
By: | Yang, Bill Huajian |
Abstract: | Common ordinal models, including the ordered logit model and the continuation ratio model, are structured by a common score (i.e., a linear combination of a list of given explanatory variables) plus rank-specific intercepts. Sensitivity with respect to the common score is generally not differentiated between rank outcomes. In this paper, we propose an ordinal model based on forward ordinal probabilities for rank outcomes. The forward ordinal probabilities are structured by, in addition to the common score and intercepts, rank- and rating-specific sensitivities (for a risk-rated portfolio). This rank-specific sensitivity allows a risk rating to respond differently to its migrations to default, downgrade, stay, and upgrade. An approach for parameter estimation is proposed based on maximum likelihood for observed rank outcome frequencies. Applications of the proposed model include modeling rating migration probability for the point-in-time probability of default term structure for IFRS 9 expected credit loss estimation and CCAR stress testing. Unlike the rating transition model based on the Merton model, which allows only one sensitivity parameter for all rank outcomes for a rating and uses only systematic risk drivers, the proposed forward ordinal model allows sensitivity to be differentiated between outcomes and permits entity-specific risk drivers (e.g., downgrade history or credit quality changes for an entity in the last two quarters can be included). No estimation of the asset correlation is required. As an example, the proposed model, benchmarked against the rating transition model based on the Merton model, is used to estimate the rating migration probability and probability of default term structure for a commercial portfolio, where for each rating the sensitivity is differentiated between migrations to default, downgrade, stay, and upgrade. Results show that the proposed model is more robust. |
Keywords: | PD term structure, forward ordinal probability, common score, rank specific sensitivity, rating migration probability |
JEL: | C0 C01 C02 C13 C18 C4 C5 C51 C52 C53 C54 C58 C61 C63 E61 G3 G31 G32 G38 O3 |
Date: | 2017–09 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:79934&r=ecm |
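The forward-probability construction above can be sketched as a small toy function: each rank outcome gets its own intercept and sensitivity on the common score, and probabilities are built sequentially so that the last rank absorbs the remaining mass. The logistic link, the four-outcome example, and all numbers are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def forward_ordinal_probs(score, intercepts, sensitivities):
    """Forward ordinal probabilities for K ordered outcomes.

    P(Y = k | Y >= k) = sigmoid(a_k + b_k * score), k = 0..K-2,
    and the final outcome absorbs the remaining probability mass.
    """
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    remaining = 1.0
    probs = []
    for a_k, b_k in zip(intercepts, sensitivities):
        p_stop = sigmoid(a_k + b_k * score)     # conditional probability of outcome k
        probs.append(remaining * p_stop)
        remaining *= (1.0 - p_stop)
    probs.append(remaining)                     # last rank gets what is left
    return np.array(probs)

# Example: 4 outcomes (default, downgrade, stay, upgrade) with rank-specific
# sensitivity to a single common score; values are purely illustrative.
p = forward_ordinal_probs(score=0.3,
                          intercepts=[-3.0, -1.0, 1.5],
                          sensitivities=[2.0, 1.2, 0.5])
print(p, p.sum())   # the probabilities sum to 1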
By: | Yukitoshi Matsushita; Taisuke Otsu |
Abstract: | In the past few decades, much progress has been made in semiparametric modeling and estimation methods for econometric analysis. This paper is concerned with inference (i.e., confidence intervals and hypothesis testing) in semiparametric models. In contrast to the conventional approach based on t-ratios, we advocate likelihood-based inference. In particular, we study two widely applied semiparametric problems, weighted average derivatives and treatment effects, and propose semiparametric empirical likelihood and jackknife empirical likelihood methods. We derive the limiting behavior of these empirical likelihood statistics and investigate their finite sample performance via Monte Carlo simulation. Furthermore, we extend the (delete-1) jackknife empirical likelihood toward the delete-d version with growing d and establish general asymptotic theory. This extension is crucial to deal with non-smooth objects, such as quantiles and quantile average derivatives or treatment effects, due to the well-known inconsistency phenomena of the jackknife under non-smoothness. |
Keywords: | Semiparametric, Jackknife, Empirical likelihood |
JEL: | C12 C14 |
Date: | 2017–07 |
URL: | http://d.repec.org/n?u=RePEc:cep:stiecm:592&r=ecm |
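The delete-d jackknife mentioned in the entry above can be sketched for a non-smooth statistic such as the median. The variance-estimation use, the random sampling of subsets when enumeration is infeasible, and all tuning values are assumptions made for this illustration; the empirical likelihood construction built on the jackknife in the paper is not reproduced.

```python
import numpy as np
from itertools import combinations
from math import comb

def delete_d_jackknife_var(x, stat, d, max_subsets=2000, seed=0):
    """Delete-d jackknife variance estimate of stat(x); subsets of size n-d are
    enumerated when feasible and randomly sampled otherwise (an approximation)."""
    n = len(x)
    idx = np.arange(n)
    if comb(n, d) <= max_subsets:
        keep = [np.delete(idx, list(drop)) for drop in combinations(idx, d)]
    else:
        rng = np.random.default_rng(seed)
        keep = [rng.choice(idx, size=n - d, replace=False) for _ in range(max_subsets)]
    vals = np.array([stat(x[s]) for s in keep])
    # Standard delete-d scaling: (n - d) / (d * number_of_subsets) * sum of squares.
    return (n - d) / (d * len(vals)) * np.sum((vals - vals.mean()) ** 2)

x = np.random.default_rng(1).exponential(size=200)
print(delete_d_jackknife_var(x, np.median, d=20))   # d grows with n for non-smooth statistics
```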
By: | Helmut Lütkepohl; Thore Schlaak |
Abstract: | The performance of information criteria and tests for residual heteroskedasticity for choosing between different models for time-varying volatility in the context of structural vector autoregressive analysis is investigated. Although it can be difficult to find the true volatility model with the selection criteria, using them is recommended because they can reduce the mean squared error of impulse response estimates substantially relative to a model that is chosen arbitrarily based on the personal preferences of a researcher. Heteroskedasticity tests are found to be useful tools for deciding whether time-varying volatility is present but do not discriminate well between different types of volatility changes. The selection methods are illustrated by specifying a model for the global market for crude oil. |
Keywords: | Structural vector autoregression, identification via heteroskedasticity, conditional heteroskedasticity, smooth transition, Markov switching, GARCH |
JEL: | C32 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:diw:diwwpp:dp1672&r=ecm |
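A minimal sketch of the model-selection step discussed above: rank competing volatility specifications of the same SVAR by AIC and BIC computed from their maximized log-likelihoods. The log-likelihood values, parameter counts, and sample size below are placeholders, not estimates from the paper.

```python
import numpy as np

def information_criteria(loglik, n_params, n_obs):
    """AIC and BIC from a maximized log-likelihood."""
    aic = -2.0 * loglik + 2.0 * n_params
    bic = -2.0 * loglik + n_params * np.log(n_obs)
    return aic, bic

# Hypothetical maximized log-likelihoods for competing volatility specifications.
candidates = {
    "constant variance": (-1520.4, 12),
    "Markov switching":  (-1489.7, 16),
    "smooth transition": (-1493.2, 15),
    "GARCH":             (-1491.0, 14),
}
T = 450
for name, (ll, k) in candidates.items():
    aic, bic = information_criteria(ll, k, T)
    print(f"{name:18s} AIC={aic:8.1f}  BIC={bic:8.1f}")
```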
By: | Fernandes, Marcelo; Guerre, Emmanuel; Horta, Eduardo |
Abstract: | We propose to smooth the entire objective function rather than only the check function in a linear quantile regression context. We derive a uniform Bahadur-Kiefer representation for the resulting convolution-type kernel estimator that demonstrates it improves on the extant quantile regression estimators in the literature. We also show that it is straightforward to compute asymptotic standard errors for the quantile regression coefficient estimates as well as to implement Wald-type tests. Simulations confirm that our smoothed quantile regression estimator performs very well in finite samples. |
Date: | 2017–06–28 |
URL: | http://d.repec.org/n?u=RePEc:fgv:eesptd:457&r=ecm |
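A minimal sketch of the smoothing idea above: convolving the check function with a Gaussian kernel gives a differentiable, convex objective that can be minimized by gradient methods. The Gaussian kernel, the fixed bandwidth, and the simulated data are illustrative assumptions, not the authors' recommended choices.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def smoothed_check_loss(u, tau, h):
    """Check function convolved with a Gaussian kernel of bandwidth h (closed form)."""
    return tau * u - u * norm.cdf(-u / h) + h * norm.pdf(u / h)

def smoothed_qr(X, y, tau=0.5, h=0.3):
    """Quantile regression minimizing the kernel-smoothed objective."""
    def obj(beta):
        return np.sum(smoothed_check_loss(y - X @ beta, tau, h))
    beta0 = np.linalg.lstsq(X, y, rcond=None)[0]       # OLS starting value
    return minimize(obj, beta0, method="BFGS").x

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.standard_t(df=4, size=n)
X = np.column_stack([np.ones(n), x])
print(smoothed_qr(X, y, tau=0.5))
```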
By: | Herwartz, Helmut; Maxand, Simone; Walle, Yabibal M. |
Abstract: | Standard panel unit root tests (PURTs) are not robust to breaks in innovation variances. Consequently, recent papers have proposed PURTs that are pivotal in the presence of volatility shifts. The applicability of these tests, however, has been restricted to cases where the data contains only an intercept, and not a linear trend. This paper proposes a new heteroskedasticity-robust PURT that works well for trending data. Under the null hypothesis, the test statistic has a limiting Gaussian distribution. Simulation results reveal that the test tends to be conservative but shows remarkable power in finite samples. |
Keywords: | panel unit root tests, nonstationary volatility, cross-sectional dependence, near epoch dependence, energy use per capita |
JEL: | C23 C12 Q40 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:zbw:cegedp:314&r=ecm |
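As a generic illustration of the first step shared by many panel unit root tests, the sketch below detrends each series and computes a Dickey-Fuller t-statistic per unit, reporting their raw cross-sectional average. The heteroskedasticity-robust pooling that is the paper's contribution is deliberately not reproduced; the data and the pooling step shown are illustrative assumptions.

```python
import numpy as np

def df_t_stat(y, detrend=True):
    """Dickey-Fuller t-statistic for one series after OLS detrending."""
    t = np.arange(len(y))
    if detrend:
        Z = np.column_stack([np.ones_like(t), t])      # intercept and linear trend
        y = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    dy, ylag = np.diff(y), y[:-1]
    rho = (ylag @ dy) / (ylag @ ylag)
    resid = dy - rho * ylag
    se = np.sqrt(resid @ resid / (len(dy) - 1) / (ylag @ ylag))
    return rho / se

rng = np.random.default_rng(0)
N, T = 20, 100
panel = np.cumsum(rng.normal(size=(N, T)), axis=1)     # N random-walk (unit-root) series
t_stats = np.array([df_t_stat(panel[i]) for i in range(N)])
print(t_stats.mean())   # raw average; standardization and robustification omitted
```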
By: | Biørn, Erik (Dept. of Economics, University of Oslo) |
Abstract: | Estimation of polynomial regression equations in one error-ridden variable, a number of error-free regressors, and an instrument set for the former is considered. Procedures for identification, operating on moments up to a certain order, are elaborated for single- and multi-equation models. Weak distributional assumptions are made for the error and the latent regressor. Simple order-conditions are derived, and procedures involving recursive identification of the moments of the regressor and its measurement errors, together with the coefficients of the polynomials, are considered. A Generalized Method of Moments (GMM) algorithm involving the instruments and proceeding stepwise from the identification procedures is presented. An illustration for systems of linear, quadratic and cubic Engel functions, using household consumption and income data, is given. |
Keywords: | Errors in variables; Polynomial regression; Error distribution; Identification; Instrumental variables; Method of Moments; Engel functions |
JEL: | C21 C23 C31 C33 C51 E21 |
Date: | 2017–01–30 |
URL: | http://d.repec.org/n?u=RePEc:hhs:osloec:2017_001&r=ecm |
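The GMM machinery referred to above can be sketched generically: a two-step estimator that minimizes a quadratic form in the averaged moment conditions. The moment function shown below uses naive instrumental moments for a quadratic regression and does not include the paper's measurement-error corrections; the data-generating process and all names are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def two_step_gmm(moment_fn, theta0, data):
    """Generic two-step GMM: identity weighting first, then the optimal weight matrix."""
    def objective(theta, W):
        gbar = moment_fn(theta, data).mean(axis=0)
        return gbar @ W @ gbar
    m = moment_fn(theta0, data).shape[1]
    step1 = minimize(objective, theta0, args=(np.eye(m),), method="Nelder-Mead").x
    S = np.cov(moment_fn(step1, data), rowvar=False)
    return minimize(objective, step1, args=(np.linalg.inv(S),), method="Nelder-Mead").x

# Placeholder moments for a quadratic regression with instrument z. NOTE: these
# naive moments ignore the measurement error in x, so the example illustrates
# only the GMM mechanics, not the paper's consistent estimator.
def iv_poly_moments(theta, data):
    y, x, z = data
    resid = y - theta[0] - theta[1] * x - theta[2] * x ** 2
    return np.column_stack([resid, resid * z, resid * z ** 2, resid * z ** 3])

rng = np.random.default_rng(0)
n = 1000
xi = rng.normal(size=n)                                  # latent regressor
z = xi + rng.normal(scale=0.5, size=n)                   # instrument
x = xi + rng.normal(scale=0.3, size=n)                   # error-ridden observation
y = 1.0 + 0.5 * xi + 0.25 * xi ** 2 + rng.normal(scale=0.2, size=n)
print(two_step_gmm(iv_poly_moments, np.zeros(3), (y, x, z)))
```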
By: | Andrea Fontanari; Nassim Nicholas Taleb; Pasquale Cirillo |
Abstract: | Under infinite variance, the Gini coefficient cannot be reliably estimated using conventional nonparametric methods. We study different approaches to the estimation of the Gini index in the presence of a heavy-tailed data generating process, that is, one with Paretian tails and/or in the stable distribution class with finite mean but non-finite variance (with tail index $\alpha\in(1,2)$). While the Gini index is a measurement of fat tailedness, little attention has been paid to a significant downward bias in conventional applications, one that increases with lower values of $\alpha$. First, we show how the "non-parametric" estimator of the Gini index undergoes a phase transition in the symmetry structure of its asymptotic distribution as the data distribution shifts from the domain of attraction of a light-tailed distribution to the domain of attraction of a fat-tailed, infinite-variance one. Second, we show how the maximum likelihood estimator outperforms the "non-parametric" one, requiring a much smaller sample size to reach efficiency. Finally, we provide a simple correction mechanism for the small sample bias of the "non-parametric" estimator, based on the distance between the mode and the mean of its asymptotic distribution, for the case of a heavy-tailed data generating process. |
Date: | 2017–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1707.01370&r=ecm |
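The nonparametric-versus-maximum-likelihood comparison above can be illustrated with simulated Pareto data, for which the Gini index has the closed form G = 1/(2α − 1) and α has a simple ML estimator. The simulation design and sample size are assumptions for the example; the paper's bias-correction mechanism is not reproduced.

```python
import numpy as np

def gini_nonparametric(x):
    """Standard plug-in (nonparametric) Gini estimator."""
    x = np.sort(x)
    n = len(x)
    return 2.0 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

def gini_pareto_ml(x, x_min):
    """ML Gini for Pareto data: alpha_hat = n / sum(log(x / x_min)), G = 1 / (2*alpha - 1)."""
    alpha = len(x) / np.sum(np.log(x / x_min))
    return 1.0 / (2.0 * alpha - 1.0)

rng = np.random.default_rng(0)
alpha_true, x_min, n = 1.5, 1.0, 2000                           # infinite variance: alpha in (1, 2)
x = x_min * (1.0 - rng.uniform(size=n)) ** (-1.0 / alpha_true)  # Pareto draws
print("true Gini     :", 1.0 / (2.0 * alpha_true - 1.0))
print("nonparametric :", gini_nonparametric(x))                 # typically biased downward here
print("ML (Pareto)   :", gini_pareto_ml(x, x_min))
```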
By: | Clarke, Damian; Matta, Benjamín |
Abstract: | This paper examines a number of techniques which allow for the construction of bounds estimates based on instrumental variables (IVs), even when the instruments are not valid. The plausexog and imperfectiv commands are introduced, which implement methods described by Conley et al. (2012) and Nevo and Rosen (2012b) in Stata. The performance of these bounds under a range of circumstances is examined, leading to a number of practical results related to the informativeness of the bounds in different circumstances. |
Keywords: | IV, instrumental variables, exclusion restrictions, invalidity, plausibly exogenous, imperfect IVs |
JEL: | C01 C1 C10 C18 C36 C63 |
Date: | 2017–06 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:79991&r=ecm |
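One of the bounding ideas described above, the union-of-confidence-intervals approach in Conley et al. (2012), can be sketched outside Stata: allow the instrument a direct effect gamma on the outcome, re-estimate IV for each gamma in a plausible range, and take the union of the resulting confidence intervals. The Python translation, simulated data, and grid below are assumptions for illustration; they are not the plausexog or imperfectiv commands.

```python
import numpy as np

def iv_slope_ci(y, x, z, crit=1.96):
    """Just-identified IV slope with a conventional 95% confidence interval."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])
    Z = np.column_stack([np.ones(n), z])
    b = np.linalg.solve(Z.T @ X, Z.T @ y)
    e = y - X @ b
    V = np.mean(e ** 2) * np.linalg.inv(Z.T @ X) @ (Z.T @ Z) @ np.linalg.inv(X.T @ Z)
    se = np.sqrt(V[1, 1])
    return b[1] - crit * se, b[1] + crit * se

def union_ci(y, x, z, gamma_grid):
    """Union of IV confidence intervals when z may enter the outcome equation
    directly with coefficient gamma in a plausible range."""
    los, his = zip(*(iv_slope_ci(y - g * z, x, z) for g in gamma_grid))
    return min(los), max(his)

rng = np.random.default_rng(0)
n = 2000
z = rng.normal(size=n)
x = 0.7 * z + rng.normal(size=n)
y = 2.0 * x + 0.1 * z + rng.normal(size=n)          # z mildly violates exclusion
print(union_ci(y, x, z, np.linspace(0.0, 0.2, 21)))
```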
By: | Götz, Thomas B.; Knetsch, Thomas A. |
Abstract: | There has been increased interest in the use of "big data" when it comes to forecasting macroeconomic time series such as private consumption or unemployment. However, applications to forecasting GDP are rather rare. In this paper we incorporate Google search data into a Bridge Equation Model, a version of which usually belongs to the suite of forecasting models at central banks. We show how to integrate this big data information, emphasizing the appeal of the underlying model in this respect. As the choice of which Google search terms to add to which equation is crucial - for the forecasting performance itself as well as for the economic consistency of the implied relationships - we compare different (ad-hoc, factor and shrinkage) approaches in terms of their pseudo-real-time out-of-sample forecast performance for GDP, various GDP components and monthly activity indicators. We find that there are indeed sizeable gains possible from using Google search data, whereby partial least squares and LASSO appear most promising. Also, the forecast potential of Google search terms vis-à-vis survey indicators seems to have increased in recent years, suggesting that their scope in this field of application could increase in the future. |
Keywords: | Big Data, Bridge Equation Models, Forecasting, Principal Components Analysis, Partial Least Squares, LASSO, Boosting |
JEL: | C22 C32 C53 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:zbw:bubdps:182017&r=ecm |
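The shrinkage-selection step discussed above can be sketched with a LASSO that picks which search indicators enter a simple forecasting equation alongside a survey indicator. The synthetic data, the use of scikit-learn's cross-validated LASSO, and the fact that all regressors (including the survey indicator) are penalized here are assumptions of this illustration, not the paper's bridge-equation setup.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(0)
T, K = 80, 60                                   # quarters, candidate search terms
G = rng.normal(size=(T, K))                     # standardized Google search indicators
survey = rng.normal(size=T)                     # a conventional survey indicator
gdp_growth = 0.5 * survey + 0.4 * G[:, 0] - 0.3 * G[:, 5] + rng.normal(scale=0.5, size=T)

# Shrinkage selection of which search terms enter the forecasting equation.
X = np.column_stack([survey, G])
lasso = LassoCV(cv=5).fit(X, gdp_growth)
selected = np.flatnonzero(lasso.coef_[1:])      # indices of retained search terms
print("retained terms:", selected, " survey coef:", round(lasso.coef_[0], 2))
```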
By: | Gilbert Mbara (University of Warsaw) |
Abstract: | The two-state Markov switching model of dating recessions breaks down when confronted with the low-volatility macroeconomic time series of the post-1984 Great Moderation era. In this paper, I present a new model specification and a two-stage maximum likelihood estimation procedure that can account for the lower volatility and persistence of macroeconomic time series after 1984, while preserving the economically interpretable two-state boom-bust business cycle switching. I first demonstrate the poor finite sample properties (bias and inconsistency) of standard models and then suggest a new specification and estimation procedure that resolves these issues. The suggested likelihood profiling method achieves consistent estimation of unconditional variances across volatility regimes while resolving the poor performance of models with multiple lag structures in dating business cycle turning points. Based on this novel model specification and estimation, I find that the nature of US business cycles has changed: economic growth has become permanently lower while booms last longer than before. The length and size of recessions, however, remain unchanged. |
Keywords: | Regime Switching, Hidden Markov Models, Great Moderation, Maximum Likelihood Estimation |
JEL: | C5 C51 C58 C32 E32 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:war:wpaper:2017-13&r=ecm |
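The two-state switching framework above rests on the Hamilton filter, which the sketch below evaluates for a mean/variance switching model. The simulated boom-bust series and all parameter values are illustrative assumptions; the paper's two-stage specification and likelihood profiling are not reproduced.

```python
import numpy as np
from scipy.stats import norm

def hamilton_loglik(y, mu, sigma, p00, p11):
    """Log-likelihood of a two-state Markov switching model for the mean and
    variance of y, computed with the Hamilton filter."""
    P = np.array([[p00, 1 - p00], [1 - p11, p11]])         # row i: transitions from state i
    pred = np.array([1 - p11, 1 - p00]) / (2 - p00 - p11)  # stationary initial probabilities
    ll = 0.0
    for t in range(len(y)):
        dens = norm.pdf(y[t], loc=mu, scale=sigma)          # state-conditional densities
        joint = pred * dens
        lik_t = joint.sum()
        ll += np.log(lik_t)
        filt = joint / lik_t                                # filtered state probabilities
        pred = filt @ P                                     # one-step-ahead prediction
    return ll

rng = np.random.default_rng(0)
# Simulate a boom-bust series: state 0 = expansion, state 1 = recession.
states, y = [0], []
for _ in range(300):
    s = states[-1]
    states.append(s if rng.uniform() < (0.95 if s == 0 else 0.80) else 1 - s)
    y.append(rng.normal([0.8, -0.5][states[-1]], [0.5, 1.0][states[-1]]))
y = np.array(y)
print(hamilton_loglik(y, mu=np.array([0.8, -0.5]), sigma=np.array([0.5, 1.0]),
                      p00=0.95, p11=0.80))
```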
By: | Dukpa Kim (Korea University); Tatsushi Oka (National University of Singapore); Francisco Estrada (Universidad Nacional Autónoma de México and VU University Amsterdam); Pierre Perron (Boston University) |
Abstract: | What transpires from recent research is that temperatures and forcings seem to be characterized by a linear trend with two changes in the rate of growth. The first occurs in the early 1960s and indicates a very large increase in the rate of growth of both temperatures and radiative forcings. This was termed the "onset of sustained global warming". The second is related to the more recent so-called hiatus period, which suggests that temperatures and total radiative forcings have increased less rapidly since the mid-1990s compared to the larger rate of increase from 1960 to 1990. There are two issues that remain unresolved. The first is whether the breaks in the slope of the trend functions of temperatures and radiative forcings are common. This is important because common breaks coupled with the basic science of climate change would strongly suggest a causal effect from anthropogenic factors to temperatures. The second issue relates to establishing formally, via a proper testing procedure that takes into account the noise in the series, whether there was indeed a 'hiatus period' for temperatures since the mid-1990s. This is important because such a test would counter the widely held view that the hiatus is the product of natural internal variability. Our paper provides tests related to both issues. The results show that the breaks in temperatures and forcings are common and that the hiatus is characterized by a significant decrease in the rate of growth of temperatures and forcings. The statistical results are of independent interest and applicable more generally. |
Keywords: | Multiple Breaks, Common Breaks, Multivariate Regressions, Joined Segmented Trend |
JEL: | C32 |
Date: | 2017–01 |
URL: | http://d.repec.org/n?u=RePEc:bos:wpaper:wp2017-003&r=ecm |
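The joined segmented trend in the entry above can be sketched for a single series: a continuous piecewise-linear trend fitted by grid-searching the break dates and estimating the slopes by OLS with hinge regressors max(t − τ, 0). The simulated series and the trimming rule are illustrative assumptions; the paper's tests for common breaks across temperatures and forcings are not reproduced.

```python
import numpy as np
from itertools import combinations

def joined_segmented_fit(y, n_breaks=2, trim=10):
    """Continuous (joined) piecewise-linear trend: grid-search break dates,
    fitting slopes by OLS with hinge regressors max(t - tau, 0)."""
    T = len(y)
    t = np.arange(T, dtype=float)
    best = (np.inf, None, None)
    for taus in combinations(range(trim, T - trim), n_breaks):
        X = np.column_stack([np.ones(T), t] + [np.maximum(t - tau, 0.0) for tau in taus])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        ssr = np.sum((y - X @ beta) ** 2)
        if ssr < best[0]:
            best = (ssr, taus, beta)
    return best

rng = np.random.default_rng(0)
T = 120
t = np.arange(T)
trend = 0.01 * t + 0.03 * np.maximum(t - 50, 0) - 0.02 * np.maximum(t - 90, 0)
y = trend + rng.normal(scale=0.1, size=T)
ssr, taus, beta = joined_segmented_fit(y)
print("estimated break dates:", taus)
```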
By: | Lucas, André; Schaumburg, Julia; Schwaab, Bernd |
Abstract: | We propose a novel observation-driven finite mixture model for the study of banking data. The model accommodates time-varying component means and covariance matrices, normal and Student's t distributed mixtures, and economic determinants of time-varying parameters. Monte Carlo experiments suggest that units of interest can be classified reliably into distinct components in a variety of settings. In an empirical study of 208 European banks between 2008Q1 and 2015Q4, we identify six business model components and discuss how their properties evolve over time. Changes in the yield curve predict changes in average business model characteristics. |
Keywords: | bank business models, clustering, finite mixture model, low interest rates, score-driven model |
JEL: | G21 C33 |
Date: | 2017–06 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20172084&r=ecm |
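As a static stand-in for the classification step described above, a plain Gaussian mixture can be fitted to bank-level features and each unit assigned to its most likely component. The fake features, the three components, and the use of scikit-learn's GaussianMixture are assumptions of this sketch; the paper's time-varying, score-driven parameters and Student's t components are not reproduced.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical bank-level features (e.g., size, loan share, funding ratio) for 208
# "banks" drawn from three latent business-model components; purely illustrative.
centers = np.array([[0.0, 0.0, 0.0], [2.0, -1.0, 0.5], [-1.5, 1.5, -0.5]])
labels_true = rng.integers(0, 3, size=208)
X = centers[labels_true] + rng.normal(scale=0.6, size=(208, 3))

gm = GaussianMixture(n_components=3, covariance_type="full", random_state=0).fit(X)
labels_hat = gm.predict(X)                       # hard classification into components
print(np.bincount(labels_hat))                   # component sizes
```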
By: | Everton M. C. Abreu; Newton J. Moura Jr.; Abner D. Soares; Marcelo B. Ribeiro |
Abstract: | Oscillations in the cumulative individual income distribution have been found in the data of various countries studied by different authors at different time periods, but the dynamical origins of this behavior are currently unknown. These data sets can be fitted by different functions at different income ranges, but recently the Tsallis distribution has been found capable of fitting the whole distribution by means of only two parameters, a procedure which showed such oscillatory features even more clearly over the entire income range. This behavior can be described by assuming log-periodic functions; however, a different approach that naturally discloses such oscillatory characteristics is to allow the Tsallis $q$-parameter to become complex. In this paper we use these ideas to describe the behavior of the complementary cumulative distribution function of the personal income of Brazil, recently studied empirically by Soares et al. (2016). Typical elements of periodic motion, such as the amplitude and angular frequency associated with this income analysis, were obtained. |
Date: | 2017–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1706.10141&r=ecm |
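The fitting exercise above can be sketched by comparing a plain Tsallis q-exponential CCDF with the same CCDF multiplied by a small log-periodic modulation, a simple stand-in for the oscillations that a complex q-parameter generates. The simulated "income" data, the multiplicative modulation form, and the parameter bounds are assumptions of this illustration, not the paper's specification or the Brazilian data.

```python
import numpy as np
from scipy.optimize import curve_fit

def q_exp_ccdf(x, q, lam):
    """Tsallis q-exponential complementary CDF, valid for q in (1, 2)."""
    return (1.0 + (q - 1.0) * x / lam) ** (-1.0 / (q - 1.0))

def log_periodic_ccdf(x, q, lam, a, omega, phi):
    """q-exponential CCDF with a small log-periodic modulation (illustrative form)."""
    return q_exp_ccdf(x, q, lam) * (1.0 + a * np.cos(omega * np.log(x) + phi))

# Empirical CCDF of simulated heavy-tailed "income" data (not the Brazilian data).
rng = np.random.default_rng(0)
income = np.sort(rng.pareto(2.5, size=5000))
ccdf = 1.0 - np.arange(1, len(income) + 1) / (len(income) + 1)

p_plain, _ = curve_fit(q_exp_ccdf, income, ccdf, p0=[1.4, 0.5],
                       bounds=([1.01, 1e-3], [1.99, 10.0]))
p_osc, _ = curve_fit(log_periodic_ccdf, income, ccdf,
                     p0=[*p_plain, 0.02, 3.0, 0.0],
                     bounds=([1.01, 1e-3, -0.5, 0.1, -np.pi],
                             [1.99, 10.0, 0.5, 20.0, np.pi]))
print("q, lambda:", np.round(p_plain, 3))
print("q, lambda, amplitude, frequency, phase:", np.round(p_osc, 3))
```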
By: | Liu, Lily Y. (Federal Reserve Bank of Boston) |
Abstract: | This paper combines a term structure model of credit default swaps (CDS) with weak-identification robust methods to jointly estimate the probability of default and the loss given default of the underlying firm. The model is not globally identified because it forgoes parametric time series restrictions that have aided identification in previous studies, but that are also difficult to verify in the data. The empirical results show that informative (small) confidence sets for loss given default are estimated for half of the firm-months in the sample, and most of these are much lower than and do not include the conventional value of 0.60. This also implies that risk-neutral default probabilities, and hence risk premia on default probabilities, are underestimated when loss given default is exogenously fixed at the conventional value instead of estimated from the data. |
JEL: | C13 C14 C58 G12 G13 |
Date: | 2017–05–08 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedbqu:rpa17-1&r=ecm |
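The identification problem above can be illustrated with the "credit triangle" approximation, under which a CDS spread is roughly the product of loss given default and the default intensity, so a single spread cannot separate the two. The spread value and LGD grid below are hypothetical; the paper's term-structure model and weak-identification robust inference are not reproduced.

```python
import numpy as np

# Credit triangle approximation: cds_spread ~ LGD * default_intensity.
# A single spread pins down only the product, so very different (LGD, PD)
# pairs are observationally equivalent at one maturity.
spread = 0.012                                   # 120 bps, hypothetical
for lgd in (0.3, 0.6, 0.9):
    intensity = spread / lgd                     # implied hazard rate
    pd_1y = 1.0 - np.exp(-intensity)             # one-year default probability
    print(f"LGD={lgd:.1f}  intensity={intensity:.4f}  1y PD={pd_1y:.4f}")
```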
By: | Nicolas Vallois (Université Picardie Jules Verne; CRIISEA); Dorian Jullien (Université Côte d'Azur; GREDEG CNRS) |
Abstract: | Experimental economists increasingly apply econometric techniques to interpret their data, as suggested by the emergence of "experimetrics" in the 2000s (Camerer, 2003; Houser, 2008; Moffatt, 2015). Yet statistics remains a minor topic in experimental economics' (EE) methodology. This article aims to study the historical roots of this present paradox. To do so, we analyze the use of statistical tools in EE from the early economics experiments of the 1940s-1950s to the present day. Our narrative is based on qualitative analysis of published papers for the earliest periods and on bibliometric and quantitative approaches for the more recent period. Our results reveal a significant change in EE's statistical methods, from purely descriptive methods to more sophisticated and standardized techniques. Statistics now plays a decisive role in the way EE estimates rationality, particularly in structural modeling approaches, but it is still considered a non-methodological matter because it is seen as purely technical. Our historical analysis shows that this technical conception was the result of a long-run evolution of research tactics in EE, which notably allowed experimental economists to escape from psychologists' more reflexive culture toward statistics. |
Keywords: | Experimental Economics, Statistics, Econometrics, History of Economic Thought, Methodology |
JEL: | B20 C83 A14 C90 |
Date: | 2017–06 |
URL: | http://d.repec.org/n?u=RePEc:gre:wpaper:2017-20&r=ecm |