on Econometrics |
By: | Gholamreza Hajargasht (Department of Economics, University of Melbourne); William E. Griffiths (Department of Economics, University of Melbourne) |
Abstract: | We show how a wide range of stochastic frontier models can be estimated relatively easily using variational Bayes. We derive approximate posterior distributions and point estimates for parameters and inefficiency effects for (a) time-invariant models with several alternative inefficiency distributions, (b) models with time-varying effects, (c) models incorporating environmental effects, and (d) models with more flexible forms for the regression function and error terms. Despite the abundance of stochastic frontier models, there have been few attempts to test the various models against each other, probably due to the difficulty of performing such tests. One advantage of the variational Bayes approximation is that it facilitates the computation of marginal likelihoods that can be used to compare models. We apply this idea to test stochastic frontier models with different inefficiency distributions. Estimation and testing are illustrated using three examples. |
Keywords: | Technical efficiency, Marginal likelihood, Time-varying panel, Environmental effects, Mixture, Semiparametric model |
Date: | 2016–05 |
URL: | http://d.repec.org/n?u=RePEc:mlb:wpaper:2024&r=ecm |
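The paper's variational Bayes machinery for stochastic frontier models is involved; purely as an illustration of the underlying idea, the sketch below runs coordinate-ascent variational inference (CAVI) for the mean and precision of Gaussian data under a conjugate Normal-Gamma prior. This is a textbook toy example, not the authors' algorithm.

```python
import numpy as np

# Minimal coordinate-ascent variational Bayes (CAVI) for the mean mu and
# precision tau of Gaussian data, with a conjugate Normal-Gamma prior.
# Toy illustration of the VB idea only; NOT the paper's stochastic
# frontier estimator.
rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=0.5, size=2000)   # true mu = 2, tau = 4
n, xbar = x.size, x.mean()

# Priors: mu | tau ~ N(mu0, (lam0*tau)^-1), tau ~ Gamma(a0, b0)
mu0, lam0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3

# Factorized approximation q(mu) = N(mu_n, 1/lam_n), q(tau) = Gamma(a_n, b_n)
mu_n = (lam0 * mu0 + n * xbar) / (lam0 + n)      # fixed across iterations
a_n = a0 + 0.5 * (n + 1)                         # fixed across iterations
e_tau = 1.0                                      # initial guess for E[tau]
for _ in range(50):
    lam_n = (lam0 + n) * e_tau
    e_mu, e_mu2 = mu_n, mu_n**2 + 1.0 / lam_n    # moments of q(mu)
    b_n = b0 + 0.5 * (np.sum(x**2) - 2 * e_mu * np.sum(x) + n * e_mu2
                      + lam0 * (e_mu2 - 2 * mu0 * e_mu + mu0**2))
    e_tau = a_n / b_n

print(mu_n, a_n / b_n)   # approximate posterior means of mu and tau
```

The two factors are updated in turn until the approximate posterior stops changing, which is the same fixed-point logic the paper exploits for its marginal-likelihood computations.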
By: | Christopher G. Gibbs (School of Economics, UNSW Business School, UNSW); Andrey L. Vasnev (University of Sydney) |
Abstract: | In applied forecasting, there is a trade-off between in-sample fit and out-of-sample forecast accuracy. Parsimonious model specifications typically outperform richer model specifications. Consequently, there is often predictable information in forecast errors that is difficult to exploit. However, we show how this predictable information can be exploited in forecast combinations. In this case, optimal combination weights should minimize conditional mean squared error, or a conditional loss function, rather than the unconditional variance as in the commonly used framework of Bates and Granger (1969). We prove that our conditionally optimal weights lead to better forecast performance. The conditionally optimal weights support other forward-looking approaches to combining forecasts, where the forecast weights depend on the expected model performance. We show that forward-looking |
Keywords: | Forecast combination, conditionally optimal weights, forecast combination puzzle, inflation, Phillips curve |
JEL: | C18 C53 E31 |
Date: | 2017–02 |
URL: | http://d.repec.org/n?u=RePEc:swe:wpaper:2017-10&r=ecm |
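The contrast between unconditional and conditionally optimal combination weights can be sketched numerically. The rolling-window weight below is a crude stand-in for the paper's conditional-loss framework (not the authors' estimator), and error covariances are ignored for simplicity.

```python
import numpy as np

# Unconditional (Bates-Granger-style) vs. conditionally re-estimated
# combination weights. Rolling weights are a simplified stand-in for the
# paper's conditional framework; errors are treated as uncorrelated.
rng = np.random.default_rng(1)
n, half, window = 2000, 1000, 100
y = rng.normal(size=n)
sd1 = np.where(np.arange(n) < half, 0.5, 2.0)    # forecaster 1: good, then bad
sd2 = np.where(np.arange(n) < half, 2.0, 0.5)    # forecaster 2: bad, then good
f1 = y + rng.normal(size=n) * sd1
f2 = y + rng.normal(size=n) * sd2
e1, e2 = y - f1, y - f2

# Unconditional weight on f1 from full-sample error variances
w_static = e2.var() / (e1.var() + e2.var())
mse_static = np.mean(((y - (w_static * f1 + (1 - w_static) * f2))[window:])**2)

# "Conditional" weight: recomputed each period from the last `window` errors
err = np.empty(n - window)
for t in range(window, n):
    v1, v2 = e1[t - window:t].var(), e2[t - window:t].var()
    w_t = v2 / (v1 + v2)
    err[t - window] = y[t] - (w_t * f1[t] + (1 - w_t) * f2[t])
mse_rolling = np.mean(err**2)

print(mse_static, mse_rolling)
```

Because the forecasters' relative accuracy switches mid-sample, the full-sample weight sits near 1/2 while the conditional weight tracks the currently better model, which is the sense in which forward-looking weights can dominate.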
By: | Javed Iqbal (State Bank of Pakistan); Muhammad Nadim Hanif (State Bank of Pakistan) |
Abstract: | We compare the performance of the modified HP filter, wavelet analysis and empirical mode decomposition. Our simulation results suggest that the modified HP filter performs best over the series as a whole, whereas in the middle of the series wavelet analysis performs best. Wavelet-based filtering has the highest ‘end-point bias’ (EPB); however, it performs better when we extrapolate the series to reduce the EPB. A study based on observed data on real income, investment and consumption shows that the autoregressive properties and multivariate analytics of the cyclical components depend on the filtering technique. |
Keywords: | Business Cycle, Smoothing Macro Time Series, Modified HP Filter, Wavelet Analysis, End Point Bias in HP Filter, Simulation, Cross Country Study. |
JEL: | E32 C18 |
Date: | 2017–03 |
URL: | http://d.repec.org/n?u=RePEc:sbp:wpaper:87&r=ecm |
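For reference, the plain HP filter that the modified variants build on solves a small linear system. The sketch below implements it directly; it is the textbook filter, not any of the modified filters the paper compares.

```python
import numpy as np

# Plain HP filter: the trend solves (I + lam * D'D) tau = y, where D takes
# second differences. lam = 1600 is the conventional quarterly setting.
# Illustration only; the paper's modified variants are not reproduced here.
def hp_filter(y, lam=1600.0):
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]        # second-difference stencil
    trend = np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)
    return trend, y - trend                      # trend and cycle

# A purely linear series has zero second differences, so the filter
# should return it unchanged (cycle ~ 0).
y_lin = 0.3 * np.arange(100) + 5.0
trend, cycle = hp_filter(y_lin)
print(np.max(np.abs(cycle)))
```

The dense solve is fine for short macro series; production code would exploit the banded structure of D'D.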
By: | Ke Wang; Yujiao Xian; Chia-Yen Lee; Yi-Ming Wei (Center for Energy and Environmental Policy Research (CEEP), Beijing Institute of Technology); Zhimin Huang |
Abstract: | The directional distance function (DDF) has been a commonly used technique for estimating efficiency and productivity over the past two decades, and the directional vector is usually predetermined in applications of the DDF. The most critical issue in using the DDF remains how to appropriately project an inefficient decision-making unit (DMU) onto the production frontier along a justified direction. This paper provides a comprehensive literature review of techniques for selecting the directional vector of the directional distance function. It begins with a brief introduction to the existing methods, covering both exogenous and endogenous direction techniques. The former commonly includes arbitrary and conditional direction techniques, while the latter involves techniques for seeking theoretically optimized directions (i.e., directions towards the closest benchmark or indicating the largest efficiency improvement potential) and market-oriented directions (i.e., directions towards cost minimization, profit maximization, or marginal profit maximization benchmarks). The main advantages and disadvantages of these techniques are summarized, and the limitations inherent in the exogenous direction-selecting techniques are discussed. The paper also analyzes the mechanism of each endogenous direction technique. The review ends with a numerical example of efficiency estimation for power plants, in which most of the reviewed directions for the DDF are demonstrated and their evaluation performance is compared. |
Keywords: | Data Envelopment Analysis (DEA); Least distance; Endogenous mechanism; Cost efficiency; Profit efficiency; Marginal profit maximization |
JEL: | Q54 Q40 |
Date: | 2017–01–02 |
URL: | http://d.repec.org/n?u=RePEc:biw:wpaper:99&r=ecm |
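A directional distance function asks how far a DMU can move along a chosen direction g = (g_x, g_y) while staying inside the technology. The sketch below computes the DDF under a free disposal hull (FDH) by enumeration, a deliberate simplification of the convex DEA linear program used in most of the reviewed literature; the data and direction are made up for illustration.

```python
# Directional distance function under a free disposal hull (FDH), computed
# by enumeration instead of the convex DEA linear program. beta is the
# largest step along direction g = (g_x, g_y) such that the projected point
# (x - beta*g_x, y + beta*g_y) is still dominated by an observed DMU.
def fdh_ddf(x_k, y_k, dmus, g_x=1.0, g_y=1.0):
    best = 0.0                                   # beta = 0 via the DMU itself
    for x_j, y_j in dmus:
        # largest beta for which x_k - beta*g_x >= x_j and y_k + beta*g_y <= y_j
        beta = min((x_k - x_j) / g_x, (y_j - y_k) / g_y)
        best = max(best, beta)
    return best

dmus = [(2.0, 1.0), (4.0, 4.0), (8.0, 3.0)]      # (input, output) pairs
# DMU (8, 3): moving toward peer (4, 4) allows beta = min(4, 1) = 1
print(fdh_ddf(8.0, 3.0, dmus))   # -> 1.0
```

A DMU with beta = 0 is FDH-efficient; a positive beta measures its inefficiency in the units of the chosen direction, which is exactly the quantity the direction-selection techniques in the review try to make meaningful.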
By: | Muhammad Nadim Hanif (State Bank of Pakistan); Javed Iqbal (State Bank of Pakistan); M. Ali Choudhary (State Bank of Pakistan) |
Abstract: | Business cycle estimation is at the core of macroeconomic research. The Hodrick-Prescott (1997) filter (HP filter) is the most popular tool for extracting the cycle from a macroeconomic time series. There are certain issues with the HP filter, including a fixed value of the smoothing parameter λ across series/countries and end-point bias (EPB). The modified HP (MHP) filter of McDermott (1997) attempted to address the first issue. Bloechl (2014) introduced a loss-function minimization approach to address the EPB issue, but kept λ fixed (as in the HP filter). In this study we marry the endogenous-λ approach of McDermott (1997) with the loss-function minimization approach of Bloechl (2014) to address EPB in the HP filter, while intuitively changing the weighting scheme used in the latter. We contribute by suggesting an endogenous weighting scheme along with an endogenous smoothing parameter to resolve the EPB issue of the HP filter. We call this the fully modified HP (FMHP) filter. Our FMHP filter outperforms a variety of conventional filters in a power comparison (simulation) study as well as in observed real-data (univariate and multivariate) analytics for a large set of countries. |
Keywords: | Business Cycle, Time Series, Fully Modified HP Filter, End Point Bias in HP Filter, Simulation, Cross Country Study. |
JEL: | E32 C18 |
Date: | 2017–04 |
URL: | http://d.repec.org/n?u=RePEc:sbp:wpaper:88&r=ecm |
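The end-point bias the paper targets is easy to exhibit: the HP trend estimate at the end of a sample is revised far more than interior estimates once later observations arrive. The sketch below uses a deterministic toy series; it demonstrates the problem, not the FMHP solution.

```python
import numpy as np

# End-point bias (EPB) in the HP filter: compare the trend estimated on a
# truncated sample with the trend estimated once more data are available.
# Deterministic toy series; this illustrates the problem the FMHP filter
# addresses, not the FMHP filter itself.
def hp_trend(y, lam=1600.0):
    n = len(y)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    return np.linalg.solve(np.eye(n) + lam * (D.T @ D), y)

t = np.arange(200)
y = 0.05 * t + np.sin(2 * np.pi * t / 40)        # linear trend plus a cycle
full = hp_trend(y)
trunc = hp_trend(y[:160])                        # pretend data stop at t = 159

end_revision = abs(full[159] - trunc[159])       # endpoint of the short sample
mid_revision = abs(full[80] - trunc[80])         # interior point
print(end_revision, mid_revision)                # endpoint is revised far more
```

At t = 159 the truncated filter is one-sided and leans on the current cycle phase, while at t = 80 both estimates are two-sided and nearly identical, which is the revision asymmetry the endogenous weighting scheme is designed to shrink.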
By: | HONDA, Toshio; ING, Ching-Kang; WU, Wei-Ying |
Abstract: | We propose an adaptively weighted group Lasso procedure for simultaneous variable selection and structure identification in varying coefficient quantile regression models and additive quantile regression models with ultra-high dimensional covariates. Under a strong sparsity condition, we establish selection consistency of the proposed Lasso procedure when the weights therein satisfy a set of general conditions. This consistency result, however, relies on a suitable choice of the tuning parameter for the Lasso penalty, which can be hard to make in practice. To alleviate this difficulty, we suggest a BIC-type criterion, which we call the high-dimensional information criterion (HDIC), and show that the proposed Lasso procedure with the tuning parameter determined by HDIC still achieves selection consistency. Our simulation studies strongly support our theoretical findings. |
Keywords: | Additive models, B-spline, high-dimensional information criteria, Lasso, structure identification, varying coefficient models |
Date: | 2017–04 |
URL: | http://d.repec.org/n?u=RePEc:hit:econdp:2017-04&r=ecm |
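The paper's HDIC is defined for penalized quantile regression with a group Lasso; the sketch below conveys only the model-selection idea with a simplified analogue: an exhaustive search over small least-squares submodels scored by a BIC-type criterion with a log(p) penalty. The design, penalty weight w, and subset-size cap are all illustrative choices.

```python
import numpy as np
from itertools import combinations

# Simplified analogue of a BIC-type high-dimensional information criterion
# (HDIC): score each candidate submodel S by
#   n * log(RSS(S)/n) + |S| * w * log(p)
# and pick the minimizer. Least squares stands in for the paper's
# penalized quantile regression.
rng = np.random.default_rng(2)
n, p = 200, 8
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[0], beta[3] = 2.0, -3.0                     # true support {0, 3}
y = X @ beta + rng.normal(size=n)

def hdic(subset, w=4.0):
    Xs = X[:, list(subset)]
    resid = y - Xs @ np.linalg.lstsq(Xs, y, rcond=None)[0]
    return n * np.log(resid @ resid / n) + len(subset) * w * np.log(p)

candidates = [s for k in range(1, 4) for s in combinations(range(p), k)]
best = min(candidates, key=hdic)
print(best)                                      # should contain 0 and 3
```

The log(p) scaling of the penalty is what lets the criterion stay consistent when the number of candidate covariates is large relative to n, which is the role HDIC plays for the tuning parameter in the paper.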
By: | Fabrizio Cipollini (Dipartimento di Statistica, Informatica, Applicazioni "G. Parenti", Università di Firenze); Robert F. Engle (Department of Finance, Stern School of Business, New York University); Giampiero M. Gallo (Dipartimento di Statistica, Informatica, Applicazioni "G. Parenti", Università di Firenze) |
Abstract: | We discuss several multivariate extensions of the Multiplicative Error Model by Engle (2002) to take into account dynamic interdependence and contemporaneously correlated innovations (vector MEM or vMEM). We suggest copula functions to link Gamma marginals of the innovations, in a specification where past values and conditional expectations of the variables can be simultaneously estimated. Results with realized volatility, volumes and number of trades of the JNJ stock show that significantly superior realized volatility forecasts are delivered with a fully interdependent vMEM relative to a single equation. Alternatives involving log–Normal or semiparametric formulations produce substantially equivalent results. |
Keywords: | GARCH; MEM; Realized Volatility; Trading Volume; Trading Activity; Trades; Copula; Volatility Forecasting |
JEL: | C32 C51 C58 C89 |
Date: | 2017–04 |
URL: | http://d.repec.org/n?u=RePEc:fir:econom:wp2017_02&r=ecm |
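A single-equation MEM is easy to simulate and estimate, which helps fix ideas before the vector extension. The sketch below generates an MEM(1,1) with Gamma innovations and recovers (alpha, beta) by exponential quasi-maximum likelihood on a coarse grid, with omega concentrated out by mean targeting; the grid search and targeting are conveniences of this sketch, not the paper's copula-based vMEM estimator.

```python
import numpy as np
from math import log

# Univariate MEM(1,1): x_t = mu_t * eps_t with eps_t ~ Gamma (mean 1) and
#   mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1}.
# Estimation by exponential QML over a coarse (alpha, beta) grid; this is a
# single-equation sketch, not the paper's vector MEM.
rng = np.random.default_rng(3)
n, omega, alpha, beta, shape = 5000, 0.2, 0.1, 0.8, 5.0
x = np.empty(n)
mu = omega / (1 - alpha - beta)                  # start at unconditional mean
for t in range(n):
    x[t] = mu * rng.gamma(shape, 1.0 / shape)    # E[eps_t] = 1
    mu = omega + alpha * x[t] + beta * mu

def eqml_loss(a, b):
    # exponential QML objective: sum_t [ log mu_t + x_t / mu_t ]
    w = x.mean() * (1 - a - b)                   # mean targeting for omega
    m, loss = x.mean(), 0.0
    for t in range(1, n):
        m = w + a * x[t - 1] + b * m
        loss += log(m) + x[t] / m
    return loss

grid = [(a, b) for a in np.arange(0.02, 0.32, 0.02)
               for b in np.arange(0.50, 0.96, 0.02) if a + b < 0.999]
a_hat, b_hat = min(grid, key=lambda ab: eqml_loss(*ab))
print(a_hat, b_hat)                              # should be near (0.1, 0.8)
```

The exponential QML objective is the same one that makes Gamma-marginal MEMs estimable without fully specifying the innovation density, which is the property the vMEM specification builds on.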
By: | Moshe Kim (University of Haifa, Department of Economics); Nir Billfeld (University of Haifa, Department of Economics, PHD student) |
Abstract: | In the case of truncation, a widespread phenomenon plaguing most fields of empirical research, the observed data distribution function is truncated and related to participants' covariates only, rendering Heckman's seminal and well-known correction procedure not implementable. Thus, for the correction of endogenous selectivity bias propagated by truncation we introduce a new methodology that recovers the unobserved part of the data distribution function using only its observed truncated part. The correlation patterns among the non-participants' covariates (which are all functions of the recovered non-participants' density function) are recovered as well. The rationale underlying the ability to recover the unobserved complete density function from the observed truncated density function relies on the fact that the latter is obtained by conditioning the former on the selection rule. Consequently, the parameter set which characterizes the truncated density function contains all the parameters characterizing the unobserved non-truncated density function. Thus, it is possible to characterize the unobserved non-participants' density function in terms of the parameters estimated using the truncated data solely. Once this unobserved part is recovered, one can estimate the selection-rule equation for the hazard rate calculation as if the full sample, consisting of both participants and non-participants, were observable. Monte Carlo simulations attest to the high accuracy of the estimates and to above-conventional √n consistency. |
Keywords: | Selectivity bias correction, Truncated Probit |
URL: | http://d.repec.org/n?u=RePEc:haf:huedwp:wp201607&r=ecm |
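The core observation, that the truncated density carries all the parameters of the full density, can be shown in miniature: draw from N(1, 1.5²), observe only the positive draws, and recover (mu, sigma) by maximizing the truncated-normal likelihood. The grid search and normal parametric family are conveniences of this toy, not the authors' semiparametric selectivity-correction estimator.

```python
import numpy as np
from math import erf, sqrt, log

# Recover the parameters of a full normal density from a truncated sample
# (x > 0 only), by maximizing the truncated-normal log-likelihood on a grid.
rng = np.random.default_rng(4)
x_full = rng.normal(1.0, 1.5, size=20000)
x = x_full[x_full > 0]                           # only the truncated part
n = x.size

def phi_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))      # standard normal CDF

def trunc_loglik(mu, sigma):
    z = (x - mu) / sigma
    # N(mu, sigma^2) density renormalized to x > 0 (additive constant dropped)
    return (-0.5 * np.sum(z**2) - n * log(sigma)
            - n * log(1.0 - phi_cdf((0.0 - mu) / sigma)))

grid = [(m, s) for m in np.arange(0.0, 2.05, 0.05)
               for s in np.arange(0.8, 2.55, 0.05)]
mu_hat, sig_hat = max(grid, key=lambda p: trunc_loglik(*p))
print(mu_hat, sig_hat)                           # should be near (1.0, 1.5)
```

The renormalizing term log(1 - Φ(-mu/sigma)) is exactly the "conditioning on the selection rule" step: it ties the observed truncated likelihood to the parameters of the unobserved full density.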
By: | Wiseman, Nathan (University of Nevada, Reno); Sorensen, Todd A. (University of Nevada, Reno) |
Abstract: | Instrumental variables (IV) is an indispensable tool for establishing causal relationships between variables. Recent work has focused on improving bounds for cases when an ideal instrument does not exist. We leverage a principle, "Intransitivity in Correlations," related to an under-utilized property from the statistics literature. From this principle, it is straightforward to obtain new bounds. We argue that these new theoretical bounds become increasingly useful as instruments become increasingly weak or invalid. |
Keywords: | instrumental variables, bounding, partial identification, transitivity in correlations |
JEL: | C26 |
Date: | 2017–03 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp10646&r=ecm |
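The "intransitivity" principle has a concrete algebraic face: positive semidefiniteness of a correlation matrix bounds the third correlation given the other two. The sketch below states the bound and checks it on simulated data; it is a generic illustration of the principle, not the authors' IV bounds themselves.

```python
import numpy as np

# If corr(z, x) = a and corr(x, y) = b, positive semidefiniteness of the
# 3x3 correlation matrix forces
#   a*b - sqrt((1-a^2)(1-b^2)) <= corr(z, y) <= a*b + sqrt((1-a^2)(1-b^2)).
def corr_zy_bounds(a, b):
    slack = np.sqrt((1 - a**2) * (1 - b**2))
    return a * b - slack, a * b + slack

# Even two fairly strong links need not be transitive:
lo, hi = corr_zy_bounds(0.7, 0.7)
print(lo, hi)   # lower bound is slightly negative: corr(z, y) < 0 is possible

# Sample correlation matrices are PSD, so the bound holds in any dataset.
rng = np.random.default_rng(5)
for _ in range(200):
    data = rng.normal(size=(50, 3)) @ rng.normal(size=(3, 3))
    r = np.corrcoef(data, rowvar=False)
    b_lo, b_hi = corr_zy_bounds(r[0, 1], r[1, 2])
    assert b_lo - 1e-9 <= r[0, 2] <= b_hi + 1e-9
```

The interesting regime is exactly the one the abstract flags: as the instrument weakens (|a| falls), the slack term grows and the interval widens, so the informativeness of such bounds degrades gracefully rather than collapsing.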
By: | Sergei Seleznev (Bank of Russia, Russian Federation) |
Abstract: | We propose an algorithm for solving DSGE models with stochastic trends. Several implementations allow us to solve models with a small number of stochastic trends quickly in the absence of a balanced growth path, and to control the accuracy of the approximation within a certain range. Since many of these implementations can easily be parallelized, the algorithm makes it feasible to estimate models that lack a balanced growth path. We also provide a number of possible estimation methods. |
Keywords: | Non-stationary DSGE, stochastic trends, Smolyak’s algorithm, perturbation method. |
JEL: | C61 C63 |
Date: | 2016–09 |
URL: | http://d.repec.org/n?u=RePEc:bkr:wpaper:wps15&r=ecm |
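The Smolyak construction the keywords refer to combines small nested one-dimensional grids into a sparse multi-dimensional grid. The sketch below builds the standard 2-D Smolyak grid from nested Chebyshev-extrema point sets; it shows only the grid construction, not the paper's DSGE solution algorithm.

```python
from math import cos, pi

# 2-D Smolyak sparse grid from nested Chebyshev-extrema point sets.
def nested_points(i):
    # level i >= 1; set sizes 1, 3, 5, 9, ... give nested Chebyshev extrema
    if i == 1:
        return [0.0]
    m = 2 ** (i - 1) + 1
    return [round(cos(pi * j / (m - 1)), 12) for j in range(m)]

def smolyak_grid_2d(mu):
    # union of tensor products with i1 + i2 <= d + mu, where d = 2 dimensions
    pts = set()
    for i1 in range(1, 2 + mu):
        for i2 in range(1, 2 + mu + 1 - i1):
            for a in nested_points(i1):
                for b in nested_points(i2):
                    pts.add((a, b))
    return sorted(pts)

print(len(smolyak_grid_2d(1)), len(smolyak_grid_2d(2)))   # -> 5 13
```

A full tensor grid at the same one-dimensional resolution would need 5 x 5 = 25 points where Smolyak uses 13, and the gap widens rapidly with dimension, which is why sparse grids make non-stationary DSGE approximation tractable.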
By: | David Lander (Pennsylvania State University); David Gunawan (University of New South Wales); William E. Griffiths (Department of Economics, University of Melbourne); Duangkamon Chotikapanich (Monash University) |
Keywords: | dominance probabilities; MCMC; poverty comparisons |
Date: | 2016–05 |
URL: | http://d.repec.org/n?u=RePEc:mlb:wpaper:2023&r=ecm |
By: | Gholamreza Hajargasht (Department of Economics, University of Melbourne); William E. Griffiths (Department of Economics, University of Melbourne) |
Keywords: | GMM, GB2 Distribution, General Quadratic, Beta Lorenz Curve, Gini Coefficient, Poverty Measures, Quantile Function Estimation |
JEL: | C13 C16 D31 |
Date: | 2016–01 |
URL: | http://d.repec.org/n?u=RePEc:mlb:wpaper:2022&r=ecm |