on Econometric Time Series
By: | Chen, J.; Kobayashi, M.; McAleer, M.J.
Abstract: | The paper considers the problem of volatility co-movement, namely whether two financial returns have a perfectly correlated common volatility process, in the framework of multivariate stochastic volatility models, and proposes a test for volatility co-movement. The proposed test is a stochastic volatility version of the co-movement test of Engle and Susmel (1993), who investigated whether international equity markets exhibit volatility co-movement using the framework of the ARCH model. In the empirical analysis we find that volatility co-movement exists among closely linked stock markets, and that volatility co-movement in exchange rate markets tends to be found when the overall volatility level is low, which contrasts with the often-cited finding in the financial contagion literature that financial returns co-move in levels during financial crises.
Keywords: | Lagrange multiplier test, Volatility co-movement, Stock markets, Exchange rate markets, Financial crisis
JEL: | C12 C58 G01 G11 |
Date: | 2017–02–01 |
URL: | http://d.repec.org/n?u=RePEc:ems:eureir:99788&r=ets |
By: | Michal Andrle; Miroslav Plasil |
Abstract: | This paper introduces "system priors" into Bayesian analysis of econometric time series and provides a simple and illustrative application. Unlike priors on individual parameters, system priors offer a simple and efficient way of formulating well-defined and economically meaningful priors about model properties that determine the overall behavior of the model. The generality of system priors is illustrated using an AR(2) process with a prior that most of its dynamics comes from business-cycle frequencies.
Keywords: | Bayesian analysis, system priors, time series |
JEL: | C11 C18 C22 C51 |
Date: | 2017–05 |
URL: | http://d.repec.org/n?u=RePEc:cnb:wpaper:2017/01&r=ets |
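To make the idea concrete, here is a minimal sketch, in Python, of what a system prior on an AR(2) could look like: the share of the process variance located at business-cycle frequencies is computed from the AR(2) spectral density and given a Beta prior, and the resulting log-density would be added to the ordinary parameter priors. The 6-to-32-quarter band, the Beta(8, 2) shape, and all function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy import integrate, stats

def ar2_spectrum(phi1, phi2, sigma2, omega):
    """Spectral density of an AR(2): sigma2 / (2*pi*|1 - phi1 e^{-iw} - phi2 e^{-2iw}|^2)."""
    z = np.exp(-1j * omega)
    return sigma2 / (2 * np.pi * np.abs(1 - phi1 * z - phi2 * z**2) ** 2)

def business_cycle_share(phi1, phi2, sigma2=1.0, low_period=6, high_period=32):
    """Fraction of the AR(2) variance located at business-cycle frequencies
    (periods between 6 and 32 quarters -- an illustrative choice)."""
    total, _ = integrate.quad(lambda w: ar2_spectrum(phi1, phi2, sigma2, w), 0, np.pi)
    band, _ = integrate.quad(lambda w: ar2_spectrum(phi1, phi2, sigma2, w),
                             2 * np.pi / high_period, 2 * np.pi / low_period)
    return band / total

def log_system_prior(phi1, phi2):
    """System prior: a Beta(8, 2) density on the business-cycle variance share,
    i.e. a prior belief that most of the dynamics comes from that band."""
    share = business_cycle_share(phi1, phi2)
    return stats.beta(8, 2).logpdf(share)

# The overall log prior would add this term to marginal priors on (phi1, phi2, sigma2).
print(log_system_prior(1.3, -0.4))
```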
By: | Rodríguez Caballero, Carlos Vladimir; Ergemen, Yunus Emre |
Abstract: | A dynamic multilevel factor model with possible stochastic time trends is proposed. In the model, long-range dependence and short memory dynamics are allowed in global and regional common factors as well as model innovations. Estimation of global and regional common factors is performed on the prewhitened series, for which the prewhitening parameter is estimated semiparametrically from the cross-sectional and regional average of the observable series. Employing canonical correlation analysis and a sequential least-squares algorithm on the prewhitened series, the resulting multilevel factor estimates have a centered asymptotic normal distribution. Selection of the number of global and regional factors is also discussed. Estimates are found to have good small-sample performance via Monte Carlo simulations. The method is then applied to the Nord Pool electricity market for the analysis of price comovements among different regions within the power grid. The global factor is identified to be the system price, and fractional cointegration relationships are found between regional prices and the system price. |
Keywords: | Nord Pool power market; fractional cointegration; short memory; long-range dependence; multilevel factor
Date: | 2017–05 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:24614&r=ets |
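As a rough illustration of the two-level structure described above, the sketch below extracts a global factor from the pooled panel and region-specific factors from the within-region residuals using plain principal components. This is a deliberately simplified stand-in: it is not the authors' canonical-correlation/sequential least-squares estimator and it ignores the long-memory prewhitening step; all names and the toy data are invented for illustration.

```python
import numpy as np

def first_pc(X):
    """First principal component (factor scores, loadings) of a T x N panel via SVD."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return U[:, 0] * s[0], Vt[0]

def two_level_factors(panels):
    """panels: dict region -> (T x N_r) array. Returns global and regional factors."""
    X_all = np.hstack(list(panels.values()))
    g, _ = first_pc(X_all)                        # level 1: global factor from the pooled panel
    D = np.column_stack([np.ones_like(g), g])
    regional = {}
    for name, X in panels.items():
        # project out the global factor, then extract the region-specific factor
        beta = np.linalg.lstsq(D, X, rcond=None)[0]
        regional[name], _ = first_pc(X - D @ beta)
    return g, regional

# Toy panels driven by a common global factor plus noise
rng = np.random.default_rng(0)
T = 200
g_true = rng.standard_normal(T).cumsum() * 0.1
panels = {r: g_true[:, None] + rng.standard_normal((T, 10)) for r in ("NO", "SE")}
g_hat, reg_hat = two_level_factors(panels)
print(np.corrcoef(g_true, g_hat)[0, 1])           # sign is only identified up to a flip
```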
By: | Lillo Rodríguez, Rosa Elvira; Laniado Rodas, Henry; Cabana Garceran del Vall, Elisa |
Abstract: | A collection of methods for multivariate outlier detection based on a robust Mahalanobis distance is proposed. The procedure consists of different combinations of robust estimates of location and of the covariance matrix based on shrinkage. The performance of our proposal is illustrated in a simulation study through comparison with other techniques from the literature. The high correct classification rates and low false classification rates in the vast majority of cases, together with good computational times, show the merit of our proposal. The performance is also illustrated with a real-data example, and some conclusions are drawn.
Keywords: | robust covariance matrix; robust location; robust estimation; high-dimension; shrinkage estimator; robust Mahalanobis distance; outlier detection |
Date: | 2017–05 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:24613&r=ets |
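A minimal sketch of the general recipe described above: estimate a robust location and scatter, shrink the scatter toward a simple target, and flag observations whose robust Mahalanobis distance exceeds a chi-square cutoff. It uses scikit-learn's MinCovDet as the robust estimator and a linear shrinkage toward a scaled identity; this particular combination and the tuning constants are illustrative, not the authors' estimators.

```python
import numpy as np
from scipy.stats import chi2
from sklearn.covariance import MinCovDet

def robust_shrinkage_outliers(X, alpha=0.975, shrink=0.1):
    """Flag rows of X as outliers using a robust Mahalanobis distance.

    MCD gives a robust location and scatter; the scatter is then shrunk
    towards a scaled identity target (the shrinkage intensity is illustrative)."""
    mcd = MinCovDet().fit(X)
    mu, S = mcd.location_, mcd.covariance_
    target = np.eye(X.shape[1]) * np.trace(S) / X.shape[1]
    S_shrunk = (1 - shrink) * S + shrink * target
    d2 = np.einsum('ij,jk,ik->i', X - mu, np.linalg.inv(S_shrunk), X - mu)
    return d2 > chi2.ppf(alpha, df=X.shape[1]), d2

# Toy example: 5% of observations shifted away from the bulk
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5))
X[:25] += 4.0
flags, _ = robust_shrinkage_outliers(X)
print(flags[:25].mean(), flags[25:].mean())   # detection rate vs. false-alarm rate
```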
By: | Georgios Bampinas (Department of Economics, University of Macedonia); Konstantinos Ladopoulos (Citrix Systems Research & Development); Theodore Panagiotidis (Department of Economics, University of Macedonia) |
Abstract: | We employ 1440 stocks listed in the S&P Composite 1500 Index of the NYSE. Three benchmark GARCH models are estimated for the returns of each individual stock under three alternative distributions (Normal, t and GED). We provide summary statistics for all the GARCH coefficients derived from 11520 regressions. The EGARCH model with GED errors emerges as the preferred choice for the individual stocks in the S&P 1500 universe when non-negativity and stationarity constraints on the conditional variance are imposed. 57% of the constraint violations occur in the S&P small-cap stocks.
Keywords: | GARCH, GJR-GARCH, EGARCH, alternative distributions, volatility, time-series. |
JEL: | C22 |
Date: | 2017–05 |
URL: | http://d.repec.org/n?u=RePEc:mcd:mcddps:2017_04&r=ets |
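A minimal sketch of the estimation exercise for a single return series, assuming the Python `arch` package (the paper does not specify software): the three benchmark specifications are fitted under the three error distributions and compared by an information criterion. The loop over all S&P 1500 constituents and the non-negativity/stationarity checks are omitted, and the simulated returns are a placeholder.

```python
import numpy as np
from arch import arch_model

# Placeholder returns; in practice these would be one stock's percentage returns.
rng = np.random.default_rng(0)
returns = rng.standard_normal(1500)

specs = {
    "GARCH":     dict(vol="GARCH",  p=1, o=0, q=1),
    "GJR-GARCH": dict(vol="GARCH",  p=1, o=1, q=1),   # o=1 adds the asymmetry term
    "EGARCH":    dict(vol="EGARCH", p=1, o=1, q=1),
}
dists = ["normal", "t", "ged"]

results = {}
for name, spec in specs.items():
    for dist in dists:
        res = arch_model(returns, mean="Constant", dist=dist, **spec).fit(disp="off")
        results[(name, dist)] = res.bic            # compare specifications by BIC, say

best = min(results, key=results.get)
print(best, results[best])
```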
By: | James D. Hamilton |
Abstract: | Here's why. (1) The HP filter produces series with spurious dynamic relations that have no basis in the underlying data-generating process. (2) Filtered values at the end of the sample are very different from those in the middle, and are also characterized by spurious dynamics. (3) A statistical formalization of the problem typically produces values for the smoothing parameter vastly at odds with common practice, e.g., a value for λ far below 1600 for quarterly data. (4) There's a better alternative. A regression of the variable at date t+h on the four most recent values as of date t offers a robust approach to detrending that achieves all the objectives sought by users of the HP filter with none of its drawbacks. |
JEL: | C22 E32 E47 |
Date: | 2017–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:23429&r=ets |
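The regression alternative in point (4) is concrete enough to sketch: regress the series at date t+h on a constant and its four most recent values as of date t, and take the OLS residual as the cyclical component. The sketch below assumes h = 8 and p = 4 lags, the usual choices for quarterly data; the toy data are invented.

```python
import numpy as np

def hamilton_filter(y, h=8, p=4):
    """Regression-based detrending: regress y[t+h] on a constant and
    y[t], y[t-1], ..., y[t-p+1]; the OLS residual is the cyclical component."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    # usable rows are t = p-1, ..., T-h-1 (both the lead and all lags available)
    X = np.column_stack([np.ones(T - h - p + 1)] +
                        [y[p - 1 - j:T - h - j] for j in range(p)])
    target = y[p - 1 + h:]
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    trend = X @ beta
    cycle = target - trend
    return trend, cycle        # both aligned with dates p-1+h, ..., T-1

# Toy usage on a random walk with drift
rng = np.random.default_rng(0)
y = np.cumsum(0.5 + rng.standard_normal(300))
trend, cycle = hamilton_filter(y)
print(cycle.std())
```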
By: | Skrobotov, Anton (Russian Presidential Academy of National Economy and Public Administration (RANEPA)); Turuntseva, Marina (Russian Presidential Academy of National Economy and Public Administration (RANEPA)) |
Abstract: | Testing for a unit root in the data is of great importance for empirical analysis. Hardly any macroeconomic study does without testing whether a particular time series is stationary around a trend (trend stationary, TS) or stationary in first differences (difference stationary, DS). In the first case, when the series is stationary around a trend, it is modeled in levels. Otherwise, one needs either to pass to the first differences of the time series, if it is modeled separately, or to proceed to a cointegration analysis of several time series, each of which is non-stationary. The presence of cointegration makes it possible to give an economic justification for long-run relationships and for short-run adjustments toward the long-run equilibrium.
Date: | 2017–02 |
URL: | http://d.repec.org/n?u=RePEc:rnp:wpaper:021707&r=ets |
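A minimal sketch of the TS-versus-DS check described above, using statsmodels: an ADF test with a trend (null hypothesis: unit root) is complemented by a KPSS test around a trend (null hypothesis: trend stationarity), and the series is either modeled in levels or differenced (or passed to a cointegration analysis) accordingly. The 5% level and the specific decision rule are illustrative.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

def classify_ts_ds(y, alpha=0.05):
    """Rough TS/DS classification: ADF with trend (H0: unit root) plus
    KPSS around a trend (H0: trend stationarity)."""
    adf_p = adfuller(y, regression="ct")[1]
    kpss_p = kpss(y, regression="ct", nlags="auto")[1]
    if adf_p < alpha and kpss_p >= alpha:
        return "TS: model the series in levels (around a trend)"
    if adf_p >= alpha and kpss_p < alpha:
        return "DS: difference the series, or look for cointegration with other I(1) series"
    return "tests disagree or are inconclusive"

# Toy example: a trend-stationary series vs. a random walk with drift
rng = np.random.default_rng(0)
t = np.arange(300)
print(classify_ts_ds(0.1 * t + rng.standard_normal(300)))
print(classify_ts_ds(np.cumsum(0.1 + rng.standard_normal(300))))
```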
By: | Davide De Gaetano |
Abstract: | This paper proposes some weighting schemes to average forecasts across different estimation windows in order to account for structural changes in the unconditional variance of a GARCH(1,1) model. Each combination is obtained by averaging forecasts generated by recursively increasing an initial estimation window of a fixed number of observations v. Three different choices of the combination weights are proposed. In the first scheme, the forecast combination is obtained by using equal weights to average the individual forecasts; the second weighting method assigns heavier weights to forecasts that use more recent information; the third is a trimmed version of the equal-weight forecast combination in which a fixed fraction of the worst-performing forecasts is discarded. Simulation results show that forecast combinations with high values of v perform better than alternative schemes proposed in the literature. An application to real data confirms the simulation results.
Keywords: | Forecast combinations, Structural breaks, GARCH models. |
JEL: | C53 C58 G17 |
Date: | 2017–05 |
URL: | http://d.repec.org/n?u=RePEc:rtr:wpaper:0219&r=ets |
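A minimal sketch of the three weighting schemes, applied to a set of forecasts already produced from the different estimation windows: equal weights, weights that increase with the recency of the window's information, and a trimmed equal-weight average that drops the worst past performers. The GARCH(1,1) estimation that generates the individual forecasts is omitted, and the trimming fraction and toy numbers are illustrative.

```python
import numpy as np

def combine_forecasts(forecasts, scheme="equal", past_errors=None, trim=0.2):
    """forecasts: window-specific variance forecasts, ordered so that later
    entries come from windows using more recent information."""
    f = np.asarray(forecasts, dtype=float)
    m = len(f)
    if scheme == "equal":
        w = np.full(m, 1.0 / m)
    elif scheme == "recent":                    # heavier weight on more recent information
        w = np.arange(1, m + 1, dtype=float)
        w /= w.sum()
    elif scheme == "trimmed":                   # drop the fraction with worst past performance
        keep = np.argsort(past_errors)[: int(np.ceil((1 - trim) * m))]
        w = np.zeros(m)
        w[keep] = 1.0 / len(keep)
    else:
        raise ValueError(scheme)
    return float(w @ f)

# Toy usage with five window-specific forecasts and their past squared forecast errors
f = [1.2, 1.1, 1.4, 1.0, 0.9]
errs = [0.30, 0.10, 0.50, 0.05, 0.08]
for s in ("equal", "recent", "trimmed"):
    print(s, combine_forecasts(f, s, past_errors=errs))
```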
By: | Edward Herbst; Frank Schorfheide |
Abstract: | The accuracy of particle filters for nonlinear state-space models crucially depends on the proposal distribution that mutates time t-1 particle values into time t values. In the widely-used bootstrap particle filter, this distribution is generated by the state-transition equation. While straightforward to implement, the practical performance is often poor. We develop a self-tuning particle filter in which the proposal distribution is constructed adaptively through a sequence of Monte Carlo steps. Intuitively, we start from a measurement error distribution with an inflated variance, and then gradually reduce the variance to its nominal level in a sequence of tempering steps. We show that the filter generates an unbiased and consistent approximation of the likelihood function. Holding the run time fixed, our filter is substantially more accurate in two DSGE model applications than the bootstrap particle filter. |
JEL: | C11 C32 E32 |
Date: | 2017–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:23448&r=ets |
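A minimal, fixed-schedule sketch of one tempering step for a toy nonlinear state-space model (not a DSGE model): particles are propagated through the state transition, reweighted with a sequence of inflated-variance measurement densities, and resampled and mutated with a random-walk Metropolis step between stages. The adaptive choice of the tempering schedule and the likelihood-estimate bookkeeping of the paper are simplified; the model, schedule, and tuning constants are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model:  x_t = 0.9 x_{t-1} + w_t, w ~ N(0,1);  y_t = x_t^2/20 + v_t, v ~ N(0, sig_v^2)
sig_v = 0.5

def log_meas(y, x, phi):
    """Tempered measurement density N(y; x^2/20, sig_v^2/phi), normalising constant included."""
    var = sig_v**2 / phi
    return -0.5 * (np.log(2 * np.pi * var) + (y - x**2 / 20) ** 2 / var)

def tempered_step(x_prev, y, phis=(0.1, 0.3, 1.0), n_mh=2):
    """One time step of a fixed-schedule tempered particle filter (simplified sketch)."""
    M = len(x_prev)
    x = 0.9 * x_prev + rng.standard_normal(M)       # bootstrap draw from the state transition
    phi_old, loglik = 0.0, 0.0
    for phi in phis:
        # incremental weights: ratio of tempered likelihoods (full density at the first stage)
        logw = log_meas(y, x, phi) - (log_meas(y, x, phi_old) if phi_old > 0 else 0.0)
        w = np.exp(logw - logw.max())
        loglik += logw.max() + np.log(w.mean())     # log of the average incremental weight
        w /= w.sum()
        idx = rng.choice(M, size=M, p=w)            # resample
        x, x_prev_r = x[idx], x_prev[idx]
        for _ in range(n_mh):                       # mutate: RW-MH targeting p_phi(y|x) p(x|x_prev)
            prop = x + 0.5 * rng.standard_normal(M)
            logacc = (log_meas(y, prop, phi) - 0.5 * (prop - 0.9 * x_prev_r) ** 2
                      - log_meas(y, x, phi) + 0.5 * (x - 0.9 * x_prev_r) ** 2)
            x = np.where(np.log(rng.random(M)) < logacc, prop, x)
        x_prev, phi_old = x_prev_r, phi
    return x, loglik

particles = rng.standard_normal(1000)
particles, ll = tempered_step(particles, y=0.8)
print(ll)
```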
By: | Ralph Rudd (University of Cape Town); Thomas A. McWalter (University of Johannesburg); Jorg Kienitz (Bergische Universität Wuppertal); Eckhard Platen (Finance Discipline Group, UTS Business School, University of Technology, Sydney)
Abstract: | Recursive Marginal Quantization (RMQ) allows fast approximation of solutions to stochastic differential equations in one dimension. When applied to two-factor models, RMQ is inefficient because the optimization problem is usually solved using stochastic methods, e.g., Lloyd's algorithm or Competitive Learning Vector Quantization. In this paper, a new algorithm is proposed that allows RMQ to be applied to two-factor stochastic volatility models while retaining the efficiency of gradient-descent techniques. By marginalizing over potential realizations of the volatility process, a significant decrease in computational effort is achieved when compared to current quantization methods. Additionally, techniques for modelling the correct zero-boundary behaviour are used to allow the new algorithm to be applied to cases where the previous methods would fail. The proposed technique is illustrated for European options on the Heston and Stein-Stein models, while a more thorough application is considered in the case of the popular SABR model, where various exotic options are also priced.
Date: | 2017–05–01 |
URL: | http://d.repec.org/n?u=RePEc:uts:rpaper:382&r=ets |
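For orientation, the sketch below shows the one-dimensional building block that RMQ iterates: one recursive step in which the previous-step quantizer induces a Gaussian mixture for the next Euler step, and a plain Lloyd fixed-point iteration quantizes that mixture. The paper's contribution, the efficient gradient-based treatment of two-factor stochastic volatility models and the zero-boundary handling, is not reproduced; the SDE coefficients and grid sizes are illustrative.

```python
import numpy as np
from scipy.stats import norm

def lloyd_update(grid, mus, sigs, probs):
    """One Lloyd iteration for quantizing the 1-D Gaussian mixture
    sum_j probs[j] * N(mus[j], sigs[j]^2) on the current grid."""
    edges = np.concatenate(([-np.inf], (grid[:-1] + grid[1:]) / 2, [np.inf]))
    a = (edges[:-1, None] - mus) / sigs                       # standardised cell edges, (N, J)
    b = (edges[1:, None] - mus) / sigs
    cell_prob = probs * (norm.cdf(b) - norm.cdf(a))           # P(cell i, component j)
    cell_mean = probs * (mus * (norm.cdf(b) - norm.cdf(a))
                         - sigs * (norm.pdf(b) - norm.pdf(a)))  # E[X; cell i, component j]
    p = cell_prob.sum(axis=1)
    return cell_mean.sum(axis=1) / p, p                       # new grid = cell centroids

# One recursive step for dX = -0.5*X dt + 0.3 dW (illustrative coefficients):
# the previous quantizer (grid0, p0) induces a Gaussian mixture for X_{k+1}.
dt = 0.1
grid0 = np.linspace(-1, 1, 20)
p0 = np.full(20, 1 / 20)
mus = grid0 - 0.5 * grid0 * dt                    # Euler conditional means
sigs = np.full_like(grid0, 0.3 * np.sqrt(dt))     # Euler conditional std devs
grid = np.copy(mus)                               # initial guess for the new grid
for _ in range(50):
    grid, p = lloyd_update(grid, mus, sigs, p0)
print(grid[:5], p[:5])
```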