on Econometric Time Series |
By: | Maria-Helena A. Dias; Joilson Dias; Charles L. Evans |
Abstract: | The objective of this paper is to present an alternative technique for smoothing time series, evaluated through Monte Carlo simulations. The technique allows a series to contain more than one structural break, arising from movements in the trend coefficients or in the intercept. The Hodrick-Prescott (HP) filter does not identify such possible breaks when extracting the trend from a series in order to analyze its cyclical component. If the series is relatively stable, this limitation may have no relevant implications. For relatively unstable economies, however, trend movements may interfere with the specification of the cyclical component, and Hodrick-Prescott smoothing could lead empiricists to overly simplistic representations of economic cycles. In this context, we present an empirical methodology that allows structural breaks at any point in time, in the coefficients or in the intercept. We apply this recursive technique to different models with variations in trend, coming from coefficients and from intercepts, using series simulated by Monte Carlo. Finally, we compare the results of both techniques on Brazilian GDP. (A minimal illustrative sketch follows this entry.) |
JEL: | E32 C22 |
Date: | 2004 |
URL: | http://d.repec.org/n?u=RePEc:anp:en2004:104&r=ets |
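A minimal sketch of the contrast the abstract draws (not the authors' recursive technique): simulate a series whose trend slope breaks at a known date, then compare the cycle implied by the HP filter with the cycle around a piecewise-linear trend that allows the break. The sample size, break date and noise level below are invented for illustration; the HP filter is taken from statsmodels.

    import numpy as np
    from statsmodels.tsa.filters.hp_filter import hpfilter

    rng = np.random.default_rng(0)
    T, brk = 200, 120                                  # hypothetical sample size and break date
    t = np.arange(T)
    trend = 0.5 * t + 0.8 * np.maximum(t - brk, 0)     # slope increases after the break
    y = trend + rng.normal(scale=2.0, size=T)          # noisy "cyclical" component around the trend

    # HP filter with the conventional smoothing parameter for quarterly data
    cycle_hp, trend_hp = hpfilter(y, lamb=1600)

    # Piecewise-linear trend that allows a slope break at the (here, known) date
    X = np.column_stack([np.ones(T), t, np.maximum(t - brk, 0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    cycle_break = y - X @ beta

    print("cycle std, HP filter        :", round(float(np.std(cycle_hp)), 2))
    print("cycle std, trend with break :", round(float(np.std(cycle_break)), 2))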
By: | Smets,F.; Wouters,R. (Nationale Bank van Belgie) |
Date: | 2004 |
URL: | http://d.repec.org/n?u=RePEc:att:belgnw:200460&r=ets |
By: | Smets,F.; Wouters,R. (Nationale Bank van Belgie) |
Date: | 2004 |
URL: | http://d.repec.org/n?u=RePEc:att:belgnw:200461&r=ets |
By: | M. Hashem Pesaran |
Abstract: | This paper presents a new approach to estimation and inference in panel data models with a multifactor error structure, where the unobserved common factors are (possibly) correlated with exogenously given individual-specific regressors and the factor loadings differ over the cross-section units. The basic idea behind the proposed estimation procedure is to filter the individual-specific regressors by means of (weighted) cross-section aggregates such that, asymptotically as the cross-section dimension (N) tends to infinity, the differential effects of the unobserved common factors are eliminated. The estimation procedure has the advantage that it can be computed by OLS applied to an auxiliary regression in which the observed regressors are augmented by (weighted) cross-sectional averages of the dependent variable and the individual-specific regressors. Two different but related problems are addressed: one concerns the coefficients of the individual-specific regressors, and the other focuses on the mean of the individual coefficients, assumed random. In both cases appropriate estimators, referred to as common correlated effects (CCE) estimators, are proposed, and their asymptotic distributions as N → ∞ with T (the time-series dimension) fixed, or as N and T → ∞ (jointly), are derived under different regularity conditions. One important feature of the proposed CCE mean group (CCEMG) estimator is its invariance to the (unknown but fixed) number of unobserved common factors as N and T → ∞ (jointly). The small sample properties of the various pooled estimators are investigated by Monte Carlo experiments, which confirm the theoretical derivations and show that the pooled estimators have generally satisfactory small sample properties even for relatively small values of N and T. (A minimal illustrative sketch follows this entry.) |
Keywords: | cross section dependence, large panels, common correlated effects, heterogeneity, estimation and inference |
JEL: | C12 C13 C33 |
Date: | 2004 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_1331&r=ets |
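A minimal sketch of the common correlated effects idea described above, under an invented data-generating process: each unit's regression is augmented with cross-section averages of the dependent variable and the regressor, and the unit-by-unit slope estimates are averaged (the CCE mean group estimator). All parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)
    N, T = 50, 40
    f = rng.normal(size=T)                          # unobserved common factor
    loadings = rng.normal(size=N)                   # heterogeneous factor loadings
    x = rng.normal(size=(N, T)) + 0.5 * f           # regressor correlated with the factor
    y = 1.0 + 2.0 * x + loadings[:, None] * f + rng.normal(size=(N, T))

    xbar, ybar = x.mean(axis=0), y.mean(axis=0)     # cross-section averages, period by period

    slopes = []
    for i in range(N):
        Z = np.column_stack([np.ones(T), x[i], ybar, xbar])   # CCE augmentation
        b, *_ = np.linalg.lstsq(Z, y[i], rcond=None)
        slopes.append(b[1])

    print("CCE mean group slope estimate:", round(float(np.mean(slopes)), 3))   # close to 2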
By: | Arie ten Cate |
Abstract: | This paper presents some suggestions for the specification of dynamic models. These suggestions are based on the supposed continuous-time nature of most economic processes. In particular, the partial adjustment model --or Koyck lag model-- is discussed. The refinement of this model is derived from the continuous-time econometric literature. We find three alternative formulas for this refinement, depending on the particular econometric literature which is used. Two of these formulas agree with an intuitive example. In passing, it is shown that the continuous-time models of Sims and Bergstrom are closely related. The inverse of Bergstrom's approximate analog is also introduced, making use of engineering mathematics. (A schematic statement of the model follows this entry.) |
Keywords: | dynamics; continuous time; econometrics; Koyck; Bergstrom |
JEL: | C22 C51 |
Date: | 2004–11 |
URL: | http://d.repec.org/n?u=RePEc:cpb:discus:41&r=ets |
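A schematic restatement, in LaTeX, of the discrete-time partial adjustment (Koyck) model and a textbook continuous-time counterpart. The paper's three alternative refinement formulas (drawing on Sims and Bergstrom) are not reproduced here; the discretization shown assumes the target is held constant over each unit interval.

    % Discrete-time partial adjustment (Koyck) model
    y_t - y_{t-1} = \lambda\,(y_t^{*} - y_{t-1}), \qquad y_t^{*} = \beta x_t
    \quad\Longrightarrow\quad
    y_t = (1-\lambda)\, y_{t-1} + \lambda \beta x_t .

    % Continuous-time analogue: first-order adjustment toward the target y^{*}(t)
    \frac{dy(t)}{dt} = a\,\bigl(y^{*}(t) - y(t)\bigr), \qquad a > 0 .

    % If y^{*} is held constant over a unit observation interval, exact
    % discretization of the continuous-time equation gives
    y_t = e^{-a}\, y_{t-1} + \bigl(1 - e^{-a}\bigr)\,\beta x_t ,
    \qquad \text{so that } \lambda = 1 - e^{-a} .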
By: | Alberto Mora-Galan; Ana Perez; Esther Ruiz |
Abstract: | It has often been empirically observed that the sample autocorrelations of absolute financial returns are larger than those of squared returns. This property, known as the Taylor effect, is analysed in this paper within the Stochastic Volatility (SV) model framework. We show that the stationary autoregressive SV model is able to generate this property for realistic parameter specifications. We also show that the Taylor effect is not a sampling phenomenon due to estimation biases in the sample autocorrelations. Therefore, financial models that aim to explain the behaviour of financial returns should take this property into account. (A minimal illustrative sketch follows this entry.) |
Date: | 2004–11 |
URL: | http://d.repec.org/n?u=RePEc:cte:wsrepe:ws046315&r=ets |
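A minimal simulation sketch of the property discussed above: a stationary autoregressive SV model with persistent log-volatility typically produces a larger first-order sample autocorrelation for absolute returns than for squared returns. Parameter values are illustrative, not taken from the paper.

    import numpy as np

    def acf1(x):
        x = x - x.mean()
        return float((x[1:] * x[:-1]).sum() / (x * x).sum())

    rng = np.random.default_rng(2)
    T, phi, sig_eta = 50_000, 0.98, 0.2
    h = np.zeros(T)                                   # log-volatility, Gaussian AR(1)
    for t in range(1, T):
        h[t] = phi * h[t - 1] + sig_eta * rng.normal()
    r = np.exp(h / 2) * rng.normal(size=T)            # returns

    print("ACF(1) of |r| :", round(acf1(np.abs(r)), 3))
    print("ACF(1) of r^2 :", round(acf1(r ** 2), 3))  # typically smaller: the Taylor effect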
By: | Lemmens, A.; Croux, C.; Dekimpe, M.G. (Erasmus Research Institute of Management (ERIM), Erasmus University Rotterdam) |
Abstract: | We develop a bivariate spectral Granger-causality test that can be applied at each individual frequency of the spectrum. The spectral approach to Granger causality has the distinct advantage that it allows one to disentangle (potentially) different Granger-causality relationships over different time horizons. We illustrate the usefulness of the proposed approach in the context of the predictive value of European production expectation surveys. (A related illustrative sketch follows this entry.) |
Keywords: | Business Surveys; Granger Causality; Production Expectations; Spectral Analysis |
Date: | 2004–12–01 |
URL: | http://d.repec.org/n?u=RePEc:dgr:eureri:30001959&r=ets |
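The authors' spectral test is not reproduced here. As a related illustration only, the sketch below implements a simple regression-based check of Granger causality at a single frequency in the spirit of Breitung and Candelon: non-causality from x to y at frequency w amounts to two linear restrictions on the coefficients of lagged x. Data, lag length and frequencies are invented.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(3)
    T, p = 500, 4
    x = rng.normal(size=T + p)
    y = np.zeros(T + p)
    for t in range(p, T + p):
        y[t] = 0.4 * y[t - 1] + 0.5 * x[t - 2] + rng.normal()   # x leads y by two periods

    # Regress y_t on a constant, p lags of y and p lags of x
    Y = y[p:]
    X = np.column_stack(
        [np.ones(T)]
        + [y[p - k:T + p - k] for k in range(1, p + 1)]
        + [x[p - k:T + p - k] for k in range(1, p + 1)]
    )
    res = sm.OLS(Y, X).fit()

    def pval_no_causality(w):
        # Non-causality at frequency w: sum_k b_k cos(kw) = 0 and sum_k b_k sin(kw) = 0,
        # where b_k is the coefficient on the k-th lag of x (columns p+1 .. 2p).
        R = np.zeros((2, X.shape[1]))
        for k in range(1, p + 1):
            R[0, p + k] = np.cos(k * w)
            R[1, p + k] = np.sin(k * w)
        return float(res.f_test(R).pvalue)

    for w in (0.5, 1.5, 2.5):
        print(f"frequency {w}: p-value of 'no Granger causality' = {pval_no_causality(w):.3f}")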
By: | Akifumi Isogai; Satoru Kanoh; Toshifumi Tokunaga |
Abstract: | This paper attempts to extend the Markov-switching model with time-varying transition probabilities (TVTP). The transition probabilities in the conventional TVTP model are functions of exogenous variables that are time-dependent but have constant coefficients. In this paper the coefficient parameters that express the sensitivities to the exogenous variables are also allowed to vary with time. Using data on Japanese monthly stock returns, it is shown that the explanatory power of the extended model is superior to that of conventional models. (A minimal illustrative sketch follows this entry.) |
Keywords: | Gibbs sampling, Kalman filter, Marginal likelihood, Market dynamics, Time-varying sensitivity |
Date: | 2004–11 |
URL: | http://d.repec.org/n?u=RePEc:hst:hstdps:d04-43&r=ets |
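A minimal sketch of the Hamilton filter with conventional time-varying transition probabilities, the setup the paper extends. The paper's extension, letting the logistic coefficients themselves drift over time, is not implemented here; all data and parameter values are illustrative.

    import numpy as np
    from scipy.stats import norm

    def tvtp_filter(r, z, mu, sigma, a, b):
        """Hamilton filter for a 2-state switching mean/variance model in which the
        probabilities of staying in each state are logistic functions of z_t."""
        T = len(r)
        filtered = np.zeros((T, 2))
        xi = np.array([0.5, 0.5])                       # initial state probabilities
        loglik = 0.0
        for t in range(T):
            p00 = 1.0 / (1.0 + np.exp(-(a[0] + b[0] * z[t])))   # P(stay in state 0)
            p11 = 1.0 / (1.0 + np.exp(-(a[1] + b[1] * z[t])))   # P(stay in state 1)
            P = np.array([[p00, 1.0 - p00], [1.0 - p11, p11]])  # rows: from, cols: to
            prior = xi @ P                                      # one-step-ahead state probabilities
            dens = norm.pdf(r[t], loc=mu, scale=sigma)          # state-conditional densities
            joint = prior * dens
            lik_t = joint.sum()
            loglik += np.log(lik_t)
            xi = joint / lik_t                                  # filtered state probabilities
            filtered[t] = xi
        return filtered, loglik

    rng = np.random.default_rng(4)
    T = 300
    z = rng.normal(size=T)                                      # exogenous driver of the probabilities
    r = rng.normal(size=T) * np.where(rng.random(T) < 0.3, 3.0, 1.0)
    probs, ll = tvtp_filter(r, z,
                            mu=np.array([0.0, 0.0]), sigma=np.array([1.0, 3.0]),
                            a=np.array([2.0, 1.0]), b=np.array([0.5, -0.5]))
    print("log-likelihood:", round(ll, 2), "| last filtered probabilities:", probs[-1].round(3))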
By: | Xibin Zhang; Maxwell L. King |
Abstract: | This paper presents a Markov chain Monte Carlo (MCMC) algorithm to estimate parameters and latent stochastic processes in the asymmetric stochastic volatility (SV) model, in which the Box-Cox transformation of the squared volatility follows an autoregressive Gaussian distribution and the marginal density of asset returns has heavy tails. To test for the significance of the Box-Cox transformation parameter, we present the likelihood ratio statistic, for which the likelihood functions can be approximated using a particle filter and a Monte Carlo kernel likelihood. When applying the heavy-tailed asymmetric Box-Cox SV model and the proposed sampling algorithm to continuously compounded daily returns of the Australian stock index, we find significant empirical evidence supporting the Box-Cox transformation of the squared volatility against the alternative model involving a logarithmic transformation. (A minimal illustrative sketch follows this entry.) |
Keywords: | Leverage effect; Likelihood ratio test; Markov Chain Monte Carlo; Monte Carlo kernel likelihood; Particle filter |
JEL: | C12 C15 C52 |
Date: | 2004–11 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2004-26&r=ets |
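A minimal simulation sketch of a Box-Cox SV model with leverage, using Gaussian errors only; the paper's heavy-tailed specification and its MCMC and particle-filter machinery are not reproduced. The Box-Cox transform of the squared volatility follows a Gaussian AR(1), and the return and next-period volatility innovations are negatively correlated. Parameter values are illustrative.

    import numpy as np

    rng = np.random.default_rng(5)
    T, phi, sig_eta, delta, rho = 2000, 0.98, 0.1, 0.1, -0.4

    h = np.zeros(T)        # h_t = ((sigma_t^2)^delta - 1) / delta  (Box-Cox; log-SV as delta -> 0)
    r = np.zeros(T)
    for t in range(T - 1):
        # correlated innovations: eps drives the return, eta the next-period volatility
        eps, eta = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]])
        sigma2 = np.maximum(1.0 + delta * h[t], 1e-8) ** (1.0 / delta)   # invert the Box-Cox transform
        r[t] = np.sqrt(sigma2) * eps
        h[t + 1] = phi * h[t] + sig_eta * eta

    # Leverage: negative correlation between today's return and the change in (transformed) volatility
    dh = np.diff(h)
    print("corr(r_t, h_{t+1} - h_t):", round(float(np.corrcoef(r[:-1], dh)[0, 1]), 3))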
By: | Yasuhiro Omori (University of Tokyo); Siddhartha Chib (Washington University); Neil Shephard (Nuffield College, University of Oxford, UK); Jouchi Nakajima (University of Tokyo) |
Abstract: | Kim, Shephard and Chib (1998) provided a Bayesian analysis of stochastic volatility models based on a very fast and reliable Markov chain Monte Carlo (MCMC) algorithm. Their method ruled out the leverage effect, which limited its scope for applications. Despite this, their basic method has been used extensively in the financial economics literature and, more recently, in macroeconometrics. In this paper we show how to overcome this limitation so that the essence of the Kim, Shephard and Chib (1998) approach can be used to deal with the leverage effect, greatly extending the applicability of the method. Several illustrative examples are provided. (A schematic statement of the model follows this entry.) |
Keywords: | Leverage effect, Markov chain Monte Carlo, Mixture sampler, Stochastic volatility, Stock returns. |
Date: | 2004–08–22 |
URL: | http://d.repec.org/n?u=RePEc:nuf:econwp:0419&r=ets |
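A schematic statement, in LaTeX, of the leverage-effect SV model and the linearization that underlies mixture-sampler approaches of the Kim-Shephard-Chib type; the specific mixture approximation developed in the paper is not reproduced here.

    % Asymmetric (leverage) stochastic volatility model
    y_t = \varepsilon_t \exp(h_t/2), \qquad
    h_{t+1} = \mu + \phi\,(h_t - \mu) + \eta_t, \qquad
    \begin{pmatrix} \varepsilon_t \\ \eta_t \end{pmatrix}
    \sim N\!\left( \mathbf{0},
    \begin{pmatrix} 1 & \rho\sigma_\eta \\ \rho\sigma_\eta & \sigma_\eta^2 \end{pmatrix} \right),
    \qquad \rho < 0 \ \text{(leverage)} .

    % Linearized measurement equation used by mixture samplers:
    y_t^{*} = \log y_t^{2} = h_t + \log \varepsilon_t^{2},

    % with the distribution of log(eps_t^2) (jointly with the sign of eps_t when rho
    % is nonzero) approximated by a finite mixture of normals, so that linear Gaussian
    % state-space simulation smoothers can be applied.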
By: | Siddhartha Chib (Olin School of Business, Washington University); Michael K Pitt (University of Warwick); Neil Shephard (Nuffield College, University of Oxford, UK) |
Abstract: | This paper provides methods for carrying out likelihood-based inference for diffusion-driven models, for example discretely observed multivariate diffusions, continuous-time stochastic volatility models and counting process models. The diffusions can potentially be non-stationary. Although our methods are sampling-based, making use of Markov chain Monte Carlo methods to sample the posterior distribution of the relevant unknowns, our general strategies and details differ from previous work along these lines. The methods we develop are simple to implement and simulation-efficient. Importantly, unlike previous methods, the performance of our technique is not worsened, and in fact improves, as the degree of latent augmentation is increased to reduce the bias of the Euler approximation. In addition, our method is not subject to a degeneracy that afflicts previous techniques when the degree of latent augmentation is increased. We also discuss issues of model choice, model checking and filtering. The techniques and ideas are applied to both simulated and real data. (A minimal illustrative sketch follows this entry.) |
Keywords: | Bayes estimation, Brownian bridge, Non-linear diffusion, Euler approximation, Markov chain Monte Carlo, Metropolis-Hastings algorithm, Missing data, Simulation, Stochastic differential equation. |
Date: | 2004–08–22 |
URL: | http://d.repec.org/n?u=RePEc:nuf:econwp:0420&r=ets |
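A minimal sketch of the Euler approximation with intermediate (latent) points between observation times, the ingredient whose refinement the abstract discusses; the paper's MCMC updates for the latent path are not shown. The process below is an illustrative Ornstein-Uhlenbeck diffusion, for which the exact stationary variance is known, so the effect of finer augmentation on the Euler bias is visible.

    import numpy as np

    def euler_endpoint(x0, drift, diffusion, delta, n_obs, m, rng):
        """Euler scheme over n_obs observation intervals of length delta,
        each split into m sub-steps (m - 1 latent augmentation points)."""
        dt = delta / m
        x = x0
        for _ in range(n_obs * m):
            x = x + drift(x) * dt + diffusion(x) * np.sqrt(dt) * rng.normal()
        return x

    kappa, theta, sigma = 2.0, 1.0, 0.3                 # illustrative OU parameters
    drift = lambda x: kappa * (theta - x)
    diffusion = lambda x: sigma

    rng = np.random.default_rng(6)
    exact_var = sigma ** 2 / (2 * kappa)                # stationary variance of the exact OU process
    for m in (1, 5, 50):
        end = np.array([euler_endpoint(theta, drift, diffusion, delta=1.0,
                                       n_obs=20, m=m, rng=rng) for _ in range(1000)])
        print(f"m = {m:3d}: Euler sample variance = {end.var():.4f}   (exact = {exact_var:.4f})")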
By: | Ole Barndorff-Nielsen (University of Aarhus); Neil Shephard (Nuffield College, University of Oxford, UK) |
Abstract: | In this brief note we review some of our recent results on the use of high-frequency financial data to estimate objects such as integrated variance in stochastic volatility models. Interesting issues include multipower variation, jumps and market microstructure effects. (A minimal illustrative sketch follows this entry.) |
Date: | 2004–11–18 |
URL: | http://d.repec.org/n?u=RePEc:nuf:econwp:0430&r=ets |
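A minimal sketch of two of the objects mentioned above: realized variance and bipower variation computed from intraday returns, here simulated from a toy volatility path with a single jump. The scaling constant is E|Z| for standard normal Z; all other numbers are illustrative.

    import numpy as np

    rng = np.random.default_rng(7)
    n = 288                                     # e.g. 5-minute returns over one trading day
    sigma = 0.01 * (1 + 0.5 * np.sin(np.linspace(0, np.pi, n)))   # smooth intraday volatility path
    r = sigma * rng.normal(size=n) / np.sqrt(n)                   # diffusive returns
    r[150] += 0.02                                                # one jump

    mu1 = np.sqrt(2 / np.pi)                    # E|Z| for standard normal Z
    rv = float(np.sum(r ** 2))                                        # realized variance
    bpv = float(np.sum(np.abs(r[1:]) * np.abs(r[:-1])) / mu1 ** 2)    # realized bipower variation
    iv = float(np.sum(sigma ** 2) / n)                                # integrated variance of the diffusive part

    print(f"integrated variance {iv:.6f}   RV {rv:.6f}   BPV {bpv:.6f}")
    # RV picks up the squared jump; BPV stays close to the integrated variance.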
By: | G. MALENGIER; L. POZZI |
Abstract: | We examine the Ricardian Equivalence proposition for a panel of OECD countries in the 1980s and 1990s by estimating a nonlinear dynamic consumption function. We estimate this function with the Generalized Method of Moments (GMM), using moment conditions that allow us to exploit information from the levels of the variables included in the consumption function. To examine the performance of this nonlinear GMM estimator and to obtain small sample critical values for the test statistics, we apply both one-level and two-level bootstraps. Ricardian Equivalence is rejected, since we find a significant number of current-income consumers and since permanent-income consumers seem to consume less in each period than would be expected under certainty equivalence. |
Keywords: | Ricardian Equivalence, rule-of-thumb, precaution, panel data, GMM, bootstrap |
JEL: | E21 E62 C33 |
Date: | 2004–11 |
URL: | http://d.repec.org/n?u=RePEc:rug:rugwps:04/274&r=ets |
By: | Catalin Starica (Chalmers University of Technology, Göteborg, Sweden) |
Abstract: | This paper investigates the relevance of the stationary, conditional, parametric ARCH modeling paradigm, as embodied by the GARCH(1,1) process, for describing and forecasting the dynamics of returns of the Standard & Poor's 500 (S&P 500) stock market index. A detailed analysis of the series of S&P 500 returns featured in Section 3.2 of the Advanced Information note on the Bank of Sweden Prize in Economic Sciences in Memory of Alfred Nobel reveals that, during the period under discussion, there were no (statistically significant) differences between GARCH(1,1) modeling and a simple non-stationary, non-parametric regression approach to next-day volatility forecasting. A second finding is that the GARCH(1,1) model severely over-estimated the unconditional variance of returns during the period under study. For example, the annualized implied GARCH(1,1) unconditional standard deviation of the sample is 35% while the sample standard deviation estimate is a mere 19%. Over-estimation of the unconditional variance leads to poor volatility forecasts during the period under discussion, with the MSE of the GARCH(1,1) 1-year-ahead volatility forecast more than 4 times bigger than the MSE of a forecast based on historical volatility. We test and reject the hypothesis that a GARCH(1,1) process is the true data generating process of the longer sample of returns of the S&P 500 stock market index between March 4, 1957 and October 9, 2003. We then investigate the alternative use of the GARCH(1,1) process as a local, stationary approximation of the data and find that the GARCH(1,1) model fails during significantly long periods to provide a good local description of the time series of returns on the S&P 500 and Dow Jones Industrial Average indexes. Since the estimated coefficients of the GARCH model change significantly through time, it is not clear how the GARCH(1,1) model can be used for volatility forecasting over longer horizons. A comparison between the GARCH(1,1) volatility forecasts and a simple approach based on historical volatility questions the relevance of the GARCH(1,1) dynamics for longer-horizon volatility forecasting for both the S&P 500 and Dow Jones Industrial Average indexes. (A minimal illustrative sketch follows this entry.) |
Keywords: | stock returns, volatility, GARCH(1,1), non-stationarities, unconditional time-varying volatility, IGARCH effect, longer-horizon forecasts |
JEL: | C14 C32 |
Date: | 2004–11–22 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0411015&r=ets |
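A minimal sketch of the comparison the abstract emphasizes: a Gaussian quasi-maximum-likelihood fit of a GARCH(1,1) model, with the implied annualized unconditional standard deviation, sqrt(omega / (1 - alpha - beta)), set against the plain sample standard deviation. Simulated returns stand in for the S&P 500 data used in the paper; the estimation code is a bare-bones illustration, not the author's procedure.

    import numpy as np
    from scipy.optimize import minimize

    def neg_loglik(params, r):
        omega, alpha, beta = params
        if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
            return 1e10                                   # crude penalty outside the admissible region
        h = np.empty_like(r)
        h[0] = r.var()
        for t in range(1, len(r)):
            h[t] = omega + alpha * r[t - 1] ** 2 + beta * h[t - 1]
        return 0.5 * np.sum(np.log(h) + r ** 2 / h)

    # Simulated daily GARCH(1,1) returns (illustrative parameter values)
    rng = np.random.default_rng(8)
    T, omega0, alpha0, beta0 = 4000, 2e-6, 0.08, 0.90
    r, h = np.zeros(T), np.zeros(T)
    h[0] = omega0 / (1 - alpha0 - beta0)
    r[0] = np.sqrt(h[0]) * rng.normal()
    for t in range(1, T):
        h[t] = omega0 + alpha0 * r[t - 1] ** 2 + beta0 * h[t - 1]
        r[t] = np.sqrt(h[t]) * rng.normal()

    res = minimize(neg_loglik, x0=np.array([1e-6, 0.05, 0.90]), args=(r,), method="Nelder-Mead")
    omega, alpha, beta = res.x
    print("implied unconditional sd, annualized:", round(float(np.sqrt(omega / (1 - alpha - beta) * 252)), 3))
    print("sample sd, annualized               :", round(float(r.std() * np.sqrt(252)), 3))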
By: | Jawadi Fredj (Université de Paris10-Nanterre MODEM-CNRS); Koubaa Yousra (Université de Paris10-Nanterre MODEM-CNRS) |
Abstract: | The aim of this paper is to study the efficient capital market hypothesis using recent developments in nonlinear econometrics. In this context, we estimate a Smooth Transition Error Correction Model (STECM). We introduce the Dow Jones index as an explanatory variable for the dynamics of the other stock indexes. The error correction term takes into account the structural changes that occurred progressively in both the endogenous series and the Dow Jones series. We find that the Smooth Transition Error Correction Model, for which the adjustment dynamics is of the ESTAR type, is more adequate than the linear ECM for representing the adjustment of stock prices to the long-term equilibrium price. Estimation results reveal the nonlinearity inherent in the adjustment process. In particular, we note that the adjustment is not continuous and that the speed of convergence toward the equilibrium price is not constant but rather a function of the size of the disequilibrium. (A schematic statement of the model follows this entry.) |
Keywords: | Efficiency, Regime-Switching Models, Threshold Cointegration, STECM. |
JEL: | C1 C2 C3 C4 C5 C8 |
Date: | 2004–12–02 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpem:0412001&r=ets |
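A schematic statement, in LaTeX, of a generic STECM with an ESTAR transition function of the kind described above; the exact specification estimated in the paper (lag structure, transition variable, indexes) is not reproduced.

    \Delta p_t = \alpha_1\, z_{t-1}
               + \alpha_2\, z_{t-1}\, G(z_{t-d}; \gamma, c)
               + \sum_{i=1}^{k} \phi_i\, \Delta p_{t-i} + \varepsilon_t,
    \qquad
    G(z; \gamma, c) = 1 - \exp\!\bigl(-\gamma\,(z - c)^2\bigr), \quad \gamma > 0,

    % where z_t is the error-correction term (the deviation from the long-run
    % equilibrium price). Small deviations give G close to 0 and weak adjustment;
    % large deviations give G close to 1 and adjustment at speed alpha_1 + alpha_2,
    % so the speed of convergence depends on the size of the disequilibrium.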
By: | Cornelis A. Los |
Abstract: | The Value-at-Risk (VaR) measure is based on only the second moment of the distribution of rates of return. It is an insufficient risk performance measure, since it ignores both the higher moments of the pricing distributions, such as skewness and kurtosis, and all the fractional moments resulting from the long-term dependencies (long memory) of dynamic market pricing. Not coincidentally, the VaR methodology also devotes insufficient attention to truly extreme financial events, i.e., those that are catastrophic and that cluster because of this long memory. Since the usual stationarity and i.i.d. assumptions of classical asset returns theory are not satisfied in reality, more attention should be paid to measuring the degree of dependence in order to determine the true risks to which any investment portfolio is exposed: the return distributions are time-varying, and skewness and kurtosis occur and change over time. Conventional mean-variance diversification does not apply when the tails of the return distributions are too fat, i.e., when many more extreme events occur than under normality. Regrettably, Extreme Value Theory is also not empirically valid, because it is based on the uncorroborated i.i.d. assumption. (A minimal illustrative sketch follows this entry.) |
Keywords: | Long memory, Value at Risk, Extreme Value Theory, Portfolio Management, Degrees of Persistence |
JEL: | C33 G13 G14 G18 G19 G24 |
Date: | 2004–12–08 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpfi:0412014&r=ets |
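A minimal sketch of the higher-moments point made above: a Cornish-Fisher (modified) VaR that uses skewness and excess kurtosis can differ materially from the Gaussian VaR based on the variance alone. The long-memory part of the critique is not illustrated; the pseudo-return series is invented.

    import numpy as np
    from scipy.stats import norm, skew, kurtosis

    rng = np.random.default_rng(9)
    # Skewed, fat-tailed pseudo-returns (invented, not real data)
    r = 0.0005 + 0.01 * rng.standard_t(df=4, size=5000) - 0.004 * rng.exponential(size=5000)

    alpha = 0.01
    z = norm.ppf(alpha)
    s, k = skew(r), kurtosis(r)                 # kurtosis() returns EXCESS kurtosis
    z_cf = (z + (z ** 2 - 1) * s / 6
              + (z ** 3 - 3 * z) * k / 24
              - (2 * z ** 3 - 5 * z) * s ** 2 / 36)   # Cornish-Fisher adjusted quantile

    var_gauss = -(r.mean() + z * r.std())
    var_cf = -(r.mean() + z_cf * r.std())
    print(f"1% Gaussian VaR       : {var_gauss:.4f}")
    print(f"1% Cornish-Fisher VaR : {var_cf:.4f}")
    print(f"1% empirical quantile : {-np.quantile(r, alpha):.4f}")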
By: | JS Armstrong (The Wharton School - University of Pennsylvania); Robert Fildes (The Management School - Lancaster University - UK) |
Abstract: | Clements and Hendry (1993) proposed the Generalized Forecast Error Second Moment (GFESM) as an improvement over the Mean Square Error for comparing forecasting performance across data series. They based their conclusion on the fact that rankings based on the GFESM remain unaltered if the series are linearly transformed. In this paper, we argue, first, that this evaluation ignores other important criteria. Second, their conclusions were illustrated by a simulation study whose relationship to real data was not obvious. Third, prior empirical studies show that the mean square error is an inappropriate measure to serve as a basis for comparison. This undermines the claims made for the GFESM. (A minimal illustrative sketch follows this entry.) |
Keywords: | Accuracy, Forecast evaluation, Loss functions |
JEL: | A |
Date: | 2004–12–06 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpgt:0412002&r=ets |
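A minimal sketch of the two measures being compared: the GFESM is the determinant of the second-moment matrix of forecast errors stacked across horizons 1..H, while the MSE is computed horizon by horizon. The error arrays for the two hypothetical methods are simulated placeholders.

    import numpy as np

    def gfesm(errors):
        """GFESM: determinant of the second-moment matrix of forecast errors
        stacked across horizons; errors has shape (n_replications, H)."""
        E = np.asarray(errors)
        return float(np.linalg.det(E.T @ E / len(E)))

    rng = np.random.default_rng(10)
    n, H = 200, 4
    err_a = rng.normal(scale=1.0, size=(n, H))          # hypothetical method A
    err_b = 1.1 * rng.normal(scale=1.0, size=(n, H))    # hypothetical method B

    print("GFESM           A:", round(gfesm(err_a), 3), "  B:", round(gfesm(err_b), 3))
    print("MSE by horizon  A:", (err_a ** 2).mean(axis=0).round(3),
          " B:", (err_b ** 2).mean(axis=0).round(3))
    # As the abstract notes, GFESM rankings are unchanged by linear transformations of
    # the series: transforming the stacked errors e to Ae multiplies each method's
    # GFESM by the same factor det(A)^2.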
By: | JS Armstrong (The Wharton School - University of Pennsylvania); Fred Collopy (Case Western Reserve University) |
Abstract: | This paper examines a strategy for structuring one type of domain knowledge for use in extrapolation. It does so by representing information about causality and using this domain knowledge to select and combine forecasts. We use five categories to express causal impacts upon trends: growth, decay, supporting, opposing, and regressing. An identification of causal forces aided in the determination of weights for combining extrapolation forecasts. These weights improved average ex ante forecast accuracy when tested on 104 annual economic and demographic time series. Gains in accuracy were greatest when (1) the causal forces were clearly specified and (2) stronger causal effects were expected, as in longer-range forecasts. One rule suggested by this analysis was: “Do not extrapolate trends if they are contrary to causal forces.” We tested this rule by comparing forecasts from a method that implicitly assumes supporting trends (Holt’s exponential smoothing) with forecasts from the random walk. Use of the rule improved accuracy for 20 series where the trends were contrary; the MdAPE (Median Absolute Percentage Error) was 18% less for the random walk on 20 one-year-ahead forecasts and 40% less for 20 six-year-ahead forecasts. We then applied the rule to four other data sets. Here, the MdAPE for the random walk forecasts was 17% less than Holt’s error for 943 short-range forecasts and 43% less for 723 long-range forecasts. Our study suggests that the causal assumptions implicit in traditional extrapolation methods are inappropriate for many applications. (A minimal illustrative sketch follows this entry.) |
Keywords: | Causal forces, Combining, Contrary trends, Damped trends, Exponential smoothing, Judgment, Rule-based forecasting, Selecting methods |
JEL: | A |
Date: | 2004–12–06 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpgt:0412003&r=ets |
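A minimal sketch of the random walk versus Holt comparison described above, on an invented series whose recent trend runs contrary to a mean-reverting process; MdAPE is the median absolute percentage error. The smoothing parameters and the data-generating process are illustrative, not the authors' data.

    import numpy as np

    def holt_forecast(y, alpha=0.3, beta=0.1, h=1):
        """Holt's linear exponential smoothing, h-step-ahead forecast."""
        level, trend = y[0], y[1] - y[0]
        for t in range(1, len(y)):
            prev_level = level
            level = alpha * y[t] + (1 - alpha) * (level + trend)
            trend = beta * (level - prev_level) + (1 - beta) * trend
        return level + h * trend

    def mdape(actual, forecast):
        actual, forecast = np.asarray(actual), np.asarray(forecast)
        return float(np.median(np.abs((actual - forecast) / actual)) * 100)

    rng = np.random.default_rng(11)
    actuals, fc_rw, fc_holt = [], [], []
    for _ in range(50):
        # a recent upward run that is contrary to an (assumed) mean-reverting causal force
        y = np.concatenate([100 + rng.normal(0, 1, 15),
                            100 + 2.0 * np.arange(1, 6) + rng.normal(0, 1, 5)])
        actuals.append(100 + rng.normal(0, 1))     # next value reverts toward the mean
        fc_rw.append(y[-1])                        # random walk: last observed value
        fc_holt.append(holt_forecast(y, h=1))      # Holt extrapolates the recent trend

    print("MdAPE, random walk:", round(mdape(actuals, fc_rw), 2), "%")
    print("MdAPE, Holt       :", round(mdape(actuals, fc_holt), 2), "%")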
By: | Fred Collopy (Case Western Reserve University); JS Armstrong (The Wharton School - University of Pennsylvania) |
Abstract: | This paper examines the feasibility of rule-based forecasting, a procedure that applies forecasting expertise and domain knowledge to produce forecasts according to features of the data. We developed a rule base to make annual extrapolation forecasts for economic and demographic time series. The development of the rule base drew upon protocol analyses of five experts on forecasting methods. This rule base, consisting of 99 rules, combined forecasts from four extrapolation methods (the random walk, regression, Brown's linear exponential smoothing, and Holt's exponential smoothing) according to rules using 18 features of time series. For one-year-ahead ex ante forecasts of 90 annual series, the median absolute percentage error (MdAPE) for rule-based forecasting was 13% less than that from equally-weighted combined forecasts. For six-year-ahead ex ante forecasts, rule-based forecasting had a MdAPE that was 42% less. The improvement in accuracy of the rule-based forecasts over equally-weighted combined forecasts was statistically significant. Rule-based forecasting was more accurate than equal-weights combining in situations involving significant trends, low uncertainty, stability, and good domain expertise. |
Keywords: | Rule-based forecasting, time series |
JEL: | A |
Date: | 2004–12–06 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpgt:0412004&r=ets |
By: | JS Armstrong (The Wharton School - University of Pennsylvania) |
Abstract: | Before 1960, little empirical research was done on forecasting methods. Since then, the literature has grown rapidly, especially in the area of judgmental forecasting. This research supports and adds to the forecasting guidelines proposed before 1960, such as the value of combining forecasts. New findings have led to significant gains in our ability to forecast and to help people to use forecasts. What have we learned about forecasting over the past quarter century? Does recent research provide guidance for making more accurate forecasts, obtaining better assessments of uncertainty, or gaining acceptance of our forecasts? I will first describe forecasting principles that were believed to be the most advanced in 1960. Following that, I will examine the evidence produced since 1960. |
Keywords: | forecasting, forecasting research |
JEL: | A |
Date: | 2004–12–06 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpgt:0412006&r=ets |
By: | JS Armstrong (The Wharton School - University of Pennsylvania); Michael C. Grohman (IBM Corporation - Philadelphia) |
Abstract: | The following hypotheses about long-range market forecasting were examined: H1, objective methods provide more accuracy than do subjective methods; H2, the relative advantage of objective over subjective methods increases as the amount of change in the environment increases; H3, causal methods provide more accuracy than do naive methods; H4, the relative advantage of causal over naive methods increases as the amount of change in the environment increases. Support for these hypotheses was then obtained from the literature and from a study of a single market. The study used three different models to make ex ante forecasts of the U.S. air travel market from 1963 through 1968. These hypotheses imply that econometric methods are more accurate for long-range market forecasting than are the major alternatives, expert judgment and extrapolation, and that the relative superiority of econometric methods increases as the time span of the forecast increases. |
Keywords: | long-range market forecasting, forecasting methods, forecasting |
JEL: | A |
Date: | 2004–12–06 |
URL: | http://d.repec.org/n?u=RePEc:wpa:wuwpgt:0412010&r=ets |
By: | Kevin Dowd (Nottingham University Business School) |
Abstract: | This paper presents a new approach to the evaluation of FOMC macroeconomic forecasts. Its distinctive feature is the interpretation, under reasonable conditions, of the minimum and maximum forecasts reported in FOMC meetings as indicative of probability density forecasts for these variables. This leads to some straightforward binomial tests of the performance of the FOMC forecasts as forecasts of macroeconomic risks. Empirical results suggest that there are serious problems with the FOMC forecasts. In particular, there are problems with the FOMC forecasts of the tails of the macroeconomic density functions, including a tendency to under-estimate tail risks. (A minimal illustrative sketch follows this entry.) |
Keywords: | Macroeconomic risks, FOMC forecasts, density forecasting |
JEL: | C53 E47 E52 |
Date: | 2004–01–12 |
URL: | http://d.repec.org/n?u=RePEc:nub:occpap:12&r=ets |
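A minimal sketch of the kind of binomial check described above: if the reported minimum and maximum forecasts are read as, say, the 5th and 95th percentiles of a density forecast, the number of outcomes falling outside the min-max range should be approximately Binomial(n, 0.10). The counts below are invented, not the FOMC record, and the quantile interpretation is an assumption made only for illustration.

    from scipy.stats import binomtest

    n_meetings = 40      # hypothetical number of forecast rounds evaluated
    n_outside = 9        # hypothetical count of outcomes outside the reported min-max range
    p_tail = 0.05        # assumed tail mass implied by the min and max forecasts (each side)

    test = binomtest(n_outside, n_meetings, p=2 * p_tail, alternative="greater")
    print(f"observed exceedance rate: {n_outside / n_meetings:.1%}")
    print(f"one-sided binomial p-value (too many outcomes outside the range): {test.pvalue:.4f}")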