on Econometrics |
By: | Hiroaki Chigira; Taku Yamamoto |
Abstract: | It is widely believed that taking cointegration and integration into consideration is useful in constructing long-term forecasts for cointegrated processes. This paper shows that imposing neither cointegration nor integration leads to superior long-term forecasts. |
Keywords: | Forecasting, Cointegration, Integration |
JEL: | C12 C32 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:hst:hstdps:d05-148&r=ecm |
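A minimal sketch of the kind of long-horizon comparison at issue: forecasts from an unrestricted VAR in levels, which imposes neither integration nor cointegration, against forecasts from a VECM that imposes the cointegrating rank. The bivariate data-generating process, lag orders and horizon below are illustrative assumptions, not the paper's design.

```python
# Sketch: compare long-horizon forecasts from a levels VAR (no unit-root or
# cointegration restrictions) with a VECM that imposes cointegrating rank 1.
# The bivariate DGP below is an illustrative assumption, not the paper's design.
import numpy as np
from statsmodels.tsa.api import VAR
from statsmodels.tsa.vector_ar.vecm import VECM

rng = np.random.default_rng(0)
T, h = 300, 40                      # sample size and forecast horizon

# Cointegrated DGP: common random-walk trend plus stationary idiosyncratic noise
trend = np.cumsum(rng.normal(size=T + h))
y1 = trend + rng.normal(scale=0.5, size=T + h)
y2 = trend + rng.normal(scale=0.5, size=T + h)
data = np.column_stack([y1, y2])
train, future = data[:T], data[T:]

# Unrestricted VAR in levels
var_res = VAR(train).fit(2)
f_var = var_res.forecast(train[-var_res.k_ar:], steps=h)

# VECM imposing one cointegrating relation
vecm_res = VECM(train, k_ar_diff=1, coint_rank=1, deterministic="co").fit()
f_vecm = vecm_res.predict(steps=h)

rmse = lambda f: np.sqrt(np.mean((f - future) ** 2))
print(f"RMSE levels VAR: {rmse(f_var):.3f}   RMSE VECM: {rmse(f_vecm):.3f}")
```

Under the paper's argument, the unrestricted levels VAR should not lose much, and can gain, at long horizons.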
By: | Josep Lluís Carrion-i-Silvestre; Andreu Sansó |
Abstract: | In this paper we generalize the KPSS-type test to allow for two structural breaks. Seven models have been defined depending on the way that the structural breaks affect the time series behaviour. The paper derives the limit distribution of the test both under the null and the alternative hypotheses and conducts a set of simulation experiments to analyse the performance in finite samples. |
Keywords: | Stationary tests, structural breaks, unit root. |
JEL: | C12 C15 C22 |
Date: | 2005–07 |
URL: | http://d.repec.org/n?u=RePEc:ubi:deawps:13&r=ecm |
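To fix ideas, a minimal numpy sketch of the statistic being generalized: the KPSS statistic computed from the residuals of a regression on a constant and two level-shift dummies (one of the seven break configurations). Break dates, the bandwidth rule and the simulated series are illustrative assumptions, and the critical values appropriate under two breaks, which are the paper's contribution, are not reproduced here.

```python
# Sketch of a KPSS-type statistic allowing two level shifts: regress the series
# on a constant and two break dummies, then form the usual KPSS statistic from
# the residuals (scaled partial sums over a long-run variance estimate).
# Break dates, bandwidth and the simulated series are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
T = 200
tb1, tb2 = 70, 140                       # assumed break dates
y = 1.0 + 0.8 * (np.arange(T) >= tb1) - 0.5 * (np.arange(T) >= tb2) \
    + rng.normal(size=T)                 # stationary around a twice-shifted mean

# Regression on a constant and two level-shift dummies
X = np.column_stack([np.ones(T),
                     (np.arange(T) >= tb1).astype(float),
                     (np.arange(T) >= tb2).astype(float)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
e = y - X @ beta

# Long-run variance of the residuals with a Bartlett kernel
ell = int(np.floor(4 * (T / 100) ** 0.25))
lrv = e @ e / T
for j in range(1, ell + 1):
    w = 1 - j / (ell + 1)
    lrv += 2 * w * (e[j:] @ e[:-j]) / T

S = np.cumsum(e)                         # partial sums of residuals
kpss_stat = (S @ S) / (T ** 2 * lrv)
print(f"KPSS-type statistic with two level shifts: {kpss_stat:.4f}")
```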
By: | Moon, Hyungsik Roger; Schorfheide, Frank |
Abstract: | This paper derives limit distributions of empirical likelihood estimators for models in which inequality moment conditions provide overidentifying information. We show that the use of this information leads to a reduction of the asymptotic mean-squared estimation error and propose asymptotically valid confidence sets for the parameters of interest. While inequality moment conditions arise in many important economic models, we use a dynamic macroeconomic model as data generating process and illustrate our methods with instrumental variable estimators of monetary policy rules. The assumption that output does not fall in response to an expansionary monetary policy shock leads to an inequality moment condition that can substantially increase the precision with which the policy rule is estimated. The results obtained in this paper extend to conventional GMM estimators. |
Keywords: | empirical likelihood estimation; generalized method of moments; inequality moment conditions; instrumental variable estimation; monetary policy rules |
JEL: | C32 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:cpr:ceprdp:5605&r=ecm |
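For readers unfamiliar with the setup, a schematic of the estimation problem in generic notation (not the paper's exact formulation): empirical likelihood weights are chosen subject to equality moment conditions g and inequality moment conditions h, with the inequalities carrying the overidentifying information, such as the restriction that output does not fall after an expansionary shock.

```latex
% Schematic empirical likelihood problem with equality and inequality moment
% conditions (generic notation; not the paper's exact formulation).
\max_{\theta,\;\pi_1,\dots,\pi_n} \; \sum_{i=1}^{n} \log \pi_i
\quad \text{s.t.} \quad
\sum_{i=1}^{n} \pi_i = 1, \qquad
\sum_{i=1}^{n} \pi_i \, g(x_i,\theta) = 0, \qquad
\sum_{i=1}^{n} \pi_i \, h(x_i,\theta) \ge 0.
```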
By: | Ralf Brüggemann |
Abstract: | This paper investigates the finite sample properties of confidence intervals for structural vector error correction models (SVECMs) with long-run identifying restrictions on the impulse response functions. The simulation study compares methods that are frequently used in applied SVECM studies including an interval based on the asymptotic distribution of impulse responses, a standard percentile (Efron) bootstrap interval, Hall’s percentile and Hall’s studentized bootstrap interval. Data generating processes are based on empirical SVECM studies and evaluation criteria include the empirical coverage, the average length and the sign implied by the interval. Our Monte Carlo evidence suggests that applied researchers have little to choose between the asymptotic and the Hall bootstrap intervals in SVECMs. In contrast, the Efron bootstrap interval may be less suitable for applied work as it is less informative about the sign of the underlying impulse response function and the computationally demanding studentized Hall interval is often outperformed by the other methods. Differences between methods are illustrated empirically by using a data set from King, Plosser, Stock & Watson (1991). |
Keywords: | Structural vector error correction model, impulse response intervals, cointegration, long-run restrictions, bootstrap |
JEL: | C32 C53 C15 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:hum:wpaper:sfb649dp2006-021&r=ecm |
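Two of the interval constructions compared in the study can be written in a few lines. Given bootstrap replicates of an impulse response at a fixed horizon, the sketch below forms the Efron percentile interval and Hall's percentile interval; the bootstrap draws are simulated placeholders, and the studentized Hall interval, which also requires a standard error per replicate, is omitted.

```python
# Sketch: Efron percentile vs Hall percentile intervals from bootstrap draws of
# a single impulse-response coefficient. The draws here are simulated
# placeholders; in practice they come from re-estimating the SVECM on
# bootstrap samples.
import numpy as np

rng = np.random.default_rng(2)
theta_hat = 0.35                              # point estimate at some horizon
theta_boot = theta_hat + rng.normal(scale=0.10, size=2000)  # bootstrap draws

alpha = 0.10
q_lo, q_hi = np.quantile(theta_boot, [alpha / 2, 1 - alpha / 2])

efron = (q_lo, q_hi)                                 # standard percentile interval
hall = (2 * theta_hat - q_hi, 2 * theta_hat - q_lo)  # Hall's percentile interval

print(f"Efron 90% interval: [{efron[0]:.3f}, {efron[1]:.3f}]")
print(f"Hall  90% interval: [{hall[0]:.3f}, {hall[1]:.3f}]")
```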
By: | Russell Davidson (McGill University, CIREQ and GREQAM); Jean-Yves Duclos (Université Laval, CIRPÉE and IZA Bonn) |
Abstract: | Asymptotic and bootstrap tests are studied for testing whether there is a relation of stochastic dominance between two distributions. These tests have a null hypothesis of nondominance, with the advantage that, if this null is rejected, then all that is left is dominance. This also leads us to define and focus on restricted stochastic dominance, the only empirically useful form of dominance relation that we can seek to infer in many settings. One testing procedure that we consider is based on an empirical likelihood ratio. The computations necessary for obtaining a test statistic also provide estimates of the distributions under study that satisfy the null hypothesis, on the frontier between dominance and nondominance. These estimates can be used to perform bootstrap tests that can turn out to provide much improved reliability of inference compared with the asymptotic tests so far proposed in the literature. |
Keywords: | stochastic dominance, empirical likelihood, bootstrap test |
JEL: | C10 C12 C15 I32 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp2047&r=ecm |
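A rough illustration of the objects involved, not of the paper's test itself: empirical distribution functions of two samples compared over a restricted range of the support, which is the descriptive counterpart of restricted first-order dominance. The empirical-likelihood-ratio statistic and the bootstrap under the nondominance frontier are not implemented in this sketch, and the samples are simulated placeholders.

```python
# Sketch: check first-order restricted stochastic dominance of sample B over
# sample A on a grid restricted to the interior of the common support.
# Only the descriptive comparison is computed; the paper's EL-ratio test and
# bootstrap under the nondominance frontier are not implemented here.
import numpy as np

rng = np.random.default_rng(3)
a = rng.lognormal(mean=0.0, sigma=0.6, size=1500)   # placeholder income sample A
b = rng.lognormal(mean=0.1, sigma=0.6, size=1500)   # placeholder income sample B

ecdf = lambda s, z: np.searchsorted(np.sort(s), z, side="right") / s.size

# Restricted range: drop the extreme tails, where dominance cannot be inferred
grid = np.linspace(np.quantile(np.hstack([a, b]), 0.05),
                   np.quantile(np.hstack([a, b]), 0.95), 200)
gap = ecdf(b, grid) - ecdf(a, grid)   # B dominates A on the range if gap <= 0 everywhere

print("max F_B - F_A on restricted range:", gap.max().round(4))
print("B restricted-dominates A (descriptively):", bool((gap <= 0).all()))
```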
By: | Fulvio Corsi (University of Lugano); Uta Kretschmer (University of Bonn, Germany); Stefan Mittnik (University of Munich); Christian Pigorsch (University of Munich) |
Abstract: | Using unobservable conditional variance as a measure, latent-variable approaches, such as GARCH and stochastic-volatility models, have traditionally dominated the empirical finance literature. In recent years, with the availability of high-frequency financial market data, modeling realized volatility has become a new and innovative research direction. By constructing "observable" or realized volatility series from intraday transaction data, the use of standard time series models, such as ARFIMA models, has become a promising strategy for modeling and predicting (daily) volatility. In this paper, we show that the residuals of the commonly used time-series models for realized volatility exhibit non-Gaussianity and volatility clustering. We propose extensions to explicitly account for these properties and assess their relevance when modeling and forecasting realized volatility. In an empirical application for S&P500 index futures we show that allowing for time-varying volatility of realized volatility leads to a substantial improvement of the model's fit as well as its predictive performance. Furthermore, the distributional assumption for the residuals plays a crucial role in density forecasting. |
Keywords: | Finance, Realized Volatility, Realized Quarticity, GARCH, Normal Inverse Gaussian Distribution, Density Forecasting |
JEL: | C22 C51 C52 C53 |
Date: | 2005–11–28 |
URL: | http://d.repec.org/n?u=RePEc:cfs:cfswop:wp200533&r=ecm |
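The realized measures the abstract refers to are straightforward to construct. The sketch below uses simulated five-minute returns (an illustrative placeholder for intraday transaction data) to form daily realized variance and realized quarticity, the series to which ARFIMA-type models and their proposed extensions are then fitted.

```python
# Sketch: daily realized variance RV_t = sum of squared intraday returns and
# realized quarticity RQ_t = (M/3) * sum of intraday returns to the fourth power.
# The simulated 5-minute returns are illustrative placeholders for intraday data.
import numpy as np

rng = np.random.default_rng(4)
n_days, M = 250, 78                      # 78 five-minute returns per trading day
# time-varying daily volatility to mimic volatility clustering
day_vol = 0.01 * np.exp(0.5 * rng.normal(size=n_days))
intraday = rng.normal(size=(n_days, M)) * (day_vol[:, None] / np.sqrt(M))

rv = (intraday ** 2).sum(axis=1)               # realized variance, one per day
rq = (M / 3.0) * (intraday ** 4).sum(axis=1)   # realized quarticity
log_rv = np.log(rv)                            # the series typically modelled (e.g. by ARFIMA)

print("mean daily realized volatility:", np.sqrt(rv).mean().round(5))
print("std of log realized variance:  ", log_rv.std().round(4))
print("first five realized quarticities:", rq[:5].round(8))
```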
By: | Ana Oliveira-Brochado (Faculdade de Economia, Universidade do Porto); Francisco Vitorino Martins (Faculdade de Economia, Universidade do Porto) |
Abstract: | Despite the widespread application of finite mixture models, the decision of how many classes are required to adequately represent the data is, according to many authors, an important but unsolved issue. This work aims to review, describe and organize the available approaches designed to help select the adequate number of mixture components (including Monte Carlo test procedures, information criteria and classification-based criteria); we also summarize published simulation results on their relative performance, with the purpose of identifying the scenarios in which each criterion is most effective. |
Keywords: | Finite mixture; number of mixture components; information criteria; simulation studies. |
JEL: | C15 C52 |
Date: | 2005–11 |
URL: | http://d.repec.org/n?u=RePEc:por:fepwps:194&r=ecm |
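As a concrete instance of the information-criterion route covered by the survey, a sketch that selects the number of Gaussian mixture components by minimizing BIC with scikit-learn; the simulated data and the candidate range 1 to 6 are illustrative assumptions, and BIC is only one of the criteria the paper reviews.

```python
# Sketch: choose the number of mixture components by minimising BIC over a
# candidate range, one of the information-criterion approaches the survey
# reviews. Simulated two-component data; the range 1..6 is an arbitrary choice.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-2.0, 1.0, size=(300, 1)),
               rng.normal(3.0, 0.7, size=(200, 1))])   # true number of classes: 2

bics = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(X)
    bics[k] = gm.bic(X)

best_k = min(bics, key=bics.get)
print({k: round(v, 1) for k, v in bics.items()})
print("number of components selected by BIC:", best_k)
```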
By: | Manabu Asai; Michael McAleer |
Abstract: | This paper proposes and analyses two types of asymmetric multivariate stochastic volatility (SV) models, namely: (i) SV with leverage (SV-L) model, which is based on the negative correlation between the innovations in the returns and volatility; and (ii) SV with leverage and size effect (SV-LSE) model, which is based on the signs and magnitude of the returns. The paper derives the state space form for the logarithm of the squared returns which follow the multivariate SV-L model, and develops estimation methods for the multivariate SV-L and SV-LSE models based on the Monte Carlo likelihood (MCL) approach. The empirical results show that the multivariate SV-LSE model fits the bivariate and trivariate returns of the S&P 500, Nikkei 225, and Hang Seng indexes with respect to AIC and BIC more accurately than does the multivariate SV-L model. Moreover, the empirical results suggest that the univariate models should be rejected in favour of their bivariate and trivariate counterparts. |
Keywords: | Multivariate stochastic volatility, asymmetric leverage, dynamic leverage, size effect, numerical likelihood, Bayesian Markov chain Monte Carlo, importance sampling. |
Date: | 2005–11 |
URL: | http://d.repec.org/n?u=RePEc:ubi:deawps:12&r=ecm |
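For reference, the univariate building block of the leverage specification in generic notation; the paper's SV-L model is the multivariate analogue, with the size effect added in the SV-LSE variant.

```latex
% Univariate stochastic volatility with leverage (generic notation; the paper's
% SV-L model is the multivariate analogue of this building block).
y_t = \exp(h_t/2)\,\varepsilon_t, \qquad
h_{t+1} = \mu + \phi\,(h_t - \mu) + \eta_t, \qquad
\begin{pmatrix}\varepsilon_t\\ \eta_t\end{pmatrix}
\sim N\!\left(\mathbf{0},
\begin{pmatrix}1 & \rho\,\sigma_\eta\\ \rho\,\sigma_\eta & \sigma_\eta^{2}\end{pmatrix}\right),
\quad \rho < 0 \ \text{(leverage)}.
```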
By: | Anthony Garratt; Gary Koop; Shaun P. Vahey (Reserve Bank of New Zealand) |
Abstract: | A recent revision to the preliminary measurement of GDP(E) growth for 2003Q2 caused considerable press attention, provoked a public enquiry and prompted a number of reforms to UK statistical reporting procedures. In this paper, we compute the probability of "substantial revisions" that are greater (in absolute value) than the controversial 2003 revision. The predictive densities are derived from Bayesian model averaging over a wide set of forecasting models, including linear, structural break and regime-switching models with and without heteroskedasticity. Ignoring the nonlinearities and model uncertainty yields misleading predictive densities and obscures the improvement in the quality of preliminary UK macroeconomic measurements relative to the early 1990s. |
JEL: | C11 C32 C53 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:nzb:nzbdps:2006/02&r=ecm |
By: | Troy Matheson (Reserve Bank of New Zealand) |
Abstract: | Stock and Watson (1999) show that the Phillips curve is a good forecasting tool in the United States. We assess whether this good performance extends to two small open economies with relatively large tradable sectors. Using data for Australia and New Zealand, we find that the open economy Phillips curve performs poorly relative to a univariate autoregressive benchmark. However, its performance improves markedly when sectoral Phillips curves are used that model the tradable and non-tradable sectors separately. Combining forecasts from these sectoral models is much better than obtaining forecasts from a Phillips curve estimated on aggregate data. We also find that a diffusion index that combines a large number of indicators of real economic activity provides better forecasts of non-tradable inflation than more conventional measures of real demand, thus supporting Stock and Watson's (1999) findings for the United States. |
JEL: | C53 E31 |
Date: | 2006–02 |
URL: | http://d.repec.org/n?u=RePEc:nzb:nzbdps:2006/01&r=ecm |
By: | Susan Athey; Philip A. Haile |
Abstract: | Many important economic questions arising in auctions can be answered only with knowledge of the underlying primitive distributions governing bidder demand and information. An active literature has developed aiming to estimate these primitives by exploiting restrictions from economic theory as part of the econometric model used to interpret auction data. We review some highlights of this recent literature, focusing on identification and empirical applications. We describe three insights that underlie much of the recent methodological progress in this area and discuss some of the ways these insights have been extended to richer models allowing more convincing empirical applications. We discuss several recent empirical studies using these methods to address a range of important economic questions. |
JEL: | C5 L1 D4 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:12126&r=ecm |
By: | Arne Risa Hole (National Primary Care Research and Development Centre, Centre for Health Economics, University of York) |
Abstract: | This paper describes three approaches to estimating confidence intervals for willingness-to-pay measures: the delta, Krinsky and Robb, and bootstrap methods. The accuracy of the various methods is compared using a number of simulated datasets. In the majority of the scenarios considered, all three methods are found to be reasonably accurate and to yield similar results. The delta method is the most accurate when the data are well-conditioned, while the bootstrap is more robust to noisy data and misspecification of the model. These conclusions are illustrated with empirical data from a study of willingness to pay for a reduction in waiting time for a general practitioner appointment, in which all the methods produce fairly similar confidence intervals. |
Keywords: | willingness to pay, confidence interval, delta method, bootstrap |
Date: | 2006–01 |
URL: | http://d.repec.org/n?u=RePEc:chy:respap:8&r=ecm |
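The first two constructions compared in the paper are easy to sketch once willingness to pay is written as a ratio of coefficients, here WTP = -b_attr / b_cost. The delta method linearizes the ratio, while the Krinsky and Robb procedure simulates it from the estimated coefficient distribution; the point estimates and covariance matrix below are made-up placeholders standing in for output from a fitted choice model.

```python
# Sketch: delta-method and Krinsky-Robb confidence intervals for a willingness-
# to-pay measure defined as WTP = -b_attr / b_cost. The point estimates and
# covariance matrix are made-up placeholders standing in for model output.
import numpy as np

b_attr, b_cost = 0.80, -0.40                 # placeholder coefficient estimates
V = np.array([[0.010, 0.002],                # placeholder covariance of (b_attr, b_cost)
              [0.002, 0.004]])

wtp = -b_attr / b_cost

# Delta method: gradient of -b_attr/b_cost with respect to (b_attr, b_cost)
g = np.array([-1.0 / b_cost, b_attr / b_cost ** 2])
se = np.sqrt(g @ V @ g)
delta_ci = (wtp - 1.96 * se, wtp + 1.96 * se)

# Krinsky-Robb: simulate coefficients from N(beta_hat, V), take ratio quantiles
rng = np.random.default_rng(6)
draws = rng.multivariate_normal([b_attr, b_cost], V, size=10_000)
wtp_draws = -draws[:, 0] / draws[:, 1]
kr_ci = tuple(np.quantile(wtp_draws, [0.025, 0.975]))

print(f"WTP point estimate: {wtp:.3f}")
print(f"Delta-method 95% CI:  [{delta_ci[0]:.3f}, {delta_ci[1]:.3f}]")
print(f"Krinsky-Robb 95% CI:  [{kr_ci[0]:.3f}, {kr_ci[1]:.3f}]")
```

The bootstrap interval would instead re-estimate the model on resampled data and take quantiles of the resulting WTP estimates.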
By: | Duangkamon Chotikapanich; William E. Griffiths; D.S. Prasada Rao |
Abstract: | A major problem encountered in studies of income inequality at regional and global levels is the estimation of income distributions from data that are in a summary form. In this paper we estimate national and regional income distributions within a general framework that relaxes the assumption of constant income within groups. A technique to estimate the parameters of a beta-2 distribution using grouped data is proposed. Regional income distribution is modelled using a mixture of country-specific distributions and its properties are examined. The techniques are used to analyse national and regional inequality trends for eight East Asian countries and two benchmark years, 1988 and 1993. |
Keywords: | Gini coefficient; beta-2 distribution |
JEL: | C13 C16 D31 |
Date: | 2005 |
URL: | http://d.repec.org/n?u=RePEc:mlb:wpaper:926&r=ecm |
By: | M. Hashem Pesaran; Ron P. Smith |
Abstract: | This paper provides a synthesis and further development of a global modelling approach introduced in Pesaran, Schuermann and Weiner (2004), where country specific models in the form of VARX* structures are estimated relating a vector of domestic variables, xit, to their foreign counterparts, x*it, and then consistently combined to form a Global VAR (GVAR). It is shown that the VARX* models can be derived as the solution to a dynamic stochastic general equilibrium (DSGE) model where over-identifying long-run theoretical relations can be tested and imposed if acceptable. This gives the system a transparent long-run theoretical structure. Similarly, short-run over-identifying theoretical restrictions can be tested and imposed if accepted. Alternatively, if one has less confidence in the short-run theory the dynamics can be left unrestricted. The assumption of the weak exogeneity of the foreign variables for the long-run parameters can be tested, where x*it variables can be interpreted as proxies for global factors. Rather than using deviations from ad hoc statistical trends, the equilibrium values of the variables reflecting the long-run theory embodied in the model can be calculated. This approach has been used in a wide variety of contexts and for a wide variety of purposes. The paper also provides some new results. |
Keywords: | Global VAR (GVAR), DSGE models, VARX* |
JEL: | C32 E17 F42 |
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_1659&r=ecm |
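Schematically, the country-specific VARX* block that the GVAR stacks looks as follows (first-order dynamics shown for brevity, generic notation):

```latex
% Country-specific VARX* block of a GVAR (first-order dynamics for brevity;
% w_{ij} are the weights, typically trade shares, linking country i to country j).
x_{it} = a_i + \Phi_i\, x_{i,t-1}
       + \Lambda_{i0}\, x^{*}_{it} + \Lambda_{i1}\, x^{*}_{i,t-1} + u_{it},
\qquad
x^{*}_{it} = \sum_{j \neq i} w_{ij}\, x_{jt}.
```

The foreign variables x*_{it} are weighted averages of the other countries' variables; their weak exogeneity for the long-run parameters, which the abstract notes is testable, is what allows each country block to be estimated separately before the blocks are combined, via the weights, into the global system.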
By: | Kurt E. Schnier (University of Rhode Island, Department of Environmental and Natural Resource Economics, Coastal Institute, 1 Greenhouse Road, Suite 212, Kingston, Rhode Island 02881); Christopher M. Anderson (Department of Environmental and Natural Resource Economics, University of Rhode Island); William C. Horrace (Center for Policy Research, Maxwell School, Syracuse University, Syracuse NY 13244-1020) |
Abstract: | Stochastic production frontier models are used extensively in the agricultural and resource economics literature to estimate production functions and technical efficiency, as well as to guide policy. Traditionally these models assume that each agent's production can be specified as a representative, homogeneous function. This paper proposes the synthesis of a latent class regression and an agricultural production frontier model to estimate technical efficiency while allowing for the possibility of production heterogeneity. We use this model to estimate a latent class production function and efficiency measures for vessels in the Northeast Atlantic herring fishery. Our results suggest that traditional measures of technical efficiency may be incorrect if heterogeneity of agricultural production exists. |
Keywords: | latent class regression, EM algorithm, stochastic production frontier, technical efficiency |
JEL: | D24 N52 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:max:cprwps:80&r=ecm |
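Schematically, the model combines a standard normal/half-normal production frontier within each class with finite-mixture class probabilities (generic notation; the paper's distributional choices may differ):

```latex
% Generic latent class stochastic production frontier with C classes
% (normal/half-normal form; the paper's exact specification may differ).
y_i = x_i'\beta_c + v_{ic} - u_{ic} \ \text{ in class } c, \qquad
v_{ic} \sim N(0,\sigma_{v,c}^{2}), \quad u_{ic} \sim N^{+}(0,\sigma_{u,c}^{2}),
\qquad
f(y_i \mid x_i) = \sum_{c=1}^{C} \pi_c\, f_c\!\left(y_i \mid x_i;\ \beta_c,\sigma_{v,c},\sigma_{u,c}\right).
```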
By: | Patrik Guggenberger |
URL: | http://d.repec.org/n?u=RePEc:cla:uclaol:381&r=ecm |
By: | Amy Finkelstein; James Poterba |
Abstract: | This paper proposes a new test for adverse selection in insurance markets based on observable characteristics of insurance buyers that are not used in setting insurance prices. The test rejects the null hypothesis of symmetric information when it is possible to find one or more such “unused observables” that are correlated both with the claims experience of the insured and with the quantity of insurance purchased. Unlike previous tests for asymmetric information, this test is not confounded by heterogeneity in individual preference parameters, such as risk aversion, that affect insurance demand. Moreover, it can potentially identify the presence of adverse selection, while most alternative tests cannot distinguish adverse selection from moral hazard. We apply this test to a new data set on annuity purchases in the United Kingdom, focusing on the annuitant’s place of residence as an “unused observable.” We show that the socio-economic status of the annuitant’s place of residence is correlated both with annuity purchases and with the annuitant’s prospective mortality. Annuity buyers in different communities therefore face different effective insurance prices, and they make different choices accordingly. This is consistent with the presence of adverse selection. Our findings also raise questions about how insurance companies select the set of buyer attributes that they use in setting policy prices. We suggest that political economy concerns may figure prominently in decisions to forego the use of some information that could improve the risk classification of insurance buyers. |
JEL: | D82 G22 |
Date: | 2006–03 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:12112&r=ecm |
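The logic of the "unused observables" test translates into two auxiliary regressions: conditional on the characteristics the insurer prices on, the candidate unused observable should predict both the risk outcome and the quantity of insurance purchased. A minimal sketch with simulated placeholder data (the variable names and the two-regression reading are illustrative, not the paper's exact implementation):

```python
# Sketch of the "unused observables" logic: a characteristic not priced by the
# insurer should predict both the claims/risk outcome and the quantity of
# insurance purchased, conditional on priced characteristics. All data below
# are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 5000
priced = rng.normal(size=n)                  # characteristic used in pricing (e.g. age)
unused = rng.normal(size=n)                  # candidate unused observable
risk = 0.3 * priced + 0.2 * unused + rng.normal(size=n) > 0   # claim indicator
quantity = 1.0 + 0.4 * priced + 0.15 * unused + rng.normal(size=n)

X = sm.add_constant(np.column_stack([priced, unused]))
p_risk = sm.Logit(risk.astype(float), X).fit(disp=0).pvalues[2]
p_qty = sm.OLS(quantity, X).fit().pvalues[2]

# Symmetric information is called into question if the unused observable
# matters in BOTH equations.
print(f"p-value, unused observable in risk equation:     {p_risk:.4f}")
print(f"p-value, unused observable in quantity equation: {p_qty:.4f}")
```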
By: | Chin-Shien Lin (National Chung Hsing University); Haider A. Khan (GIGS, University of Denver); Ying-Chieh Wang (Providence University); Ruei-Yuan Chang (Providence University) |
Abstract: | This paper presents a hybrid model for predicting the occurrence of currency crises using the neuro-fuzzy modeling approach. The model integrates the learning ability of neural networks with the inference mechanism of fuzzy logic. The empirical results show that the proposed neuro-fuzzy model leads to better prediction of crises. Significantly, the model can also construct a reliable causal relationship among the variables through the obtained knowledge base. Compared to traditionally used techniques such as logit, the proposed model can thus lead to a somewhat more prescriptive modeling approach towards finding ways to prevent currency crises. |
Date: | 2006–04 |
URL: | http://d.repec.org/n?u=RePEc:tky:fseres:2006cf411&r=ecm |
By: | Essama-Nssah, B. |
Abstract: | Effective development policymaking creates a need for reliable methods of assessing effectiveness. There should be, therefore, an intimate relationship between effective policymaking and impact analysis. The goal of a development intervention defines the metric by which to assess its impact, while impact evaluation can produce reliable information on which policymakers may base decisions to modify or cancel ineffective programs and thus make the most of limited resources. This paper reviews the logic of propensity score matching (PSM) and, using data on the National Supported Work Demonstration, compares that approach with other evaluation methods such as double difference, instrumental variables, and Heckman's method of selection bias correction. In addition, it demonstrates how to implement nearest-neighbor and kernel-based matching methods, and how to plot program incidence curves, in E-Views. In the end, the plausibility of an evaluation method hinges critically on the correctness of the socioeconomic model underlying program design and implementation, and on the quality and quantity of available data. In any case, PSM can act as an effective adjuvant to other methods. |
Keywords: | Poverty Monitoring & Analysis, Poverty Impact Evaluation, Statistical & Mathematical Sciences, Scientific Research & Science Parks, Science Education |
Date: | 2006–04–01 |
URL: | http://d.repec.org/n?u=RePEc:wbk:wbrwps:3877&r=ecm |
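The paper implements its matching estimators in E-Views; purely as an illustration of the nearest-neighbour variant it describes, here is a minimal Python sketch with a logit propensity score, one-to-one matching with replacement, and the resulting estimate of the average treatment effect on the treated. All data are simulated placeholders.

```python
# Sketch: propensity score matching with a logit score and one-to-one
# nearest-neighbour matching (with replacement) to estimate the average
# treatment effect on the treated (ATT). Data are simulated placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
n = 4000
x = rng.normal(size=(n, 2))                          # observed covariates
p_treat = 1 / (1 + np.exp(-(0.5 * x[:, 0] - 0.5 * x[:, 1])))
d = rng.uniform(size=n) < p_treat                    # treatment indicator
y = 1.0 + 2.0 * d + x[:, 0] + 0.5 * x[:, 1] + rng.normal(size=n)  # true effect = 2

# Step 1: estimate the propensity score with a logit
pscore = sm.Logit(d.astype(float), sm.add_constant(x)).fit(disp=0).predict()

# Step 2: for each treated unit, find the nearest control unit in the score
treated, controls = np.where(d)[0], np.where(~d)[0]
order = np.argsort(pscore[controls])
pos = np.searchsorted(pscore[controls][order], pscore[treated])
pos = np.clip(pos, 1, controls.size - 1)
left, right = order[pos - 1], order[pos]
nearer = np.where(np.abs(pscore[controls[left]] - pscore[treated])
                  <= np.abs(pscore[controls[right]] - pscore[treated]), left, right)
matches = controls[nearer]

# Step 3: ATT = mean outcome difference between treated units and their matches
att = (y[treated] - y[matches]).mean()
print(f"nearest-neighbour PSM estimate of the ATT: {att:.3f}")
```

A kernel-based variant would replace the single nearest match with a weighted average of control outcomes, with weights declining in the propensity-score distance.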