NEP: New Economics Papers on Econometrics
By: | Christian Brownlees; Geert Mesters |
Abstract: | Large economic and financial panels often contain time series that influence the entire cross-section. We name such series granular. In this paper we introduce a panel data model that allows us to formalize the notion of granular time series. We then propose a methodology, inspired by the network literature in statistics and econometrics, to detect the set of granular series when that set is unknown. The influence of the i-th series in the panel is measured by the norm of the i-th column of the inverse covariance matrix (see the sketch after this entry). We show that a detection procedure based on these column norms consistently selects the granular series when the cross-section and time series dimensions are large. Importantly, the methodology consistently detects granular series even when the series in the panel are influenced by common factors. A simulation study shows that the proposed procedures perform satisfactorily in finite samples. Our empirical studies demonstrate, among other findings, the granular influence of the automobile sector in US industrial production. |
Keywords: | granularity, network models, factor models, panel data, industrial production, CDS spreads |
JEL: | C33 C38 |
Date: | 2017–09 |
URL: | http://d.repec.org/n?u=RePEc:bge:wpaper:991&r=ecm |
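A minimal sketch of the column-norm idea in the abstract above, on a toy panel in which series 0 drives the whole cross-section. The data-generating process, the 2x-median thresholding rule and all names are illustrative assumptions, not the authors' exact procedure:

```python
# Toy illustration: detect a "granular" series via the column norms of the
# inverse covariance (precision) matrix of a large panel.
import numpy as np

rng = np.random.default_rng(0)
N, T = 20, 1000

# Simulate: series 0 is granular and loads on every other series.
eps = rng.standard_normal((T, N))
X = eps.copy()
X[:, 1:] += 0.8 * eps[:, [0]]          # every other series is influenced by series 0

# Influence of series i = norm of the i-th column of the inverse covariance.
Sigma_inv = np.linalg.inv(np.cov(X, rowvar=False))
col_norms = np.linalg.norm(Sigma_inv, axis=0)

# An illustrative detection rule (an assumption, not the paper's): flag
# series whose column norm is far above the cross-sectional median.
threshold = 2.0 * np.median(col_norms)
print("detected granular series:", np.flatnonzero(col_norms > threshold))
```

In this toy design the 0-th column of the precision matrix collects the cross-sectional links of the granular series, so its norm dominates the remaining columns.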
By: | Fries, Sébastien; Zakoian, Jean-Michel |
Abstract: | Noncausal autoregressive models with heavy-tailed errors generate locally explosive processes and therefore provide a natural framework for modelling bubbles in economic and financial time series (see the sketch after this entry). We investigate the probabilistic properties of mixed causal-noncausal autoregressive processes, assuming the errors follow a non-Gaussian stable distribution. We show that the tails of the conditional distribution are lighter than those of the errors, and we emphasize the presence of ARCH effects and unit roots in a causal representation of the process. Under the assumption that the errors belong to the domain of attraction of a stable distribution, we show that a weak causal AR representation of the process can be consistently estimated by classical least squares. We derive a Monte Carlo Portmanteau test to check the validity of the weak AR representation and propose a method based on the clustering of extreme residuals to determine whether the generating AR process is causal, noncausal or mixed. An empirical study on simulated and real data illustrates the potential usefulness of the results. |
Keywords: | Noncausal process, Stable process, Extreme clustering, Explosive bubble, Portmanteau test. |
JEL: | C13 C22 C52 C53 |
Date: | 2017–09 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:81345&r=ecm |
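A minimal sketch, not taken from the paper, of why such processes are locally explosive: a purely noncausal AR(1) with Cauchy (alpha-stable, alpha = 1) errors, simulated from its forward moving-average representation. The truncation lag M and the coefficient rho are illustrative choices:

```python
# Simulate a noncausal AR(1), X_t = rho * X_{t+1} + e_t, whose stationary
# solution is the forward sum X_t = sum_{j>=0} rho^j e_{t+j}.
import numpy as np

rng = np.random.default_rng(1)
T, M, rho = 500, 200, 0.8

eps = rng.standard_cauchy(T + M)        # heavy-tailed (Cauchy) errors
weights = rho ** np.arange(M)           # truncate the forward sum at M terms
X = np.array([weights @ eps[t:t + M] for t in range(T)])

print("largest |X_t|:", np.abs(X).max(), "at t =", np.abs(X).argmax())
```

A large future shock e_{t+k} appears in the simulated path as a run-up growing at rate 1/rho before date t+k, followed by a crash right after it: the bubble shape the abstract alludes to.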
By: | Bertille Antoine (Simon Fraser University); Prosper Dovonon (Concordia University) |
Abstract: | This paper is concerned with the estimation of parameters defined by general estimating equations in the form of a moment condition model. In this context, Kitamura, Otsu and Evdokimov (2013a) introduced the minimum Hellinger distance (HD) estimator (see the toy sketch after this entry), which is asymptotically semiparametrically efficient when the model assumption holds (correct specification) and achieves optimal minimax robustness under small deviations from the model (local misspecification). In this paper, we evaluate the performance of inference procedures of interest under two complementary types of misspecification, local and global. First, we show that HD is not robust to global misspecification, in the sense that HD may cease to be root-n convergent when the functions defining the moment conditions are unbounded. Second, in the spirit of Schennach (2007), we introduce the exponentially tilted Hellinger distance (ETHD) estimator by combining the Hellinger distance and the Kullback-Leibler information criterion. Our estimator shares the same desirable asymptotic properties as HD under correct specification and local misspecification, and remains well behaved under global misspecification. ETHD is therefore the first estimator that is efficient under correct specification and robust to both global and local misspecification. |
Keywords: | misspecified models; local misspecification; higher-order asymptotics; semiparametric efficiency |
Date: | 2017–09 |
URL: | http://d.repec.org/n?u=RePEc:sfu:sfudps:dp17-15&r=ecm |
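A toy sketch of the minimum Hellinger distance criterion the abstract builds on, for an over-identified moment condition model: estimating a mean theta under the extra (assumed) restriction that the variance equals one. The moment functions, the data and the nested optimization are illustrative assumptions; the paper's ETHD construction is not reproduced here:

```python
# Minimum HD estimation: profile over theta the smallest squared Hellinger
# distance between reweighted data and uniform empirical weights, subject
# to the moment conditions E[x - theta] = 0 and E[(x - theta)^2 - 1] = 0.
import numpy as np
from scipy.optimize import minimize, minimize_scalar

rng = np.random.default_rng(2)
x = rng.normal(loc=1.0, scale=1.0, size=40)
n = len(x)

def hellinger_profile(theta):
    """Min squared Hellinger distance to uniform weights s.t. moments hold."""
    cons = [
        {"type": "eq", "fun": lambda p: p.sum() - 1.0},
        {"type": "eq", "fun": lambda p: p @ (x - theta)},
        {"type": "eq", "fun": lambda p: p @ ((x - theta) ** 2 - 1.0)},
    ]
    res = minimize(
        lambda p: 0.5 * np.sum((np.sqrt(p) - np.sqrt(1.0 / n)) ** 2),
        x0=np.full(n, 1.0 / n),
        bounds=[(1e-10, 1.0)] * n,
        constraints=cons,
        method="SLSQP",
    )
    return res.fun

theta_hd = minimize_scalar(hellinger_profile, bounds=(0.5, 1.5), method="bounded").x
print("HD estimate of theta:", theta_hd, " sample mean:", x.mean())
```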
By: | Piotr Dybka; Michal Kowalczuk; Bartosz Olesinski; Marek Rozkrut; Andrzej Toroj |
Abstract: | Model-based econometric techniques for estimating the shadow economy have become increasingly popular, but a systematic approach to exploiting their complementarities has so far been missing. We review the dominant approaches in the literature, currency demand analysis (CDA) and the MIMIC model, and propose a novel hybrid procedure that addresses previous critiques of both, in particular the misspecification issues in CDA equations and the vague transformation of the latent variable obtained via the MIMIC model into interpretable levels and paths of the shadow economy. Our proposal is based on a new identification scheme for the MIMIC model, referred to as 'reverse standardization' (see the sketch after this entry). It supplies the MIMIC model with the panel-structured information on the latent variable's mean and variance obtained from the CDA estimates, treating this information as given in the restricted full information maximum likelihood function. This approach allows us to avoid some controversial steps, such as choosing an externally estimated reference point for benchmarking or adopting other ad hoc identifying assumptions. We estimate the shadow economy for up to 43 countries, with results ranging from 2.8% to 29.9% of GDP. Various versions of our models yield robust results as regards both changes in the level of the shadow economy over time and the relative positions of the analysed countries. We also find that the contribution of a (correctly specified) MIMIC model to the measurement of trends in the shadow economy is marginal compared with the contribution of the CDA model. |
Keywords: | Shadow economy, MIMIC, Currency Demand Approach, Restricted Full Information Maximum Likelihood |
JEL: | C10 C51 C59 E26 H26 O17 |
Date: | 2017–09 |
URL: | http://d.repec.org/n?u=RePEc:sgh:kaewps:2017030&r=ecm |
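A deliberately simplified sketch of the identification idea behind 'reverse standardization' as the abstract describes it: a MIMIC-type latent index is identified only up to location and scale, so it is pinned down using the mean and standard deviation implied by the CDA step. In the paper this information enters the restricted full information maximum likelihood function rather than a post-hoc rescaling, and all numbers below are made up:

```python
# Rescale an arbitrarily scaled latent index to the CDA-implied mean and
# standard deviation, turning it into interpretable levels (% of GDP).
import numpy as np

rng = np.random.default_rng(3)

latent_index = rng.standard_normal(25).cumsum()   # MIMIC latent variable, arbitrary units
cda_mean, cda_sd = 18.5, 3.2                      # hypothetical CDA-implied moments, % of GDP

z = (latent_index - latent_index.mean()) / latent_index.std()
shadow_economy = cda_mean + cda_sd * z            # interpretable level, % of GDP

print(shadow_economy.round(1))
```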
By: | Daouia, Abdelaati; Girard, Stéphane; Stupfler, Gilles |
Abstract: | The class of quantiles lies at the heart of extreme-value theory and is one of the basic tools in risk management. The alternative family of expectiles is based on squared rather than absolute error loss minimization; it has recently received growing attention in actuarial science, econometrics and statistical finance. Both quantiles and expectiles can be embedded in a more general class of M-quantiles by means of Lp optimization (see the sketch after this entry). These generalized Lp-quantiles steer an advantageous middle course between ordinary quantiles and expectiles, without sacrificing too many of their virtues, for 1 < p < 2. In this paper, we investigate their estimation from the perspective of extreme values in the class of heavy-tailed distributions. We construct estimators of the intermediate Lp-quantiles and establish their asymptotic normality in a dependence framework motivated by financial and actuarial applications, before extrapolating these estimates to the very far tails. We also investigate the potential of extreme Lp-quantiles as a tool for estimating the usual quantiles and expectiles themselves. We show the usefulness of extreme Lp-quantiles and elaborate on the choice of p through applications to simulated and real financial data. |
Keywords: | Asymptotic normality; Dependent observations; Expectiles; Extrapolation; Extreme values; Heavy tails; Lp optimization; Mixing; Quantiles; Tail risk |
Date: | 2017–09 |
URL: | http://d.repec.org/n?u=RePEc:tse:wpaper:32050&r=ecm |
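A minimal sketch of the sample Lp-quantile defined through the asymmetric Lp loss mentioned above: p = 1 recovers the ordinary quantile, p = 2 the expectile, and 1 < p < 2 interpolates between them. The heavy-tailed test sample and the optimizer bounds are illustrative assumptions; this is not the authors' extreme-value estimator:

```python
# Sample Lp-quantile at level tau: the minimizer of the asymmetric Lp loss
#   u -> mean( |tau - 1{x <= u}| * |x - u|^p ).
import numpy as np
from scipy.optimize import minimize_scalar

def lp_quantile(x, tau, p):
    """argmin_u  mean( |tau - 1{x <= u}| * |x - u|^p )."""
    loss = lambda u: np.mean(np.abs(tau - (x <= u)) * np.abs(x - u) ** p)
    return minimize_scalar(loss, bounds=(x.min(), x.max()), method="bounded").x

rng = np.random.default_rng(4)
x = rng.pareto(3.0, size=5000)           # heavy-tailed sample (tail index 3)

for p in (1.0, 1.5, 2.0):                # quantile, intermediate Lp, expectile
    print(f"tau = 0.95, p = {p}:", lp_quantile(x, 0.95, p))
```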