New Economics Papers on Econometrics
By: | Gloria Gonzalez-Rivera (Department of Economics, University of California Riverside); Yun Luo |
Abstract: | We propose a model for interval-valued time series (ITS), e.g. the collection of daily intervals of high/low stock returns over time, that specifies the conditional joint distribution of the upper and lower bounds of the interval as a mixture of truncated bivariate normal distributions. This specification guarantees that the natural order of the interval (upper bound not smaller than lower bound) is preserved. The model also captures the potential conditional heteroscedasticity and non-Gaussian features in ITS. The standard EM algorithm, when applied to the estimation of mixture models with truncated distributions, does not provide a closed-form solution in the M step. We propose a new EM algorithm that solves this problem. We establish the consistency of the maximum likelihood estimator. Monte Carlo simulations show that the new EM algorithm has good convergence properties. We apply the model to interval-valued IBM daily stock returns, where it exhibits superior performance over competing methods. |
Keywords: | interval-valued data, mixture transition model, EM algorithm, truncated normal distribution. |
JEL: | C01 C32 C34 |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:202005&r=all |
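As a minimal sketch of the distributional building block described above, the snippet below evaluates the density of a mixture of bivariate normals truncated to the region where the upper bound is at least the lower bound, the feature that preserves the interval's natural order. The mixture weights and parameters are placeholders, and this is not the authors' estimation code.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

def truncated_bvn_pdf(u, l, mu, cov):
    """Density of a bivariate normal for (upper, lower) truncated to u >= l,
    which enforces the interval's natural order."""
    if u < l:
        return 0.0
    mu, cov = np.asarray(mu), np.asarray(cov)
    # Truncation mass P(U - L >= 0): U - L is itself univariate normal
    d_sd = np.sqrt(cov[0, 0] + cov[1, 1] - 2.0 * cov[0, 1])
    mass = norm.cdf((mu[0] - mu[1]) / d_sd)
    return multivariate_normal(mu, cov).pdf([u, l]) / mass

def mixture_pdf(u, l, weights, mus, covs):
    """Finite mixture of truncated bivariate normals."""
    return sum(w * truncated_bvn_pdf(u, l, m, c)
               for w, m, c in zip(weights, mus, covs))

# Example: a two-component mixture evaluated at a valid interval (u, l)
print(mixture_pdf(0.8, -0.5,
                  weights=[0.6, 0.4],
                  mus=[[0.5, -0.5], [1.0, -1.0]],
                  covs=[np.eye(2), 0.5 * np.eye(2)]))
```

The truncation mass is a single normal CDF evaluation because the difference of the bounds is univariate normal under each component.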
By: | Can, S.U. (Tilburg University, School of Economics and Management); Einmahl, John (Tilburg University, School of Economics and Management); Laeven, R.J.A. (Tilburg University, School of Economics and Management) |
Abstract: | Consider a random sample from a continuous multivariate distribution function F with copula C. In order to test the null hypothesis that C belongs to a certain parametric family, we construct an empirical process on the unit hypercube that converges weakly to a standard Wiener process under the null hypothesis. This process can therefore serve as a ‘tests generator’ for asymptotically distribution-free goodness-of-fit testing of copula families. We also prove maximal sensitivity of this process to contiguous alternatives. Finally, we demonstrate through a Monte Carlo simulation study that our approach has excellent finite-sample performance, and we illustrate its applicability with a data analysis. |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:tiu:tiutis:211b2be9-b46e-41e2-9b95-18fa92cfda8c&r=all |
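For context on the object under test, the sketch below computes the empirical copula from normalized ranks (pseudo-observations). The paper's contribution, the transformed process that converges to a Wiener process, is built on top of such quantities and is not reproduced here.

```python
import numpy as np

def empirical_copula(x, u):
    """Empirical copula C_n(u) of an (n, d) sample x at a point u in [0, 1]^d,
    built from normalized ranks (pseudo-observations)."""
    n = x.shape[0]
    ranks = np.argsort(np.argsort(x, axis=0), axis=0) + 1
    pseudo = ranks / (n + 1.0)            # rank/(n+1) avoids the boundary
    return np.mean(np.all(pseudo <= np.asarray(u), axis=1))

# Example: dependent bivariate sample
rng = np.random.default_rng(0)
z = rng.multivariate_normal([0, 0], [[1, 0.7], [0.7, 1]], size=500)
print(empirical_copula(z, [0.5, 0.5]))   # > 0.25 under positive dependence
```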
By: | Carlos Lamarche; Thomas Parker |
Abstract: | Existing work on penalized quantile regression for longitudinal data has focused almost exclusively on point estimation. In this work, we investigate statistical inference. We first show that the pairs bootstrap, which samples cross-sectional units with replacement, does not approximate the limiting distribution of the penalized estimator well. We then propose a wild residual bootstrap procedure and show that it is asymptotically valid for approximating the distribution of the penalized estimator. The new method is easy to implement and uses weight distributions that are standard in the literature. Simulation studies are carried out to investigate the small-sample behavior of the proposed approach in comparison with existing procedures. Finally, we illustrate the new approach using U.S. Census data to estimate a high-dimensional model that includes more than eighty thousand parameters. |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.05127&r=all |
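A minimal sketch of the wild residual bootstrap idea for quantile regression, stripped of the paper's penalty and panel structure. The two-point weight law shown (mass at 2(1 − τ) with probability τ and at −2τ otherwise, so the weights have mean zero) is one standard choice in this literature; treat the details as illustrative.

```python
import numpy as np
import statsmodels.api as sm

def wild_qr_bootstrap(y, X, tau=0.5, B=500, seed=0):
    """Wild residual bootstrap for quantile regression (penalty and panel
    structure omitted). Weights follow a standard two-point law:
    P(w = 2(1 - tau)) = tau, P(w = -2*tau) = 1 - tau, so E[w] = 0."""
    rng = np.random.default_rng(seed)
    beta = np.asarray(sm.QuantReg(y, X).fit(q=tau).params)
    resid = y - X @ beta
    draws = np.empty((B, X.shape[1]))
    for b in range(B):
        w = np.where(rng.random(len(y)) < tau, 2 * (1 - tau), -2 * tau)
        y_star = X @ beta + w * np.abs(resid)     # bootstrap responses
        draws[b] = sm.QuantReg(y_star, X).fit(q=tau).params
    return beta, draws    # percentile intervals follow from 'draws'
```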
By: | José E. Figueroa-López; Bei Wu |
Abstract: | We first revisit the problem of kernel estimation of spot volatility in a general continuous Itô semimartingale model in the absence of microstructure noise, and prove a Central Limit Theorem with optimal convergence rate, extending Figueroa-López and Li (2020) by allowing for a general two-sided kernel function. Next, to handle the microstructure noise of ultra-high-frequency observations, we present a new type of pre-averaging/kernel estimator for spot volatility under the presence of additive microstructure noise. We prove Central Limit Theorems for the estimation error with an optimal rate and study the problems of optimal bandwidth and kernel selection. As in the case of a simple kernel estimator of spot volatility in the absence of microstructure noise, we show that the asymptotic variance of the pre-averaging/kernel estimator is minimal for exponential or Laplace kernels, hence justifying the need to work with unbounded kernels as proposed in this work. A feasible implementation of the proposed estimators with optimal bandwidth is also developed. Monte Carlo experiments confirm the superior performance of the devised method. |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.01865&r=all |
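A minimal sketch of the noise-free kernel estimator of spot variance, using the exponential (Laplace) kernel that the abstract singles out as variance-minimizing; the pre-averaging construction for noisy data is not shown.

```python
import numpy as np

def spot_vol(t, times, dX, h):
    """Kernel estimate of the spot variance at time t from log-return
    increments dX observed on the grid 'times', with bandwidth h and a
    two-sided exponential (Laplace) kernel. Noise-free case only."""
    K = 0.5 * np.exp(-np.abs((times - t) / h))   # Laplace kernel, integrates to 1
    return np.sum(K * dX**2) / h

# Quick check on simulated Brownian motion with unit spot variance
rng = np.random.default_rng(0)
n = 23_400
times = np.linspace(0.0, 1.0, n)
dX = rng.standard_normal(n) * np.sqrt(1.0 / n)
print(spot_vol(0.5, times, dX, h=0.05))          # ~1.0
```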
By: | Bonev, Petyo |
Abstract: | We study nonparametric identification of nonseparable duration models with unobserved heterogeneity. Our models are nonseparable in two ways. First, genuine duration dependence is allowed to depend on observed covariates. Second, observed and unobserved characteristics may interact in an arbitrary way. We develop novel identification strategies for a comprehensive range of typical duration model settings. In particular, we show identification in single-spell models with and without time-varying covariates, in multiple-spell models with shared frailty and lagged duration dependence, in single-spell and multiple-spell competing risks models, and in treatment effects models where treatment is assigned during the individual spell in the state of interest. |
Keywords: | Duration models, identification, unobserved treatment heterogeneity, nonseparable models, competing risks, treatment effect, job search, unemployment |
JEL: | C14 C41 J64 |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:usg:econwp:2020:05&r=all |
By: | Gianluca Benigno; Andrew Foerster; Christopher Otrok; Alessandro Rebucci |
Abstract: | We estimate a workhorse DSGE model with an occasionally binding borrowing constraint. First, we propose a new specification of the occasionally binding constraint, where the transition between the unconstrained and constrained states is a stochastic function of the leverage level and the constraint multiplier. This specification maps into an endogenous regime-switching model. Second, we develop a general perturbation method for the solution of such a model. Third, we estimate the model with Bayesian methods to fit Mexico's business cycle and financial crisis history since 1981. The estimated model fits the data well, identifying three crisis episodes of varying duration and intensity: the Debt Crisis of the early 1980s, the Peso Crisis of the mid-1990s, and the Global Financial Crisis of the late 2000s. The crisis episodes generated by the estimated model display sluggish and long-lasting build-up and stagnation phases driven by plausible combinations of shocks. Different sets of shocks explain different variables over the business cycle and across the three identified sudden-stop episodes. |
JEL: | C11 E30 F41 G01 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:26935&r=all |
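The first ingredient can be illustrated as follows: the probability of entering the constrained regime is modeled as a smooth, increasing function of leverage and the constraint multiplier. The logistic form and all parameter values below are assumptions for illustration, not the paper's estimated specification.

```python
import numpy as np

def p_constrained(leverage, multiplier, kappa=(20.0, 5.0), bar=(0.9, 0.0)):
    """Probability of transitioning into the constrained regime as a smooth
    increasing function of leverage and the constraint multiplier. The
    logistic form and every parameter value are illustrative assumptions."""
    z = kappa[0] * (leverage - bar[0]) + kappa[1] * (multiplier - bar[1])
    return 1.0 / (1.0 + np.exp(-z))
```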
By: | Jia Chen; Yongcheol Shin; Chaowen Zheng |
Abstract: | In this paper, we develop a unifying econometric framework for the analysis of heterogeneous panel data models that can account for both spatial dependence and unobserved common factors. To tackle the challenging endogeneity issues caused by both the spatial lag term and the correlation between regressors and factors, we propose to approximate the common factors by cross-section averages of the independent variables only, and to deal with the spatial endogeneity via instrumental variables. We develop the individual estimators as well as the Mean Group and Pooled estimators, and establish their consistency and asymptotic normality. Monte Carlo simulations confirm that the finite-sample performance of our proposed estimators is quite satisfactory. We demonstrate the usefulness of our approach with an application to a gravity model of bilateral trade flows for 91 pairs of 14 European Union (EU) countries, and find that trade flows between the UK and EU members would fall substantially following a hard Brexit. |
Keywords: | Cross Section Dependence, Heterogeneous Spatial Panel Data Model, Factor Model, Instrumental Variable Analysis |
JEL: | C13 C15 C23 |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:yor:yorken:20/03&r=all |
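A stylized, single-unit rendering of the estimation idea: control for cross-section averages of the regressors as factor proxies and instrument the spatial lag with spatially lagged regressors. The 2SLS sketch below compresses the paper's Mean Group and Pooled constructions into one illustrative regression; all variable names are placeholders.

```python
import numpy as np

def cce_iv_unit(y, X, Wy, X_bar, WX):
    """Stylized unit-level 2SLS: cross-section averages X_bar proxy for the
    common factors, and the endogenous spatial lag Wy is instrumented with
    spatially lagged regressors WX. Not the paper's exact estimators."""
    Z = np.column_stack([Wy, X, X_bar])       # endogenous spatial lag + regressors
    H = np.column_stack([WX, X, X_bar])       # instrument set
    P = H @ np.linalg.pinv(H.T @ H) @ H.T     # projection on the instrument space
    return np.linalg.solve(Z.T @ P @ Z, Z.T @ P @ y)
```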
By: | Philipp Bach; Sven Klaassen; Jannis Kueck; Martin Spindler |
Abstract: | We develop a method for uniformly valid confidence bands of a nonparametric component $f_1$ in the general additive model $Y=f_1(X_1)+\ldots + f_p(X_p) + \varepsilon$ in a high-dimensional setting. We employ sieve estimation and embed it in a high-dimensional Z-estimation framework, allowing us to construct uniformly valid confidence bands for the first component $f_1$. As usual in high-dimensional settings where the number of regressors $p$ may increase with the sample size, a sparsity assumption is critical for the analysis. We also run simulation studies which show that our proposed method gives reliable results for both estimation and coverage, even in small samples. Finally, we illustrate our procedure with an empirical application demonstrating the implementation and use of the proposed method in practice. |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.01623&r=all |
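A low-dimensional sketch of the sieve step for the component of interest: approximate $f_1$ by a polynomial basis and estimate by least squares alongside the remaining components. The paper's high-dimensional treatment adds sparsity-based selection over the controls and a Z-estimation step for the uniform bands, neither of which is reproduced here.

```python
import numpy as np

def sieve_f1(x1, controls, y, degree=5):
    """Polynomial-sieve estimate of f1 in an additive model, with the other
    components collapsed into 'controls'. Low-dimensional illustration only."""
    basis = np.column_stack([x1**k for k in range(1, degree + 1)])
    Z = np.column_stack([np.ones_like(x1), basis, controls])
    coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return basis @ coef[1:degree + 1], coef   # fitted f1 values, all coefficients
```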
By: | Shalva Mkhatrishvili (Macroeconomic Research Division, National Bank of Georgia); Douglas Laxton (NOVA School of Business and Economics, Saddle Point Research, The Better Policy Project); Davit Tutberidze (Macroeconomic Research Division, National Bank of Georgia); Tamta Sopromadze (Macroeconomic Research Division, National Bank of Georgia); Saba Metreveli (Macroeconomic Research Division, National Bank of Georgia); Lasha Arevadze (Macroeconomic Research Division, National Bank of Georgia); Tamar Mdivnishvili (Macroeconomic Research Division, National Bank of Georgia); Giorgi Tsutskiridze (Macroeconomic Research Division, National Bank of Georgia) |
Abstract: | There is increased acceptance that non-linear linkages are the major driver of the most pronounced phases of business and financial cycles. However, modelling these non-linear phenomena has been a challenge, since existing solution methods are either efficient but unable to accurately capture non-linear dynamics (e.g. linear methods), or accurate but quite resource-intensive (e.g. the stacked system or stochastic Extended Path approaches). This paper proposes two new solution approaches that aim to be sufficiently accurate at a lower cost. Moreover, one of these methods allows Kalman filtering of non-linear models in a genuinely non-linear way, which is important for making this class of models more policy-relevant. Impulse responses, simulations and Kalman filtering exercises show the advantages of the new approaches when applied to a simple, but strongly non-linear, monetary policy model. |
Keywords: | Non-linear dynamic models, Solution methods, Monetary policy |
JEL: | C60 C61 C63 E17 |
Date: | 2019–10 |
URL: | http://d.repec.org/n?u=RePEc:aez:wpaper:012019&r=all |
By: | Nassim Nicholas Taleb |
Abstract: | Empirical distributions have their in-sample maxima as natural censoring. We look at the "hidden tail", that is, the part of the distribution in excess of the maximum for a sample size of $n$. Using extreme value theory, we examine the properties of the hidden tail and calculate its moments of order $p$. The method is useful in showing how large a bias one can expect, for a given $n$, between the visible in-sample mean and the true statistical mean (or higher moments), which is considerable for $\alpha$ close to 1. Among other properties, we note that the "hidden" moment of order $0$, that is, the exceedance probability for power law distributions, follows an exponential distribution and has expectation $\frac{1}{n}$ regardless of the parametrization of the scale and tail index. |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.05894&r=all |
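The closing claim is straightforward to check by simulation: for a Pareto sample of size n, the exceedance probability of the in-sample maximum has expectation close to 1/n irrespective of the tail index. A quick Monte Carlo sketch (the value α = 1.5 is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, alpha = 1000, 20000, 1.5    # alpha is an arbitrary tail index
# Standard Pareto samples: survival function 1 - F(x) = x**(-alpha), x >= 1
m = (1.0 + rng.pareto(alpha, size=(reps, n))).max(axis=1)
p_exceed = m**(-alpha)               # probability mass hidden beyond the maximum
print(n * p_exceed.mean())           # ~1, i.e. expectation ~ 1/n for any alpha
print((n * p_exceed).std())          # ~1, consistent with a standard exponential
```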
By: | Régis Barnichon; Geert Mesters |
Abstract: | Model mis-specification remains a major concern in macroeconomics, and policy makers must often resort to heuristics to decide on policy actions, combining insights from multiple models and relying on judgment calls. Identifying the most appropriate, or optimal, policy in this manner can be challenging, however. In this work, we propose a statistic - the Optimal Policy Perturbation (OPP) - to detect "optimization failures" in the policy decision process. The OPP does not rely on any specific underlying economic model, and its computation only requires (i) forecasts for the policy objectives conditional on the policy choice, and (ii) the impulse responses of the policy objectives to shocks to the policy instruments. We illustrate the OPP in the context of US monetary policy decisions. In forty years, we detect only one period with major optimization failures: 2010-2012, when unconventional policy tools should have been used more intensively. |
Keywords: | macroeconomic stabilization, optimal policy, impulse responses, sufficient statistics, forecast targeting |
JEL: | C14 C32 E32 E52 |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:bge:wpaper:1171&r=all |
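Reading the OPP as a projection under a quadratic loss suggests the stylized computation below from the two stated ingredients: conditional forecasts of the objective gaps and impulse responses of the objectives to the instruments. This is an interpretive sketch, not necessarily the paper's exact statistic; the weighting matrix W encodes policy preferences.

```python
import numpy as np

def opp(gaps, R, W):
    """Stylized Optimal Policy Perturbation under a quadratic loss: the
    instrument adjustment that would minimize the weighted sum of squared
    objective gaps, given conditional forecasts 'gaps' and impulse
    responses R of the objectives to the instruments."""
    return -np.linalg.solve(R.T @ W @ R, R.T @ W @ gaps)

# A zero OPP signals no detectable optimization failure; a large one suggests
# the instruments could be perturbed to improve the forecasted objectives.
```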
By: | Daniel Czarnowske; Amrei Stammann |
Abstract: | In this article, we study the limiting behavior of Bai (2009)'s interactive fixed effects estimator in the presence of randomly missing data. In extensive simulation experiments, we show that the inferential theory derived by Bai (2009) and Moon and Weidner (2017) approximates the behavior of the estimator fairly well. However, we find that the fraction and pattern of randomly missing data affect the performance of the estimator. Additionally, we use the interactive fixed effects estimator to reassess the baseline analysis of Acemoglu et al. (2019). Allowing for a more general form of unobserved heterogeneity than the authors do, we confirm significant effects of democratization on growth. |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.03414&r=all |
By: | Daniel Borup; Bent Jesper Christensen; Nicolaj Nørgaard Mühlbach; Mikkel Slot Nielsen |
Abstract: | Random forest regression (RF) has become an extremely popular tool for analyzing high-dimensional data. Nonetheless, it has been argued that its benefits are lessened in sparse high-dimensional settings due to the presence of weak predictors, so that an initial dimension-reduction (targeting) step is required prior to estimation. We show theoretically that, in high-dimensional settings with limited signal, proper targeting is an important complement to RF's feature sampling, as it controls the probability of placing splits along strong predictors. This is supported by simulations with representative finite samples. Moreover, we quantify the immediate gain from targeting in terms of the increased strength of individual trees. Our conclusions are elaborated through a broad set of applications within macroeconomics and finance. These show that the inherent bias-variance trade-off implied by targeting, due to increased tree correlation, is balanced at a medium level, selecting the best 10-30% of commonly applied predictors. The applications confirm that improvements from the targeted RF over the ordinary RF can be significant, particularly in long-horizon forecasting, both in expansions and in recessions. |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.01411&r=all |
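A minimal sketch of the targeting-then-forest pipeline: pre-select the strongest fraction of predictors, here by absolute marginal correlation as a simple stand-in for the paper's targeting step, then fit the forest on the survivors. With keep=0.2 the kept share sits in the 10-30% range the applications point to.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def targeted_rf(X, y, keep=0.2, **rf_kwargs):
    """Targeted random forest sketch: rank predictors by absolute marginal
    correlation with y and fit the forest on the strongest fraction 'keep'."""
    Xs = (X - X.mean(0)) / X.std(0)
    ys = (y - y.mean()) / y.std()
    strength = np.abs(Xs.T @ ys) / len(y)          # |marginal correlations|
    idx = np.argsort(strength)[::-1][:max(1, int(keep * X.shape[1]))]
    forest = RandomForestRegressor(n_estimators=500, **rf_kwargs).fit(X[:, idx], y)
    return forest, idx
```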
By: | Giacomo Toscano; Maria Cristina Recchioni |
Abstract: | We derive a feasible criterion for the bias-optimal selection of the tuning parameters involved in estimating the integrated volatility of the spot volatility via the simple realized estimator of Barndorff-Nielsen and Veraart (2009). Our analytic results are obtained assuming that the spot volatility is a continuous mean-reverting process and that consecutive local windows for estimating the spot volatility are allowed to overlap in a finite-sample setting. Moreover, our analytic results support some optimal selections of tuning parameters prescribed in the literature on the basis of numerical evidence. Interestingly, it emerges that window overlapping is crucial for optimizing the finite-sample bias of volatility-of-volatility estimates. |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.04013&r=all |
By: | Giuseppe Brandi; T. Di Matteo |
Abstract: | Multilayer networks have proved suitable for extracting and representing dependency information in different complex systems. The construction of these networks is difficult, however, and is mostly done with a static approach that neglects time-delayed interdependencies. Tensors naturally represent multilayer networks, and in this paper we propose a new methodology based on Tucker tensor autoregression to build a multilayer network directly from data. This methodology captures within- and between-layer connections and makes use of a filtering procedure to extract relevant information and improve visualization. We show the application of this methodology to different stationary, fractionally differenced financial data. We argue that our results are useful for understanding the dependencies across three different aspects of financial risk, namely market risk, liquidity risk, and volatility risk. Indeed, we show how the resulting visualization is a useful tool for risk managers, depicting dependency asymmetries between different risk factors and accounting for delayed cross-dependencies. The constructed multilayer network shows a strong interconnection between the volume and price layers across all the stocks considered, while a lower number of interconnections between the uncertainty measures is identified. |
Date: | 2020–04 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.05367&r=all |
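The autoregressive building block can be illustrated in the matrix (single-lag, two-mode) special case, where the Tucker-style product reduces to the bilinear map A X B'. The dimensions, coefficients, and noise scale below are placeholders; the paper's estimation and filtering procedures are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, T = 4, 3, 200                       # e.g. stocks x risk layers over time
A = 0.5 * np.eye(I) + 0.05 * rng.standard_normal((I, I))   # mode-1 coefficients
B = 0.5 * np.eye(J) + 0.05 * rng.standard_normal((J, J))   # mode-2 coefficients
X = np.zeros((T, I, J))
for t in range(1, T):
    # Bilinear (Tucker-style) autoregression: X_t = A X_{t-1} B' + noise
    X[t] = A @ X[t - 1] @ B.T + 0.1 * rng.standard_normal((I, J))
```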
By: | Ye-Sheen Lim; Denise Gorse |
Abstract: | In this paper we propose a deep recurrent architecture for the probabilistic modelling of high-frequency market prices, important for the risk management of automated trading systems. Our proposed architecture incorporates probabilistic mixture models into deep recurrent neural networks. The resulting deep mixture models simultaneously address several practical challenges important in the development of automated high-frequency trading strategies that were previously neglected in the literature: 1) probabilistic forecasting of the price movements; 2) single-objective prediction of both the direction and size of the price movements. We train our models on high-frequency Bitcoin market data and evaluate them against benchmark models obtained from the literature. We show that our model outperforms the benchmark models in both a metric-based test and in a simulated trading scenario. |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.01498&r=all |
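A minimal PyTorch sketch of the architecture class described: a recurrent encoder summarizing the price history followed by a mixture-density head, trained by the mixture negative log-likelihood, so that direction and size of the next move are captured by a single predictive distribution. Layer sizes, the Gaussian mixture choice, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DeepMixtureModel(nn.Module):
    """LSTM encoder plus Gaussian mixture-density head: one predictive
    distribution covers both sign and size of the next price move."""
    def __init__(self, n_features, hidden=64, n_components=5):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3 * n_components)   # weights, means, scales

    def forward(self, x):
        h, _ = self.rnn(x)                                # (batch, seq, hidden)
        logits, mu, log_sigma = self.head(h[:, -1]).chunk(3, dim=-1)
        return logits, mu, log_sigma

def nll(logits, mu, log_sigma, y):
    """Negative log-likelihood of the mixture (the training objective)."""
    log_w = torch.log_softmax(logits, dim=-1)
    comp = torch.distributions.Normal(mu, log_sigma.exp())
    return -torch.logsumexp(log_w + comp.log_prob(y.unsqueeze(-1)), dim=-1).mean()
```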
By: | Magne Mogstad; Joseph P. Romano; Azeem Shaikh; Daniel Wilhelm |
Abstract: | It is often desired to rank different populations according to the value of some feature of each population. For example, it may be desired to rank neighborhoods according to some measure of intergenerational mobility or countries according to some measure of academic achievement. These rankings are invariably computed using estimates rather than the true values of these features. As a result, there may be considerable uncertainty concerning the rank of each population. In this paper, we consider the problem of accounting for such uncertainty by constructing confidence sets for the rank of each population. We consider both the problem of constructing marginal confidence sets for the rank of a particular population as well as simultaneous confidence sets for the ranks of all populations. We show how to construct such confidence sets under weak assumptions. An important feature of all of our constructions is that they remain computationally feasible even when the number of populations is very large. We apply our theoretical results to re-examine the rankings of both neighborhoods in the United States in terms of intergenerational mobility and developed countries in terms of academic achievement. The conclusions about which countries do best and worst at reading, math, and science are fairly robust to accounting for uncertainty. By comparison, several celebrated findings about intergenerational mobility in the United States are not robust to taking uncertainty into account. |
JEL: | C0 I0 J0 |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:26883&r=all |
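A deliberately naive sketch of a marginal confidence set for one population's rank, based on Bonferroni-corrected pairwise comparisons. The paper's construction is sharper and also delivers simultaneous sets, but the sketch shows how estimation uncertainty widens rank bounds.

```python
import numpy as np
from scipy.stats import norm

def rank_ci(est, se, i, alpha=0.05):
    """Naive marginal confidence set for the rank of population i
    (rank 1 = largest value), via Bonferroni-corrected pairwise tests."""
    est, se = np.asarray(est), np.asarray(se)
    others = [j for j in range(len(est)) if j != i]
    z = norm.ppf(1 - alpha / (2 * len(others)))       # Bonferroni critical value
    d = est[i] - est[others]
    s = np.sqrt(se[i]**2 + se[others]**2)
    n_above = np.sum(d + z * s < 0)    # populations significantly above i
    n_below = np.sum(d - z * s > 0)    # populations significantly below i
    return 1 + n_above, len(est) - n_below
```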
By: | Amir Mosavi; Pedram Ghamisi; Yaser Faghan; Puhong Duan |
Abstract: | The popularity of deep reinforcement learning (DRL) methods in economics has increased exponentially. By combining a wide range of capabilities from reinforcement learning (RL) and deep learning (DL), DRL offers vast opportunities for handling sophisticated, dynamic business environments. DRL is characterized by scalability, with the potential to be applied to high-dimensional problems in conjunction with noisy and nonlinear patterns of economic data. In this work, we first provide a brief review of DL, RL, and deep RL methods in diverse applications in economics, offering an in-depth insight into the state of the art. Furthermore, the architecture of DRL applied to economic applications is investigated in order to highlight complexity, robustness, accuracy, performance, computational tasks, risk constraints, and profitability. The survey results indicate that DRL can provide better performance and higher accuracy than traditional algorithms on real economic problems in the presence of risk parameters and ever-increasing uncertainties. |
Date: | 2020–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2004.01509&r=all |