on Econometric Time Series |
By: | Gabriele Mingoli (Vrije Universiteit Amsterdam and Tinbergen Institute) |
Abstract: | This paper introduces a novel dynamic factor model designed to capture common locally explosive episodes, also known as common bubbles, within large-dimensional, potentially non-stationary time series. The model leverages a lower-dimensional set of factors exhibiting locally explosive behavior to identify common extreme events. Modeling these explosive behaviors makes it possible to predict systemic risk and to test for the emergence of common bubbles. The dynamics of the explosive factors are modeled using mixed causal-noncausal models, a class of heavy-tailed autoregressive models that allow processes to depend on their future values through a lead polynomial. The paper establishes the asymptotic properties of the model and provides sufficient conditions for consistency of the estimated factors and parameters. A Monte Carlo simulation confirms the good finite sample properties of the estimator, while an empirical analysis highlights its practical effectiveness. Specifically, the model accurately identifies the common explosive component in monthly stock prices of NASDAQ-listed energy companies during the 2008 financial crisis and predicts its evolution, significantly outperforming alternative forecasting methods. (An illustrative code sketch follows this entry.) |
JEL: | C22 C38 C53 |
Date: | 2024–11–29 |
URL: | https://d.repec.org/n?u=RePEc:tin:wpaper:20240072 |
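The following toy sketch (not the paper's estimator) illustrates the kind of locally explosive common factor the abstract describes: a purely noncausal AR(1) with heavy-tailed innovations, simulated by backward recursion, driving a small panel. All names and parameter values are hypothetical.
```python
# Illustrative sketch: simulate a purely noncausal AR(1) "bubble" factor with Cauchy
# innovations and a panel of series that loads on it. Not the paper's estimator.
import numpy as np

rng = np.random.default_rng(0)
T, N, psi = 500, 50, 0.8           # sample size, panel width, lead coefficient

# Noncausal AR(1): x_t = psi * x_{t+1} + eps_t, built by backward recursion.
eps = rng.standard_cauchy(T)       # heavy tails generate locally explosive spells
f = np.zeros(T)
for t in range(T - 2, -1, -1):
    f[t] = psi * f[t + 1] + eps[t]

lam = rng.normal(1.0, 0.3, N)      # factor loadings
panel = np.outer(f, lam) + rng.normal(0, 1, (T, N))   # observed series

# A naive check of commonality: the first principal component should track f.
pc1 = np.linalg.svd(panel - panel.mean(0), full_matrices=False)[0][:, 0]
print(abs(np.corrcoef(pc1, f)[0, 1]))
```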
By: | Ranieri Dugo (DEF, University of Rome "Tor Vergata"); Giacomo Giorgio (Dept of Mathematics, University of Rome "Tor Vergata"); Paolo Pigato (DEF, University of Rome "Tor Vergata") |
Abstract: | Motivated by empirical evidence from the joint behavior of realized volatility time series, we propose to model the joint dynamics of log-volatilities using a multivariate fractional Ornstein-Uhlenbeck process. This model is a multivariate version of the Rough Fractional Stochastic Volatility model proposed in Gatheral, Jaisson, and Rosenbaum, Quant. Finance, 2018. It allows for different Hurst exponents in the different marginal components and non-trivial interdependencies. We discuss the main features of the model and propose an estimator that jointly identifies its parameters. We derive the asymptotic theory of the estimator and perform a simulation study that confirms the asymptotic theory in finite samples. We carry out an extensive empirical investigation on all realized volatility time series in the Oxford-Man realized library, covering their entire span of about two decades. Our analysis shows that these time series are strongly correlated and can exhibit asymmetries in their cross-covariance structure, accurately captured by our model. These asymmetries lead to spillover effects that we analyse theoretically within the model and then empirically using our estimates. Moreover, in accordance with the existing literature, we observe behaviors close to non-stationarity and rough trajectories. (An illustrative code sketch follows this entry.) |
Keywords: | stochastic volatility, rough volatility, realized volatility, multivariate time series, volatility spillovers, mean reversion. |
JEL: | C32 C51 C58 G17 |
Date: | 2024–12–20 |
URL: | https://d.repec.org/n?u=RePEc:rtv:ceisrp:589 |
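A minimal sketch of a single fractional Ornstein-Uhlenbeck log-volatility path, assuming exact fractional Gaussian noise increments built from a Cholesky factor. The multivariate structure and the joint estimator of the paper are not reproduced, and the parameter values (H, kappa, mu, nu) are illustrative only.
```python
# Toy sketch of a univariate fractional Ornstein-Uhlenbeck path for log-volatility.
import numpy as np

def fgn(n, H, dt, rng):
    """Fractional Gaussian noise: increments of fBM with Hurst H on an n-point grid."""
    k = np.arange(n)
    gamma = 0.5 * dt**(2 * H) * (np.abs(k + 1)**(2 * H)
                                 - 2 * np.abs(k)**(2 * H)
                                 + np.abs(k - 1)**(2 * H))
    cov = gamma[np.abs(k[:, None] - k[None, :])]     # Toeplitz covariance matrix
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

rng = np.random.default_rng(1)
n, dt, H, kappa, mu, nu = 1000, 1 / 252, 0.1, 1.0, -3.0, 1.5
dB = fgn(n, H, dt, rng)

x = np.empty(n + 1); x[0] = mu                       # log-volatility path
for i in range(n):                                   # Euler step for dX = kappa(mu - X)dt + nu dB_H
    x[i + 1] = x[i] + kappa * (mu - x[i]) * dt + nu * dB[i]
```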
By: | Tae-Hwy Lee (Department of Economics, University of California Riverside); Ekaterina Seregina (Colby College) |
Abstract: | In this paper we develop a novel method of combining many forecasts based on the Graphical LASSO. We represent forecast errors from different forecasters as a network of interacting entities and generalize network inference to the presence of a common factor structure and structural breaks. First, we note that forecasters often use common information and hence make common errors, so that the forecast errors exhibit a common factor structure. We separate common forecast errors from the idiosyncratic errors and exploit the sparsity of the precision matrix of the latter. Second, since the network of experts changes over time in response to unstable environments, we propose the Regime-Dependent Factor Graphical LASSO (RD-FGL), which allows the factor loadings and the idiosyncratic precision matrix to be regime-dependent. Empirical applications to forecasting macroeconomic series using data from the European Central Bank’s Survey of Professional Forecasters and the Federal Reserve Economic Data monthly database demonstrate the superior performance of combined forecasts based on RD-FGL. (An illustrative code sketch follows this entry.) |
Keywords: | Common Forecast Errors; Regime Dependent Forecast Combination; Sparse Precision Matrix of Idiosyncratic Errors; Structural Breaks |
JEL: | C13 C38 C55 |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:ucr:wpaper:202413 |
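A hedged sketch of the precision-weighted combination idea within a single regime: strip a common component from simulated forecast errors by PCA, estimate a sparse precision matrix of the idiosyncratic part with the Graphical LASSO, and form combination weights proportional to the precision matrix times a vector of ones. This simplifies RD-FGL (no regime switching) and uses made-up data.
```python
# Simplified factor graphical LASSO forecast combination (single regime, toy data).
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(2)
T, N = 200, 10
common = rng.normal(size=(T, 1))                      # common forecast error
errors = common @ rng.normal(1, 0.2, (1, N)) + rng.normal(0, 1, (T, N))

# Remove the leading principal component (the common-error component).
u, s, vt = np.linalg.svd(errors - errors.mean(0), full_matrices=False)
idio = errors - u[:, :1] * s[0] @ vt[:1]

theta = GraphicalLasso(alpha=0.1).fit(idio).precision_   # sparse idiosyncratic precision
w = theta.sum(axis=1) / theta.sum()                       # combination weights, sum to one
print(np.round(w, 3))
```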
By: | Lars-H. R. Siemers (University of Siegen, Department of Economics) |
Abstract: | James Hamilton cast doubt on the quality of HP filter estimates and proposed an alternative regression approach to decompose time series into trend and cycle (the H filter). We investigate the H filter in detail and compare it to the HP filter, applying both to German GDP time series. We find that in times of large shocks the regular H filter produces unreliable trends. Its Quast-Wolters modification (QWH filter), in contrast, does not suffer from this issue. Checking expert-benchmark congruency, we find that this modification outperforms all parameter constellations of the standard H filter. With an adequate benchmark-specific choice of the smoothing factor, in turn, the HP filter outperforms the H filter. The H filter, however, outperforms the HP filter with regard to correlation with expert-benchmark recession dating, and it uniquely outperforms the HP filter in capturing the gap-inflation link, an issue especially important for central banks. Overall, our results suggest using the QWH filter among the H filter options, and a smoothing factor of 38 for the HP filter. (An illustrative code sketch follows this entry.) |
Keywords: | Hamilton filter, HP filter, expert-benchmark congruency, gap-inflation link, spectral analysis. |
JEL: | C18 C22 E31 E32 H60 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:mar:magkse:202421 |
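A minimal sketch of the two decompositions compared in the paper: the Hamilton regression filter (regress y_{t+h} on a constant and the p most recent lags; the residual is the cycle) and the HP filter. The data below are simulated, not the German GDP series used in the paper.
```python
# Hamilton regression filter vs. HP filter on a fake quarterly log-GDP series.
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(3)
y = np.cumsum(0.005 + 0.01 * rng.standard_normal(200))   # fake log-GDP, quarterly

# Hamilton filter with h = 8, p = 4 (his benchmark choice for quarterly data).
h, p = 8, 4
X = sm.add_constant(np.column_stack([y[p - 1 - i:len(y) - h - i] for i in range(p)]))
Y = y[p - 1 + h:]
hamilton_cycle = Y - X @ sm.OLS(Y, X).fit().params

# HP filter; 1600 is the textbook quarterly default, while the paper's preferred
# benchmark-specific smoothing factor is 38.
hp_cycle, hp_trend = hpfilter(y, lamb=1600)
```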
By: | Donggyu Kim (Department of Economics, University of California Riverside) |
Abstract: | In this paper, we develop a novel high-dimensional time-varying coefficient estimation method, based on high-dimensional Itô diffusion processes. To account for high-dimensional time-varying coefficients, we first estimate local (or instantaneous) coefficients using a time-localized Dantzig selection scheme under a sparsity condition, which yields biased local coefficient estimators due to the regularization. To handle the bias, we propose a debiasing scheme that provides well-performing unbiased local coefficient estimators. With the unbiased local coefficient estimators, we estimate the integrated coefficient, and to further account for the sparsity of the coefficient process, we apply thresholding schemes. We call this Thresholding dEbiased Dantzig (TED). We establish asymptotic properties of the proposed TED estimator. In the empirical analysis, we apply the TED procedure to the analysis of high-dimensional factor models using high-frequency data. (An illustrative code sketch follows this entry.) |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:ucr:wpaper:202416 |
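A hedged sketch of one building block named in the abstract, the Dantzig selector (minimize the l1-norm of the coefficients subject to a sup-norm bound on the score), solved as a linear program. The localization, debiasing, and thresholding steps of TED are not reproduced here, and the data are simulated.
```python
# Dantzig selector: min ||b||_1  s.t.  ||X'(y - Xb)/n||_inf <= lam, via linprog.
import numpy as np
from scipy.optimize import linprog

def dantzig(X, y, lam):
    n, p = X.shape
    G = X.T @ X / n
    g = X.T @ y / n
    # Split b = b_pos - b_neg with b_pos, b_neg >= 0 and bound |g - G(b_pos - b_neg)| <= lam.
    A = np.vstack([np.hstack([G, -G]), np.hstack([-G, G])])
    b_ub = np.concatenate([lam + g, lam - g])
    res = linprog(c=np.ones(2 * p), A_ub=A, b_ub=b_ub, bounds=(0, None), method="highs")
    z = res.x
    return z[:p] - z[p:]

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 20))
beta = np.zeros(20); beta[:3] = [1.0, -0.5, 0.8]               # sparse truth
y = X @ beta + 0.1 * rng.standard_normal(200)
print(np.round(dantzig(X, y, lam=0.1), 2))
```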
By: | Tae-Hwy Lee (Department of Economics, University of California Riverside); Tao Wang (University of Victoria) |
Abstract: | In this paper, we employ a penalized moment selection procedure to identify valid and relevant moments for estimating and testing forecast rationality within the flexible loss framework proposed by Elliott et al. (2005). We motivate the selection of moments in a high-dimensional setting, outlining the fundamental mechanism of the penalized moment selection procedure and demonstrating its implementation in the context of forecast rationality, particularly in the presence of potentially invalid moment conditions. Selection consistency and asymptotic normality are established under conditions specifically tailored to economic forecasting. Through a series of Monte Carlo simulations, we evaluate the finite sample performance of penalized moment estimation in utilizing available instrument information effectively within both estimation and testing procedures. Additionally, we present an empirical analysis using data from the Survey of Professional Forecasters issued by the Federal Reserve Bank of Philadelphia to illustrate the practical utility of the suggested methodology. The results indicate that the proposed post-selection estimator of the forecaster’s attitude performs comparably to the oracle estimator by efficiently incorporating available information. The power of rationality and symmetry tests leveraging penalized moment estimation is substantially enhanced by minimizing the impact of uninformative instruments. For practitioners assessing the rationality of externally generated forecasts, such as those in the Greenbook, the proposed penalized moment selection procedure could offer a robust approach to achieving more efficient estimation outcomes. (An illustrative code sketch follows this entry.) |
Keywords: | Forecast rationality; Moment selection; Penalized estimation; Relevance; Validity |
JEL: | C36 C53 E17 |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:ucr:wpaper:202412 |
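A hedged sketch of the object being estimated: under lin-lin (p = 1) loss in the Elliott et al. (2005) family, rationality implies E[v_t(1{e_t < 0} - alpha)] = 0, and the asymmetry parameter alpha has a closed-form GMM estimate given a set of instruments v_t. The paper's contribution, penalized selection of valid and relevant moments, is not reproduced here; data and instruments are simulated.
```python
# GMM estimate of the asymmetry parameter under lin-lin loss (Elliott et al., 2005 family).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(13)
T, alpha_true = 400, 0.7
# Under lin-lin loss, a rational forecaster sets P(e_t < 0) = alpha.
e = rng.standard_normal(T) - norm.ppf(alpha_true)      # forecast errors with that property
v = np.column_stack([np.ones(T), rng.standard_normal((T, 2))])   # instruments

ind = (e < 0).astype(float)
a_bar = v.T @ ind / T                                  # mean of v_t * 1{e_t < 0}
d = v.mean(0)                                          # mean of v_t
W = np.linalg.inv(v.T @ v / T)                         # first-step weight matrix
alpha_hat = (d @ W @ a_bar) / (d @ W @ d)              # closed-form GMM minimizer
print(round(alpha_hat, 3))
```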
By: | Sung Hoon Choi; Donggyu Kim (Department of Economics, University of California Riverside) |
Abstract: | In this paper, we introduce a novel method for predicting intraday instantaneous volatility based on Itô semimartingale models using high-frequency financial data. Several studies have highlighted stylized volatility time series features, such as interday auto-regressive dynamics and the intraday U-shaped pattern. To accommodate these volatility features, we propose an interday-by-intraday instantaneous volatility matrix process that can be decomposed into low-rank conditional expected instantaneous volatility and noise matrices. To predict the low-rank conditional expected instantaneous volatility matrix, we propose the Two-sIde Projected-PCA (TIP-PCA) procedure. We establish asymptotic properties of the proposed estimators and conduct a simulation study to assess the finite sample performance of the proposed prediction method. Finally, we apply the TIP-PCA method to an out-of-sample instantaneous volatility vector prediction study using high-frequency data from the S&P 500 index and 11 sector index funds. (An illustrative code sketch follows this entry.) |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:ucr:wpaper:202423 |
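An illustrative sketch of the low-rank idea behind the interday-by-intraday volatility matrix: rows are days, columns are intraday bins, and a truncated SVD separates the persistent U-shaped component from noise. The forecasting step below is a simple AR(1) fit to the daily scores, which only stands in for the paper's TIP-PCA procedure; all data are simulated.
```python
# Low-rank decomposition of a (days x intraday-bins) spot-volatility matrix, toy data.
import numpy as np

rng = np.random.default_rng(5)
D, M, k = 250, 78, 1                            # days, 5-minute bins, rank
u_shape = 0.1 + 0.2 * np.linspace(-1, 1, M)**2  # intraday U-shaped pattern
level = np.empty(D); level[0] = 1.0
for d in range(1, D):                           # interday AR(1) dynamics of the level
    level[d] = 0.2 + 0.8 * level[d - 1] + 0.05 * rng.standard_normal()
vol = np.outer(level, u_shape) + 0.02 * rng.standard_normal((D, M))

U, s, Vt = np.linalg.svd(vol, full_matrices=False)
scores = U[:, :k] * s[:k]                       # daily scores of the low-rank part
phi = np.polyfit(scores[:-1, 0], scores[1:, 0], 1)   # AR(1) fit for the leading score
next_score = np.polyval(phi, scores[-1, 0])
next_day_curve = next_score * Vt[0]             # predicted intraday volatility curve
```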
By: | Doko Tchatoka, Firmin; Wang, Wenjie |
Abstract: | Pretesting for exogeneity has become routine in many empirical applications involving instrumental variables (IVs) to decide whether the ordinary least squares (OLS) or the IV-based method is appropriate. Guggenberger (2010) shows that the second-stage t-test – based on the outcome of a Durbin-Wu-Hausman type pretest for exogeneity in the first stage – has extreme size distortion with asymptotic size equal to 1, even when the IVs are strong. In this paper, we propose a novel two-stage test procedure that switches between the OLS-based statistic and the weak-IV-robust statistic. Furthermore, we develop a size-corrected wild bootstrap approach, which combines certain wild bootstrap critical values with an appropriate size-correction method. We establish uniform validity of this procedure under conditional heteroskedasticity in the sense that the resulting tests achieve correct asymptotic size regardless of whether identification is strong or weak. Monte Carlo simulations confirm our theoretical findings. In particular, our proposed method has remarkable power gains over the standard weak-identification-robust test. (An illustrative code sketch follows this entry.) |
Keywords: | DWH Pretest; Shrinkage; Weak Instruments; Asymptotic Size; Wild Bootstrap; Bonferroni-based Size Correction. |
JEL: | C26 |
Date: | 2024–12–20 |
URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:123060 |
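A minimal sketch of the regression-based Durbin-Wu-Hausman exogeneity pretest that the abstract refers to, not the proposed size-corrected bootstrap procedure: include the first-stage residual in the structural equation and t-test its coefficient. The data-generating process below is made up for illustration.
```python
# Regression-based DWH exogeneity pretest on simulated data with an endogenous regressor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500
z = rng.standard_normal((n, 2))                     # instruments
u = rng.standard_normal(n)
x = z @ np.array([1.0, 0.5]) + 0.6 * u + rng.standard_normal(n)   # endogenous regressor
y = 1.0 + 2.0 * x + u

v_hat = x - sm.OLS(x, sm.add_constant(z)).fit().fittedvalues      # first-stage residual
aug = sm.OLS(y, sm.add_constant(np.column_stack([x, v_hat]))).fit()
print("DWH t-stat on residual:", aug.tvalues[-1])   # large |t| -> reject exogeneity
```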
By: | Donggyu Kim (Department of Economics, University of California Riverside); Minseok Shin |
Abstract: | In this paper, we develop a novel high-dimensional coefficient estimation procedure based on high-frequency data. Unlike usual high-dimensional regression procedures such as the LASSO, we additionally handle the heavy-tailedness of high-frequency observations as well as time variation of the coefficient processes. Specifically, we employ the Huber loss and a truncation scheme to handle heavy-tailed observations, while ℓ1-regularization is adopted to overcome the curse of dimensionality. To account for the time-varying coefficients, we estimate local coefficients, which are biased due to the ℓ1-regularization. Thus, when estimating integrated coefficients, we propose a debiasing scheme to enjoy the law of large numbers property and employ a thresholding scheme to further accommodate the sparsity of the coefficients. We call this the Robust thrEsholding Debiased LASSO (RED-LASSO) estimator. We show that the RED-LASSO estimator can achieve a near-optimal convergence rate. In the empirical study, we apply the RED-LASSO procedure to high-dimensional integrated coefficient estimation using high-frequency trading data. (An illustrative code sketch follows this entry.) |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:ucr:wpaper:202417 |
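A hedged sketch of the loss the abstract combines: Huber loss for heavy-tailed noise plus an l1 penalty for dimensionality, fitted here by a plain proximal-gradient (ISTA) loop. The localization, debiasing, and thresholding steps are omitted, and the tuning constants and step size are illustrative.
```python
# Huber loss + l1 penalty via proximal gradient (ISTA) on simulated heavy-tailed data.
import numpy as np

def huber_grad(r, c):
    return np.where(np.abs(r) <= c, r, c * np.sign(r))

def huber_lasso(X, y, lam, c=1.345, iters=2000):
    n, p = X.shape
    step = n / np.linalg.norm(X, 2)**2                 # 1 / Lipschitz constant of the smooth part
    b = np.zeros(p)
    for _ in range(iters):
        grad = -X.T @ huber_grad(y - X @ b, c) / n
        b = b - step * grad
        b = np.sign(b) * np.maximum(np.abs(b) - step * lam, 0.0)   # soft-thresholding
    return b

rng = np.random.default_rng(7)
X = rng.standard_normal((300, 40))
beta = np.zeros(40); beta[:3] = [1.0, -0.7, 0.5]
y = X @ beta + rng.standard_t(df=2, size=300)          # heavy-tailed errors
print(np.round(huber_lasso(X, y, lam=0.05), 2)[:6])
```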
By: | Donggyu Kim (Department of Economics, University of California Riverside); Minseok Shin |
Abstract: | In this paper, we propose a novel high-dimensional time-varying coefficient estimator for noisy high-frequency observations with a factor structure. In high-frequency finance, we often observe that noises dominate the signal of underlying true processes and that covariates exhibit a factor structure due to their strong dependence. Thus, we cannot apply usual regression procedures to analyze high-frequency observations. To handle the noises, we first employ a smoothing method for the observed dependent and covariate processes. Then, to handle the strong dependence of the covariate processes, we apply Principal Component Analysis (PCA) and transform the highly correlated covariate structure into a weakly correlated structure. However, the variables from PCA still contain non-negligible noises. To manage these non-negligible noises and the high dimensionality, we propose a nonconvex penalized regression method for each local coefficient. This method produces consistent but biased local coefficient estimators. To estimate the integrated coefficients, we propose a debiasing scheme and obtain a debiased integrated coefficient estimator using debiased local coefficient estimators. Then, to further account for the sparsity structure of the coefficients, we apply a thresholding scheme to the debiased integrated coefficient estimator. We call this scheme the Factor Adjusted Thresholded dEbiased Nonconvex LASSO (FATEN-LASSO) estimator. Furthermore, this paper establishes the concentration properties of the FATEN-LASSO estimator and discusses a nonconvex optimization algorithm. (An illustrative code sketch follows this entry.) |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:ucr:wpaper:202418 |
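A hedged sketch of the factor-adjustment step: decompose highly correlated covariates into leading principal components plus a weakly correlated remainder, then run a penalized regression on both parts. The LASSO below stands in for the paper's nonconvex penalty, and the smoothing of noisy high-frequency observations as well as the debiasing and thresholding steps are omitted; all data are simulated.
```python
# Factor-adjusted penalized regression on simulated strongly dependent covariates.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(12)
T, p, k = 400, 50, 2
F = rng.standard_normal((T, k))                         # latent factors
B = rng.normal(0, 1, (p, k))
X = F @ B.T + rng.standard_normal((T, p))               # strongly dependent covariates
beta = np.zeros(p); beta[:3] = [1.0, -0.5, 0.7]
y = X @ beta + rng.standard_normal(T)

# PCA split: Xc = F_hat @ loadings + U, with U a weakly correlated remainder.
Xc = X - X.mean(0)
U_svd, s, Vt = np.linalg.svd(Xc, full_matrices=False)
F_hat = U_svd[:, :k] * s[:k]
U = Xc - F_hat @ Vt[:k]

fit = Lasso(alpha=0.05).fit(np.column_stack([F_hat, U]), y)
print(np.round(fit.coef_[k:k + 6], 2))                  # coefficients on the remainder
```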
By: | Jianqing Fan; Donggyu Kim (Department of Economics, University of California Riverside); Minseok Shin; Yazhen Wang |
Abstract: | This paper introduces a novel Itô diffusion process for both factor and idiosyncratic volatilities whose eigenvalues follow the vector auto-regressive (VAR) model. We call it the factor and idiosyncratic VAR-Itô (FIVAR-Itô) model. The FIVAR-Itô model captures the dynamics of the factor and idiosyncratic volatilities and involves many parameters. In addition, empirical studies have shown that financial returns often exhibit heavy tails. To address these two issues simultaneously, we propose a penalized optimization procedure with a truncation scheme for parameter estimation. We apply the proposed parameter estimation procedure to predicting large volatility matrices and investigate its asymptotic properties. Using high-frequency trading data, the proposed method is applied to large volatility matrix prediction and minimum variance portfolio allocation. (An illustrative code sketch follows this entry.) |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:ucr:wpaper:202415 |
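A toy sketch of the dynamic the FIVAR-Itô model assigns to volatility: extract the leading eigenvalues of a sequence of daily (simulated) volatility matrices and fit a VAR to them. This omits the factor/idiosyncratic split, the penalized estimation, and the truncation for heavy tails described in the abstract.
```python
# VAR on the leading eigenvalues of simulated daily volatility matrices.
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(8)
D, p, k = 300, 10, 3                                   # days, assets, eigenvalues kept
eigs = np.empty((D, k))
state = np.ones(k)
for d in range(D):
    state = 0.1 + 0.85 * state + 0.05 * rng.standard_normal(k)   # latent persistence
    A = rng.standard_normal((p, p)) * 0.05
    cov = A @ A.T + np.diag(np.concatenate([state, np.full(p - k, 0.1)]))
    eigs[d] = np.sort(np.linalg.eigvalsh(cov))[::-1][:k]          # leading eigenvalues

fit = VAR(eigs).fit(maxlags=1)
print(fit.forecast(eigs[-1:], steps=1))                # one-day-ahead eigenvalue forecast
```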
By: | Siddhartha Chib; Minchul Shin; Anna Simoni |
Abstract: | A standard assumption in the Bayesian estimation of linear regression models is that the regressors are exogenous in the sense that they are uncorrelated with the model error term. In practice, however, this assumption can be invalid. In this paper, under the rubric of the exponentially tilted empirical likelihood, we develop a Bayes factor test for endogeneity that compares a base model that is correctly specified under exogeneity but misspecified under endogeneity against an extended model that is correctly specified in either case. We provide a comprehensive study of the log-marginal exponentially tilted empirical likelihood. We demonstrate that our testing procedure is consistent from a frequentist point of view: as the sample becomes large, it almost surely selects the base model if and only if the regressors are exogenous, and the extended model if and only if the regressors are endogenous. The methods are illustrated with simulated data and with problems concerning the causal effect of automobile prices on automobile demand and the causal effect of potentially endogenous airplane ticket prices on passenger volume. |
Keywords: | Bayesian inference; Causal inference; Exponentially tilted empirical likelihood; Endogeneity; Exogeneity; Instrumental variables; Marginal likelihood; Posterior consistency |
Date: | 2024–11–25 |
URL: | https://d.repec.org/n?u=RePEc:fip:fedpwp:99168 |
By: | Hao Hao (Global Data Insight & Analytics, Ford Motor Company, Michigan); Tae-Hwy Lee (Department of Economics, University of California Riverside) |
Abstract: | When the endogenous variable is an unknown function of observable instruments, its conditional mean can be approximated using sieve functions of the observable instruments. We propose a novel instrument selection method, Double-criteria Boosting (DB), that consistently selects only valid and relevant instruments from a large set of candidate instruments. In a Monte Carlo simulation, we compare GMM using DB (DB-GMM) with other estimation methods and demonstrate that DB-GMM gives lower bias and RMSE. In the empirical application to automobile demand, the DB-GMM estimator suggests a more elastic estimate of the price elasticity of demand than the standard 2SLS estimator. (An illustrative code sketch follows this entry.) |
Keywords: | Causal inference with high dimensional instruments; Irrelevant instruments; Invalid instruments; Instrument Selection; Machine Learning; Boosting. |
JEL: | C1 C5 |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:ucr:wpaper:202411 |
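A hedged sketch of the relevance half of the idea: component-wise L2 boosting of the first stage selects instruments that predict the endogenous regressor, after which 2SLS uses only the selected set. The validity criterion of the paper's double-criteria boosting (screening out invalid instruments) is not reproduced here; the data and tuning constants are illustrative.
```python
# Component-wise L2 boosting to select instruments, followed by 2SLS on the selected set.
import numpy as np

rng = np.random.default_rng(11)
n, p = 500, 30
Z = rng.standard_normal((n, p))                         # candidate instruments
x = Z[:, :3] @ np.array([1.0, 0.8, 0.5]) + rng.standard_normal(n)   # only 3 relevant
y = 1.5 * x + rng.standard_normal(n)

nu, steps, fit, selected = 0.1, 200, np.zeros(n), set()
Zc = Z - Z.mean(0)
for _ in range(steps):
    r = x - x.mean() - fit                              # current first-stage residual
    scores = (Zc.T @ r)**2 / (Zc**2).sum(0)             # SSR improvement per instrument
    j = int(np.argmax(scores))
    selected.add(j)
    fit += nu * (Zc[:, j] @ r) / (Zc[:, j] @ Zc[:, j]) * Zc[:, j]

Zs = Z[:, sorted(selected)]                             # 2SLS with selected instruments
x_hat = Zs @ np.linalg.lstsq(Zs, x, rcond=None)[0]
beta_2sls = (x_hat @ y) / (x_hat @ x)
print(sorted(selected), round(beta_2sls, 3))
```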
By: | Jianqing Fan; Donggyu Kim (Department of Economics, University of California Riverside); Minseok Shin |
Abstract: | Several novel statistical methods have been developed to estimate large integrated volatility matrices based on high-frequency financial data. To investigate their asymptotic behaviors, they require a sub-Gaussian or finite high-order moment assumption for observed log-returns, which cannot account for the heavy-tail phenomenon of stock returns. Recently, a robust estimator was developed to handle heavy-tailed distributions under a bounded fourth-moment assumption. However, we often observe that log-returns have heavier-tailed distributions than a finite fourth moment allows and that the degree of tail heaviness is heterogeneous across assets and over time. In this paper, to deal with the heterogeneous heavy-tailed distributions, we develop an adaptive robust integrated volatility estimator that employs pre-averaging and truncation schemes based on jump-diffusion processes. We call this the adaptive robust pre-averaging realized volatility (ARP) estimator. We show that the ARP estimator has a sub-Weibull tail concentration with only finite 2α-th moments for any α > 1. In addition, we establish matching upper and lower bounds to show that the ARP estimation procedure is optimal. To estimate large integrated volatility matrices using the approximate factor model, the ARP estimator is further regularized using the principal orthogonal complement thresholding (POET) method. A numerical study is conducted to check the finite sample performance of the ARP estimator. (An illustrative code sketch follows this entry.) |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:ucr:wpaper:202419 |
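A hedged sketch of the two devices the abstract combines, for a single simulated asset: pre-averaging of noisy high-frequency returns with weight g(x) = min(x, 1-x), and truncation of unusually large pre-averaged returns. The constants are the textbook ones for this weight; the noise bias correction and the adaptive tuning of the paper are omitted for brevity.
```python
# Pre-averaged, truncated realized volatility on simulated noisy 1-second returns.
import numpy as np

rng = np.random.default_rng(9)
n, sigma, noise_sd = 23400, 0.2, 5e-4                  # one trading day of 1-second ticks
dt = 1.0 / n
returns = sigma * np.sqrt(dt) * rng.standard_normal(n)
prices = np.cumsum(returns) + noise_sd * rng.standard_normal(n)   # microstructure noise
dY = np.diff(prices)

kn = int(np.sqrt(n))                                   # pre-averaging window ~ sqrt(n)
g = np.minimum(np.arange(1, kn) / kn, 1 - np.arange(1, kn) / kn)
pre = np.array([g @ dY[i:i + kn - 1] for i in range(len(dY) - kn + 2)])

c = 3 * np.std(pre)                                    # simple truncation threshold
pre_trunc = pre[np.abs(pre) <= c]
rv = (12 / kn) * np.sum(pre_trunc**2)                  # psi_2 = 1/12 for g(x) = min(x, 1-x)
print(rv, "vs integrated variance", sigma**2)
```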
By: | Sung Hoon Choi; Donggyu Kim (Department of Economics, University of California Riverside) |
Abstract: | In this paper, we develop a novel large volatility matrix estimation procedure for analyzing global financial markets. Practitioners often use lower-frequency data, such as weekly or monthly returns, to address the issue of different trading hours across international financial markets. However, this approach can lead to inefficiency due to information loss. To mitigate this problem, our proposed method, called Structured Principal Orthogonal complEment Thresholding (S-POET), incorporates structural information on the observations for both global and national factor models. We establish the asymptotic properties of the S-POET estimator and demonstrate the drawbacks of conventional covariance matrix estimation procedures when using lower-frequency data. Finally, we apply the S-POET estimator to an out-of-sample portfolio allocation study using international stock market data. (An illustrative code sketch follows this entry.) |
Date: | 2024–12 |
URL: | https://d.repec.org/n?u=RePEc:ucr:wpaper:202424 |
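A minimal sketch of the POET idea that S-POET extends: take the leading K principal components of the sample covariance, then soft-threshold the off-diagonal entries of the remaining (idiosyncratic) covariance. The global/national structure and the lower-frequency corrections of S-POET are not reproduced, and the data below are simulated.
```python
# Plain POET: rank-K principal component part plus soft-thresholded residual covariance.
import numpy as np

def poet(returns, K, tau):
    S = np.cov(returns, rowvar=False)
    vals, vecs = np.linalg.eigh(S)
    vals, vecs = vals[::-1], vecs[:, ::-1]             # sort eigenpairs descending
    low_rank = (vecs[:, :K] * vals[:K]) @ vecs[:, :K].T
    resid = S - low_rank
    off = np.sign(resid) * np.maximum(np.abs(resid) - tau, 0.0)   # soft-thresholding
    np.fill_diagonal(off, np.diag(resid))              # keep the diagonal untouched
    return low_rank + off

rng = np.random.default_rng(10)
T, N = 252, 30
f = rng.standard_normal((T, 2))                        # two latent factors
B = rng.normal(0, 1, (N, 2))
X = f @ B.T + rng.standard_normal((T, N))
Sigma_hat = poet(X, K=2, tau=0.1)
```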