NEP: New Economics Papers on Econometrics
By: | Marco Barassi (University of Birmingham); Yiannis Karavias (University of Birmingham); Chongxian Zhu (University of Birmingham) |
Abstract: | This paper introduces unit-specific heterogeneity in panel data threshold regression. Both slope coefficients and threshold parameters are allowed to vary by unit. The heterogeneous threshold parameters manifest via a unit-specific empirical quantile transformation of a common underlying threshold parameter, which is estimated efficiently from the whole panel. In the errors, the unobserved heterogeneity of the panel takes the general form of interactive fixed effects. The newly introduced parameter heterogeneity has implications for model identification, estimation, interpretation, and asymptotic inference. The assumption of a shrinking threshold magnitude now implies shrinking heterogeneity and leads to faster rates of convergence for the estimators than previously encountered. The asymptotic theory for the proposed estimators is derived and Monte Carlo simulations demonstrate its usefulness in small samples. The new model is employed to examine the Feldstein-Horioka puzzle, and it is found that the trade liberalization policies of the 1980s significantly impacted cross-country capital mobility. |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2308.04057&r=ecm |
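A minimal sketch of the quantile-transformation device described in the abstract, under an assumed DGP (all names and values hypothetical, plain pooled OLS within regimes rather than the paper's efficient estimator): each unit's threshold is its own empirical quantile of a common quantile level, which is then recovered by grid search over the pooled sum of squared residuals.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 100
tau_true = 0.6                      # common quantile level (assumed DGP)
beta_low, beta_high = 1.0, 2.0

# Threshold variable with unit-specific location, so thresholds differ by unit
q = rng.normal(loc=rng.normal(0, 1, (N, 1)), scale=1.0, size=(N, T))
x = rng.normal(size=(N, T))
gamma_i = np.quantile(q, tau_true, axis=1, keepdims=True)   # unit-specific thresholds
y = np.where(q <= gamma_i, beta_low, beta_high) * x + 0.5 * rng.normal(size=(N, T))

def ssr(tau):
    g = np.quantile(q, tau, axis=1, keepdims=True)          # quantile transform
    lo, hi = (q <= g), (q > g)
    # pooled OLS slope within each regime
    b_lo = (x[lo] @ y[lo]) / (x[lo] @ x[lo])
    b_hi = (x[hi] @ y[hi]) / (x[hi] @ x[hi])
    resid = y - np.where(lo, b_lo, b_hi) * x
    return np.sum(resid**2)

grid = np.linspace(0.1, 0.9, 81)
tau_hat = grid[np.argmin([ssr(t) for t in grid])]
print(f"estimated common quantile level: {tau_hat:.3f} (true {tau_true})")
```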
By: | Raffaella Giacomini; Sokbae Lee; Silvia Sarpietro |
Abstract: | We propose a method for forecasting individual outcomes and estimating random effects in linear panel data models and value-added models when the panel has a short time dimension. The method is robust, trivial to implement and requires minimal assumptions. The idea is to take a weighted average of time-series and pooled forecasts/estimators, with individual weights that are based on time series information. We show the forecast optimality of individual weights, both in terms of minimax-regret and of mean squared forecast error. We then provide feasible weights that ensure good performance under weaker assumptions than those required by existing approaches. Unlike existing shrinkage methods, our approach borrows the strength - but avoids the tyranny - of the majority, by targeting individual (instead of group) accuracy and letting the data decide how much strength each individual should borrow. Unlike existing empirical Bayesian methods, our frequentist approach requires no distributional assumptions, and, in fact, it is particularly advantageous in the presence of features such as heavy tails that would make a fully nonparametric procedure problematic. |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2308.01596&r=ecm |
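A toy sketch of the combination idea, under an assumed location-model DGP with heavy-tailed unit effects: the unit-specific weight trades the pooled estimator's squared bias for unit i against the time-series variance, both estimated from the unit's own short time series. The weight formula below is one simple feasible choice for illustration, not necessarily the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 5                              # many units, short time dimension
theta = rng.standard_t(df=3, size=N)       # heavy-tailed unit effects (assumed DGP)
y = theta[:, None] + rng.normal(size=(N, T))

ybar_i = y.mean(axis=1)                    # unit-level (time-series) estimator
ybar_pool = y.mean()                       # pooled estimator
s2_i = y.var(axis=1, ddof=1)               # unit-level noise variance estimate

# Feasible unit-specific weight on the time-series estimator: trade off the
# pooled estimator's squared bias for unit i against the TS variance.
d2 = (ybar_i - ybar_pool) ** 2
w = d2 / (d2 + s2_i / T)

estimate = w * ybar_i + (1 - w) * ybar_pool
print(f"combined MSE {np.mean((estimate - theta)**2):.3f} "
      f"vs TS {np.mean((ybar_i - theta)**2):.3f} "
      f"vs pooled {np.mean((ybar_pool - theta)**2):.3f}")
```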
By: | Skrobotov, Anton (Скроботов, Антон) (The Russian Presidential Academy of National Economy and Public Administration) |
Abstract: | This review discusses methods of testing for cointegration in time series in the presence of structural breaks. It covers a large number of recently developed testing methods based on both single-equation and multiple-equation frameworks. In addition, various methods for estimating break dates and constructing their confidence intervals are presented. Finally, nonlinear cointegration methods with regime switching are considered. |
Keywords: | testing for cointegration, testing for cointegration rank, structural breaks, error correction model |
Date: | 2021–11–12 |
URL: | http://d.repec.org/n?u=RePEc:rnp:wpaper:w20220130&r=ecm |
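One workhorse from this literature is the Gregory-Hansen (1996) residual-based test with a level break: the ADF statistic on cointegrating-regression residuals is minimized over candidate break dates. A stylized sketch on simulated data, with the usual 15% trimming, assuming a recent statsmodels version:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(2)
T = 200
x = np.cumsum(rng.normal(size=T))                    # I(1) regressor
break_at = 120
y = 1.0 + 0.5 * (np.arange(T) >= break_at) + 2.0 * x + rng.normal(size=T)

# ADF on residuals of a cointegrating regression with a level-shift dummy,
# minimized over candidate break dates (15% trimming on each side).
stats = []
candidates = range(int(0.15 * T), int(0.85 * T))
for tb in candidates:
    dummy = (np.arange(T) >= tb).astype(float)
    X = sm.add_constant(np.column_stack([dummy, x]))
    resid = sm.OLS(y, X).fit().resid
    stats.append(adfuller(resid, regression="n", autolag="AIC")[0])

tb_hat = list(candidates)[int(np.argmin(stats))]
print(f"min ADF stat {min(stats):.2f} at break date {tb_hat} (true {break_at})")
# NB: critical values differ from standard ADF tables (Gregory & Hansen, 1996).
```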
By: | Ioannis Papageorgiou; Ioannis Kontoyiannis |
Abstract: | A hierarchical Bayesian framework is introduced for developing rich mixture models for real-valued time series, along with a collection of effective tools for learning and inference. At the top level, meaningful discrete states are identified as appropriately quantised values of some of the most recent samples. This collection of observable states is described as a discrete context-tree model. Then, at the bottom level, a different, arbitrary model for real-valued time series - a base model - is associated with each state. This defines a very general framework that can be used in conjunction with any existing model class to build flexible and interpretable mixture models. We call this the Bayesian Context Trees State Space Model, or the BCT-X framework. Efficient algorithms are introduced that allow for effective, exact Bayesian inference; in particular, the maximum a posteriori probability (MAP) context-tree model can be identified. These algorithms can be updated sequentially, facilitating efficient online forecasting. The utility of the general framework is illustrated in two particular instances: when autoregressive (AR) models are used as base models, resulting in a nonlinear AR mixture model, and when conditional heteroscedastic (ARCH) models are used, resulting in a mixture model that offers a powerful and systematic way of modelling the well-known volatility asymmetries in financial data. In forecasting, the BCT-X methods are found to outperform state-of-the-art techniques on simulated and real-world data, both in terms of accuracy and computational requirements. In modelling, the BCT-X framework uncovers natural structure present in the data. In particular, the BCT-ARCH model reveals a novel, important feature of stock market index data, in the form of an enhanced leverage effect. |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2308.00913&r=ecm |
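A toy version of the two-level construction, assuming a 1-bit quantiser of the single most recent sample and AR(1) base models. The full BCT-X framework replaces this fixed context with a Bayesian context-tree model over contexts of varying depth and exact MAP inference; the sketch only conveys the "discrete state selects the base model" idea.

```python
import numpy as np

rng = np.random.default_rng(3)
T = 2000
# Simulate a two-regime nonlinear AR(1): dynamics depend on the sign of X_{t-1}
x = np.zeros(T)
for t in range(1, T):
    phi = 0.7 if x[t - 1] >= 0 else -0.3
    x[t] = phi * x[t - 1] + rng.normal()

# Discrete state = quantised value of the most recent sample (1-bit quantiser);
# a separate AR(1) base model is fit within each state.
state = (x[:-1] >= 0).astype(int)          # context of length 1
phi_hat = np.empty(2)
for s in (0, 1):
    xs, ys = x[:-1][state == s], x[1:][state == s]
    phi_hat[s] = xs @ ys / (xs @ xs)

print("fitted AR(1) slope per context state:", np.round(phi_hat, 3))
# One-step forecast conditioned on the current context:
print("one-step forecast:", phi_hat[int(x[-1] >= 0)] * x[-1])
```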
By: | Brantly Callaway; Emmanuel Selorm Tsyawo |
Abstract: | This paper considers identifying and estimating causal effect parameters in a staggered treatment adoption setting -- that is, where a researcher has access to panel data and treatment timing varies across units. We consider the case where untreated potential outcomes may follow non-parallel trends over time across groups. This implies that the identifying assumptions of leading approaches such as difference-in-differences do not hold. We mainly focus on the case where untreated potential outcomes are generated by an interactive fixed effects model and show that variation in treatment timing provides additional moment conditions that can be used to recover a large class of target causal effect parameters. Our approach exploits the variation in treatment timing without requiring either (i) a large number of time periods or (ii) any extra exclusion restrictions. This is in contrast to essentially all of the literature on interactive fixed effects models, which requires at least one of these extra conditions. Rather, our approach applies directly in settings where there is variation in treatment timing. Although our main focus is on a model with interactive fixed effects, our idea of using variation in treatment timing to recover causal effect parameters is quite general and could be adapted to other settings with non-parallel trends across groups, such as dynamic panel data models. |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2308.02899&r=ecm |
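A stylized one-factor illustration of how timing variation can substitute for parallel trends: with untreated outcomes Y_it(0) = lambda_i * f_t + e_it, the factor ratio f_t/f_s is identified from units not yet treated in both periods (using an earlier period as an instrument), and then imputes untreated outcomes for the treated group. The single-factor DGP and all names are assumptions for illustration; the paper's GMM treatment is considerably more general.

```python
import numpy as np

rng = np.random.default_rng(4)
N, T = 4000, 4
f = np.array([1.0, 1.5, 0.8, 2.0])          # common factor path (assumed, one factor)
lam = rng.normal(1, 0.5, N)                 # unit-specific loadings
y0 = lam[:, None] * f[None, :] + 0.3 * rng.normal(size=(N, T))

att_true = 1.0
first_treat = rng.choice([2, 3, T], size=N)  # first treated period; T = never treated
y = y0.copy()
for t0 in (2, 3):
    y[first_treat == t0, t0:] += att_true    # staggered adoption

# Factor ratio f_2/f_1 from units not yet treated in periods 1 and 2,
# using period 0 as an instrument to avoid attenuation from the noise in y_1.
donor = first_treat != 2
rho = (y[donor, 2] @ y[donor, 0]) / (y[donor, 1] @ y[donor, 0])

# Impute untreated outcomes for the group first treated in period 2.
treated = first_treat == 2
att_hat = np.mean(y[treated, 2] - rho * y[treated, 1])
print(f"ATT(g=2, t=2) estimate: {att_hat:.3f} (true {att_true})")
```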
By: | KITAGAWA, Toru; SAWADA, Masayuki |
Abstract: | We study the problem of identifying the coefficients of a linear long regression by data combination. Unlike the usual data combination problem, we consider combining multiple short regressions of the same outcome on different regressors. For this conceptually novel problem, we provide partial identification results for the long regression coefficients under a restriction on the unknown correlation structure. Specifically, we employ an elliptic constraint, derived from the relations among the explained variations of the regressions, to induce the bounds. |
Keywords: | Data combination, Linear regression, Elliptic constraint |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:hit:hituec:747&r=ecm |
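The flavour of the partial-identification exercise can be conveyed with a grid sketch. Two short regressions pin down each regressor's variance and covariance with the outcome, but not the cross-regressor covariance; the sketch below imposes only an R^2 <= 1 feasibility restriction (the paper's elliptic constraint is sharper), with hypothetical moment values.

```python
import numpy as np

# Moments recoverable from two short regressions of y on x1 and on x2
# (hypothetical values for illustration):
v1, v2, vy = 1.0, 1.0, 2.0          # Var(x1), Var(x2), Var(y)
c1y, c2y = 0.8, 0.6                 # Cov(x1,y), Cov(x2,y)

lo = np.full(2, np.inf)
hi = np.full(2, -np.inf)
for rho in np.linspace(-0.999, 0.999, 2001):
    c12 = rho * np.sqrt(v1 * v2)             # the unidentified cross moment
    Sxx = np.array([[v1, c12], [c12, v2]])
    b = np.linalg.solve(Sxx, np.array([c1y, c2y]))   # long-regression coefficients
    explained = b @ np.array([c1y, c2y])             # explained variation
    if explained <= vy:                      # R^2 <= 1: feasible correlation structure
        lo = np.minimum(lo, b)
        hi = np.maximum(hi, b)

print("bounds for (b1, b2):")
print(np.round(np.column_stack([lo, hi]), 3))
```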
By: | Christis Katsouris |
Abstract: | These lecture notes provide supplementary material for a short course on time series econometrics and network econometrics. We emphasize limit theory for time series regression models as well as the use of the local-to-unity parametrization when modeling time series nonstationarity. Moreover, we present various non-asymptotic results for moderate deviation principles when considering the eigenvalues of covariance matrices, as well as asymptotics for unit root moderate deviations in nonstationary autoregressive processes. Although not all applications from the literature are covered, we also discuss some open problems in nonstationary time series econometrics and network econometrics. |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2308.01418&r=ecm |
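A quick Monte Carlo of the local-to-unity parametrization rho = 1 + c/T discussed in the notes, showing that the OLS t-statistic's distribution is shifted away from N(0, 1) and depends on the localizing constant c:

```python
import numpy as np

rng = np.random.default_rng(5)
T, reps, c = 200, 5000, -5.0
rho = 1 + c / T                               # local-to-unity parametrization

t_stats = np.empty(reps)
for r in range(reps):
    e = rng.normal(size=T)
    y = np.empty(T)
    y[0] = e[0]
    for t in range(1, T):
        y[t] = rho * y[t - 1] + e[t]
    x, z = y[:-1], y[1:]
    rho_hat = x @ z / (x @ x)                 # OLS autoregressive estimate
    s2 = np.mean((z - rho_hat * x) ** 2)
    t_stats[r] = (rho_hat - rho) / np.sqrt(s2 / (x @ x))

# Quantiles are shifted left relative to N(0,1): (-1.64, 0.00, 1.64)
print("5%/50%/95% quantiles:", np.round(np.quantile(t_stats, [.05, .5, .95]), 2))
```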
By: | Miguel Herculano; Punnoose Jacob |
Abstract: | We construct a Financial Conditions Index (FCI) for the United States using a dataset that features many missing observations. The novel combination of probabilistic principal component techniques and a Bayesian factor-augmented VAR model resolves the challenges posed by data points being unavailable within a high-frequency dataset. Even with up to 62% of the data missing, the new approach yields a less noisy FCI that tracks the movement of 22 underlying financial variables more accurately both in-sample and out-of-sample. |
Keywords: | Financial Conditions Index, Mixed-Frequency, Bayesian Methods |
JEL: | C11 C32 C52 C53 |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:een:camaaa:2023-42&r=ecm |
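An EM-flavoured sketch of the first ingredient of the approach above: extracting principal components from a panel with heavily missing data by filling missing entries, taking a low-rank SVD reconstruction, and iterating. The probabilistic PCA details and the Bayesian FAVAR step are not reproduced here; the DGP and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)
T, n, k = 300, 22, 2
F = rng.normal(size=(T, k))
L = rng.normal(size=(n, k))
X = F @ L.T + 0.3 * rng.normal(size=(T, n))   # true panel of financial variables
mask = rng.random((T, n)) < 0.6               # 60% of entries missing
Xobs = np.where(mask, np.nan, X)

# Iterative low-rank imputation: initialise missing entries at column means,
# reconstruct with a rank-k SVD, re-impose observed values, repeat.
Z = np.where(mask, np.nanmean(Xobs, axis=0), Xobs)
for _ in range(200):
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    Zhat = (U[:, :k] * s[:k]) @ Vt[:k]
    Znew = np.where(mask, Zhat, Xobs)
    if np.max(np.abs(Znew - Z)) < 1e-8:
        break
    Z = Znew

print("imputation RMSE on missing entries:",
      round(float(np.sqrt(np.mean((Z - X)[mask] ** 2))), 3))
```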
By: | Thomas R. Cook; Nathan M. Palmer |
Abstract: | Despite growing interest in the use of complex models, such as machine learning (ML) models, for credit underwriting, ML models are difficult to interpret, and it is possible for them to learn relationships that yield de facto discrimination. How can we understand the behavior and potential biases of these models, especially if our access to the underlying model is limited? We argue that counterfactual reasoning is ideal for interpreting model behavior, and that Gaussian processes (GP) can provide approximate counterfactual reasoning while also incorporating uncertainty in the underlying model's functional form. We illustrate with an exercise in which a simulated lender uses a biased machine learning model to decide credit terms. Comparing aggregate outcomes does not clearly reveal bias, but with a GP model we can estimate individual counterfactual outcomes. This approach can detect the bias in the lending model even when only a relatively small sample is available. To demonstrate the value of this approach for the more general task of model interpretability, we also show how the GP model's estimates can be aggregated to recreate the partial density functions for the lending model. |
Keywords: | models; Gaussian process; model bias |
JEL: | C10 C14 C18 C45 |
Date: | 2023–06–15 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedkrr:96511&r=ecm |
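A compact sketch of the counterfactual exercise using scikit-learn's GP regressor: fit a surrogate to the (simulated, deliberately biased) lender's decisions, flip the protected attribute for each individual, and read off individual counterfactual effects together with posterior uncertainty. The toy DGP and variable names are illustrative, not the paper's.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(7)
n = 400
income = rng.normal(50, 10, n)
group = rng.integers(0, 2, n)                 # protected attribute (hypothetical)
# Simulated biased lender: offered rate depends on income AND group membership
rate = 10 - 0.08 * income + 0.75 * group + 0.3 * rng.normal(size=n)

X = np.column_stack([income, group])
gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=[10.0, 1.0]) + WhiteKernel(),
    normalize_y=True).fit(X, rate)

# Individual counterfactual: flip the protected attribute, holding income fixed,
# carrying the GP's posterior uncertainty about the lender's decision rule.
Xcf = X.copy()
Xcf[:, 1] = 1 - Xcf[:, 1]
mu, sd = gp.predict(X, return_std=True)
mu_cf, sd_cf = gp.predict(Xcf, return_std=True)
effect = np.where(X[:, 1] == 1, mu - mu_cf, mu_cf - mu)
print(f"mean individual effect of group=1: {effect.mean():.3f} "
      f"(+/- {np.mean(np.sqrt(sd**2 + sd_cf**2)):.3f})")
```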
By: | Chao Zhang; Xingyue Pu; Mihai Cucuringu; Xiaowen Dong |
Abstract: | We present a novel methodology for modeling and forecasting multivariate realized volatilities using customized graph neural networks to incorporate spillover effects across stocks. The proposed model offers the benefits of incorporating spillover effects from multi-hop neighbors, capturing nonlinear relationships, and flexible training with different loss functions. Our empirical findings provide compelling evidence that incorporating spillover effects from multi-hop neighbors alone does not yield a clear advantage in terms of predictive accuracy. However, modeling nonlinear spillover effects enhances the forecasting accuracy of realized volatilities, particularly for short-term horizons of up to one week. Moreover, our results consistently indicate that training with the Quasi-likelihood loss leads to substantial improvements in model performance compared to the commonly-used mean squared error. A comprehensive series of empirical evaluations in alternative settings confirm the robustness of our results. |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2308.01419&r=ecm |
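The Quasi-likelihood (QLIKE) loss mentioned above is easy to state in code: unlike MSE, it depends on the forecast only through the ratio of realized to predicted variance and penalises under-prediction of volatility more heavily.

```python
import numpy as np

def qlike(rv_true, rv_pred, eps=1e-12):
    """QLIKE loss for realized-variance forecasts: mean of r - log(r) - 1,
    with r = rv_true / rv_pred. Non-negative, zero iff rv_pred == rv_true."""
    r = np.maximum(rv_true, eps) / np.maximum(rv_pred, eps)
    return np.mean(r - np.log(r) - 1.0)

# Illustration: a 50% under-prediction costs more than a 50% over-prediction
y = np.full(100, 2.0)
print(qlike(y, 0.5 * y), qlike(y, 1.5 * y))   # ~0.307 vs ~0.072
```

In a neural forecasting model, this function would simply replace MSE as the training objective.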
By: | Bryan T. Kelly (Yale School of Management; AQR Capital Management; NBER); Boris Kuznetsov (Ecole Polytechnique Fédérale de Lausanne; Swiss Finance Institute); Semyon Malamud (Ecole Polytechnique Fédérale de Lausanne; Swiss Finance Institute; and CEPR); Teng Andrea Xu (Ecole Polytechnique Fédérale de Lausanne) |
Abstract: | We develop a novel methodology for extracting information from option implied volatility (IV) surfaces for the cross-section of stock returns, using image recognition techniques from machine learning (ML). The predictive information we identify is essentially uncorrelated with most of the existing option-implied characteristics, delivers a higher Sharpe ratio, and has a significant alpha relative to a battery of standard and option-implied factors. We show the virtue of ensemble complexity: Best results are achieved with a large ensemble of ML models, with the out-of-sample performance increasing in the ensemble size, saturating when the number of model parameters significantly exceeds the number of observations. We introduce principal linear features, an analog of principal components for ML, and use them to show IV feature complexity: A low-rank rotation of the IV surface cannot explain the model performance. Our results are robust to short-sale constraints and transaction costs. |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:chf:rpseri:rp2360&r=ecm |
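A minimal sketch of the image-recognition idea: treat each stock's IV surface (a moneyness-by-maturity grid) as a one-channel image, map it to a return forecast with a small CNN, and average an ensemble of randomly initialised models. The architecture and sizes below are assumptions for illustration, not the paper's.

```python
import torch
import torch.nn as nn

class IVSurfaceNet(nn.Module):
    """Tiny CNN mapping an IV surface image to a scalar return forecast."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

    def forward(self, x):          # x: (batch, 1, n_moneyness, n_maturity)
        return self.net(x).squeeze(-1)

surfaces = torch.randn(32, 1, 10, 8)            # fake IV surfaces for 32 stocks
ensemble = [IVSurfaceNet() for _ in range(16)]  # large ensembles help, per the paper
with torch.no_grad():
    pred = torch.stack([m(surfaces) for m in ensemble]).mean(0)
print(pred.shape)                               # torch.Size([32])
```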
By: | Mario Forni; Luca Gambetti; Nicolò Maffei-Faccioli; Luca Sala |
Abstract: | Financial shocks represent a major driver of fluctuations in tail risk, defined as the 5th percentile of the forecast distributions of output and inflation. Since the variance and the asymmetry of the forecast distributions are largely driven by the left tail, financial shocks turn out to play a prominent role for distribution dynamics. Monetary policy shocks also play a role in shaping risk, although their effects are smaller than those of financial shocks. These findings are obtained using a novel econometric approach which combines quantile regressions and structural VARs. |
Keywords: | Tail Risk, Uncertainty, Skewness, Forecast Distribution, SVAR, Financial shocks, Monetary Policy Shocks, Quantile Regressions |
JEL: | C32 E32 |
Date: | 2023–03 |
URL: | http://d.repec.org/n?u=RePEc:bno:worpap:2023_3&r=ecm |
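The quantile-regression ingredient can be sketched in a growth-at-risk style example on simulated data: regressing future output growth on a financial-conditions indicator at the 5th percentile versus the median shows the left tail's much larger sensitivity. The SVAR identification step is not reproduced here.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(8)
T = 400
fin = rng.normal(size=T)                         # financial-conditions indicator
# Tighter financial conditions widen the left tail of future output growth:
growth = 2.0 + 0.2 * fin + (1.0 + 0.8 * np.maximum(-fin, 0)) * rng.normal(size=T)

X = sm.add_constant(fin)
q05 = sm.QuantReg(growth, X).fit(q=0.05)
q50 = sm.QuantReg(growth, X).fit(q=0.50)
# Tail risk = fitted 5th percentile of the forecast distribution; its slope on
# financial conditions is far larger than at the median, the hallmark pattern.
print("5th pct slope:", round(q05.params[1], 2),
      "median slope:", round(q50.params[1], 2))
```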
By: | Valter T. Yoshida Jr; Alan de Genaro; Rafael Schiozer; Toni R. E. dos Santos |
Abstract: | Large databases and machine learning have increased our ability to produce models with varying numbers of observations and explanatory variables. The credit scoring literature has focused on optimizing classification, while little attention has been paid to the inadequate use of models. This study fills that gap by focusing on model risk. It proposes a measure to assess credit scoring model risk, with an emphasis on model misuse. The proposed measure is ordinal and applies to many settings and types of loan portfolios, allowing comparisons of different specifications and situations (such as in-sample versus out-of-sample data). It lets practitioners and regulators evaluate and compare different credit risk models in terms of model risk. We empirically test our measure in plug-in LASSO default models and find that adding loans from different banks to increase the number of observations is not optimal, challenging the generally accepted assumption that more data leads to better predictions. |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:bcb:wpaper:582&r=ecm |
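The headline finding, that pooling loans across banks to add observations can hurt, is easy to reproduce in a stylized plug-in LASSO setup (hypothetical portfolios, with scikit-learn's l1-penalised logit standing in for the paper's estimator):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(9)

def make_bank(n, shift):
    """Hypothetical bank portfolio whose default process differs by `shift`."""
    X = rng.normal(size=(n, 10))
    logit = -2 + X[:, 0] + (1 + shift) * X[:, 1] - shift * X[:, 2]
    y = rng.random(n) < 1 / (1 + np.exp(-logit))
    return X, y.astype(int)

Xa, ya = make_bank(2000, 0.0)        # bank A (the model's intended portfolio)
Xb, yb = make_bank(2000, 1.5)        # bank B with different default drivers
Xtest, ytest = make_bank(2000, 0.0)  # new loans at bank A

lasso = dict(penalty="l1", solver="liblinear", C=0.5)
own = LogisticRegression(**lasso).fit(Xa, ya)
pooled = LogisticRegression(**lasso).fit(np.vstack([Xa, Xb]), np.r_[ya, yb])

# More observations, but from a different portfolio: pooling can hurt at bank A.
print("own-bank AUC:", round(roc_auc_score(ytest, own.predict_proba(Xtest)[:, 1]), 3))
print("pooled AUC:  ", round(roc_auc_score(ytest, pooled.predict_proba(Xtest)[:, 1]), 3))
```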