nep-ecm New Economics Papers
on Econometrics
Issue of 2024–12–09
25 papers chosen by
Sune Karlsson, Örebro universitet


  1. Estimation and Inference in Dyadic Network Formation Models with Nontransferable Utilities By Ming Li; Zhentao Shi; Yapeng Zheng
  2. Joint Estimation of Conditional Mean and Covariance for Unbalanced Panels By Damir Filipovic; Paul Schneider
  3. Inference in Partially Linear Models under Dependent Data with Deep Neural Networks By Chad Brown
  4. Inference for Treatment Effects Conditional on Generalized Principal Strata using Instrumental Variables By Yuehao Bai; Shunzhuang Huang; Sarah Moon; Andres Santos; Azeem M. Shaikh; Edward J. Vytlacil
  5. Higher-Order Causal Message Passing for Experimentation with Complex Interference By Mohsen Bayati; Yuwei Luo; William Overman; Sadegh Shirani; Ruoxuan Xiong
  6. Detecting Cointegrating Relations in Non-stationary Matrix-Valued Time Series By Alain Hecq; Ivan Ricardo; Ines Wilms
  7. Addressing Design Bias Due to Instrumental Variables in Survey Experiments: Considering Violations of the Exclusion Restriction By FAN, Yizhou; Nakao, Ran; HAYASHIKAWA, Yuki
  8. A Constrained Dynamic Nelson-Siegel Model for Monetary Policy Analysis By Jamie L. Cross; Aubrey Poon; Wenying Yao; Dan Zhu
  9. Data-Driven Error Estimation: Upper Bounding Multiple Errors with No Technical Debt By Krishnamurthy, Sanath Kumar; Athey, Susan; Brunskill, Emma
  10. A Bayesian Perspective on the Maximum Score Problem By Christopher D. Walker
  11. Randomly Assigned First Differences? By Clément de Chaisemartin
  12. Efficient Nested Estimation of CoVaR: A Decoupled Approach By Nifei Lin; Yingda Song; L. Jeff Hong
  13. Jacobian-free Efficient Pseudo-Likelihood (EPL) Algorithm By Takeshi Fukasawa
  14. Likelihood Ratio Test for Publication Bias – a Proof of Concept By Lenartowicz, Paweł
  15. Identifying Conduct Parameters with Separable Demand: A Counterexample to Lau (1982) By Yuri Matsumura; Suguru Otani
  16. Enforcing asymptotic behavior with DNNs for approximation and regression in finance By Hardik Routray; Bernhard Hientzsch
  17. Volatility Parametrizations with Random Coefficients: Analytic Flexibility for Implied Volatility Surfaces By Nicola F. Zaugg; Leonardo Perotti; Lech A. Grzelak
  18. Volatility models versus intensity models: analogy and differences By Aknouche, Abdelhakim; Dimitrakopoulos, Stefanos
  19. Nowcasting distributions: a functional MIDAS model By Massimiliano Marcellino; Andrea Renzetti; Tommaso Tornese
  20. Beyond the Traditional VIX: A Novel Approach to Identifying Uncertainty Shocks in Financial Markets By Ayush Jha; Abootaleb Shirvani; Svetlozar T. Rachev; Frank J. Fabozzi
  21. Unified Causality Analysis Based on the Degrees of Freedom By András Telcs; Marcell T. Kurbucz; Antal Jakovác
  22. On the limiting variance of matching estimators By Songliang Chen; Fang Han
  23. Testing and Quantifying Economic Resilience By HARA, Naoko; YAMAMOTO, Yohei
  24. Understanding the decision-making process of choice modellers By Gabriel Nova; Sander van Cranenburgh; Stephane Hess
  25. Firm Heterogeneity and Macroeconomic Fluctuations: a Functional VAR model By Massimiliano Marcellino; Andrea Renzetti; Tommaso Tornese

  1. By: Ming Li; Zhentao Shi; Yapeng Zheng
    Abstract: This paper studies estimation and inference in a dyadic network formation model with observed covariates, unobserved heterogeneity, and nontransferable utilities. In the presence of high-dimensional fixed effects, the maximum likelihood estimator is numerically difficult to compute and suffers from incidental parameter bias. We propose an easy-to-compute one-step estimator for the homophily parameter of interest, which is further refined to achieve $\sqrt{N}$-consistency via a split-network jackknife and efficiency via bootstrap aggregating (bagging). We establish consistency for the estimator of the fixed effects and prove asymptotic normality for the unconditional average partial effects. Simulation studies show that our method works well in finite samples, and an empirical application using risk-sharing data from Nyakatoke highlights the importance of employing proper statistical inferential procedures.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.23852
  2. By: Damir Filipovic; Paul Schneider
    Abstract: We propose a nonparametric, kernel-based joint estimator for conditional mean and covariance matrices in large unbalanced panels. Our estimator, with proven consistency and finite-sample guarantees, is applied to a comprehensive panel of monthly US stock excess returns from 1962 to 2021, conditioned on macroeconomic and firm-specific covariates. The estimator captures time-varying cross-sectional dependencies effectively, demonstrating robust statistical performance. In asset pricing, it generates conditional mean-variance efficient portfolios with out-of-sample Sharpe ratios that substantially exceed those of equal-weighted benchmarks.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.21858
  3. By: Chad Brown
    Abstract: I consider inference in a partially linear regression model under stationary $\beta$-mixing data after first-stage deep neural network (DNN) estimation. Using the DNN results of Brown (2024), I show that the estimator for the finite-dimensional parameter, constructed using DNN-estimated nuisance components, achieves $\sqrt{n}$-consistency and asymptotic normality. By avoiding sample splitting, I address one of the key challenges in applying machine learning techniques to econometric models with dependent data. In a future version of this work, I plan to extend these results to obtain general conditions for semiparametric inference after DNN estimation of nuisance components, which will allow for considerations such as more efficient estimation procedures and instrumental variable settings.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.22574
  4. By: Yuehao Bai; Shunzhuang Huang; Sarah Moon; Andres Santos; Azeem M. Shaikh; Edward J. Vytlacil
    Abstract: In a setting with a multi-valued outcome, treatment and instrument, this paper considers the problem of inference for a general class of treatment effect parameters. The class of parameters considered are those that can be expressed as the expectation of a function of the response type conditional on a generalized principal stratum. Here, the response type simply refers to the vector of potential outcomes and potential treatments, and a generalized principal stratum is a set of possible values for the response type. In addition to instrument exogeneity, the main substantive restriction imposed rules out certain values for the response types in the sense that they are assumed to occur with probability zero. It is shown through a series of examples that this framework includes a wide variety of parameters and assumptions that have been considered in the previous literature. A key result in our analysis is a characterization of the identified set for such parameters under these assumptions in terms of existence of a non-negative solution to linear systems of equations with a special structure. We propose methods for inference exploiting this special structure and recent results in Fang et al. (2023).
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.05220
  5. By: Mohsen Bayati; Yuwei Luo; William Overman; Sadegh Shirani; Ruoxuan Xiong
    Abstract: Accurate estimation of treatment effects is essential for decision-making across various scientific fields. This task, however, becomes challenging in areas like social sciences and online marketplaces, where treating one experimental unit can influence outcomes for others through direct or indirect interactions. Such interference can lead to biased treatment effect estimates, particularly when the structure of these interactions is unknown. We address this challenge by introducing a new class of estimators based on causal message-passing, specifically designed for settings with pervasive, unknown interference. Our estimator draws on information from the sample mean and variance of unit outcomes and treatments over time, enabling efficient use of observed data to estimate the evolution of the system state. Concretely, we construct non-linear features from the moments of unit outcomes and treatments and then learn a function that maps these features to future mean and variance of unit outcomes. This allows for the estimation of the treatment effect over time. Extensive simulations across multiple domains, using synthetic and real network data, demonstrate the efficacy of our approach in estimating total treatment effect dynamics, even in cases where interference exhibits non-monotonic behavior in the probability of treatment.
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.00945
  6. By: Alain Hecq; Ivan Ricardo; Ines Wilms
    Abstract: This paper proposes a Matrix Error Correction Model to identify cointegration relations in matrix-valued time series. We hereby allow separate cointegrating relations along the rows and columns of the matrix-valued time series and use information criteria to select the cointegration ranks. Through Monte Carlo simulations and a macroeconomic application, we demonstrate that our approach provides a reliable estimation of the number of cointegrating relationships.
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.05601
  7. By: FAN, Yizhou (Hiroshima University); Nakao, Ran (Ehime University); HAYASHIKAWA, Yuki
    Abstract: This study proposed a strategy for causal identification in survey experiments by leveraging heteroscedasticity induced through information provision (Z). Here, Z affects attitudes (D), which in turn influence behavior (Y). While Z might traditionally serve as an instrumental variable (IV), a direct effect from Z to the outcome variable (Y) would invalidate it as an IV due to exclusion restriction violations. Building on prior methods, such as Lewbel’s IV approach, which utilize naturally occurring heteroscedasticity in observational data, this study explored the artificial induction of heteroscedasticity within survey experiments. We developed a new unbiased estimator within a linear structural model and conducted simulations to confirm that this new estimator outperforms conventional IV estimators. We further addressed the issue of endogenous non-compliance and discussed that, under the homogeneous function assumption, manipulating information ambiguity can yield a consistent bias-corrected estimator. This approach can enhance the possibility of causal identification in survey experiments and extend heteroscedasticity-based methodologies to complex compliance contexts, offering new tools for robust experimental design.
    Date: 2024–10–30
    URL: https://d.repec.org/n?u=RePEc:osf:socarx:yecm2
  8. By: Jamie L. Cross; Aubrey Poon; Wenying Yao; Dan Zhu
    Abstract: The Dynamic Nelson-Siegel (DNS) model implies that the instantaneous bond yield is a linear combination of the yield curve’s level and slope factors. However, this constraint is not used in practice because it induces a singularity in the state covariance matrix. We show that this problem can be resolved using Bayesian methods. The key idea is to view the state equation as a prior distribution over missing data to obtain a hyperplane-truncated multivariate normal conditional posterior distribution for the latent factors. This distribution can then be reparameterized as a conditional multivariate normal distribution given the constraint. Samples from this distribution can be obtained in a direct and computationally efficient manner, thus bypassing the Kalman filter recursions. The empirical significance of the resulting Yield-Macro Constrained DNS (YM-CDNS) model is demonstrated through both a reduced-form analysis of the US Treasury yield curve, and a structural analysis of functional conventional and unconventional monetary policy shocks on the yield curve and the broader macroeconomy.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:bny:wpaper:0133
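    [Editor's illustration] The building block behind such constrained samplers is drawing from a multivariate normal subject to an exact hyperplane constraint $Ax = c$: draw unconstrained and project. The sketch below uses illustrative dimensions and values, not the paper's yield-curve model.

    ```python
    # Sample y ~ N(mu, Sigma), then project each draw onto the hyperplane
    # A x = c; the projected draws follow the hyperplane-truncated normal.
    import numpy as np

    rng = np.random.default_rng(2)
    mu = np.zeros(3)
    Sigma = np.array([[2.0, 0.5, 0.0],
                      [0.5, 1.0, 0.3],
                      [0.0, 0.3, 1.5]])
    A = np.array([[1.0, 1.0, 0.0]])   # constraint: x0 + x1 = 1
    c = np.array([1.0])

    y = rng.multivariate_normal(mu, Sigma, size=1000)
    K = Sigma @ A.T @ np.linalg.inv(A @ Sigma @ A.T)   # projection gain
    x = y + (c - y @ A.T) @ K.T

    print(np.allclose(x @ A.T, c))    # every draw satisfies A x = c exactly
    ```

    The projection is exact by construction, so no draws are rejected — one reason the approach bypasses filtering recursions.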
  9. By: Krishnamurthy, Sanath Kumar (Stanford U); Athey, Susan (Stanford U); Brunskill, Emma (Stanford U)
    Abstract: We formulate the problem of constructing multiple simultaneously valid confidence intervals (CIs) as estimating a high probability upper bound on the maximum error for a class/set of estimate-estimand-error tuples, and refer to this as the error estimation problem. For a single such tuple, data-driven confidence intervals can often be used to bound the error in our estimate. However, for a class of estimate-estimand-error tuples, nontrivial high probability upper bounds on the maximum error often require class complexity as input — limiting the practicality of such methods and often resulting in loose bounds. Rather than deriving theoretical class complexity-based bounds, we propose a completely data-driven approach to estimate an upper bound on the maximum error. The simple and general nature of our solution to this fundamental challenge lends itself to several applications including: multiple CI construction, multiple hypothesis testing, estimating excess risk bounds (a fundamental measure of uncertainty in machine learning) for any training/fine-tuning algorithm, and enabling the development of a contextual bandit pipeline that can leverage any reward model estimation procedure as input (without additional mathematical analysis).
    Date: 2024–05
    URL: https://d.repec.org/n?u=RePEc:ecl:stabus:4208
  10. By: Christopher D. Walker
    Abstract: This paper presents a Bayesian inference framework for a linear index threshold-crossing binary choice model that satisfies a median independence restriction. The key idea is that the model is observationally equivalent to a probit model with nonparametric heteroskedasticity. Consequently, Gibbs sampling techniques from Albert and Chib (1993) and Chib and Greenberg (2013) lead to a computationally attractive Bayesian inference procedure in which a Gaussian process forms a conditionally conjugate prior for the natural logarithm of the skedastic function.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.17153
  11. By: Clément de Chaisemartin
    Abstract: I consider treatment-effect estimation with a two-periods panel, using a first-difference regression of the outcome evolution $\Delta Y_g$ on the treatment evolution $\Delta D_g$. To justify this regression, one may assume that $\Delta D_g$ is as good as randomly assigned, namely uncorrelated to the residual of the first-differenced model and to the treatment's effect. This note shows that if one posits a causal model in levels between the treatment and the outcome, then the residual of the first-differenced model is a function of $D_{g, 1}$, so $\Delta D_g$ uncorrelated to that residual essentially implies that $\Delta D_g$ is uncorrelated to $D_{g, 1}$. This is a strong, testable condition. If $\Delta D_g$ is correlated to $D_{g, 1}$, assuming that $\Delta D_g$ is uncorrelated to the treatment effect and to the remaining terms of the residual may not be sufficient to have that the first-difference regression identifies a convex combination of treatment effects. I use these results to revisit Acemoglu et al (2016).
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.03208
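    [Editor's illustration] The note's key testable condition — that the treatment change is uncorrelated with the baseline treatment — takes a few lines to check. The sketch below uses simulated data; with real panel data, replace d1 and d2 by the period-1 and period-2 treatments.

    ```python
    # Test H0: corr(Delta D_g, D_{g,1}) = 0, the condition the note flags.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 500
    d1 = rng.normal(size=n)                 # treatment in period 1
    d2 = 0.6 * d1 + rng.normal(size=n)      # period 2, mean-reverting by design
    delta_d = d2 - d1

    r, pval = stats.pearsonr(delta_d, d1)
    print(f"corr(dD, D1) = {r:.3f}, p-value = {pval:.4g}")
    ```

    In this simulated design the treatment mean-reverts, so the test rejects — the case where, per the note, the first-difference regression may fail to identify a convex combination of treatment effects.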
  12. By: Nifei Lin; Yingda Song; L. Jeff Hong
    Abstract: This paper addresses the estimation of the systemic risk measure known as CoVaR, which quantifies the risk of a financial portfolio conditional on another portfolio being at risk. We identify two principal challenges: conditioning on a zero-probability event and the repricing of portfolios. To tackle these issues, we propose a decoupled approach utilizing smoothing techniques and develop a model-independent theoretical framework grounded in a functional perspective. We demonstrate that the rate of convergence of the decoupled estimator can achieve approximately $O_{\rm P}(\Gamma^{-1/2})$, where $\Gamma$ represents the computational budget. Additionally, we establish the smoothness of the portfolio loss functions, highlighting its crucial role in enhancing sample efficiency. Our numerical results confirm the effectiveness of the decoupled estimators and provide practical insights for the selection of appropriate smoothing techniques.
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.01319
  13. By: Takeshi Fukasawa
    Abstract: This study proposes a simple procedure to compute the Efficient Pseudo-Likelihood (EPL) estimator proposed by Dearing and Blevins (2024) for estimating dynamic discrete games, without computing Jacobians of equilibrium constraints. The EPL estimator is efficient, convergent, and computationally fast. However, the original algorithm requires deriving and coding the Jacobians, which is cumbersome and prone to coding mistakes, especially in complicated models. The current study proposes to avoid the computation of Jacobians by combining numerical derivatives (for computing Jacobian-vector products) with the Krylov method (for solving linear equations). Numerical experiments show the good computational performance of the proposed method.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.20029
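    [Editor's illustration] The generic idea — solve $J(\theta)x = b$ without ever forming $J$, by feeding a finite-difference Jacobian-vector product to a Krylov solver — can be sketched on a toy map. F below is a stand-in, not the EPL equilibrium constraint; see the paper for the estimator-specific details.

    ```python
    # Jacobian-free solve of J x = b at v0, where J is the Jacobian of F:
    # GMRES only needs matrix-vector products, supplied by forward differences.
    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def F(v):                       # toy nonlinear map R^3 -> R^3
        return np.array([v[0]**2 + v[1], np.sin(v[1]) + v[2], v[0] * v[2]])

    v0 = np.array([1.0, 0.5, 2.0])  # point where the Jacobian is evaluated
    b = np.array([1.0, 2.0, 3.0])

    def jvp(x, eps=1e-7):           # forward-difference Jacobian-vector product
        return (F(v0 + eps * x) - F(v0)) / eps

    J_op = LinearOperator((3, 3), matvec=jvp)
    x, info = gmres(J_op, b, atol=1e-10)
    print("residual:", np.linalg.norm(jvp(x) - b))
    ```

    No Jacobian is derived or stored; each GMRES iteration costs one extra evaluation of F, which is what makes the approach attractive when coding analytic Jacobians is error-prone.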
  14. By: Lenartowicz, Paweł
    Abstract: Publication bias poses a serious challenge to the integrity of scientific research and meta-analyses. Persistent methodological obstacles hinder the estimation of this bias, especially in heterogeneous datasets where studies vary widely in methodologies and effect sizes. To address this gap, I propose a Likelihood Ratio Test for Publication Bias, a statistical method designed to detect and quantify publication bias in datasets of heterogeneous study results. I also present a proof-of-concept implementation developed in Python, along with simulations that evaluate its performance. The results demonstrate that this new method clearly outperforms existing methods such as Z-Curve 2 and the Caliper test in estimating the magnitude of publication bias, showing higher precision and reliability, though there remains room for improvement due to errors spotted in the implemented algorithm. While inherent challenges in publication bias detection remain, such as the influence of different research practices and the need for large sample sizes, the Likelihood Ratio Test offers a significant advancement in addressing these issues.
    Date: 2024–11–12
    URL: https://d.repec.org/n?u=RePEc:osf:metaar:jt5zf
  15. By: Yuri Matsumura; Suguru Otani
    Abstract: We provide a counterexample to the conduct parameter identification result established in the foundational work of Lau (1982), which generalizes the identification theorem of Bresnahan (1982) by relaxing the linearity assumptions. We identify a separable demand function that still permits identification and validate this case both theoretically and through numerical simulations.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.16998
  16. By: Hardik Routray; Bernhard Hientzsch
    Abstract: We propose a simple methodology to approximate functions with given asymptotic behavior by specifically constructed terms and an unconstrained deep neural network (DNN). The methodology we describe extends to various asymptotic behaviors and multiple dimensions and is easy to implement. In this work we demonstrate it for linear asymptotic behavior in one-dimensional examples. We apply it to function approximation and regression problems where we measure approximation of only function values (``Vanilla Machine Learning''-VML) or also approximation of function and derivative values (``Differential Machine Learning''-DML) on several examples. We see that enforcing given asymptotic behavior leads to better approximation and faster convergence.
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.05257
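    [Editor's illustration] The construction the abstract describes — a known asymptote plus an unconstrained network whose contribution is damped away from the data region — can be sketched in a few lines. The random-feature "network" and the Gaussian damping below are illustrative assumptions, not the authors' architecture.

    ```python
    # Approximant = known linear asymptote + damped correction term, so the
    # prescribed asymptotic behavior holds by construction, not by training.
    import numpy as np

    rng = np.random.default_rng(1)
    W, c = rng.normal(size=(16, 1)), rng.normal(size=16)  # fixed random features

    def nn(x):                          # stand-in for an unconstrained DNN
        return np.tanh(x[:, None] * W.T + c).sum(axis=1)

    def model(x, a=2.0, b=-1.0):
        damp = np.exp(-0.1 * x**2)      # forces the correction to vanish at +/-inf
        return a * x + b + damp * nn(x)

    x_far = np.array([50.0, -50.0])
    print(model(x_far))                 # ~ [99., -101.]: the asymptote dominates
    ```

    Because the correction is multiplied by a term vanishing at infinity, training only shapes the fit in the bulk of the data while the tails inherit the prescribed linear behavior.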
  17. By: Nicola F. Zaugg; Leonardo Perotti; Lech A. Grzelak
    Abstract: It is a market practice to express market-implied volatilities in some parametric form. The most popular parametrizations are based on or inspired by an underlying stochastic model, like the Heston model (SVI method) or the SABR model (SABR parametrization). Their popularity is often driven by a closed-form representation enabling efficient calibration. However, these representations indirectly impose a model-specific volatility structure on observable market quotes. When the market's volatility does not follow the parametric model regime, the calibration procedure will fail or lead to extreme parameters, indicating inconsistency. This article addresses this critical limitation - we propose an arbitrage-free framework for letting the parameters from the parametric implied volatility formula be random. The method enhances the existing parametrizations and enables a significant widening of the spectrum of permissible shapes of implied volatilities while preserving analyticity and, therefore, computation efficiency. We demonstrate the effectiveness of the novel method on real data from short-term index and equity options, where the standard parametrizations fail to capture market dynamics. Our results show that the proposed method is particularly powerful in modeling the implied volatility curves of short expiry options preceding an earnings announcement, when the risk-neutral probability density function exhibits a bimodal form.
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.04041
  18. By: Aknouche, Abdelhakim; Dimitrakopoulos, Stefanos
    Abstract: We consider two popular classes of volatility models, the generalized autoregressive conditional heteroscedastic (GARCH) model and the stochastic volatility (SV) model. We compare these two models with two classes of intensity models, the integer-valued GARCH (INGARCH) model and the integer-valued stochastic volatility/intensity (INSV) model, which are corresponding integer-valued counterparts of the former. We reveal the analogy and differences of the models within the same class of volatility/intensity models, as well as between the two different classes of models.
    Keywords: GARCH, integer-valued GARCH, integer-valued stochastic intensity, observation-driven models, parameter-driven models, stochastic volatility.
    JEL: C25 C51 C58
    Date: 2024–10–28
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:122528
  19. By: Massimiliano Marcellino; Andrea Renzetti; Tommaso Tornese
    Abstract: We propose a functional MIDAS model to leverage high-frequency information for forecasting and nowcasting distributions observed at a lower frequency. We approximate the low-frequency distribution using Functional Principal Component Analysis and consider a group lasso spike-and-slab prior to identify the relevant predictors in the finite-dimensional SUR-MIDAS approximation of the functional MIDAS model. In our application, we use the model to nowcast the U.S. households' income distribution. Our findings indicate that the model enhances forecast accuracy for the entire target distribution and for key features of the distribution that signal changes in inequality.
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.05629
  20. By: Ayush Jha; Abootaleb Shirvani; Svetlozar T. Rachev; Frank J. Fabozzi
    Abstract: We introduce a new identification strategy for uncertainty shocks to explain macroeconomic volatility in financial markets. The Chicago Board Options Exchange Volatility Index (VIX) measures market expectations of future volatility, but traditional methods based on second-moment shocks and time-varying volatility of the VIX often fail to capture the non-Gaussian, heavy-tailed nature of asset returns. To address this, we construct a revised VIX by fitting a double-subordinated Normal Inverse Gaussian Levy process to S&P 500 option prices, providing a more comprehensive measure of volatility that reflects the extreme movements and heavy tails observed in financial data. Using an axiomatic approach, we introduce a general family of risk-reward ratios, computed with our revised VIX and fitted over a fractional time series to more accurately identify uncertainty shocks in financial markets.
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.02804
  21. By: András Telcs; Marcell T. Kurbucz; Antal Jakovác
    Abstract: Temporally evolving systems are typically modeled by dynamic equations. A key challenge in accurate modeling is understanding the causal relationships between subsystems, as well as identifying the presence and influence of unobserved hidden drivers on the observed dynamics. This paper presents a unified method capable of identifying fundamental causal relationships between pairs of systems, whether deterministic or stochastic. Notably, the method also uncovers hidden common causes beyond the observed variables. By analyzing the degrees of freedom in the system, our approach provides a more comprehensive understanding of both causal influence and hidden confounders. This unified framework is validated through theoretical models and simulations, demonstrating its robustness and potential for broader application.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.19469
  22. By: Songliang Chen; Fang Han
    Abstract: This paper examines the limiting variance of nearest neighbor matching estimators for average treatment effects with a fixed number of matches. We present, for the first time, a closed-form expression for this limit. Here the key is the establishment of the limiting second moment of the catchment area's volume, which resolves a question of Abadie and Imbens. At the core of our approach is a new universality theorem on the measures of high-order Voronoi cells, extending a result by Devroye, Györfi, Lugosi, and Walk.
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.05758
  23. By: HARA, Naoko; YAMAMOTO, Yohei
    Abstract: We propose a formal testing procedure to examine the resilience of an economy. Our approach is applicable even when a cross-section of control-group units is unavailable and circumvents potential bias in time-series regressions using data that include structural breaks. We provide measures of shock absorption and cumulative recovery. Our empirical analysis reveals that most of the advanced countries were not resilient to the Global Financial Crisis, while many were so during the COVID-19 pandemic. Potential determinants of economic resilience such as financial leverage and labor market regulation may have negative correlations with these measures, and other determinants have heterogeneous associations depending on the nature of the crisis.
    Keywords: Economic Resilience, Counterfactual Forecast, Pretesting, Global Financial Crisis, COVID-19 Pandemic
    JEL: C12 C53
    Date: 2024–11–08
    URL: https://d.repec.org/n?u=RePEc:hit:hiasdp:hias-e-142
  24. By: Gabriel Nova; Sander van Cranenburgh; Stephane Hess
    Abstract: Discrete Choice Modelling serves as a robust framework for modelling human choice behaviour across various disciplines. Building a choice model is a semi-structured research process that involves a combination of a priori assumptions, behavioural theories, and statistical methods. This complex set of decisions, coupled with diverse workflows, can lead to substantial variability in model outcomes. To better understand these dynamics, we developed the Serious Choice Modelling Game, which simulates the real-world modelling process and tracks modellers' decisions in real time using a stated preference dataset. Participants were asked to develop choice models to estimate Willingness to Pay values to inform policymakers about strategies for reducing noise pollution. The game recorded actions across multiple phases, including descriptive analysis, model specification, and outcome interpretation, allowing us to analyse both individual decisions and differences in modelling approaches. While our findings reveal a strong preference for using data visualisation tools in descriptive analysis, they also identify gaps in the handling of missing values before model specification. We also found significant variation in modelling approaches, even when modellers were working with the same choice dataset. Despite the availability of more complex models, simpler models such as the Multinomial Logit were often preferred, suggesting that modellers tend to avoid complexity when time and resources are limited. Participants who engaged in more comprehensive data exploration and iterative model comparison tended to achieve better model fit and parsimony, demonstrating that the methodological choices made throughout the workflow have significant implications, particularly when modelling outcomes are used for policy formulation.
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.01704
  25. By: Massimiliano Marcellino; Andrea Renzetti; Tommaso Tornese
    Abstract: We develop a Functional Augmented Vector Autoregression (FunVAR) model to explicitly incorporate firm-level heterogeneity observed in more than one dimension and study its interaction with aggregate macroeconomic fluctuations. Our methodology employs dimensionality reduction techniques for tensor data objects to approximate the joint distribution of firm-level characteristics. More broadly, our framework can be used for assessing predictions from structural models that account for micro-level heterogeneity observed on multiple dimensions. Leveraging firm-level data from the Compustat database, we use the FunVAR model to analyze the propagation of total factor productivity (TFP) shocks, examining their impact on both macroeconomic aggregates and the cross-sectional distribution of capital and labor across firms.
    Date: 2024–11
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2411.05695

This nep-ecm issue is ©2024 by Sune Karlsson. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.