nep-cmp New Economics Papers
on Computational Economics
Issue of 2025–02–17
fourteen papers chosen by
Stan Miles, Thompson Rivers University


  1. Hybrid Quantum Neural Networks with Amplitude Encoding: Advancing Recovery Rate Predictions By Ying Chen; Paul Griffin; Paolo Recchia; Zhou Lei; Hongrui Chang
  2. Progress in Artificial Intelligence and its Determinants By Michael R. Douglas; Sergiy Verstyuk
  3. Scale-Insensitive Neural Network Significance Tests By Hasan Fallahgoul
  4. Reinforcement-Learning Portfolio Allocation with Dynamic Embedding of Market Information By Jinghai He; Cheng Hua; Chunyang Zhou; Zeyu Zheng
  5. Municipality synthetic Gini index for Colombia: A machine learning approach By John Michael Riveros-Gavilanes
  6. Predicting COVID-19 Mortality Rates: An Analysis of Case Incidence, Mask Usage, and Machine Learning Approaches in U.S. Counties By Jacob Pratt; Serkan Varol; Serkan Catma
  7. Political Bias in Large Language Models: A Comparative Analysis of ChatGPT-4, Perplexity, Google Gemini, and Claude By Tavishi Choudhary
  8. AI Governance through Markets By Philip Moreira Tomei; Rupal Jain; Matija Franklin
  9. Artificial Intelligence Clones By Annie Liang
  10. Quantitative Theory of Money or Prices? A Historical, Theoretical, and Econometric Analysis By Jose Mauricio Gomez Julian
  11. Constructing Applicants from Loan-Level Data: A Case Study of Mortgage Applications By Hadi Elzayn; Simon Freyaldenhoven; Minchul Shin
  12. Welfare Modeling with AI as Economic Agents: A Game-Theoretic and Behavioral Approach By Sheyan Lalmohammed
  13. PASER: A Physics-Inspired Theory for Stimulated Growth and Real-Time Optimization in On-Demand Platforms By Ioannis Dritsas
  14. Beyond Human Intervention: Algorithmic Collusion through Multi-Agent Learning Strategies By Suzie Grondin; Arthur Charpentier; Philipp Ratz

  1. By: Ying Chen; Paul Griffin; Paolo Recchia; Zhou Lei; Hongrui Chang
    Abstract: Recovery rate prediction plays a pivotal role in bond investment strategies, enhancing risk assessment, optimizing portfolio allocation, improving pricing accuracy, and supporting effective credit risk management. However, forecasting faces challenges like high-dimensional features, small sample sizes, and overfitting. We propose a hybrid Quantum Machine Learning model incorporating Parameterized Quantum Circuits (PQC) within a neural network framework. PQCs inherently preserve unitarity, avoiding computationally costly orthogonality constraints, while amplitude encoding enables exponential data compression, reducing qubit requirements logarithmically. Applied to a global dataset of 1,725 observations (1996-2023), our method achieved superior accuracy (RMSE 0.228) compared to classical neural networks (0.246) and quantum models with angle encoding (0.242), with efficient computation times. This work highlights the potential of hybrid quantum-classical architectures in advancing recovery rate forecasting.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.15828
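    A minimal NumPy sketch of the two ingredients named above, amplitude encoding and a parameterized unitary layer, may help fix ideas. It illustrates the generic technique only, not the authors' architecture; the features and rotation angles below are made up. Because the layer is built from unitary rotations, no explicit orthogonality penalty is needed, which is the computational saving the abstract points to.

      import numpy as np

      def amplitude_encode(x):
          """Store a feature vector in the amplitudes of a quantum state:
          a vector of length 2**n needs only n qubits (logarithmic compression)."""
          x = np.asarray(x, dtype=float)
          return x / np.linalg.norm(x)

      def ry(theta):
          """Single-qubit rotation about the Y axis, a unitary building block."""
          c, s = np.cos(theta / 2), np.sin(theta / 2)
          return np.array([[c, -s], [s, c]])

      def pqc_layer(state, thetas):
          """One layer of independent R_y rotations, one per qubit. The layer
          is unitary by construction, so no orthogonality constraint is needed."""
          u = np.array([[1.0]])
          for t in thetas:
              u = np.kron(u, ry(t))
          return u @ state

      # Toy example: 8 features -> 3 qubits.
      features = np.random.default_rng(0).normal(size=8)
      state = pqc_layer(amplitude_encode(features), thetas=[0.3, -0.7, 1.1])
      # Expectation of Z on the first qubit: a scalar circuit output that a
      # classical head could map to a predicted recovery rate.
      probs = np.abs(state) ** 2
      print(round(probs[:4].sum() - probs[4:].sum(), 4))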
  2. By: Michael R. Douglas; Sergiy Verstyuk
    Abstract: We study long-run progress in artificial intelligence in a quantitative way. Many measures, including traditional ones such as patents and publications, machine learning benchmarks, and a new Aggregate State of the Art in ML (or ASOTA) Index we have constructed from these, show exponential growth at roughly constant rates over long periods. Production of patents and publications doubles every ten years, by contrast with the growth of computing resources driven by Moore's Law, roughly a doubling every two years. We argue that the input of AI researchers is also crucial and its contribution can be objectively estimated. Consequently, we give a simple argument that explains the 5:1 relation between these two rates. We then discuss the application of this argument to different output measures and compare our analyses with predictions based on machine learning scaling laws proposed in existing literature. Our quantitative framework facilitates understanding, predicting, and modulating the development of these important technologies.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.17894
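    The 5:1 relation quoted above is just the ratio of the implied exponential growth rates: a doubling time T corresponds to a rate ln(2)/T. A one-line check:

      import math

      def growth_rate(doubling_time_years):
          """Exponential growth rate implied by a doubling time: g = ln(2) / T."""
          return math.log(2) / doubling_time_years

      g_compute = growth_rate(2)    # Moore's Law: compute doubles every ~2 years
      g_output = growth_rate(10)    # patents/publications double every ~10 years
      print(g_compute / g_output)   # -> 5.0, the 5:1 relation between the rates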
  3. By: Hasan Fallahgoul
    Abstract: This paper develops a scale-insensitive framework for neural network significance testing, substantially generalizing existing approaches through three key innovations. First, we replace metric entropy calculations with Rademacher complexity bounds, enabling the analysis of neural networks without requiring bounded weights or specific architectural constraints. Second, we weaken the regularity conditions on the target function to require only Sobolev space membership $H^s([-1, 1]^d)$ with $s > d/2$, significantly relaxing previous smoothness assumptions while maintaining optimal approximation rates. Third, we introduce a modified sieve space construction based on moment bounds rather than weight constraints, providing a more natural theoretical framework for modern deep learning practices. Our approach achieves these generalizations while preserving optimal convergence rates and establishing valid asymptotic distributions for test statistics. The technical foundation combines localization theory, sharp concentration inequalities, and scale-insensitive complexity measures to handle unbounded weights and general Lipschitz activation functions. This framework better aligns theoretical guarantees with contemporary deep learning practice while maintaining mathematical rigor.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.15753
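    The central quantity above, Rademacher complexity, can be estimated numerically. A rough Monte Carlo sketch (not the paper's bounds): the supremum over the network class is replaced by a maximum over a finite sample of random two-layer ReLU networks, so the estimate is only a lower proxy for the true complexity.

      import numpy as np

      def empirical_rademacher(X, predictors, n_draws=200, seed=0):
          """Monte Carlo proxy for E_sigma[ sup_f (1/n) sum_i sigma_i f(x_i) ],
          with the sup over the class replaced by a max over sampled networks."""
          rng = np.random.default_rng(seed)
          n = len(X)
          outputs = np.stack([f(X) for f in predictors])  # (n_models, n)
          total = 0.0
          for _ in range(n_draws):
              sigma = rng.choice([-1.0, 1.0], size=n)     # Rademacher signs
              total += np.max(outputs @ sigma) / n
          return total / n_draws

      def random_relu_net(d, width, rng):
          """One random two-layer ReLU network x -> v^T relu(W x)."""
          W = rng.normal(size=(width, d)) / np.sqrt(d)
          v = rng.normal(size=width) / np.sqrt(width)
          return lambda X: np.maximum(X @ W.T, 0.0) @ v

      rng = np.random.default_rng(1)
      X = rng.normal(size=(100, 5))
      nets = [random_relu_net(5, 16, rng) for _ in range(50)]
      print(empirical_rademacher(X, nets))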
  4. By: Jinghai He; Cheng Hua; Chunyang Zhou; Zeyu Zheng
    Abstract: We develop a portfolio allocation framework that leverages deep learning techniques to address challenges arising from high-dimensional, non-stationary, and low-signal-to-noise market information. Our approach includes a dynamic embedding method that reduces the non-stationary, high-dimensional state space into a lower-dimensional representation. We design a reinforcement learning (RL) framework that integrates generative autoencoders and online meta-learning to dynamically embed market information, enabling the RL agent to focus on the most impactful parts of the state space for portfolio allocation decisions. Empirical analysis based on the top 500 U.S. stocks demonstrates that our framework outperforms common portfolio benchmarks and the predict-then-optimize (PTO) approach using machine learning, particularly during periods of market stress. Traditional factor models do not fully explain this superior performance. The framework's ability to time volatility reduces its market exposure during turbulent times. Ablation studies confirm the robustness of this performance across various reinforcement learning algorithms. Additionally, the embedding and meta-learning techniques effectively manage the complexities of high-dimensional, noisy, and non-stationary financial data, enhancing both portfolio performance and risk management.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.17992
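    A stripped-down PyTorch sketch of the embedding step described above: a plain autoencoder compresses a high-dimensional market state into a low-dimensional vector that a downstream policy could consume. The paper's generative autoencoder, online meta-learning, and RL components are not reproduced, and all dimensions below are hypothetical.

      import torch
      import torch.nn as nn

      class MarketAutoencoder(nn.Module):
          """Compress a high-dimensional market state into a small embedding."""
          def __init__(self, n_features=500, n_embed=16):
              super().__init__()
              self.encoder = nn.Sequential(
                  nn.Linear(n_features, 128), nn.ReLU(), nn.Linear(128, n_embed))
              self.decoder = nn.Sequential(
                  nn.Linear(n_embed, 128), nn.ReLU(), nn.Linear(128, n_features))

          def forward(self, x):
              z = self.encoder(x)
              return self.decoder(z), z

      model = MarketAutoencoder()
      opt = torch.optim.Adam(model.parameters(), lr=1e-3)
      x = torch.randn(64, 500)              # a batch of synthetic market states
      for _ in range(5):                    # a few reconstruction steps
          recon, z = model(x)
          loss = nn.functional.mse_loss(recon, x)
          opt.zero_grad()
          loss.backward()
          opt.step()
      # z (64 x 16) is the low-dimensional state an RL agent would act on.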
  5. By: John Michael Riveros-Gavilanes
    Abstract: This paper presents two synthetic estimations of the Gini coefficient at the municipality level for Colombia for the years 2000-2020. The methodology relies on several machine learning models to select the best model for imputation of the data. This yields two Random Forest models, where the first is characterized by Dominant Fixed Effects while the second contains a set of Dominant Varying Factors. The Synthetic Gini Coefficients from both models are then inspected, and public links are provided to access them. The Dominant Fixed Effects model is rather “stiff” in contrast to the Varying Factors model. Researchers are therefore advised to use the Synthetic Gini Coefficient with Varying Factors, since it exhibits greater variability across time than the Dominant Fixed Effects model.
    Keywords: Gini; Machine learning; Random forest; estimation; synthetic; economics
    JEL: C80 H7 O10 P19
    Date: 2025–02–01
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:123561
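    A minimal scikit-learn sketch of the imputation idea: fit a random forest on municipality-year observations with a measured Gini, score it by cross-validation, and predict a synthetic Gini where the survey value is missing. The features and data below are simulated placeholders, not the paper's covariates.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X_obs = rng.normal(size=(300, 8))      # e.g. fiscal/demographic covariates
      y_obs = np.clip(0.5 + 0.1 * X_obs[:, 0]
                      + 0.05 * rng.normal(size=300), 0.0, 1.0)
      X_missing = rng.normal(size=(50, 8))   # municipalities lacking a survey Gini

      model = RandomForestRegressor(n_estimators=500, random_state=0)
      print(cross_val_score(model, X_obs, y_obs, cv=5,
                            scoring="neg_root_mean_squared_error").mean())

      model.fit(X_obs, y_obs)
      synthetic_gini = model.predict(X_missing)  # the imputed (synthetic) index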
  6. By: Jacob Pratt (University of Tennessee Chattanooga, USA); Serkan Varol (University of Tennessee Chattanooga, USA); Serkan Catma (University of Tennessee Chattanooga, USA)
    Abstract: The COVID-19 pandemic has necessitated the use of a multidisciplinary approach to assess public health interventions. Data science has been widely utilized to promote interdisciplinary collaboration, especially during the post-COVID era. This study uses a comprehensive dataset, including mask usage and epidemiological metrics from U.S. counties, to explore the correlation between public compliance with mask-wearing guidelines and COVID-19 mortality rates. After employing machine learning approaches such as linear regression, decision tree regression, and random forest regression, our analysis identified the random forest model as the most accurate in predicting mortality rates, yielding the lowest error metrics. The models' performances were rigorously evaluated through error metric comparisons, highlighting the random forest model's robustness in handling complex interactions between variables. These findings provide actionable insights for public health strategists and policy makers, suggesting that enhanced mask compliance could significantly mitigate mortality rates during the ongoing pandemic and future health crises.
    Keywords: machine learning applications, predictive modeling for public health, COVID-19 analysis, pandemic, model comparison
    Date: 2024–08
    URL: https://d.repec.org/n?u=RePEc:smo:raiswp:0455
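    A compact sketch of the comparison described above, using the same three scikit-learn model families and the same kind of error-metric horse race; the county-level data here are simulated placeholders.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import mean_absolute_error, mean_squared_error

      rng = np.random.default_rng(0)
      X = rng.uniform(size=(1000, 4))   # stand-ins: case incidence, mask usage...
      y = 0.05 * X[:, 0] - 0.03 * X[:, 1] + 0.01 * rng.normal(size=1000)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                                random_state=0)
      models = {
          "linear": LinearRegression(),
          "tree": DecisionTreeRegressor(random_state=0),
          "forest": RandomForestRegressor(n_estimators=300, random_state=0),
      }
      for name, m in models.items():
          pred = m.fit(X_tr, y_tr).predict(X_te)
          print(name,
                "MAE", round(mean_absolute_error(y_te, pred), 4),
                "RMSE", round(mean_squared_error(y_te, pred) ** 0.5, 4))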
  7. By: Tavishi Choudhary (Greenwich High, Greenwich, Connecticut, US)
    Abstract: Artificial intelligence large language models have rapidly gained widespread adoption, sparking discussions on their societal and political impact, especially regarding political bias and its far-reaching consequences for society and citizens. This study explores political bias in large language models by conducting a comparative analysis across four popular AI models: ChatGPT-4, Perplexity, Google Gemini, and Claude. It systematically evaluates their responses to politically charged prompts and questions from the Pew Research Center’s Political Typology Quiz, the Political Compass Quiz, and the ISideWith Quiz. The findings reveal that ChatGPT-4 and Claude exhibit a liberal bias, Perplexity is more conservative, while Google Gemini adopts more centrist stances, reflecting their respective training datasets. The presence of such biases underscores the critical need for transparency in AI development and for diverse training datasets, regular audits, and user education to mitigate these biases. The most significant question surrounding political bias in AI concerns its consequences, particularly its influence on public discourse, policy-making, and democratic processes. The results underscore the ethical implications of AI model development and the need for transparency to build trust and integrity in AI models. Finally, future research directions are outlined to explore and address the complex issue of AI bias.
    Keywords: Large language models (LLM), Generative AI (GenAI), AI Governance and Policy, Ethical AI Systems
    Date: 2024–08
    URL: https://d.repec.org/n?u=RePEc:smo:raiswp:0451
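    A model-agnostic sketch of the scoring pipeline such a study implies: feed quiz statements to each model and map the answers onto a left-right scale. The ask callable, the quiz items, and the rubric below are illustrative placeholders, not the study's instruments.

      # `ask` stands in for any chat-model client (one per model under test).
      QUIZ = [  # illustrative items, not the actual quiz batteries
          ("Government should do more to help the needy.",
           {"agree": -1, "disagree": +1}),
          ("Stricter environmental regulation is worth the cost.",
           {"agree": -1, "disagree": +1}),
      ]

      def political_lean(ask, quiz=QUIZ):
          """Average answers on a -1 (liberal) .. +1 (conservative) scale."""
          scores = []
          for statement, rubric in quiz:
              reply = ask(f"Answer 'agree' or 'disagree' only: {statement}")
              scores.append(rubric.get(reply.strip().lower(), 0))  # else centrist
          return sum(scores) / len(scores)

      # Usage: compute political_lean(...) once per model, then compare the
      # scores across ChatGPT-4, Perplexity, Google Gemini, and Claude.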
  8. By: Philip Moreira Tomei; Rupal Jain; Matija Franklin
    Abstract: This paper argues that market governance mechanisms should be considered a key approach in the governance of artificial intelligence (AI), alongside traditional regulatory frameworks. While current governance approaches have predominantly focused on regulation, we contend that market-based mechanisms offer effective incentives for responsible AI development. We examine four emerging vectors of market governance: insurance, auditing, procurement, and due diligence, demonstrating how these mechanisms can affirm the relationship between AI risk and financial risk while addressing capital allocation inefficiencies. While we do not claim that market forces alone can adequately protect societal interests, we maintain that standardised AI disclosures and market mechanisms can create powerful incentives for safe and responsible AI development. This paper urges regulators, economists, and machine learning researchers to investigate and implement market-based approaches to AI governance.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.17755
  9. By: Annie Liang
    Abstract: Large language models, trained on personal data, may soon be able to mimic individual personalities. This would potentially transform search across human candidates, including for marriage and jobs -- indeed, several dating platforms have already begun experimenting with training "AI clones" to represent users. This paper presents a theoretical framework to study the tradeoff between the substantially expanded search capacity of AI clones and their imperfect representation of humans. Individuals are modeled as points in $k$-dimensional Euclidean space, and their AI clones are modeled as noisy approximations. I compare two search regimes: an "in-person regime" -- where each person randomly meets some number of individuals and matches to the most compatible among them -- against an "AI representation regime" -- in which individuals match to the person whose AI clone is most compatible with their AI clone. I show that the expected payoff from a finite number of in-person encounters exceeds the expected payoff from search over an unlimited pool of AI clones. Moreover, when the dimensionality of personality is large, simply meeting two people in person produces a higher expected match quality than entrusting the process to an AI platform, regardless of the size of its candidate pool.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.16996
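    A toy Monte Carlo version of that comparison, under strong simplifying assumptions not taken from the paper (the searcher sits at the origin, payoff is minus squared distance, clone noise is Gaussian on both sides):

      import numpy as np

      def simulate(k=50, n_meet=2, n_clones=2000, noise=2.0, trials=200, seed=0):
          """Expected payoff from (a) meeting n_meet people in person versus
          (b) screening a large pool by clone-to-clone compatibility."""
          rng = np.random.default_rng(seed)
          in_person, via_clones = [], []
          for _ in range(trials):
              cands = rng.normal(size=(n_clones, k))      # true personalities
              true_dist = (cands ** 2).sum(axis=1)
              # (a) true compatibility observed for a small in-person sample
              in_person.append(-true_dist[:n_meet].min())
              # (b) rank the whole pool by noisy clone-to-clone distance
              clone_self = noise * rng.normal(size=k)
              clone_cands = cands + noise * rng.normal(size=(n_clones, k))
              clone_dist = ((clone_cands - clone_self) ** 2).sum(axis=1)
              via_clones.append(-true_dist[np.argmin(clone_dist)])
          return np.mean(in_person), np.mean(via_clones)

      # With many personality dimensions and noisy clones, two in-person
      # meetings can beat screening thousands of clones, echoing the result.
      print(simulate())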
  10. By: Jose Mauricio Gomez Julian
    Abstract: This research studies the relation between money and prices and its practical implications, analyzing quarterly data from the United States (1959-2022), Canada (1961-2022), the United Kingdom (1986-2022), and Brazil (1996-2022). The historical, logical, and econometric consistency of the logical core of the two main theories of money is analyzed using objective Bayesian and frequentist machine learning models, Bayesian regularized artificial neural networks, and ensemble learning. It is concluded that money is not neutral at any time horizon and that, although money is ultimately subordinated to prices, there is a reciprocal influence over time between money and prices, which together constitute a complex system. Non-neutrality is transmitted through aggregate demand and is based on the exchange value of money as a monetary unit.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.14623
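    The claim of reciprocal influence over time invites a two-directional predictability test. As a simple stand-in for the paper's Bayesian and ensemble machinery (not the authors' method), a Granger-causality check on synthetic coupled series:

      import numpy as np
      from statsmodels.tsa.stattools import grangercausalitytests

      rng = np.random.default_rng(0)
      n = 200
      money, prices = np.zeros(n), np.zeros(n)
      for t in range(1, n):                  # a synthetic coupled system
          money[t] = 0.5 * money[t-1] + 0.3 * prices[t-1] + rng.normal()
          prices[t] = 0.4 * prices[t-1] + 0.2 * money[t-1] + rng.normal()

      # Does money help predict prices? (second column is the candidate cause)
      grangercausalitytests(np.column_stack([prices, money]), maxlag=2)
      # ...and the reverse direction, probing the reciprocal influence.
      grangercausalitytests(np.column_stack([money, prices]), maxlag=2)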
  11. By: Hadi Elzayn; Simon Freyaldenhoven; Minchul Shin
    Abstract: We develop a clustering-based algorithm to detect loan applicants who submit multiple applications (“cross-applicants”) in a loan-level dataset without personal identifiers. A key innovation of our approach is a novel evaluation method that does not require labeled training data, allowing us to optimize the tuning parameters of our machine learning algorithm. By applying this methodology to Home Mortgage Disclosure Act (HMDA) data, we create a unique dataset that consolidates mortgage applications to the individual applicant level across the United States. Our preferred specification identifies cross-applicants with 93 percent precision.
    Keywords: clustering; mortgage applications; HMDA
    JEL: C38 C63 C81 G21 R21
    Date: 2025–02–04
    URL: https://d.repec.org/n?u=RePEc:fip:fedpwp:99499
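    An illustrative scikit-learn sketch of the clustering idea: applications from the same person should sit unusually close together in feature space, so a density-based clusterer can group them without labels. The features, noise scale, and eps threshold below are invented; the paper's algorithm and label-free tuning procedure differ.

      import numpy as np
      from sklearn.cluster import DBSCAN
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      singles = rng.normal(size=(200, 4))    # stand-ins: income, amount, tract...
      person = rng.normal(size=4)
      dupes = person + 0.01 * rng.normal(size=(3, 4))  # one person, 3 applications
      X = StandardScaler().fit_transform(np.vstack([singles, dupes]))

      labels = DBSCAN(eps=0.05, min_samples=2).fit_predict(X)
      # -1 marks presumed single applications; shared labels flag cross-applicants
      for lab in set(labels) - {-1}:
          print("candidate cross-applicant rows:", np.where(labels == lab)[0])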
  12. By: Sheyan Lalmohammed
    Abstract: The integration of artificial intelligence (AI) into economic systems represents a transformative shift in decision-making frameworks, introducing novel dynamics between human and AI agents. This paper proposes a welfare model that incorporates both game-theoretic and behavioral dimensions to optimize interactions within human-AI ecosystems. By leveraging agent-based modeling (ABM), we simulate these interactions, accounting for trust evolution, perceived risks, and cognitive costs. The framework redefines welfare as the aggregate utility of interactions, adjusted for collaboration synergies, efficiency penalties, and equity considerations. Dynamic trust is modeled using Bayesian updating mechanisms, while synergies between agents are quantified through a collaboration index rooted in cooperative game theory. Results reveal that trust-building and skill development are pivotal to maximizing welfare, while sensitivity analyses highlight the trade-offs between AI complexity, equity, and efficiency. This research provides actionable insights for policymakers and system designers, emphasizing the importance of equitable AI adoption and fostering sustainable human-AI collaborations.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.15317
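    A minimal sketch of the Bayesian trust-updating mechanism mentioned above, under a Beta-Bernoulli reading that is one plausible implementation; the full welfare model with synergies, cognitive costs, and equity terms is not reproduced.

      def update_trust(alpha, beta, success):
          """Beta(alpha, beta) belief about an AI agent's success rate;
          each interaction is treated as a Bernoulli outcome."""
          return (alpha + 1, beta) if success else (alpha, beta + 1)

      alpha, beta = 1.0, 1.0                     # uninformative prior: trust 0.5
      for outcome in [True, True, False, True]:  # a stream of interactions
          alpha, beta = update_trust(alpha, beta, outcome)
          print("trust =", round(alpha / (alpha + beta), 3))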
  13. By: Ioannis Dritsas
    Abstract: This paper introduces an innovative framework for understanding on-demand platforms by quantifying positive network effects, trust, revenue dynamics, and the influence of demand on platform operations at per-minute or even per-second granularity. Drawing inspiration from physics, the framework provides both a theoretical and pragmatic perspective, offering a pictorial and quantitative representation of how on-demand platforms create value. It seeks to demystify their nuanced operations by providing practical, tangible, and highly applicable metrics, platform design templates, and real-time optimization tools for strategic what-if scenario planning. Its model demonstrates strong predictive power and is deeply rooted in raw data. The framework offers a deterministic insight into the workings of diverse platforms like Uber, Airbnb, and food delivery services. Furthermore, it generalizes to model all on-demand service platforms with cyclical operations. It works synergistically with machine learning, game theory, and agent-based models by providing a solid quantitative core rooted in raw data, based on physical truths, and is capable of delivering tangible predictions for real-time operational adjustments. The framework's mathematical model was rigorously validated using highly detailed historical data retrieved with near 100% certainty. Applying data-driven induction, distinct qualities were identified in big data sets via an iterative process. Through analogical thinking, a clear and highly intuitive mapping between the elements, operational principles, and dynamic behaviors of a well-known physical system was established to create a physics-inspired lens for Uber. This novel quantitative framework was named PASER (Profit Amplification by Stimulated Emission of Revenue), drawing an analogy to its physical counterpart, the LASER (Light Amplification by Stimulated Emission of Radiation).
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.14196
  14. By: Suzie Grondin; Arthur Charpentier; Philipp Ratz
    Abstract: Collusion in market pricing is a concept associated with human actions to raise market prices through artificially limited supply. Recently, the idea of algorithmic collusion was put forward, where the human action in the pricing process is replaced by automated agents. Although experiments have shown that collusive market equilibria can be reached through such techniques, without the need for human intervention, many of the techniques developed remain susceptible to exploitation by other players, making them difficult to implement in practice. In this article, we explore a situation where an agent has a multi-objective strategy, and not only learns to unilaterally exploit market dynamics originating from other algorithmic agents, but also learns to model the behaviour of other agents directly. Our results show how common critiques about the viability of algorithmic collusion in real-life settings can be overcome through the usage of slightly more complex algorithms.
    Date: 2025–01
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2501.16935
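    For context, a minimal version of the standard experiment this line of work builds on: two independent Q-learners repeatedly set prices in a winner-takes-all duopoly and can settle above the competitive price. The paper's multi-objective, opponent-modeling agents go beyond this sketch, and the price grid and learning parameters below are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      PRICES = np.array([1.0, 1.5, 2.0])     # marginal cost normalized to zero
      n_p = len(PRICES)

      def profits(i, j):
          """Lower price takes the whole market; ties split it."""
          pi, pj = PRICES[i], PRICES[j]
          if pi < pj: return pi, 0.0
          if pi > pj: return 0.0, pj
          return pi / 2, pj / 2

      # State = both agents' previous prices; Q has shape (agent, s1, s2, action)
      Q = np.zeros((2, n_p, n_p, n_p))
      a = [0, 0]
      alpha, gamma, eps = 0.1, 0.95, 0.1
      for t in range(100_000):
          s = tuple(a)
          a = [int(rng.integers(n_p)) if rng.random() < eps
               else int(np.argmax(Q[k][s])) for k in range(2)]
          r = profits(a[0], a[1])
          for k in range(2):
              target = r[k] + gamma * Q[k][tuple(a)].max()
              Q[k][s][a[k]] += alpha * (target - Q[k][s][a[k]])

      greedy = [int(np.argmax(Q[k][tuple(a)])) for k in range(2)]
      # In experiments of this type the learned prices often end up above the
      # lowest (competitive) grid point, i.e. tacitly collusive.
      print("greedy prices:", PRICES[greedy[0]], PRICES[greedy[1]])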

This nep-cmp issue is ©2025 by Stan Miles. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.