nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024‒07‒29
nine papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Artificial Intelligence and Algorithmic Price Collusion in Two-sided Markets By Cristian Chica; Yinglong Guo; Gilad Lerman
  2. Algorithmic Collusion And The Minimum Price Markov Game By Igor Sadoune; Marcelin Joanis; Andrea Lodi
  3. Effects of technological change and automation on industry structure and (wage-)inequality: insights from a dynamic task-based model By Dawid, Herbert; Neugart, Michael
  4. LABOR-LLM: Language-Based Occupational Representations with Large Language Models By Tianyu Du; Ayush Kanodia; Herman Brunborg; Keyon Vafa; Susan Athey
  5. Impact of the Availability of ChatGPT on Software Development: A Synthetic Difference in Differences Estimation using GitHub Data By Alexander Quispe; Rodrigo Grijalba
  6. What Teaches Robots to Walk, Teaches Them to Trade too -- Regime Adaptive Execution using Informed Data and LLMs By Raeid Saqur
  7. Impact of Sentiment analysis on Energy Sector Stock Prices : A FinBERT Approach By Sarra Ben Yahia; Jose Angel Garcia Sanchez; Rania Hentati Kaffel
  8. Artificial Intelligence Based Technologies and Economic Growth in a Creative Region By Batabyal, Amitrajeet; Kourtit, Karima; Nijkamp, Peter
  9. Ethical Procedures for Responsible Experimental Evaluation of AI-based Education Interventions By Dekker, Izaak; Bredeweg, Bert; Winkel, Wilco te; van de Poel, Ibo

  1. By: Cristian Chica; Yinglong Guo; Gilad Lerman
    Abstract: Algorithmic price collusion facilitated by artificial intelligence (AI) algorithms raises significant concerns. We examine how AI agents using Q-learning engage in tacit collusion in two-sided markets. Our experiments reveal that AI-driven platforms sustain higher collusion levels than the Bertrand competition benchmark. Increased network externalities significantly enhance collusion, suggesting that AI algorithms exploit them to maximize profits. Higher user heterogeneity or greater utility from outside options generally reduces collusion, while higher discount rates increase it. Tacit collusion remains feasible even at low discount rates. To mitigate collusive behavior and inform potential regulatory measures, we propose incorporating a penalty term in the Q-learning algorithm.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.04088&r=
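    A minimal sketch of the mitigation idea: a Q-learning price setter whose reward is reduced when it prices above a competitive benchmark. The class name, the linear penalty form, and all parameters are illustrative assumptions, not the paper's exact specification.

        import numpy as np

        # Sketch: Q-learning price setter with a collusion penalty (assumed form).
        class PenalizedQPricer:
            def __init__(self, prices, alpha=0.1, gamma=0.95, eps=0.1,
                         p_bertrand=1.0, penalty=0.5):
                self.prices = prices            # discrete price grid
                self.q = np.zeros(len(prices))  # stateless Q-values, for brevity
                self.alpha, self.gamma, self.eps = alpha, gamma, eps
                self.p_bertrand, self.penalty = p_bertrand, penalty

            def act(self, rng):
                if rng.random() < self.eps:                  # explore
                    return int(rng.integers(len(self.prices)))
                return int(np.argmax(self.q))                # exploit

            def update(self, action, profit):
                price = self.prices[action]
                # Penalize profits earned above the competitive (Bertrand) price.
                reward = profit - self.penalty * max(price - self.p_bertrand, 0.0)
                target = reward + self.gamma * np.max(self.q)
                self.q[action] += self.alpha * (target - self.q[action])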
  2. By: Igor Sadoune; Marcelin Joanis; Andrea Lodi
    Abstract: This paper introduces the Minimum Price Markov Game (MPMG), a dynamic variant of the Prisoner's Dilemma. The MPMG serves as a theoretical model and reasonable approximation of real-world first-price sealed-bid public auctions that follow the minimum price rule. The goal is to provide researchers and practitioners with a framework to study market fairness and regulation in both digitized and non-digitized public procurement processes, amidst growing concerns about algorithmic collusion in online markets. We demonstrate, using multi-agent reinforcement learning-driven artificial agents, that algorithmic tacit coordination is difficult to achieve in the MPMG when cooperation is not explicitly engineered. Paradoxically, our results highlight the robustness of the minimum price rule in an auction environment, but also show that it is not impervious to full-scale algorithmic collusion. These findings contribute to the ongoing debates about algorithmic pricing and its implications.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2407.03521&r=
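    For intuition, a hypothetical two-bidder stage game under a minimum price rule: the lowest bid wins the contract, and matching bids split it. The payoff structure and cost parameter are assumptions for exposition, not the MPMG's exact specification.

        # Hypothetical sealed-bid stage game under a minimum price rule.
        def stage_payoffs(bid_a: float, bid_b: float, cost: float = 1.0):
            """Return (payoff_a, payoff_b) for one round; lowest bid wins."""
            if bid_a < bid_b:
                return (bid_a - cost, 0.0)
            if bid_b < bid_a:
                return (0.0, bid_b - cost)
            shared = (bid_a - cost) / 2.0   # tie: the contract is split
            return (shared, shared)

        # As in the Prisoner's Dilemma, undercutting is individually rational,
        # which is why tacit coordination on high bids is hard to sustain.
        print(stage_payoffs(2.0, 2.0))  # coordinated high bids -> (0.5, 0.5)
        print(stage_payoffs(1.5, 2.0))  # defection wins outright -> (0.5, 0.0)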
  3. By: Dawid, Herbert; Neugart, Michael
    Abstract: The advent of artificial intelligence is changing the task allocation of workers and machines in firms’ production processes, with potentially wide-ranging effects on workers and firms. We develop an agent-based simulation framework to investigate the consequences of different types of automation for industry output, the wage distribution, the labor share, and industry dynamics. It is shown how the competitiveness of markets, in particular barriers to entry, changes the effects that automation has on various outcome variables, and to what extent heterogeneous workers with distinct general skill endowments and heterogeneous firms featuring distinct wage offer rules affect the channels via which automation changes market outcomes.
    Date: 2024–06–25
    URL: https://d.repec.org/n?u=RePEc:dar:wpaper:146300&r=
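    A toy illustration of the task-based logic: tasks below an automation threshold are performed by machines, the rest by workers. Functional forms and parameters are assumptions for exposition, not the paper's model.

        import numpy as np

        # Toy task-based production: tasks on [0, 1] below the automation
        # threshold are done by machines, the remainder by workers.
        def output_and_labor_share(threshold, machine_prod=1.2, worker_prod=1.0,
                                   n_tasks=1000):
            tasks = np.linspace(0.0, 1.0, n_tasks)
            prod = np.where(tasks < threshold, machine_prod, worker_prod)
            labor_share = 1.0 - threshold   # fraction of tasks left to workers
            return prod.mean(), labor_share

        for t in (0.2, 0.5, 0.8):
            y, ls = output_and_labor_share(t)
            print(f"automation={t:.1f}  output={y:.3f}  worker task share={ls:.1f}")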
  4. By: Tianyu Du; Ayush Kanodia; Herman Brunborg; Keyon Vafa; Susan Athey
    Abstract: Many empirical studies of labor market questions rely on estimating relatively simple predictive models using small, carefully constructed longitudinal survey datasets based on hand-engineered features. Large Language Models (LLMs), trained on massive datasets, encode vast quantities of world knowledge and can be used for the next job prediction problem. However, while an off-the-shelf LLM produces plausible career trajectories when prompted, the probability with which an LLM predicts a particular job transition conditional on career history will not, in general, align with the true conditional probability in a given population. Recently, Vafa et al. (2024) introduced CAREER, a transformer-based "foundation model" trained on a large but unrepresentative resume dataset, that predicts transitions between jobs; they further demonstrated how transfer learning techniques can be used to leverage the foundation model to build better predictive models of both transitions and wages that reflect conditional transition probabilities found in nationally representative survey datasets. This paper considers an alternative in which the fine-tuning of the CAREER foundation model is replaced by fine-tuning LLMs. For the task of next job prediction, we demonstrate that models trained with our approach outperform several alternatives in terms of predictive performance on the survey data, including traditional econometric models, CAREER, and LLMs with in-context learning, even though the LLM can in principle predict job titles that are not allowed in the survey data. Further, we show that our fine-tuned LLM-based models’ predictions are more representative of the career trajectories of various workforce subpopulations than those of off-the-shelf LLMs and CAREER. We conduct experiments and analyses that highlight the sources of the gains in the performance of our models for representative predictions.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.17972&r=
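    One common way to set up such fine-tuning is to serialize each career history into a text prompt whose completion is the next job title. The layout below is an assumed illustration; the paper's exact text format is not reproduced here.

        # Hypothetical prompt/completion pair for supervised fine-tuning
        # on next job prediction (assumed format, not the paper's).
        def to_finetune_example(history, next_job):
            lines = [f"{h['year']}: {h['occupation']}" for h in history]
            prompt = "Career history:\n" + "\n".join(lines) + "\nNext occupation:"
            return {"prompt": prompt, "completion": " " + next_job}

        example = to_finetune_example(
            [{"year": 2018, "occupation": "Retail salesperson"},
             {"year": 2020, "occupation": "Customer service representative"}],
            next_job="Office manager",
        )
        print(example["prompt"])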
  5. By: Alexander Quispe; Rodrigo Grijalba
    Abstract: Advancements in Artificial Intelligence, particularly ChatGPT, have significantly impacted software development. Using novel data from the GitHub Innovation Graph, we hypothesize that ChatGPT enhances software production efficiency. Exploiting natural experiments in which some governments banned ChatGPT, we employ Difference-in-Differences (DID), Synthetic Control (SC), and Synthetic Difference-in-Differences (SDID) methods to estimate its effects. Our findings indicate a significant positive impact on the number of git pushes, repositories, and unique developers per 100,000 people, particularly for high-level, general-purpose, and shell scripting languages. These results suggest that AI tools like ChatGPT can substantially boost developer productivity, though further analysis is needed to address potential downsides such as low-quality code and privacy concerns.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.11046&r=
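    The simplest member of this toolkit is the 2x2 difference-in-differences comparison; the numbers below are hypothetical and purely illustrative.

        # Minimal 2x2 difference-in-differences on hypothetical means of
        # git pushes per 100,000 people (illustrative numbers only).
        treated_pre, treated_post = 120.0, 150.0   # countries with ChatGPT access
        control_pre, control_post = 100.0, 110.0   # countries where it was banned

        did = (treated_post - treated_pre) - (control_post - control_pre)
        print(f"DID estimate: {did:.1f} pushes per 100,000 people")   # 20.0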
  6. By: Raeid Saqur
    Abstract: Machine learning techniques applied to financial market forecasting struggle with dynamic regime switching, that is, shifts in the underlying correlations and covariances of true (hidden) market variables. Drawing inspiration from the success of reinforcement learning in robotics, particularly in the agile locomotion adaptation of quadruped robots to unseen terrains, we introduce an approach that leverages the world knowledge of pretrained LLMs (known in robotics as 'privileged information') and dynamically adapts them with intrinsic, natural market rewards via an LLM alignment technique we dub "Reinforcement Learning from Market Feedback" (RLMF). Strong empirical results demonstrate the efficacy of our method in adapting to regime shifts in financial markets, a challenge that has long plagued predictive models in this domain. The proposed algorithmic framework outperforms the best-performing SOTA LLM models on the existing (FLARE) benchmark stock-movement (SM) tasks by more than 15% in accuracy. On the recently proposed NIFTY SM task, our adaptive policy outperforms the best-performing SOTA trillion-parameter models such as GPT-4. The paper details the dual-phase, teacher-student architecture and implementation of our model, the empirical results obtained, and an analysis of the role of language embeddings in terms of information gain.
    Date: 2024–06
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2406.15508&r=
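    One plausible shape for such a market reward is to score the policy's predicted direction against the realized return; the function below is an assumption for exposition, not the paper's RLMF specification.

        import numpy as np

        # Assumed "market feedback" reward: directionally correct predictions
        # earn the realized return's magnitude, incorrect ones lose it.
        def market_feedback_reward(pred_direction: int, realized_return: float) -> float:
            """pred_direction in {-1, +1}."""
            return pred_direction * float(np.sign(realized_return)) * abs(realized_return)

        preds = [+1, -1, +1]
        returns = [0.012, 0.004, -0.008]
        print([market_feedback_reward(p, r) for p, r in zip(preds, returns)])
        # [0.012, -0.004, -0.008]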
  7. By: Sarra Ben Yahia (CES - Centre d'économie de la Sorbonne - UP1 - Université Paris 1 Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Jose Angel Garcia Sanchez (CES - Centre d'économie de la Sorbonne - UP1 - Université Paris 1 Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique); Rania Hentati Kaffel (CES - Centre d'économie de la Sorbonne - UP1 - Université Paris 1 Panthéon-Sorbonne - CNRS - Centre National de la Recherche Scientifique)
    Abstract: This study develops a sentiment analysis model to enhance market return forecasts by incorporating investor sentiment from social media platforms such as Twitter (X). We leverage advanced NLP techniques and large language models to analyze sentiment from financial tweets, using a large web-scraped dataset of selected energy stock daily returns spanning 2018 to 2023. Sentiment scores derived from FinBERT are integrated into a novel predictive model (SIMDM) to evaluate autocorrelation structures within both the sentiment scores and the stock returns data. Our findings reveal that (i) there are significant correlations between sentiment scores and stock prices, (ii) results are highly sensitive to data quality, and (iii) the evidence reinforces the concept of market efficiency and documents a delayed influence of emotional states on stock returns.
    Keywords: financial NLP, FinBERT, information extraction, web scraping, sentiment analysis, LLM, deep learning
    Date: 2024–06–30
    URL: https://d.repec.org/n?u=RePEc:hal:cesptp:hal-04629569&r=
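    Scoring text with FinBERT typically takes a few lines with the Hugging Face transformers library; ProsusAI/finbert is one publicly available checkpoint, and whether it is the exact variant the authors used is an assumption. The tweets are invented examples.

        from transformers import pipeline

        # Classify financial text as positive / negative / neutral with FinBERT.
        sentiment = pipeline("text-classification", model="ProsusAI/finbert")

        tweets = [
            "Oil majors rally as crude prices surge on supply cuts.",
            "Energy stocks slide after disappointing earnings guidance.",
        ]
        for t in tweets:
            result = sentiment(t)[0]   # e.g. {'label': 'positive', 'score': 0.95}
            print(f"{result['label']:>8}  {result['score']:.3f}  {t}")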
  8. By: Batabyal, Amitrajeet; Kourtit, Karima; Nijkamp, Peter
    Abstract: We analyze economic growth in a stylized, high-tech region A with two key features. First, the residents of this region are high-tech because they possess skills. In the language of Richard Florida, these residents comprise the region’s creative class and possess creative capital. Second, the region is high-tech because it uses an artificial intelligence (AI)-based technology, whose use we model. In this setting, we first derive expressions for three growth metrics. Second, we use these metrics to show that the economy of A converges to a balanced growth path (BGP). Third, we compute the growth rate of output per effective creative capital unit on this BGP. Fourth, we study how heterogeneity in initial conditions influences outcomes on the BGP by introducing a second high-tech region B into the analysis. At time t=0, two key savings rates in A are twice as large as those in B. We compute the ratio of the BGP value of income per effective creative capital unit in A to its value in B. Finally, we compute the ratio of the BGP value of skills per effective creative capital unit in A to its value in B.
    Keywords: Artificial Intelligence, Creative Capital, Regional Economic Growth, Skills
    JEL: O33 R11
    Date: 2023–12–11
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:121328&r=
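    As a benchmark for the savings-rate comparison, a standard Solow-style law of motion (an assumption for illustration, not necessarily the paper's model) implies that doubling the savings rate scales steady-state income per effective unit by a constant factor:

        % Solow-style benchmark with production y = k^\alpha and savings rate s:
        s (k^*)^{\alpha} = (n + g + \delta) k^*
            \;\Rightarrow\;
        y^* = \left( \frac{s}{n + g + \delta} \right)^{\alpha/(1-\alpha)}.
        % With s_A = 2 s_B and identical (n, g, \delta, \alpha) across regions:
        \frac{y_A^*}{y_B^*} = 2^{\alpha/(1-\alpha)},
        \quad \text{e.g. } \alpha = \tfrac{1}{3} \Rightarrow 2^{1/2} \approx 1.41.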
  9. By: Dekker, Izaak; Bredeweg, Bert; Winkel, Wilco te; van de Poel, Ibo
    Abstract: AI-based interventions could enhance learning through personalization, improve teacher effectiveness, or optimize educational processes. However, they could also have unintended or unexpected side-effects, such as undermining learning by enabling procrastination, or reducing social interaction by individualizing learning processes. Responsible experiments are required to map both the potential benefits and the side-effects. Current procedures used to screen experiments by ethical review boards do not take into account the specific risks and dilemmas that AI poses. Previous studies identified sixteen conditions that can be used to judge whether trials with experimental technology are responsible. These conditions, however, have not yet been translated into practical procedures, nor do they distinguish between different types of AI applications and risk categories. This paper explores how those conditions could be further specified into procedures that help facilitate and organize responsible experiments with AI, while differentiating between types of AI applications based on their level of automation. The four procedures that we propose are: (1) a process of gradual testing, (2) risk and side-effect detection, (3) explainability and severity, and (4) democratic oversight. These procedures can be used by researchers, review boards, and research institutions to responsibly experiment with AI interventions in educational settings.
    Date: 2024–06–25
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:3dynw&r=

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.