nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024‒09‒30
seven papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Algorithmic Collusion Without Threats By Eshwar Ram Arunachaleswaran; Natalie Collina; Sampath Kannan; Aaron Roth; Juba Ziani
  2. AI Reliance and Decision Quality: Fundamentals, Interdependence, and the Effects of Interventions By Schoeffer, Jakob; Jakubik, Johannes; Vössing, Michael; Kühl, Niklas; Satzger, Gerhard
  3. The Turing Valley: How AI Capabilities Shape Labor Income By Enrique Ide; Eduard Talamàs
  4. Mirror, Mirror on the Wall: Which Jobs Will AI Replace After All?: A New Index of Occupational Exposure By Parrado, Eric; Benítez, Miguel
  5. Towards the Terminator Economy: Assessing Job Exposure to AI through LLMs By Emilio Colombo; Fabio Mercorio; Mario Mezzanzanica; Antonio Serino
  6. The Limited Potential of AI Implementation in US Courts By Eric Kim
  7. Large Language Model Agent in Financial Trading: A Survey By Han Ding; Yinheng Li; Junhao Wang; Hang Chen

  1. By: Eshwar Ram Arunachaleswaran; Natalie Collina; Sampath Kannan; Aaron Roth; Juba Ziani
    Abstract: There has been substantial recent concern that pricing algorithms might learn to "collude." Supra-competitive prices can emerge as a Nash equilibrium of repeated pricing games in which sellers play strategies that threaten to punish competitors who refuse to support high prices, and these strategies can be learned automatically. Indeed, a standard economic intuition is that supra-competitive prices emerge either from the use of threats or from a failure of one party to optimize their payoff. Is this intuition correct? Would preventing threats in algorithmic decision-making prevent supra-competitive prices when sellers optimize for their own revenue? No. We show that supra-competitive prices can emerge even when both players use algorithms that do not encode threats and that optimize for their own revenue. We study sequential pricing games in which a first mover deploys an algorithm and a second mover then optimizes within the resulting environment. We show that if the first mover deploys any algorithm with a no-regret guarantee, and the second mover even approximately optimizes within this now-static environment, monopoly-like prices arise. The result holds for any no-regret learning algorithm deployed by the first mover and for any pricing policy of the second mover that obtains profit at least as high as random pricing would; hence it applies even when the second mover optimizes only within a space of non-responsive pricing distributions that are incapable of encoding threats. In fact, there exists a pair of strategies, neither of which explicitly encodes threats, that forms a Nash equilibrium of the simultaneous pricing game in algorithm space and leads to near-monopoly prices. This suggests that the definition of "algorithmic collusion" may need to be expanded to include strategies without explicitly encoded threats.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.03956
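The mechanism the abstract describes, a no-regret learner facing a second mover whose fixed pricing policy cannot encode threats, can be sketched as a toy simulation. Everything below is an illustrative assumption rather than the paper's construction: Hedge (multiplicative weights) stands in for "any no-regret algorithm", and the ten-point price grid with a winner-takes-the-unit-market demand model is ours.

```python
import math

# Hedge (multiplicative weights), a canonical no-regret algorithm, run by the
# first mover over a discrete price grid. The second mover plays one fixed,
# non-responsive price, so it cannot encode threats. The duopoly demand model
# (lowest price captures a unit market, ties split) is an illustrative
# assumption, not the paper's setup.

PRICES = [i / 10 for i in range(1, 11)]   # price grid 0.1 .. 1.0
ETA = 0.5                                 # learning rate

def revenue(p_own, p_other):
    """Revenue when the lower-priced seller captures the unit market (ties split)."""
    if p_own < p_other:
        return p_own
    if p_own == p_other:
        return p_own / 2
    return 0.0

def hedge_vs_fixed(p_fixed, rounds=2000):
    """Run Hedge against a static opponent price; return the price it settles on."""
    weights = [1.0] * len(PRICES)
    for _ in range(rounds):
        # full-information update: every grid price sees its counterfactual payoff
        for i, p in enumerate(PRICES):
            weights[i] *= math.exp(ETA * revenue(p, p_fixed))
        total = sum(weights)
        weights = [w / total for w in weights]  # renormalise to avoid overflow
    # in a static environment the learner concentrates on the best response
    return PRICES[max(range(len(PRICES)), key=lambda i: weights[i])]

print(hedge_vs_fixed(0.8))  # -> 0.7, a slight undercut of the fixed price
```

Against a fixed price of 0.8 the learner converges to the best response 0.7. The paper's result concerns the richer question of what prices arise when the second mover chooses its fixed policy to maximize its own profit against such a learner.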
  2. By: Schoeffer, Jakob; Jakubik, Johannes; Vössing, Michael; Kühl, Niklas; Satzger, Gerhard
    Abstract: In AI-assisted decision-making, a central promise of keeping a human in the loop is that the human can complement the AI system by overriding its wrong recommendations. In practice, however, we often see that humans cannot assess the correctness of AI recommendations and, as a result, adhere to wrong advice or override correct advice. Different ways of relying on AI recommendations have immediate, yet distinct, implications for decision quality. Unfortunately, reliance and decision quality are often inappropriately conflated in the current literature on AI-assisted decision-making. In this work, we disentangle and formalize the relationship between reliance and decision quality, and we characterize the conditions under which human-AI complementarity is achievable. To illustrate how reliance and decision quality relate to one another, we propose a visual framework and demonstrate its usefulness for interpreting empirical findings, including the effects of interventions like explanations. Overall, our research highlights the importance of distinguishing between reliance behavior and decision quality in AI-assisted decision-making.
    Date: 2024–08–25
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:cekm9
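The distinction the abstract draws between reliance behavior and decision quality can be made concrete with a small accuracy decomposition. The two-term formula and all parameter names below are our illustrative formalization, not the paper's framework: a final decision counts as correct when the human adheres to a correct AI recommendation, or overrides a wrong one and is right on their own.

```python
def decision_quality(ai_acc, p_adhere_correct, p_override_wrong, human_acc):
    """Expected accuracy of the final human+AI decision.

    Correct outcomes arise two ways: the AI is right and the human adheres,
    or the AI is wrong, the human overrides, and the human's own answer is
    right. This decomposition is an illustrative formalization; the paper's
    own framework is richer.
    """
    return (ai_acc * p_adhere_correct
            + (1 - ai_acc) * p_override_wrong * human_acc)

# Complementarity: the human+AI team beats both the AI alone (0.8)
# and the human alone (0.75), despite imperfect reliance.
quality = decision_quality(ai_acc=0.8, p_adhere_correct=0.9,
                           p_override_wrong=0.7, human_acc=0.75)
print(round(quality, 3))  # -> 0.825
```

The same formula shows why reliance and quality must not be conflated: raising adherence across the board improves the first term but can hurt the second, so only discriminating reliance (adhering when the AI is right, overriding when it is wrong) pushes quality above both individual accuracies.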
  3. By: Enrique Ide; Eduard Talamàs
    Abstract: Do improvements in Artificial Intelligence (AI) benefit workers? We study how AI capabilities influence labor income in a competitive economy where production requires multidimensional knowledge, and firms organize production by matching humans and AI-powered machines in hierarchies designed to use knowledge efficiently. We show that advancements in AI in dimensions where machines underperform humans decrease total labor income, while advancements in dimensions where machines outperform humans increase it. Hence, if AI initially underperforms humans in all dimensions and improves gradually, total labor income initially declines before rising. We also characterize the AI that maximizes labor income. When humans are sufficiently weak in all knowledge dimensions, labor income is maximized when AI is as good as possible in all dimensions. Otherwise, labor income is maximized when AI simultaneously performs as poorly as possible in the dimensions where humans are relatively strong and as well as possible in the dimensions where humans are relatively weak. Our results suggest that choosing the direction of AI development can create significant divisions between the interests of labor and capital.
    Date: 2024–08
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2408.16443
  4. By: Parrado, Eric; Benítez, Miguel
    Abstract: This paper introduces the AI Generated Index of Occupational Exposure (GENOE), a novel measure quantifying the potential impact of artificial intelligence on occupations and their associated tasks. Our methodology employs synthetic AI surveys, leveraging large language models to conduct expert-like assessments. This approach allows for a more comprehensive evaluation of job replacement likelihood, minimizing human bias and reducing assumptions about the mechanisms through which AI innovations could replace job tasks and skills. The index considers not only task automation but also contextual factors, such as social and ethical considerations and regulatory constraints, that may affect the likelihood of replacement. Our findings indicate that the average likelihood of job replacement is estimated at 0.28 in the next year, increasing to 0.38 and 0.44 over the next five and ten years, respectively. To validate our methodology, we successfully replicate other measures of occupational exposure that rely on human expert assessments, substituting these with AI-based evaluations. The GENOE index provides valuable insights for policymakers, employers, and workers, offering a data-driven foundation for strategic workforce planning and adaptation in the face of rapid technological change.
    JEL: C53 C81 J23 J24 O33
    Date: 2024–08
    URL: https://d.repec.org/n?u=RePEc:idb:brikps:13696
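A "synthetic AI survey" of the kind the abstract describes can be sketched as repeated queries to a language model, aggregated into one score per occupation and horizon. The prompt wording, the median aggregation, and the `ask_model` callable below are hypothetical stand-ins for illustration only, not the GENOE methodology's actual prompts or pipeline.

```python
import statistics

def genoe_score(occupation, horizon_years, ask_model, n_queries=5):
    """Aggregate repeated 'synthetic survey' responses into one index value.

    `ask_model` is a hypothetical stand-in for an LLM call returning a
    replacement likelihood in [0, 1]. Repeating the query and taking the
    median is our illustrative way to stabilise a stochastic model's answers;
    the paper's actual prompt and aggregation rule may differ.
    """
    prompt = (f"As a labour-market expert, estimate the probability that the "
              f"occupation '{occupation}' is replaced by AI within "
              f"{horizon_years} years. Answer with a number in [0, 1].")
    answers = [ask_model(prompt) for _ in range(n_queries)]
    return statistics.median(answers)

# Usage with a deterministic stub in place of a real LLM client:
stub = lambda prompt: 0.3 if " 1 " in prompt else 0.45
print(genoe_score("data entry clerk", 1, stub))   # -> 0.3
print(genoe_score("data entry clerk", 10, stub))  # -> 0.45
```

Passing the model as a callable keeps the aggregation logic testable without any API credentials; a real run would swap the stub for an actual LLM client.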
  5. By: Emilio Colombo; Fabio Mercorio; Mario Mezzanzanica; Antonio Serino
    Abstract: There is no doubt that AI and AI-related technologies are reshaping jobs and related tasks, either by automating or by augmenting human skills in the workplace. Many researchers have tried to estimate whether, and to what extent, jobs and tasks are exposed to the risk of being automated by state-of-the-art AI-related technologies. Our work tackles this issue through a data-driven approach: (i) developing a reproducible framework that uses several open-source large language models to assess the current capabilities of AI and robotics in performing work-related tasks; (ii) formalising and computing a measure of AI exposure by occupation, namely the TEAI (Task Exposure to AI) index. Our TEAI index is positively correlated with cognitive, problem-solving and management skills, while it is negatively correlated with social skills. Our results show that about one-third of U.S. employment is highly exposed to AI, primarily in high-skill jobs requiring a graduate or postgraduate level of education. Using 4-year rolling regressions, we also find that AI exposure is positively associated with both employment and wage growth over the period 2003-2023, suggesting that AI has an overall positive effect on productivity.
    JEL: J24 O33 O36
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:dis:wpaper:dis2401
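The 4-year rolling regressions mentioned in the abstract can be illustrated with a minimal least-squares slope re-estimated over consecutive windows. This shows only the mechanics of the estimator on toy data; the paper's actual specification (variables, controls, units of observation) is not reproduced here.

```python
def ols_slope(xs, ys):
    """Slope of a simple least-squares regression of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def rolling_slopes(xs, ys, window=4):
    """Re-estimate the slope on each consecutive window of `window` points,
    the basic mechanics behind a rolling regression."""
    return [ols_slope(xs[i:i + window], ys[i:i + window])
            for i in range(len(xs) - window + 1)]

# Toy series (made up for illustration): an AI-exposure proxy against
# employment growth over eight periods.
exposure = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
growth   = [1.0, 1.1, 1.3, 1.4, 1.7, 1.8, 2.0, 2.3]
print(rolling_slopes(exposure, growth))  # five 4-period slope estimates
```

A consistently positive sequence of window slopes is the rolling-regression analogue of the abstract's finding that exposure and growth move together throughout 2003-2023.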
  6. By: Eric Kim (Seoul International School, Seoul, Republic of Korea)
    Abstract: With the recent surge of interest in the use of AI to generate writing that imitates human creativity, many have begun turning to AI as the ultimate solution to various social problems. Amidst rising concerns about discrimination in judicial decisions stemming from judges’ personal leanings within the US criminal justice system, employing AI as judges to eliminate these biases has been considered. Although the current stage of AI and its abilities render it impossible to replace human judges, AI technology’s fast-paced development opens up future possibilities. This paper evaluates the possibility of AI, specifically GPTs, being able to act as judges in US criminal courts in the near future by assessing how effective AI can be in deciding court cases. The paper examines how AI answers currently lack validity and reliability, two key characteristics of judicial decisions, and analyzes the extent to which AI developments in the near future will be able to address these flaws so that AI may produce viable judicial decisions on its own. This research concludes that, barring unforeseeably significant technological advancements, AI cannot independently act as an impartial judge within a US criminal court; however, feasible developments in AI’s reliability and validity in the near future would allow AI to work in a complementary capacity alongside human judges to help improve the current judicial system.
    Keywords: artificial intelligence, bias, ChatGPT, criminal justice, impartiality, judiciary, judgment
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:smo:raiswp:0394
  7. By: Han Ding; Yinheng Li; Junhao Wang; Hang Chen
    Abstract: Trading is a highly competitive task that requires a combination of strategy, knowledge, and psychological fortitude. With the recent success of large language models (LLMs), it is appealing to apply the emerging intelligence of LLM agents in this competitive arena and to understand whether they can outperform professional traders. In this survey, we provide a comprehensive review of the current research on using LLMs as agents in financial trading. We summarize the common architectures used in these agents, the data inputs, and the performance of LLM trading agents in backtesting, as well as the challenges presented in this research. This survey aims to provide insights into the current state of LLM-based financial trading agents and to outline future research directions in this field.
    Date: 2024–07
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2408.06361
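The agent-plus-backtest architecture such surveys review, observe the market state, let the model decide, execute, then score the run, can be caricatured in a short loop. The `decide` callable is a stand-in for an LLM agent, and the single-asset, frictionless, long-or-flat market is an assumption of this sketch, not a description of any surveyed system.

```python
def backtest(prices, decide):
    """Minimal long/flat backtest loop for an LLM-style trading agent.

    `decide` maps an observation dict to 'buy', 'sell', or 'hold'; in the
    systems the survey covers this role is played by an LLM agent. The
    single asset, zero transaction costs, and all-in position sizing are
    simplifying assumptions of this sketch.
    """
    cash, units = 100.0, 0.0
    for t in range(1, len(prices)):
        obs = {"price": prices[t], "prev": prices[t - 1]}
        action = decide(obs)
        if action == "buy" and cash > 0:
            units, cash = cash / prices[t], 0.0
        elif action == "sell" and units > 0:
            cash, units = units * prices[t], 0.0
    return cash + units * prices[-1]   # final portfolio value

# A trivial momentum rule in place of a real LLM call:
momentum = lambda obs: "buy" if obs["price"] > obs["prev"] else "sell"
print(backtest([10, 11, 12, 13, 12], momentum))  # starts from 100.0
```

Keeping the decision function pluggable mirrors how backtesting frameworks evaluate agents: the same loop scores a rule-based baseline and an LLM agent on identical data, which is what makes the performance comparisons in the surveyed papers possible.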

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.