nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024‒11‒04
ten papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. The ABC’s of Who Benefits from Working with AI: Ability, Beliefs, and Calibration By Andrew Caplin; David J. Deming; Shangwen Li; Daniel J. Martin; Philip Marx; Ben Weidmann; Kadachi Jiada Ye
  2. The Impact of Generative AI on Collaborative Open-Source Software Development: Evidence from GitHub Copilot By Fangchen Song; Ashish Agarwal; Wen Wen
  3. Is Distance from Innovation a Barrier to the Adoption of Artificial Intelligence? By Jennifer Hunt; Iain M. Cockburn; James Bessen
  4. The Economic Implications of AI-Driven Automation: A Dynamic General Equilibrium Analysis By HARIT, ADITYA
  5. Auction-Based Regulation for Artificial Intelligence By Marco Bornstein; Zora Che; Suhas Julapalli; Abdirisak Mohamed; Amrit Singh Bedi; Furong Huang
  6. A Principal-Agent Model for Ethical AI: Optimal Contracts and Incentives for Ethical Alignment By Dae-Hyun Yoo; Caterina Giannetti
  7. Perceiving AI Intervention Does Not Compromise the Persuasive Effect of Fact-Checking By Chae, Je Hoon; Tewksbury, David
  8. VickreyFeedback: Cost-efficient Data Construction for Reinforcement Learning from Human Feedback By Guoxi Zhang; Jiuding Duan
  9. Large Language Models Overcome the Machine Penalty When Acting Fairly but Not When Acting Selfishly or Altruistically By Zhen Wang; Ruiqi Song; Chen Shen; Shiya Yin; Zhao Song; Balaraju Battu; Lei Shi; Danyang Jia; Talal Rahwan; Shuyue Hu
  10. ChatGPT and Corporate Policies By Manish Jha; Jialin Qian; Michael Weber; Baozhong Yang

  1. By: Andrew Caplin; David J. Deming; Shangwen Li; Daniel J. Martin; Philip Marx; Ben Weidmann; Kadachi Jiada Ye
    Abstract: We use a controlled experiment to show that ability and belief calibration jointly determine the benefits of working with Artificial Intelligence (AI). AI improves performance more for people with low baseline ability. However, holding ability constant, AI assistance is more valuable for people who are calibrated, meaning they have accurate beliefs about their own ability. People who know they have low ability gain the most from working with AI. In a counterfactual analysis, we show that eliminating miscalibration would cause AI to reduce performance inequality nearly twice as much as it already does.
    JEL: D81 J24
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:33021
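    As a purely illustrative sketch (Python, simulated data; not the authors' experimental design), one way to operationalize the paper's three ingredients is to measure calibration as the negative absolute gap between believed and measured ability and regress the gain from AI assistance on both; every variable name and coefficient below is hypothetical.
      # Illustrative only: simulated data and hypothetical variable names.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n = 1000
      ability = rng.normal(size=n)                      # measured baseline ability
      belief = ability + rng.normal(scale=1.0, size=n)  # self-assessed ability
      calibration = -np.abs(belief - ability)           # higher = more accurate beliefs

      # Assumed pattern (mirroring the abstract only qualitatively): AI helps
      # low-ability and well-calibrated workers more.
      ai_gain = 1.0 - 0.5 * ability + 0.4 * calibration + rng.normal(scale=0.5, size=n)

      df = pd.DataFrame({"ai_gain": ai_gain, "ability": ability, "calibration": calibration})
      print(smf.ols("ai_gain ~ ability + calibration", data=df).fit().params)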
  2. By: Fangchen Song; Ashish Agarwal; Wen Wen
    Abstract: Generative artificial intelligence (AI) has opened the possibility of automated content production, including coding in software development, which can significantly influence the participation and performance of software developers. To explore this impact, we investigate the role of GitHub Copilot, a generative AI pair programmer, in software development in the open-source community, where multiple developers voluntarily collaborate on software projects. Using GitHub's dataset for open-source repositories and a generalized synthetic control method, we find that Copilot significantly enhances project-level productivity by 6.5%. Delving deeper, we dissect the key mechanisms driving this improvement. Our findings reveal a 5.5% increase in individual productivity and a 5.4% increase in participation. However, this is accompanied by a 41.6% increase in integration time, potentially due to higher coordination costs. Interestingly, we also observe differential effects across developers. Core developers achieve greater project-level productivity gains from using Copilot, benefiting more in terms of individual productivity and participation than peripheral developers, plausibly due to their deeper familiarity with the software projects. We also find that the increase in project-level productivity is accompanied by no change in code quality. We conclude that AI pair programmers benefit developers by automating and augmenting their code, but that human developers' knowledge of the software projects can enhance these benefits. In summary, our research underscores the role of AI pair programmers in improving project-level productivity within the open-source community and suggests potential implications for the structure of open-source software projects.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.02091
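    The estimation relies on a generalized synthetic control method; as a much simpler stand-in, the sketch below runs a two-way fixed-effects difference-in-differences on a simulated repository-by-month panel. The panel, column names, and effect size are assumptions for illustration, not the authors' data.
      # Simplified stand-in (Python) for the generalized synthetic control:
      # two-way fixed-effects DiD on a simulated repository-month panel.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(1)
      repos, months = 200, 24
      rows = []
      for r in range(repos):
          adopts_at = rng.integers(6, 18) if r < 100 else None  # half the repos adopt Copilot
          for m in range(months):
              treated = int(adopts_at is not None and m >= adopts_at)
              log_commits = (2.0 + 0.1 * r / repos + 0.02 * m + 0.065 * treated
                             + rng.normal(scale=0.1))
              rows.append({"repo_id": r, "month": m, "treated": treated,
                           "log_commits": log_commits})
      panel = pd.DataFrame(rows)

      fit = smf.ols("log_commits ~ treated + C(repo_id) + C(month)", data=panel) \
              .fit(cov_type="cluster", cov_kwds={"groups": panel["repo_id"]})
      print(fit.params["treated"])  # recovers the assumed ~6.5% effect on log commits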
  3. By: Jennifer Hunt; Iain M. Cockburn; James Bessen
    Abstract: Using our own data on Artificial Intelligence publications merged with Burning Glass vacancy data for 2007-2019, we investigate whether online vacancies for jobs requiring AI skills grow more slowly in U.S. locations farther from pre-2007 AI innovation hotspots. We find that a commuting zone that is an additional 200 km (125 miles) from the closest AI hotspot has 17% lower growth in AI jobs' share of vacancies. This is driven by distance from AI papers rather than from AI patents. Distance reduces growth in AI research jobs as well as in jobs adapting AI to new industries, as evidenced by strong effects for computer and mathematical researchers, developers of software applications, and the finance and insurance industry. Twenty percent of the effect is explained by the presence of state borders between some commuting zones and their closest hotspot. This could reflect state borders impeding migration and thus flows of tacit knowledge. The distance effect does not reflect the difficulty of in-person or remote collaboration, nor knowledge and personnel flows within multi-establishment firms hiring in computer occupations.
    JEL: O33 R12
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:nbr:nberwo:33022
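    A minimal sketch of the spatial calculation the design implies: compute each commuting zone's great-circle distance to its nearest AI hotspot and regress growth in the AI vacancy share on that distance. Locations, column names, and the data-generating process below are simulated for illustration only.
      # Illustrative only: simulated locations and a toy data-generating process.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      def haversine_km(lat1, lon1, lat2, lon2):
          """Great-circle distance in kilometres."""
          lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
          a = (np.sin((lat2 - lat1) / 2) ** 2
               + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
          return 2 * 6371.0 * np.arcsin(np.sqrt(a))

      rng = np.random.default_rng(2)
      hotspots = pd.DataFrame({"lat": rng.uniform(30, 48, 5), "lon": rng.uniform(-120, -75, 5)})
      czones = pd.DataFrame({"lat": rng.uniform(28, 49, 300), "lon": rng.uniform(-123, -70, 300)})

      # Distance from each commuting zone to its nearest hotspot, in units of 200 km.
      czones["dist_200km"] = [
          min(haversine_km(cz.lat, cz.lon, h.lat, h.lon) for h in hotspots.itertuples()) / 200.0
          for cz in czones.itertuples()
      ]
      czones["ai_share_growth"] = (0.30 - 0.01 * czones["dist_200km"]
                                   + rng.normal(scale=0.02, size=300))
      print(smf.ols("ai_share_growth ~ dist_200km", data=czones).fit().params)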
  4. By: HARIT, ADITYA
    Abstract: This paper develops a dynamic general equilibrium (DGE) model to assess the impact of AI-driven automation on labor and capital allocation in an economy. The model considers the endogenous response of firms to task automation and labor substitution, showing how the increasing use of AI affects total output (GDP), wages, and capital returns. By introducing task complementarity and dynamic capital accumulation, the paper explores how automation impacts labor dynamics and capital accumulation. Key results show that while AI enhances productivity and GDP, it can also reduce wages and increase income inequality, with long-run effects that depend on the elasticity of substitution between labor and capital.
    Keywords: AI-driven Automation, Dynamic General Equilibrium, Labor Markets, Capital Accumulation, Income Distribution, Technological Change, Task Automation, Economic Inequality, Labor Demand, Capital Returns, Economic Policy, Neoclassical Growth Theory, Labor-Capital Dynamics.
    JEL: A10 A11 C0 C02 E1 E13 E6 E60 J3 J31 J4 J40 N3 P4 P48
    Date: 2024–10–01
    URL: https://d.repec.org/n?u=RePEc:pra:mprapa:122244
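    As a stylized illustration of the building blocks such a model typically combines (not the paper's actual equations), consider a CES aggregate of AI-capital-performed and labor-performed tasks, competitive factor prices, and standard capital accumulation, written in LaTeX:
      Y_t = A_t\left[\alpha\, K_t^{\frac{\sigma-1}{\sigma}} + (1-\alpha)\, L_t^{\frac{\sigma-1}{\sigma}}\right]^{\frac{\sigma}{\sigma-1}},
      \qquad
      w_t = \frac{\partial Y_t}{\partial L_t}, \quad r_t = \frac{\partial Y_t}{\partial K_t},
      \qquad
      K_{t+1} = (1-\delta)K_t + I_t,
    where \alpha is the share of tasks handled by (AI) capital, \sigma the elasticity of substitution, and \delta the depreciation rate. How a rise in \alpha feeds through to wages w_t in the long run hinges on \sigma, which is the dependence the abstract emphasizes.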
  5. By: Marco Bornstein; Zora Che; Suhas Julapalli; Abdirisak Mohamed; Amrit Singh Bedi; Furong Huang
    Abstract: In an era of "moving fast and breaking things", regulators have moved slowly to pick up the safety, bias, and legal pieces left in the wake of broken Artificial Intelligence (AI) deployment. Since AI models, such as large language models, are able to push misinformation and stoke division within our society, it is imperative for regulators to employ a framework that mitigates these dangers and ensures user safety. While there is much-warranted discussion about how to address the safety, bias, and legal woes of state-of-the-art AI models, rigorous and realistic mathematical frameworks for regulating AI safety remain scarce. We take on this challenge, proposing an auction-based regulatory mechanism that provably incentivizes model-building agents (i) to deploy safer models and (ii) to participate in the regulation process. We provably guarantee, via derived Nash Equilibria, that each participating agent's best strategy is to submit a model safer than a prescribed minimum-safety threshold. Empirical results show that our regulatory auction boosts safety and participation rates by 20% and 15% respectively, outperforming simple regulatory frameworks that merely enforce minimum safety standards.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.01871
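    A toy numerical sketch (Python; emphatically not the authors' mechanism) of the underlying incentive logic: if the regulator's approval reward rises with safety beyond the floor, a builder's payoff-maximizing safety level can sit strictly above the minimum threshold. All functional forms and numbers are invented for illustration.
      # Toy illustration only: hypothetical reward and cost functions.
      import numpy as np

      V, s_min = 3.0, 0.5          # deployment value and minimum-safety threshold
      grid = np.linspace(0.0, 2.0, 2001)

      def expected_payoff(s):
          cost = s ** 2                       # cost of building a safer model
          if s < s_min:
              return -cost                    # below the floor: never approved
          approval = min(s / 1.5, 1.0)        # reward increases with safety
          return V * approval - cost

      s_star = max(grid, key=expected_payoff)
      print(round(float(s_star), 3))          # 1.0, strictly above s_min = 0.5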
  6. By: Dae-Hyun Yoo; Caterina Giannetti
    Abstract: This paper presents a principal-agent model for aligning artificial intelligence (AI) behaviors with human ethical objectives. In this framework, the end-user acts as the principal, offering a contract to the system developer (the agent) that specifies desired ethical alignment levels for the AI system. This incentivizes the developer to align the AI’s objectives with ethical considerations, fostering trust and collaboration. When ethical alignment is unobservable and the developer is risk-neutral, the optimal contract achieves the same alignment and expected utilities as when it is observable. For observable alignment levels, a fixed reward is uniquely optimal for strictly risk-averse developers, while for risk-neutral developers, a fixed reward is one of several optimal options. Our findings demonstrate that even a basic principal-agent model can enhance the understanding of how to balance responsibility between users and developers in the pursuit of ethical AI. Users seeking higher ethical alignment must compensate developers appropriately, and they also share responsibility for ethical AI by adhering to design specifications and regulations.
    Keywords: AI Ethics, Ethical Alignment, Principal-Agent Model, Contract Theory, Responsibility Allocation, Economic Incentives
    JEL: D82 D86 O33
    Date: 2024–10–01
    URL: https://d.repec.org/n?u=RePEc:pie:dsedps:2024/313
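    For readers unfamiliar with the framework, a textbook-style statement of the contracting problem the abstract invokes (generic notation, not necessarily the paper's): the user (principal) chooses a payment schedule w(\cdot) and the developer (agent) an alignment effort a, subject to participation and incentive-compatibility constraints:
      \max_{w(\cdot),\, a}\ \mathbb{E}\big[B(x) - w(x) \mid a\big]
      \quad \text{s.t.} \quad
      \mathbb{E}\big[u(w(x)) \mid a\big] - c(a) \ \ge\ \bar{u} \quad \text{(participation)},
      \qquad
      a \in \arg\max_{a'}\ \mathbb{E}\big[u(w(x)) \mid a'\big] - c(a') \quad \text{(incentive compatibility)},
    where x is the observed alignment outcome, B the user's benefit, u the developer's utility, c(a) the cost of alignment effort, and \bar{u} the reservation utility. Under the standard logic, a risk-neutral developer (linear u) can be made to internalize the outcome even when effort is unobservable, while a fixed payment shields a risk-averse developer from unnecessary risk when alignment is observable, which is the intuition behind the results summarized above.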
  7. By: Chae, Je Hoon (University of California, Los Angeles); Tewksbury, David
    Abstract: Efforts to scale up fact-checking through technology, such as artificial intelligence (AI), are increasingly being suggested and tested. This study examines whether previously observed effects of reading fact-checks remain constant when readers are aware of AI’s involvement in the fact-checking process. We conducted three online experiments (N = 3,978), exposing participants to fact-checks identified as either human-generated or AI-assisted, simulating cases where AI fully generates the fact-check or automatically retrieves human fact-checks. Our findings indicate that the persuasive effect of fact-checking, specifically in increasing truth discernment, persists even among participants without a positive prior attitude toward AI. Additionally, in some cases, awareness of AI’s role reduced perceived political bias in fact-checks among Republicans. Finally, neither AI-generated nor human fact-checks significantly affected participants’ feelings toward or their perceptions of the competence of the targeted politicians.
    Date: 2024–09–16
    URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:mkd6f
  8. By: Guoxi Zhang; Jiuding Duan
    Abstract: This paper addresses the cost-efficiency aspect of Reinforcement Learning from Human Feedback (RLHF). RLHF leverages datasets of human preferences over outputs of large language models (LLM) to instill human expectations into LLMs. While preference annotation comes with a monetized cost, the economic utility of a preference dataset has not been considered thus far. What exacerbates this situation is that, given complex intransitive or cyclic relationships in preference datasets, existing algorithms for fine-tuning LLMs are still far from capturing comprehensive preferences. This raises severe cost-efficiency concerns in production environments, where preference data accumulate over time. In this paper, we see the fine-tuning of LLMs as a monetized economy and introduce an auction mechanism to improve the efficiency of preference data collection in dollar terms. We show that introducing an auction mechanism can play an essential role in enhancing the cost-efficiency of RLHF while maintaining satisfactory model performance. Experimental results demonstrate that our proposed auction-based protocol is cost-efficient for fine-tuning LLMs by concentrating on high-quality feedback.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.18417
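    As background on the pricing rule the title refers to, the sketch below implements the classic single-item Vickrey (second-price sealed-bid) auction, under which truthful bidding is a weakly dominant strategy. How the paper maps bids to preference annotations is not reproduced here.
      # Classic single-item Vickrey (second-price sealed-bid) auction.
      def vickrey_auction(bids):
          """bids: dict of bidder -> bid amount; returns (winner, price paid)."""
          if len(bids) < 2:
              raise ValueError("need at least two bidders")
          ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
          winner = ranked[0][0]
          price = ranked[1][1]      # the winner pays the second-highest bid
          return winner, price

      print(vickrey_auction({"alice": 4.0, "bob": 7.5, "carol": 6.0}))  # ('bob', 6.0)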
  9. By: Zhen Wang (School of Cybersecurity, and School of Artificial Intelligence, OPtics and ElectroNics); Ruiqi Song (School of Cybersecurity, and School of Artificial Intelligence, OPtics and ElectroNics); Chen Shen (Faculty of Engineering Sciences, Kyushu University, Japan); Shiya Yin (School of Cybersecurity, and School of Artificial Intelligence, OPtics and ElectroNics); Zhao Song (School of Computing, Engineering and Digital Technologies, Teesside University, United Kingdom); Balaraju Battu (Computer Science, Science Division, New York University Abu Dhabi, UAE); Lei Shi (School of Statistics and Mathematics, Yunnan University of Finance and Economics, China); Danyang Jia (School of Cybersecurity, and School of Artificial Intelligence, OPtics and ElectroNics); Talal Rahwan (Computer Science, Science Division, New York University Abu Dhabi, UAE); Shuyue Hu (Shanghai Artificial Intelligence Laboratory, China)
    Abstract: In social dilemmas where the collective and self-interests are at odds, people typically cooperate less with machines than with fellow humans, a phenomenon termed the machine penalty. Overcoming this penalty is critical for successful human-machine collectives, yet current solutions often involve ethically questionable tactics, like concealing machines' non-human nature. In this study, with 1,152 participants, we explore the possibility of overcoming this penalty by using Large Language Models (LLMs), in scenarios where communication is possible between interacting parties. We design three types of LLMs: (i) Cooperative, aiming to assist its human associate; (ii) Selfish, focusing solely on maximizing its self-interest; and (iii) Fair, balancing its own and collective interest, while slightly prioritizing self-interest. Our findings reveal that, when interacting with humans, fair LLMs are able to induce cooperation levels comparable to those observed in human-human interactions, even when their non-human nature is fully disclosed. In contrast, selfish and cooperative LLMs fail to achieve this goal. Post-experiment analysis shows that all three types of LLMs succeed in forming mutual cooperation agreements with humans, yet only fair LLMs, which occasionally break their promises, are capable of instilling a perception among humans that cooperating with them is the social norm, and eliciting positive views on their trustworthiness, mindfulness, intelligence, and communication quality. Our findings suggest that for effective human-machine cooperation, bot manufacturers should avoid designing machines with mere rational decision-making or a sole focus on assisting humans. Instead, they should design machines capable of judiciously balancing their own interest and the interest of humans.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.03724
  10. By: Manish Jha; Jialin Qian; Michael Weber; Baozhong Yang
    Abstract: We create a firm-level ChatGPT investment score, based on conference calls, that measures managers' anticipated changes in capital expenditures. We validate the score with interpretable textual content and its strong correlation with CFO survey responses. The investment score predicts future capital expenditure for up to nine quarters, controlling for Tobin's q and other determinants, implying the investment score provides incremental information about firms' future investment opportunities. The investment score also separately forecasts future total, intangible, and R&D investments. Consistent with theoretical predictions, high-investment-score firms experience significant positive short-term returns upon disclosure, and negative long-run future abnormal returns. We demonstrate ChatGPT's applicability to measure other policies, such as dividends and employment.
    Date: 2024–09
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2409.17933
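    A hypothetical illustration (Python, OpenAI client) of scoring an earnings-call excerpt for anticipated capital-expenditure changes; the prompt, model name, and scoring scale are assumptions, not the paper's specification.
      # Hypothetical sketch: prompt, model name, and scoring scale are assumptions.
      from openai import OpenAI

      client = OpenAI()  # expects OPENAI_API_KEY in the environment

      def capex_score(excerpt: str) -> str:
          prompt = (
              "On a scale from -1 (expects to cut capital expenditures) to +1 "
              "(expects to increase capital expenditures), rate the following "
              "earnings-call excerpt. Reply with a single number.\n\n" + excerpt
          )
          response = client.chat.completions.create(
              model="gpt-4o-mini",
              messages=[{"role": "user", "content": prompt}],
          )
          return response.choices[0].message.content.strip()

      print(capex_score("We plan to expand capacity at two plants next year."))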

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.