nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024‒11‒18
thirteen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Reproducing and Extending Experiments in Behavioral Strategy with Large Language Models By Daniel Albert; Stephan Billinger
  2. The Rise of AI Pricing: Trends, Driving Forces, and Implications for Firm Performance By Jonathan J Adams; Min Fang; Zheng Liu; Yajie Wang
  3. The Rise of Generative AI: Modelling Exposure, Substitution, and Inequality Effects on the US Labour Market By Raphael Auer; David Köpfer; Josef Švéda
  4. AI Adoption and Workplace Training By Mühlemann, Samuel
  5. Use of Artificial Intelligence and Productivity: Evidence from firm and worker surveys By MORIKAWA Masayuki
  6. Double Jeopardy and Climate Impact in the Use of Large Language Models: Socio-economic Disparities and Reduced Utility for Non-English Speakers By Aivin V. Solatorio; Gabriel Stefanini Vicente; Holly Krambeck; Olivier Dupriez
  7. Artificial intelligence for climate change: a patent analysis in the manufacturing sector By Podrecca, Matteo; Culot, Giovanna; Tavassoli, Sam; Orzes, Guido
  8. Artificial Intelligence for Official Statistics: Opportunities, Practical Uses, and Challenges By Popoola, Osuolale Peter
  9. The European Union policy design for AI regulation By Nicola Giannelli
  10. Quantifying uncertainty: a new era of measurement through large language models By Francesco Audrino; Jessica Gentner; Simon Stalder
  11. Do Capital Incentives Distort Technology Diffusion? Evidence on Cloud, Big Data and AI By Timothy DeStefano; Nick Johnstone; Richard Kneller; Jonathan Timmis
  12. UCFE: A User-Centric Financial Expertise Benchmark for Large Language Models By Yuzhe Yang; Yifei Zhang; Yan Hu; Yilin Guo; Ruoli Gan; Yueru He; Mingcong Lei; Xiao Zhang; Haining Wang; Qianqian Xie; Jimin Huang; Honghai Yu; Benyou Wang
  13. Neuro-Symbolic Traders: Assessing the Wisdom of AI Crowds in Markets By Namid R. Stillman; Rory Baggott

  1. By: Daniel Albert; Stephan Billinger
    Abstract: In this study, we propose LLM agents as a novel approach in behavioral strategy research, complementing simulations and laboratory experiments to advance our understanding of cognitive processes in decision-making. Specifically, we reproduce a human laboratory experiment in behavioral strategy using large language model (LLM) generated agents and investigate how LLM agents compare to observed human behavior. Our results show that LLM agents effectively reproduce search behavior and decision-making comparable to humans (a toy sketch of such an agent loop follows this entry). Extending our experiment, we analyze LLM agents' simulated "thoughts," discovering that more forward-looking thoughts correlate with favoring exploitation over exploration to maximize wealth. We show how this new approach can be leveraged in behavioral strategy research and address limitations.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.06932
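    The paper's exact task design is not spelled out in the abstract; the following is a minimal, hedged sketch of the general pattern it describes — an LLM agent choosing between exploration and exploitation in a repeated search task. The call_llm stub is a hypothetical stand-in for any chat-completion API.

      import random

      def call_llm(prompt: str) -> str:
          """Hypothetical stand-in for a chat-completion API call; a real
          implementation would send `prompt` to an LLM and parse its reply."""
          return random.choice(["EXPLORE", "EXPLOIT"])

      def run_agent(payoffs: dict[int, float], rounds: int = 20) -> float:
          """One simulated agent searching over alternatives with unknown payoffs."""
          history: list[tuple[int, float]] = []
          wealth = 0.0
          for _ in range(rounds):
              prompt = ("You are searching for profitable alternatives.\n"
                        f"Observed (alternative, payoff) pairs: {history}\n"
                        "Reply EXPLORE to try a new alternative or EXPLOIT "
                        "to repeat the best one found so far.")
              if history and call_llm(prompt) == "EXPLOIT":
                  choice = max(history, key=lambda h: h[1])[0]  # best known option
              else:
                  choice = random.choice(list(payoffs))  # sample a new option
              payoff = payoffs[choice]
              history.append((choice, payoff))
              wealth += payoff
          return wealth

      # Toy landscape with ten alternatives of random value
      landscape = {i: random.random() for i in range(10)}
      print(f"final wealth: {run_agent(landscape):.2f}")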
  2. By: Jonathan J Adams (Department of Economics, University of Florida); Min Fang (Department of Economics, University of Florida); Zheng Liu (FRB San Francisco); Yajie Wang (Department of Economics, University of Missouri)
    Abstract: We document key stylized facts about the time-series trends and cross-sectional distributions of AI pricing and study its implications for firm performance, both on average and conditional on monetary policy shocks. We use the universe of online job posting data from Lightcast to measure the adoption of AI pricing. We infer that a firm is adopting AI pricing if it posts a job opening that requires AI-related skills and contains the keyword "pricing" (a toy sketch of this classification rule follows this entry). At the aggregate level, the share of AI-pricing jobs in all pricing jobs has increased more than tenfold since 2010. The increase in AI-pricing jobs has been broad-based, spreading to more industries than other types of AI jobs. At the firm level, larger and more productive firms are more likely to adopt AI pricing. Moreover, firms that adopted AI pricing experienced faster growth in sales, employment, assets, and markups, and their stock returns are also more sensitive to high-frequency monetary policy surprises than those of non-adopters. We show that these empirical observations can be rationalized by a simple model in which a monopolist firm with incomplete information about the demand function invests in AI pricing to acquire information.
    JEL: D40 E31 E52 O33
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:ufl:wpaper:001015
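    The adoption measure described above reduces to a simple text rule over job postings. A minimal sketch, assuming hypothetical posting fields (description, skills) and an illustrative AI-skill list rather than Lightcast's actual skill taxonomy:

      # Illustrative AI-skill terms; the paper relies on Lightcast's far
      # richer skill taxonomy.
      AI_SKILLS = {"machine learning", "deep learning", "neural networks",
                   "natural language processing", "tensorflow", "pytorch"}

      def is_ai_pricing_job(posting: dict) -> bool:
          """Flag a posting as AI pricing: it must require AI-related
          skills and contain the keyword 'pricing'."""
          text = posting["description"].lower()
          skills = {s.lower() for s in posting.get("skills", [])}
          requires_ai = bool(skills & AI_SKILLS) or any(k in text for k in AI_SKILLS)
          return requires_ai and "pricing" in text

      posting = {"description": "Build dynamic pricing models with machine learning.",
                 "skills": ["Python", "Machine Learning"]}
      print(is_ai_pricing_job(posting))  # True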
  3. By: Raphael Auer; David Köpfer; Josef Švéda
    Abstract: How exposed is the labour market to ever-advancing AI capabilities, to what extent does this substitute human labour, and how will it affect inequality? We address these questions in a simulation of 711 US occupations classified by the importance and level of cognitive skills. We base our simulations on the notion that AI can only perform skills that are within its capabilities and involve computer interaction (a toy sketch of this exposure rule follows this entry). At low AI capabilities, 7% of skills are exposed to AI uniformly across the wage spectrum. At moderate and high AI capabilities, 17% and 36% of skills are exposed on average, and up to 45% in the highest wage quartile. Examining complementarity versus substitution, we model the impact on side versus core occupational skills. For example, AI capable of bookkeeping helps doctors with administrative work, freeing up time for medical examinations, but risks the jobs of bookkeepers. We find that low AI capabilities complement all workers, as side skills are simpler than core skills. However, as AI capabilities advance, core skills in lower-wage jobs become exposed, threatening substitution and increased inequality. In contrast to the intuitive notion that the rise of AI may harm white-collar workers, we find that they remain safe for longer, as their core skills are hard to automate.
    Keywords: labour market, artificial intelligence, employment, inequality, automation, ChatGPT, GPT, LLM, wage, technology
    JEL: E24 E51 G21 G28 J23 M48 O30 O33
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:ces:ceswps:_11410
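    The exposure logic above — a skill counts as exposed when it both involves computer interaction and lies within AI's capability level, and substitution threatens once core rather than side skills are exposed — can be sketched as follows, with made-up occupation records in place of the paper's occupational data:

      from dataclasses import dataclass

      @dataclass
      class Skill:
          level: float         # required cognitive skill level, 0..1
          uses_computer: bool  # involves computer interaction
          is_core: bool        # core vs. side skill of the occupation

      def exposed_share(skills: list[Skill], ai_capability: float) -> float:
          """Share of skills AI can perform under the paper's two conditions."""
          exposed = [s for s in skills
                     if s.uses_computer and s.level <= ai_capability]
          return len(exposed) / len(skills)

      def substitution_risk(skills: list[Skill], ai_capability: float) -> bool:
          """Substitution threatens once *core* skills become exposed;
          exposure of side skills alone is complementary."""
          return any(s.is_core and s.uses_computer and s.level <= ai_capability
                     for s in skills)

      # Stylized doctor: a simple administrative side skill, a hard core skill
      doctor = [Skill(0.3, True, False),  # bookkeeping (side)
                Skill(0.9, False, True)]  # medical examination (core)
      print(exposed_share(doctor, 0.4))      # 0.5 -> complementary exposure
      print(substitution_risk(doctor, 0.4))  # False -> core skill still safe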
  4. By: Mühlemann, Samuel (University of Munich)
    Abstract: This paper investigates the impact of artificial intelligence (AI) adoption in production processes on workplace training practices, using firm-level data from the BIBB establishment panel on training and competence development (2019-2021). The findings reveal that AI adoption reduces the provision of continuing training for incumbent workers while increasing the share of high-skilled new hires and decreasing medium-skilled hires, thereby contributing to skill polarization. However, AI adoption also increases the number of apprenticeship contracts, particularly in small and medium-sized enterprises (SMEs), underscoring the ongoing importance of apprenticeships in preparing future workers with the skills needed to apply AI in production.
    Keywords: artificial intelligence, technological change, automation, apprenticeship training, human capital
    JEL: J23 J24 M53 O33
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:iza:izadps:dp17367
  5. By: MORIKAWA Masayuki
    Abstract: With the rapid diffusion of artificial intelligence (AI), its effects on economic growth and the labor market have attracted the attention of researchers. However, the lack of statistical data on the use of AI has restricted empirical research. Based on original surveys, this study provides an overview of the use of AI and other automation technologies in Japan, the characteristics of firms and workers who use AI, and their views on the impact of AI. According to the results, first, the number of firms using AI is increasing rapidly, and firms with a larger share of highly educated workers have a greater tendency to use AI. The number of robot-using firms is also increasing, but the relationship between robot use and workers' education is weakly negative, suggesting that the impact on the labor market differs by technology. Second, AI-using firms have higher productivity, wages, and medium-term growth expectations. Third, AI-using firms expect that while AI will increase productivity and wages, it may decrease their employment. Fourth, at the worker level, more-educated workers are more likely to use AI, suggesting that AI and education are complementary. Currently, AI may favor high-skill workers in the labor market. Fifth, workers who use AI estimate that their work productivity has increased by approximately 20% on average, suggesting that AI could potentially have a fairly large productivity-enhancing effect.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:eti:dpaper:24074
  6. By: Aivin V. Solatorio; Gabriel Stefanini Vicente; Holly Krambeck; Olivier Dupriez
    Abstract: Artificial Intelligence (AI), particularly large language models (LLMs), holds the potential to bridge language and information gaps, which can benefit the economies of developing nations. However, our analysis of FLORES-200, FLORES+, Ethnologue, and World Development Indicators data reveals that these benefits largely favor English speakers. Speakers of languages in low-income and lower-middle-income countries face higher costs when using OpenAI's GPT models via APIs because of how the system processes the input -- tokenization (a token-count sketch of the disparity follows this entry). Around 1.5 billion people, speaking languages primarily from lower-middle-income countries, could incur costs that are 4 to 6 times higher than those faced by English speakers. Disparities in LLM performance are significant, and tokenization in models priced per token amplifies inequalities in access, cost, and utility. Moreover, using the quality of translation tasks as a proxy measure, we show that LLMs perform poorly in low-resource languages, presenting a "double jeopardy" of higher costs and poor performance for these users. We also discuss the direct climate impact of the fragmentation involved in tokenizing low-resource languages. This underscores the need for fairer algorithm development to benefit all linguistic groups.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.10665
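    The tokenization-cost disparity can be checked directly with OpenAI's open-source tiktoken tokenizer: since API prices are per token, the ratio of token counts for parallel text is the relative cost. The sentences below are illustrative rough translations, not drawn from FLORES-200:

      import tiktoken  # pip install tiktoken

      enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era tokenizer

      # Illustrative parallel sentences (not from the FLORES data)
      samples = {
          "English": "Artificial intelligence can bridge information gaps.",
          "Thai": "ปัญญาประดิษฐ์สามารถลดช่องว่างของข้อมูลได้",
          "Amharic": "ሰው ሰራሽ የማሰብ ችሎታ የመረጃ ክፍተቶችን መሙላት ይችላል።",
      }

      baseline = len(enc.encode(samples["English"]))
      for lang, text in samples.items():
          n = len(enc.encode(text))
          # per-token pricing => relative cost equals relative token count
          print(f"{lang:8s} {n:3d} tokens  {n / baseline:.1f}x English cost")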
  7. By: Podrecca, Matteo (University of Bergamo); Culot, Giovanna (University of Udine); Tavassoli, Sam (Deakin University); Orzes, Guido (Free University of Bozen-Bolzano)
    Abstract: This study analyzes the current state of artificial intelligence (AI) technologies for addressing and mitigating climate change in the manufacturing sector and provides an outlook on future developments. The research is grounded in the concept of general-purpose technologies (GPTs), motivated by a still limited understanding of innovation patterns for this application context. To this end, we focus on global patenting activity between 2011 and 2023 (5,919 granted patents classified for “mitigation or adaptation against climate change” in the “production or processing of goods”). We examined time trends, applicant characteristics, and underlying technologies. A topic modeling analysis was performed to identify emerging themes from the unstructured textual data of the patent abstracts (a toy sketch of this step follows this entry). This allowed the identification of six AI application domains. For each of these, we performed a network analysis and ran growth-trend and forecasting models. Our results show that patenting activities are mostly oriented toward improving the efficiency and reliability of manufacturing processes in five of the six identified domains (“predictive analytics”, “material sorting”, “defect detection”, “advanced robotics”, and “scheduling”). AI within the “resource optimization” domain, by contrast, relates to energy management, showing an interplay with other climate-related technologies. Our results also highlight interdependent innovations peculiar to each domain around core AI technologies. Forecasts show that the more specific technologies are within domains, the longer it will take for them to mature. From a practical standpoint, the study sheds light on the role of AI within the broader cleantech innovation landscape and urges policymakers to consider synergies. Managers can find information to define technology portfolios and alliances considering technological co-evolution.
    Keywords: artificial intelligence; AI; climate change; sustainability; patent analysis; technology foresight
    JEL: O14 O31 O32 O33 O34
    Date: 2024–10–21
    URL: https://d.repec.org/n?u=RePEc:hhs:lucirc:2024_012
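    The topic-modeling step could be reproduced along these lines with scikit-learn's LDA (the paper does not state which algorithm it uses); the four abstracts here are placeholders for the 5,919 real ones:

      from sklearn.feature_extraction.text import CountVectorizer
      from sklearn.decomposition import LatentDirichletAllocation

      abstracts = [  # placeholder patent abstracts
          "neural network predicts machine failure for predictive maintenance",
          "robotic arm sorts recyclable material using computer vision",
          "deep learning detects surface defects in production line images",
          "reinforcement learning schedules jobs to cut energy consumption",
      ]

      vec = CountVectorizer(stop_words="english")
      X = vec.fit_transform(abstracts)

      lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
      terms = vec.get_feature_names_out()
      for k, topic in enumerate(lda.components_):
          top = [terms[i] for i in topic.argsort()[-4:][::-1]]
          print(f"topic {k}: {', '.join(top)}")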
  8. By: Popoola, Osuolale Peter
    Abstract: In this era of digital transformation, artificial intelligence (AI) stands out as a revolutionary force across various sectors, including the production of official statistics. Artificial intelligence is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. AI technology harbors immense potential to revolutionize official statistics, from data collection to data analysis, from data analysis to decision-making, and from decision-making to effective service delivery. AI tools are advancing and becoming more prevalent. Integrating AI into official statistics by national statistical organizations (NSOs) can be facilitated through guidance and frameworks that provide NSOs with practical knowledge on how to identify opportunities for integrating AI, assess and minimize risks, and develop phased implementation plans linked to clear lines of ownership that enable evaluation and iteration. NSOs may explore AI to enhance informed decision-making, inform policies, and optimize operations. Furthermore, safe and effective AI integration across national statistics offices has the potential to minimize administrative burdens, reduce costs relative to traditional methods of data collection, improve decision-making, and enhance public service delivery. Thus, this paper aims to develop strategies for adopting AI capabilities for official statistics; it provides guidance and frameworks for this adoption, highlights examples of how AI could be applied to official statistics, examines the governance required for responsible implementation of AI, and discusses various challenges NSOs may face when implementing AI for official statistics. It thereby provides insights into the use of AI and the necessary conditions for ensuring responsible AI for official statistics in this digital era.
    Keywords: Official Statistics, Artificial Intelligence, National Statistics Organizations, Digital Era
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:zbw:esrepo:305190
  9. By: Nicola Giannelli (Department of Economics, Society & Politics, Università di Urbino Carlo Bo)
    Abstract: This paper examines the European Union's (EU) policy design for regulating Artificial Intelligence (AI), highlighting the comprehensive legislative approach adopted to balance innovation with the protection of fundamental rights. The EU's AI Act and preceding regulatory efforts emphasize defining AI with flexibility and precision, ensuring transparency, and promoting trustworthiness. Central to this regulatory framework is the emphasis on human agency, technical robustness, privacy, transparency, and accountability. Military and academic use of AI is out of its scope. The paper explores the EU's dual focus on economic growth and citizen protection, showcasing the role of the High-Level Expert Group on Artificial Intelligence (AI HLEG) in shaping ethical guidelines and management processes. By building on existing legislative frameworks, the EU addresses emerging risks and ethical dilemmas, ensuring that AI development aligns with societal values and public trust. The proposal takes a risk-based approach on the side of human protection and a market-building approach on the side of innovation. The act runs to 459 pages and cannot be enforced without substantial support from legal services. This support should come from a network of regulatory agencies, one for each member state, with an AI Office of the Commission at the European level. Sandboxes that simulate a new system's compliance with the regulatory framework will become necessary, like other legal counselling, for firms that want to avoid being blocked in a Kafkaesque procedure.
    Keywords: Artificial Intelligence; European Union; Regulation Policy; Risk Management
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:urb:wpaper:24_02
  10. By: Francesco Audrino; Jessica Gentner; Simon Stalder
    Abstract: This paper presents an innovative method for measuring uncertainty via large language models (LLMs), which offer greater precision and contextual sensitivity than the conventional methods used to construct prominent uncertainty indices. By analysing newspaper texts with state-of-the-art LLMs, our approach captures nuances often missed by conventional methods (a toy index-construction sketch follows this entry). We develop indices for various types of uncertainty, including geopolitical risk, economic policy, monetary policy, and financial market uncertainty. Our findings show that shocks to these LLM-based indices exhibit stronger associations with macroeconomic variables, shifts in investor behaviour, and asset return variations than conventional indices, underscoring their potential for more accurately reflecting uncertainty.
    Keywords: Uncertainty measurement, Large language models, Economic policy, Geopolitical risk, Monetary policy, Financial markets
    JEL: C45 C55 E44 G12
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:snb:snbwpa:2024-12
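    The construction is not detailed in the abstract, but an index of this kind typically scores each article with an LLM and aggregates by period. A minimal sketch with a hypothetical llm_flags_uncertainty stub in place of a real model call and a three-article placeholder corpus:

      import pandas as pd

      def llm_flags_uncertainty(article: str, category: str) -> bool:
          """Hypothetical stand-in for an LLM call that judges whether an
          article conveys uncertainty of the given category; this trivial
          placeholder just looks for the word 'uncertain'."""
          return "uncertain" in article.lower()

      articles = pd.DataFrame({  # placeholder corpus
          "date": pd.to_datetime(["2024-01-05", "2024-01-20", "2024-02-11"]),
          "text": ["Fed path uncertain amid mixed data",
                   "Markets rally on strong earnings",
                   "Outlook uncertain as tariffs loom"],
      })
      articles["flag"] = articles["text"].apply(
          lambda t: llm_flags_uncertainty(t, "monetary policy"))

      # Index = monthly share of flagged articles, scaled to mean 100
      monthly = articles.set_index("date").resample("MS")["flag"].mean()
      print(100 * monthly / monthly.mean())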
  11. By: Timothy DeStefano; Nick Johnstone; Richard Kneller; Jonathan Timmis
    Abstract: The arrival of cloud computing provides firms with a new way to access digital technologies as digital services. Yet capital incentive policies present in every OECD country are still targeted towards investments in information technology (IT) capital. If cloud services are partial substitutes for IT investments, the presence of capital incentive policies may unintentionally discourage the adoption of cloud and of technologies that rely on the cloud, such as artificial intelligence (AI) and big data analytics. This paper exploits a tax incentive in the UK for capital investment as a quasi-natural experiment to examine the impact on firm adoption of cloud computing, big data analytics and AI (a toy difference-in-differences sketch follows this entry). The empirical results show that the policy increased investment in IT capital, as would be expected; but it slowed firm adoption of cloud, big data and AI. Matched employer-employee data show that the policy also led firms to reduce their demand for workers who perform data analytics, but not for other types of workers.
    Keywords: capital incentives, firms, cloud computing, artificial intelligence
    JEL: J21 J24 L20 O33
    Date: 2024
    URL: https://d.repec.org/n?u=RePEc:ces:ceswps:_11369
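    The abstract does not spell out the estimating equation; the following is a generic two-way difference-in-differences sketch of the kind such quasi-experiments rely on, run on synthetic data with hypothetical variable names (eligible for firms qualifying for the incentive, post for years after its introduction):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      df = pd.DataFrame({  # synthetic firm-year panel, two periods per firm
          "firm_id": np.repeat(np.arange(200), 2),
          "post": np.tile([0, 1], 200),
          "eligible": np.repeat(rng.integers(0, 2, 200), 2),
      })
      # Build in a negative policy effect on cloud adoption
      df["cloud_adopted"] = (0.4 + 0.1 * df["post"]
                             - 0.15 * df["eligible"] * df["post"]
                             + rng.normal(0, 0.1, len(df)))

      # The eligible:post coefficient estimates the policy effect,
      # with standard errors clustered at the firm level
      model = smf.ols("cloud_adopted ~ eligible * post", data=df).fit(
          cov_type="cluster", cov_kwds={"groups": df["firm_id"]})
      print(model.params["eligible:post"])  # ~ -0.15 by construction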
  12. By: Yuzhe Yang; Yifei Zhang; Yan Hu; Yilin Guo; Ruoli Gan; Yueru He; Mingcong Lei; Xiao Zhang; Haining Wang; Qianqian Xie; Jimin Huang; Honghai Yu; Benyou Wang
    Abstract: This paper introduces the UCFE: User-Centric Financial Expertise benchmark, an innovative framework designed to evaluate the ability of large language models (LLMs) to handle complex real-world financial tasks. The UCFE benchmark adopts a hybrid approach that combines human expert evaluations with dynamic, task-specific interactions to simulate the complexities of evolving financial scenarios. First, we conducted a user study involving 804 participants, collecting their feedback on financial tasks. Second, based on this feedback, we created a dataset that encompasses a wide range of user intents and interactions. This dataset serves as the foundation for benchmarking 12 LLM services using the LLM-as-Judge methodology. Our results show a significant alignment between benchmark scores and human preferences, with a Pearson correlation coefficient of 0.78, confirming the effectiveness of the UCFE dataset and our evaluation approach (a two-line verification sketch follows this entry). The UCFE benchmark not only reveals the potential of LLMs in the financial sector but also provides a robust framework for assessing their performance and user satisfaction. The benchmark dataset and evaluation code are available.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.14059
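    The reported alignment is an ordinary Pearson correlation between the twelve benchmark scores and the corresponding human preference scores, which is trivial to verify once the scores are in hand. The numbers below are invented stand-ins, not the paper's data:

      from scipy.stats import pearsonr

      # Hypothetical scores for 12 LLM services
      benchmark = [0.81, 0.74, 0.69, 0.88, 0.55, 0.62,
                   0.79, 0.71, 0.66, 0.84, 0.58, 0.73]
      human = [0.78, 0.70, 0.72, 0.90, 0.50, 0.60,
               0.75, 0.65, 0.70, 0.86, 0.55, 0.68]

      r, p = pearsonr(benchmark, human)
      print(f"Pearson r = {r:.2f} (p = {p:.3f})")  # the paper reports r = 0.78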
  13. By: Namid R. Stillman; Rory Baggott
    Abstract: Deep generative models are increasingly being used as tools for financial analysis. However, it is unclear how these models will influence financial markets, especially when they infer financial value in a semi-autonomous way. In this work, we explore the interplay between deep generative models and market dynamics. We develop a form of virtual trader that uses deep generative models to make buy/sell decisions, which we term neuro-symbolic traders, and expose them to a virtual market. Under our framework, neuro-symbolic traders are agents that use vision-language models to discover a model of the fundamental value of an asset. Agents develop this model as a stochastic differential equation, calibrated to market data using gradient descent (a toy calibration sketch follows this entry). We test our neuro-symbolic traders on both synthetic data and real financial time series, including an equity, a commodity, and a foreign exchange pair. We then expose several groups of neuro-symbolic traders to a virtual market environment. This environment allows for feedback between the traders' beliefs about the underlying value and the observed price dynamics. We find that this leads to price suppression compared to the historical data, highlighting a future risk to market stability. Our work is a first step towards quantifying the effect of deep generative agents on market dynamics and sets out some of the potential risks and benefits of this approach in the future.
    Date: 2024–10
    URL: https://d.repec.org/n?u=RePEc:arx:papers:2410.14587
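    The calibration step — fitting an SDE of fundamental value to market data by gradient descent — can be sketched for the simplest case, geometric Brownian motion, by minimizing the Gaussian negative log-likelihood of log returns with PyTorch. The paper's agents discover richer SDEs via vision-language models; this sketch assumes a fixed GBM form and synthetic prices:

      import torch

      # Synthetic daily log returns standing in for real market data
      torch.manual_seed(0)
      dt = 1 / 252
      true_mu, true_sigma = 0.08, 0.20
      log_ret = ((true_mu - true_sigma**2 / 2) * dt
                 + true_sigma * dt**0.5 * torch.randn(1000))

      # GBM: dS = mu*S*dt + sigma*S*dW, so log returns are Gaussian with
      # mean (mu - sigma^2/2)*dt and variance sigma^2*dt
      mu = torch.tensor(0.0, requires_grad=True)
      log_sigma = torch.tensor(0.0, requires_grad=True)  # keeps sigma > 0
      opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

      for _ in range(500):
          sigma = log_sigma.exp()
          mean = (mu - sigma**2 / 2) * dt
          var = sigma**2 * dt
          nll = 0.5 * (var.log() + (log_ret - mean) ** 2 / var).mean()
          opt.zero_grad()
          nll.backward()
          opt.step()

      print(f"mu = {mu.item():.3f}, sigma = {log_sigma.exp().item():.3f}")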

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.