nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2023‒11‒20
fourteen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Public Opinion on Fairness and Efficiency for Algorithmic and Human Decision-Makers By Bansak, Kirk; Paulson, Elisabeth
  2. The Economics of Copyright in the Digital Age By Christian Peukert; Margaritha Windisch
  3. AI-Generated Inventions: Implications for the Patent System By Gaetan de Rassenfosse; Adam Jaffe; Melissa Wasserman
  4. Economic Growth under Transformative AI By Philip Trammell; Anton Korinek
  5. Artificial intelligence, services globalisation and income inequality By Giulio Cornelli; Jon Frost; Saurabh Mishra
  6. AI Adoption in America: Who, What, and Where By Kristina McElheran; J. Frank Li; Erik Brynjolfsson; Zachary Kroff; Emin Dinlersoz; Lucia S. Foster; Nikolas Zolas
  7. Artificial Intelligence and Jobs: Evidence from US Commuting Zones By Alessandra Bonfiglioli; Rosario Crinò; Gino Gancia; Ioannis Papadakis
  8. Labor Market Exposure to AI: Cross-country Differences and Distributional Implications By Carlo Pizzinelli; Augustus J Panton; Ms. Marina Mendes Tavares; Mauro Cazzaniga; Longji Li
  9. Immigration and Regional Specialization in AI By Hanson, Gordon H.
  10. Leveraging Large Language Model for Automatic Evolving of Industrial Data-Centric R&D Cycle By Xu Yang; Xiao Yang; Weiqing Liu; Jinhui Li; Peng Yu; Zeqi Ye; Jiang Bian
  11. Deriving Technology Indicators from Corporate Websites: A Comparative Assessment Using Patents By Sebastian Heinrich
  12. From Transcripts to Insights: Uncovering Corporate Risks Using Generative AI By Alex Kim; Maximilian Muhn; Valeri Nikolaev
  13. Towards reducing hallucination in extracting information from financial reports using Large Language Models By Bhaskarjit Sarmah; Tianjie Zhu; Dhagash Mehta; Stefano Pasquali
  14. Some critical and ethical perspectives on the empirical turn of AI interpretability By Jean-Marie John-Mathews

  1. By: Bansak, Kirk; Paulson, Elisabeth
    Abstract: This study explores the public's preferences between algorithmic and human decision-makers (DMs) in high-stakes contexts, how these preferences are impacted by performance metrics, and whether the public's evaluation of performance differs when considering algorithmic versus human DMs. Leveraging a conjoint experimental design, respondents (n = 9,030) chose between pairs of DM profiles in two scenarios: pre-trial release decisions and bank loan decisions. DM profiles varied on the DM’s type (human v. algorithm) and on three metrics—defendant crime rate/loan default rate, false positive rate (FPR) among white defendants/applicants, and FPR among minority defendants/applicants—as well as an implicit fairness metric defined by the absolute difference between the two FPRs. Controlling for performance, we observe a general tendency to favor human DMs, though this is driven by a subset of respondents who expect human DMs to perform better in the real world. In addition, although a large portion of respondents claimed to prioritize fairness, we find that the impact of fairness on respondents' actual choices is limited. We also find that the relative importance of the four performance metrics remains consistent across DM type, suggesting that the public's preferences related to DM performance do not vary fundamentally between algorithmic and human DMs. Taken together, our analysis suggests that the public as a whole does not hold algorithmic DMs to a stricter fairness or efficiency standard, which has important implications as policymakers and technologists grapple with the integration of AI into pivotal societal functions.
    Date: 2023–10–19
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:pghmx&r=ain
  2. By: Christian Peukert; Margaritha Windisch
    Abstract: Intellectual property rights are fundamental to how economies organize innovation and steer the diffusion of knowledge. Copyright law, in particular, has developed constantly to keep up with emerging technologies and the interests of creators, consumers, and intermediaries of the different creative industries. We provide a synthesis of the literature on the law and economics of copyright in the digital age, with a particular focus on the available empirical evidence. First, we discuss the legal foundations of the copyright system and developments of length and scope throughout the era of digitization. Second, we review the literature on technological change with its opportunities and challenges for the stakeholders involved. We give special attention to empirical evidence on online copyright enforcement, changes in the supply of works due to digital technology, and the importance of creative re-use and new licensing and business models. We then set out avenues for further research identifying critical gaps in the literature regarding the scope of empirical copyright research, the effects of technology that enables algorithmic licensing, and copyright issues related to software, data and artificial intelligence.
    Keywords: copyright, digitization, technology, enforcement, licensing, business models, evidence
    JEL: K11 L82 L86
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10687&r=ain
  3. By: Gaetan de Rassenfosse (Ecole polytechnique federale de Lausanne); Adam Jaffe (Brandeis University); Melissa Wasserman (The University of Texas at Austin - School of Law)
    Abstract: This symposium Article discusses issues raised for patent processes and policy created by inventions generated by artificial intelligence (AI). The Article begins by examining the normative desirability of allowing patents on AI-generated inventions. While it is unclear whether patent protection is needed to incentivize the creation of AI-generated inventions, a stronger case can be made that AI-generated inventions should be patent eligible to encourage the commercialization and technology transfer of AI-generated inventions. Next, the Article examines how the emergence of AI inventions will alter patentability standards, and whether a differentiated patent system that treats AI-generated inventions differently from human-generated inventions is normatively desirable. This Article concludes by considering the larger implications of allowing patents on AI-generated inventions, including changes to the patent examination process, a possible increase in the concentration of patent ownership and patent thickets, and potentially unlimited inventions.
    Keywords: generative AI; patent; intellectual property; invention
    JEL: K20 D23 O34
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:iip:wpaper:22&r=ain
  4. By: Philip Trammell; Anton Korinek
    Abstract: Industrialized countries have long seen relatively stable growth in output per capita and a stable labor share. AI may be transformative, in the sense that it may break one or both of these stylized facts. This review outlines the ways this may happen by placing several strands of the literature on AI and growth within a common framework. We first evaluate models in which AI increases output production, for example via increases in capital's substitutability for labor or task automation, capturing the notion that AI will let capital “self-replicate”. This typically speeds up growth and lowers the labor share. We then consider models in which AI increases knowledge production, capturing the notion that AI will let capital “self-improve”, speeding growth further. Taken as a whole, the literature suggests that sufficiently advanced AI is likely to deliver both effects.
    JEL: E2 O3 O4
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31815&r=ain
  5. By: Giulio Cornelli; Jon Frost; Saurabh Mishra
    Abstract: How does economic activity related to artificial intelligence (AI) impact the income of various groups in an economy? This study, using a panel of 86 countries over 2010–19, finds that investment in AI is associated with higher income inequality. In particular, AI investment is tied to higher real incomes and income shares for households in the top decile, while households in the fifth and bottom deciles see a decline in their income shares. We also find a positive association with exports of modern services linked to AI. In labour markets, there is a contraction in overall employment, a shift from mid-skill to high-skill managerial roles, and a reduced labour share of income.
    Keywords: artificial intelligence, automation, services, structural shifts, inequality
    JEL: D31 D63 O32
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:bis:biswps:1135&r=ain
  6. By: Kristina McElheran; J. Frank Li; Erik Brynjolfsson; Zachary Kroff; Emin Dinlersoz; Lucia S. Foster; Nikolas Zolas
    Abstract: We study the early adoption and diffusion of five AI-related technologies (automated-guided vehicles, machine learning, machine vision, natural language processing, and voice recognition) as documented in the 2018 Annual Business Survey of 850,000 firms across the United States. We find that fewer than 6% of firms used any of the AI-related technologies we measure, though most very large firms reported at least some AI use. Weighted by employment, average adoption was just over 18%. AI use in production, while varying considerably by industry, nevertheless was found in every sector of the economy and clustered with emerging technologies such as cloud computing and robotics. Among dynamic young firms, AI use was highest alongside more-educated, more-experienced, and younger owners, including owners motivated by bringing new ideas to market or helping the community. AI adoption was also more common alongside indicators of high-growth entrepreneurship, including venture capital funding, recent product and process innovation, and growth-oriented business strategies. Early adoption was far from evenly distributed: a handful of “superstar” cities and emerging hubs led startups’ adoption of AI. These patterns of early AI use foreshadow economic and social impacts far beyond this limited initial diffusion, with the possibility of a growing “AI divide” if early patterns persist.
    JEL: M15 O3
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31788&r=ain
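    The gap between the two adoption figures quoted above reflects employment weighting. The notation below is a generic illustration of that calculation, not taken from the paper.
      \[
        \bar{A}^{\mathrm{emp}} = \sum_{f} w_f A_f, \qquad w_f = \frac{\mathrm{emp}_f}{\sum_{g} \mathrm{emp}_g},
      \]
      where \(A_f \in \{0,1\}\) indicates whether firm \(f\) uses any of the five AI-related technologies and \(w_f\) is its share of total employment. Because large firms carry proportionally more weight and are more likely to adopt, the employment-weighted rate (just over 18%) far exceeds the unweighted firm share (under 6%).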
  7. By: Alessandra Bonfiglioli; Rosario Crinò; Gino Gancia; Ioannis Papadakis
    Abstract: We study the effect of Artificial Intelligence (AI) on employment across US commuting zones over the period 2000-2020. A simple model shows that AI can automate jobs or complement workers, and illustrates how to estimate its effect by exploiting variation in a novel measure of local exposure to AI: job growth in AI-related professions built from detailed occupational data. Using a shift-share instrument that combines industry-level AI adoption with local industry employment, we estimate robust negative effects of AI exposure on employment across commuting zones and time. We find that AI’s impact differs from that of other capital and technologies, and that it works through services more than manufacturing. Moreover, the employment effect is especially negative for low-skill and production workers, while it turns positive for workers at the top of the wage distribution. These results are consistent with the view that AI has contributed to the automation of jobs and to widening inequality.
    Keywords: artificial intelligence, automation, displacement, labor
    JEL: J23 J24 O33
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:ces:ceswps:_10685&r=ain
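    The shift-share (Bartik-style) instrument described in the abstract can be written schematically as below; the baseline year and the exact adoption measure are the authors' choices, so treat this as an illustrative rendering rather than the paper's precise specification.
      \[
        Z_{ct} = \sum_{k} s_{ck,0}\, \Delta \mathrm{AI}_{kt},
      \]
      where \(s_{ck,0}\) is commuting zone \(c\)'s baseline employment share in industry \(k\) and \(\Delta \mathrm{AI}_{kt}\) is the industry-level change in AI adoption over period \(t\). Local exposure thus inherits variation only from national industry-level AI adoption, filtered through each zone's pre-existing industry mix.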
  8. By: Carlo Pizzinelli; Augustus J Panton; Ms. Marina Mendes Tavares; Mauro Cazzaniga; Longji Li
    Abstract: This paper examines the impact of Artificial Intelligence (AI) on labor markets in both Advanced Economies (AEs) and Emerging Markets (EMs). We propose an extension to a standard measure of AI exposure, accounting for AI's potential as either a complement or a substitute for labor, where complementarity reflects lower risks of job displacement. We analyze worker-level microdata from 2 AEs (US and UK) and 4 EMs (Brazil, Colombia, India, and South Africa), revealing substantial variations in unadjusted AI exposure across countries. AEs face higher exposure than EMs due to a higher employment share in professional and managerial occupations. However, when accounting for potential complementarity, differences in exposure across countries are more muted. Within countries, common patterns emerge in AEs and EMs. Women and highly educated workers face greater occupational exposure to AI, at both high and low complementarity. Workers in the upper tail of the earnings distribution are more likely to be in occupations with high exposure but also high potential complementarity.
    Keywords: Artificial intelligence; Employment; Occupations; Emerging Markets
    Date: 2023–10–04
    URL: http://d.repec.org/n?u=RePEc:imf:imfwpa:2023/216&r=ain
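    One stylized way to encode the complementarity adjustment sketched in the abstract is shown below; the functional form and the parameter \(\lambda\) are purely illustrative and should not be read as the paper's actual formula.
      \[
        \widetilde{E}_o = E_o \left(1 - \lambda\, C_o\right), \qquad \lambda \in [0,1],
      \]
      where \(E_o\) is occupation \(o\)'s unadjusted AI exposure, \(C_o \in [0,1]\) is an index of potential complementarity, and \(\lambda\) governs how strongly complementarity is assumed to offset displacement risk. Under any such adjustment, high-exposure but high-complementarity occupations (for example, those in the upper tail of the earnings distribution, per the abstract) end up with lower adjusted exposure than high-exposure, low-complementarity ones.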
  9. By: Hanson, Gordon H.
    Abstract: I examine the specialization of US commuting zones in AI-related occupations over the 2000 to 2018 period. I define AI-related jobs based on keywords in Census occupational titles. Using the approach in Lin (2011) to identify new work, I measure job growth related to AI by weighting employment growth in AI-related occupations by the share of job titles in these occupations that were added after 1990. Overall, regional specialization in AI-related activities mirrors regional specialization in IT. However, foreign-born and native-born workers within the sector tend to cluster in different locations. Whereas specialization of the foreign-born in AI-related jobs is strongest in high-tech hubs with a preponderance of private-sector employment, native-born specialization in AI-related jobs is strongest in centers for military and space-related research. Nationally, foreign-born workers account for 55% of job growth in AI-related occupations since 2000. In regression analysis, I find that US commuting zones exposed to larger increases in the supply of college-educated immigrants became more specialized in AI-related occupations and that this increased specialization was due entirely to the employment of the foreign born. My results suggest that access to highly skilled workers constrains AI-related job growth and that immigration of the college-educated helps relax this constraint. (Stone Center on Socio-Economic Inequality Working Paper)
    Date: 2023–10–25
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:9a45d&r=ain
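    The Lin (2011) new-work weighting described in the abstract amounts to the following construction, written here schematically from the abstract's description rather than in the paper's own notation.
      \[
        \Delta E^{\mathrm{AI}}_{c} = \sum_{o \in \mathcal{O}_{\mathrm{AI}}} \omega_o\, \Delta E_{co}, \qquad \omega_o = \frac{\#\{\text{titles in occupation } o \text{ added after } 1990\}}{\#\{\text{titles in occupation } o\}},
      \]
      where \(\mathcal{O}_{\mathrm{AI}}\) is the set of AI-related Census occupations and \(\Delta E_{co}\) is employment growth in occupation \(o\) in commuting zone \(c\). Occupations whose titles are mostly "new work" therefore contribute more to measured AI-related job growth.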
  10. By: Xu Yang; Xiao Yang; Weiqing Liu; Jinhui Li; Peng Yu; Zeqi Ye; Jiang Bian
    Abstract: In the wake of relentless digital transformation, data-driven solutions are emerging as powerful tools to address multifarious industrial tasks such as forecasting, anomaly detection, planning, and even complex decision-making. Although data-centric R&D has been pivotal in harnessing these solutions, it often comes with significant costs in terms of human, computational, and time resources. This paper delves into the potential of large language models (LLMs) to expedite the evolution cycle of data-centric R&D. Assessing the foundational elements of data-centric R&D, including heterogeneous task-related data, multi-facet domain knowledge, and diverse computing-functional tools, we explore how well LLMs can understand domain-specific requirements, generate professional ideas, utilize domain-specific tools to conduct experiments, interpret results, and incorporate knowledge from past endeavors to tackle new challenges. We take quantitative investment research as a typical example of an industrial data-centric R&D scenario, verify our proposed framework on our full-stack, open-sourced quantitative research platform Qlib, and obtain promising results that shed light on our vision of automatically evolving the industrial data-centric R&D cycle.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.11249&r=ain
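    The R&D loop the abstract envisions (idea generation, tool-based experimentation, interpretation, knowledge accumulation) can be sketched as below. The helper functions are hypothetical stand-ins, not the paper's framework and not Qlib's API; a real system would call an LLM and a backtest engine.
      def ask_llm(prompt: str) -> str:
          """Hypothetical stand-in for a large language model call."""
          return "idea/interpretation based on: " + prompt[:40]

      def run_experiment(idea: str) -> dict:
          """Hypothetical stand-in for running the idea with domain tools (e.g. a backtest)."""
          return {"idea": idea, "score": 0.0}

      knowledge = []  # findings accumulated across past iterations

      for _ in range(3):
          # 1. Generate a new idea conditioned on the domain knowledge gathered so far.
          idea = ask_llm(f"Past findings: {knowledge}. Propose a new factor idea.")
          # 2. Conduct the experiment with domain-specific tools.
          result = run_experiment(idea)
          # 3. Interpret the result and fold it back into the knowledge base.
          knowledge.append(ask_llm(f"Interpret this experiment result: {result}"))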
  11. By: Sebastian Heinrich (KOF Swiss Economic Institute, ETH Zurich, Switzerland)
    Abstract: This paper investigates the potential of indicators derived from corporate websites to measure technology-related concepts. Using artificial intelligence (AI) technology as a case in point, I construct a 24-year panel combining the texts of websites and patent portfolios for over 1,000 large companies. By identifying AI exposure with a comprehensive keyword set, I show that website and patent data are strongly related, suggesting that corporate websites constitute a promising data source to trace AI technologies.
    Keywords: corporate website, patent portfolio, technology indicator, text data, artificial intelligence
    JEL: C81 O31 O33
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:kof:wpskof:22-512&r=ain
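    A minimal sketch of keyword-based AI tagging of website text, in the spirit of the indicator described above; the keyword list and the exposure definition here are illustrative, while the paper uses its own comprehensive keyword set.
      import re

      # Illustrative keyword set (the paper's set is far more comprehensive).
      AI_KEYWORDS = ["artificial intelligence", "machine learning", "deep learning",
                     "neural network", "natural language processing"]

      def ai_exposure(text: str) -> float:
          """Keyword hits per word of website text, a crude AI-exposure score."""
          text = text.lower()
          hits = sum(len(re.findall(re.escape(kw), text)) for kw in AI_KEYWORDS)
          return hits / max(len(text.split()), 1)

      page = "We use machine learning and deep learning to optimise global logistics."
      print(f"AI exposure: {ai_exposure(page):.3f}")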
  12. By: Alex Kim; Maximilian Muhn; Valeri Nikolaev
    Abstract: We explore the value of generative AI tools, such as ChatGPT, in helping investors uncover dimensions of corporate risk. We develop and validate firm-level measures of risk exposure to political, climate, and AI-related risks. Using the GPT 3.5 model to generate risk summaries and assessments from the context provided by earnings call transcripts, we show that GPT-based measures possess significant information content and outperform the existing risk measures in predicting (abnormal) firm-level volatility and firms' choices such as investment and innovation. Importantly, information in risk assessments dominates that in risk summaries, establishing the value of general AI knowledge. We also find that generative AI is effective at detecting emerging risks, such as AI risk, which has soared in recent quarters. Our measures perform well both within and outside the GPT's training window and are priced in equity markets. Taken together, an AI-based approach to risk measurement provides useful insights to users of corporate disclosures at a low cost.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.17721&r=ain
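    The two GPT-based measures (risk summaries, which condense the transcript, and risk assessments, which also draw on the model's general knowledge) can be sketched roughly as below. This assumes the OpenAI Python client with an API key in the OPENAI_API_KEY environment variable; the prompt wording is illustrative, not the authors' prompts.
      from openai import OpenAI

      client = OpenAI()  # reads OPENAI_API_KEY from the environment
      excerpt = "..."    # context from an earnings call transcript

      def gpt(prompt: str) -> str:
          resp = client.chat.completions.create(
              model="gpt-3.5-turbo",
              messages=[{"role": "user", "content": prompt}],
          )
          return resp.choices[0].message.content

      # Summary: condense what the transcript itself says about a risk type.
      summary = gpt(f"Summarize the political risks discussed in this excerpt.\n\n{excerpt}")
      # Assessment: ask the model to evaluate exposure, drawing on its general knowledge.
      assessment = gpt(f"Based on this excerpt and your general knowledge, assess the firm's "
                       f"exposure to political risk.\n\n{excerpt}")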
  13. By: Bhaskarjit Sarmah; Tianjie Zhu; Dhagash Mehta; Stefano Pasquali
    Abstract: For a financial analyst, the question-and-answer (Q&A) segment of a company's financial report is a crucial piece of information for various analysis and investment decisions. However, extracting valuable insights from the Q&A section has posed considerable challenges: conventional methods such as detailed reading and note-taking lack scalability and are susceptible to human error, while Optical Character Recognition (OCR) and similar techniques have difficulty accurately processing unstructured transcript text and often miss the subtle linguistic nuances that drive investor decisions. Here, we demonstrate the use of Large Language Models (LLMs) to extract information from earnings report transcripts efficiently, rapidly, and with high accuracy, transforming the extraction process and reducing hallucination by combining a retrieval-augmented generation technique with metadata. We evaluate the outcomes of various LLMs with and without our proposed approach on various objective metrics for evaluating Q&A systems, and empirically demonstrate the superiority of our method.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2310.10760&r=ain
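    A schematic of the hallucination-reduction idea above: retrieval-augmented generation over transcript chunks combined with a metadata filter. The retrieval method (TF-IDF) and the metadata fields are illustrative simplifications, not the paper's implementation.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.metrics.pairwise import cosine_similarity

      # Transcript chunks with simple metadata (speaker role).
      chunks = [
          {"text": "CFO: Gross margin improved to 42% on lower input costs.", "speaker": "CFO"},
          {"text": "CEO: We expect the new plant to come online next year.", "speaker": "CEO"},
          {"text": "Analyst: Can you quantify the FX headwind this quarter?", "speaker": "Analyst"},
      ]
      question = "What did management say about gross margin?"

      # 1. Metadata filter: keep management answers, drop analyst questions.
      candidates = [c for c in chunks if c["speaker"] in {"CEO", "CFO"}]

      # 2. Retrieve the chunk most similar to the question.
      vec = TfidfVectorizer().fit([question] + [c["text"] for c in candidates])
      sims = cosine_similarity(vec.transform([question]),
                               vec.transform([c["text"] for c in candidates]))[0]
      context = candidates[int(sims.argmax())]["text"]

      # 3. Ground the LLM answer in the retrieved context only (LLM call omitted).
      prompt = (f"Answer using only the excerpt below; if the answer is not stated there, say so.\n"
                f"Excerpt: {context}\nQuestion: {question}")
      print(prompt)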
  14. By: Jean-Marie John-Mathews (IMT-BS - MMS - Département Management, Marketing et Stratégie - TEM - Télécom Ecole de Management - IMT - Institut Mines-Télécom [Paris] - IMT-BS - Institut Mines-Télécom Business School - IMT - Institut Mines-Télécom [Paris], LITEM - Laboratoire en Innovation, Technologies, Economie et Management (EA 7363) - UEVE - Université d'Évry-Val-d'Essonne - Université Paris-Saclay - IMT-BS - Institut Mines-Télécom Business School - IMT - Institut Mines-Télécom [Paris])
    Abstract: We consider two fundamental and related issues currently facing the development of Artificial Intelligence (AI): the lack of ethics, and the interpretability of AI decisions. Can interpretable AI decisions help to address the issue of ethics in AI? Using a randomized study, we experimentally show that the empirical and liberal turn in the production of explanations tends to select AI explanations with low denunciatory power. Under certain conditions, interpretability tools are therefore not means but, paradoxically, obstacles to the production of ethical AI, since they can give the illusion of being sensitive to ethical incidents. We also show that the denunciatory power of AI explanations is highly dependent on the context in which the explanation takes place, such as the gender or education of the person for whom the explanation is intended. AI ethics tools are therefore sometimes too flexible, and self-regulation through the liberal production of explanations does not seem to be enough to resolve ethical issues. Following an STS pragmatist program, we highlight the role of non-human actors (such as computational paradigms, testing environments, etc.) in the formation of structural power relations, such as sexism. We then propose two scenarios for the future development of ethical AI: more external regulation, or more liberalization of AI explanations. These two opposite paths will play a major role in the future development of ethical AI.
    Keywords: Artificial intelligence, Ethics, Interpretability, Experimentation, Self-regulation
    Date: 2022–01
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-03395823&r=ain

This nep-ain issue is ©2023 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.