nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2023‒10‒30
fourteen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. They Are Among Us: Pricing Behavior of Algorithms in the Field By Fourberg, Niklas; Marques Magalhaes, Katrin; Wiewiorra, Lukas
  2. Algorithmic Recommendations and Human Discretion By Victoria Angelova; Will S. Dobbie; Crystal Yang
  3. Artificial Intelligence and Employment: A Look into the Crystal Ball By Dario Guarascio; Jelena Reljic; Roman Stollinger
  4. Artificial Intelligence and Workers' Well-Being By Giuntella, Osea; König, Johannes; Stella, Luca
  5. Explosive growth from AI automation: A review of the arguments By Ege Erdil; Tamay Besiroglu
  6. AI-generated lemons: a sour outlook for content producers? By Howell, Bronwyn E.; Potgieter, Petrus H.
  7. What do telecommunications policy academics have to fear from GPT-3? By Howell, Bronwyn E.; Potgieter, Petrus H.
  8. Artificial intelligence for science – adoption trends and future development pathways By Hajkowicz, Stefan; Naughtin, Claire; Sanderson, Conrad; Schleiger, Emma; Karimi, Sarvnaz; Bratanova, Alexandra; Bednarz, Tomasz
  9. Advancing algorithmic bias management capabilities in AI-driven marketing analytics research By Shahriar Akter; Saida Sultana; Marcello Mariani; Samuel Fosso Wamba; Konstantina Spanaki; Yogesh Dwivedi
  10. How will the State think with the assistance of ChatGPT? The case of customs as an example of generative artificial intelligence in public administrations By Thomas Cantens
  11. Artificial intelligence, complementary assets and productivity: evidence from French firms By Flavio Calvino; Luca Fontanelli
  12. Assessing Look-Ahead Bias in Stock Return Predictions Generated By GPT Sentiment Analysis By Paul Glasserman; Caden Lin
  13. Using Large Language Models for Qualitative Analysis can Introduce Serious Bias By Julian Ashwin; Aditya Chhabra; Vijayendra Rao
  14. PAMS: Platform for Artificial Market Simulations By Masanori Hirano; Ryosuke Takata; Kiyoshi Izumi

  1. By: Fourberg, Niklas; Marques Magalhaes, Katrin; Wiewiorra, Lukas
    Abstract: We analyze pricing patterns and price-level effects of algorithms in the German market segments for OTC antiallergics and painkillers. Based on a novel hourly dataset that spans four months and contains over 10 million individual observations, we produce the following results. First, price levels are substantially higher for antiallergics than for painkillers, which seems to reflect a lower price elasticity for antiallergics. Second, we find evidence that this exploitation of demand characteristics is heterogeneous with respect to pricing technology: retailers with a more advanced pricing technology establish even higher price premiums for antiallergics than retailers with a less advanced technology. Third, retailers with more advanced pricing technology post lower prices, which contradicts previous findings from simulations but is in line with empirical findings when many firms compete in a market. Lastly, our data suggest that pricing algorithms take the web traffic of retailers' online shops into account as demand-side feedback when choosing prices. Our results stress the importance of a careful policy approach towards pricing algorithms and highlight new areas of risk when multiple players employ the same pricing technology.
    Keywords: Algorithmic pricing, Collusion, Artificial intelligence
    JEL: C13 D83 L13 L41
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:itse23:277958&r=ain
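    [Editor's note] The last result above, that algorithms read shop traffic as a demand-side signal, can be illustrated with a toy pricing rule. The Python sketch below is purely illustrative: the paper does not disclose any retailer's actual algorithm, and the function name and parameters are hypothetical.

    # Toy traffic-responsive pricing rule (hypothetical, not the algorithms
    # studied in the paper): the posted price moves with observed web traffic,
    # used as a crude demand-side feedback signal.
    def traffic_adjusted_price(base_price: float, traffic: float,
                               baseline_traffic: float,
                               sensitivity: float = 0.1) -> float:
        """Raise (lower) the price when traffic runs above (below) baseline."""
        demand_signal = (traffic - baseline_traffic) / baseline_traffic
        return round(base_price * (1.0 + sensitivity * demand_signal), 2)

    # Example: traffic 30% above baseline nudges a 9.99 price up to 10.29.
    print(traffic_adjusted_price(base_price=9.99, traffic=1300, baseline_traffic=1000))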
  2. By: Victoria Angelova; Will S. Dobbie; Crystal Yang
    Abstract: Human decision-makers frequently override the recommendations generated by predictive algorithms, but it is unclear whether these discretionary overrides add valuable private information or reintroduce human biases and mistakes. We develop new quasi-experimental tools to measure the impact of human discretion over an algorithm on the accuracy of decisions, even when the outcome of interest is only selectively observed, in the context of bail decisions. We find that 90% of the judges in our setting underperform the algorithm when they make a discretionary override, with most making override decisions that are no better than random. Yet the remaining 10% of judges outperform the algorithm in terms of both accuracy and fairness when they make a discretionary override. We provide suggestive evidence on the behavior underlying these differences in judge performance, showing that the high-performing judges are more likely to use relevant private information and are less likely to overreact to highly salient events compared to the low-performing judges.
    JEL: C01 D8 K40
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31747&r=ain
  3. By: Dario Guarascio; Jelena Reljic; Roman Stollinger
    Abstract: This study provides evidence on the employment impact of AI exposure in European regions, addressing one of the many gaps in the emerging literature on AI's effects on employment in Europe. Building upon the occupation-based AI-exposure indicators proposed by Felten et al. (2018, 2019, 2021), mapped to the European occupational classification (ISCO) following Albanesi et al. (2023), we analyse regional employment dynamics between 2011 and 2018. After controlling for a wide range of supply and demand factors, our findings indicate that, on average, AI exposure has a positive impact on regional employment. Put differently, European regions with a relatively larger share of AI-exposed occupations display, all else being equal and once potential endogeneity concerns are mitigated, a more favourable employment tendency over the period 2011-2018. We also find evidence of a moderating effect of robot density on the AI-employment nexus, which, however, lacks a causal underpinning.
    Keywords: Artificial intelligence; industrial robots; labour; regional employment; occupations.
    Date: 2023–10–06
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2023/34&r=ain
  4. By: Giuntella, Osea (University of Pittsburgh); König, Johannes (DIW Berlin); Stella, Luca (Free University of Berlin)
    Abstract: This study explores the relationship between artificial intelligence (AI) and workers' well-being and mental health using longitudinal survey data from Germany (2000-2020). We construct a measure of individual exposure to AI technology based on the occupation in which workers in our sample were first employed and use an event-study design and a difference-in-differences approach to compare AI-exposed and non-exposed workers. In the years before AI became widely available, we find no evidence of differential pre-trends in workers' well-being or concerns about their economic futures. Since 2015, however, with the increasing adoption of AI in firms across Germany, we find that AI-exposed workers have become less satisfied with their life and job and more concerned about job security and their personal economic situation. However, we find no evidence of a significant impact of AI on workers' mental health, anxiety, or depression.
    Keywords: artificial intelligence, future of work, well-being, mental health
    JEL: I10 J28 O30
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp16485&r=ain
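    [Editor's note] As an illustrative sketch of the design described above (ours, not necessarily the authors' exact specification), a two-way fixed-effects difference-in-differences model comparing AI-exposed and non-exposed workers could read
    W_{it} = \alpha_i + \lambda_t + \beta (Exposed_i \times Post2015_t) + X_{it}' \gamma + \varepsilon_{it}
    where W_{it} is worker i's reported well-being in year t, \alpha_i and \lambda_t are worker and year fixed effects, Exposed_i flags workers whose first occupation was AI-exposed, Post2015_t marks the years of widespread AI adoption, and \beta is the effect of interest.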
  5. By: Ege Erdil; Tamay Besiroglu
    Abstract: We examine whether substantial AI automation could accelerate global economic growth by about an order of magnitude, akin to the economic growth effects of the Industrial Revolution. We identify three primary drivers of such growth: 1) the scalability of an AI "labor force" restoring a regime of increasing returns to scale, 2) the rapid expansion of an AI labor force, and 3) a massive increase in output from rapid automation occurring over a brief period of time. Against this backdrop, we evaluate nine counterarguments, including regulatory hurdles, production bottlenecks, alignment issues, and the pace of automation. We tentatively assess these arguments, finding that most are unlikely to be decisive. We conclude that explosive growth seems plausible with AI capable of broadly substituting for human labor, but high confidence in this claim seems currently unwarranted. Key questions remain about the intensity of regulatory responses to AI, physical bottlenecks in production, the economic value of superhuman abilities, and the rate at which AI automation could occur.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.11690&r=ain
  6. By: Howell, Bronwyn E.; Potgieter, Petrus H.
    Abstract: Artificial intelligence (AI) techniques for natural language processing have made dramatic advances in the past few years (Lin 2023). Thunström & Steingrimsson (2022) demonstrated that present-generation AI text engines are even able to write low-level scientific pieces about themselves with relatively minimal prompting, while Goyal et al. (2022) show how good general-purpose AI language engines are at summarizing news articles. There is, however, a downside to all of this progress. Bontridder & Poullet (2021) point out how inexpensive it has become to generate deepfake videos and synthetic voice recordings. Kreps et al. (2022) look at AI-generated text and find that "individuals are largely incapable of distinguishing between AI- and human-generated text". Illia et al. (2023) point to three ethical challenges raised by automated text generation that is difficult to distinguish from human writing: 1. the facilitation of mass manipulation and disinformation; 2. a lowest-common-denominator problem, where a mass of low-quality but incredibly cheap text crowds out higher-quality discourse; and 3. the suppression of direct communication between stakeholders and an attendant drop in levels of trust. Our focus is mainly on (2), and we examine the institutional consequences that may arise in two specific sectors already facing challenges from AI-generated text: scientific journals and social media platforms. Drawing on the body of learning from institutional economics regarding responses to uncertainties in the veracity of information, the paper also proposes some elementary remedies that may prove helpful in navigating the anticipated challenges. Distinguishing genuinely human-authored content from machine-generated text will likely be done more easily using a credible signal of the authenticity of the content creator. This is a variation of Akerlof's (1970) famous "market for lemons" problem. The paper uses an inductive approach to examine sections of the content industry that are likely to be particularly relevant to "market for lemons" substitution, referring to the framework of Giannakas & Fulton (2020).
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:itse23:277971&r=ain
  7. By: Howell, Bronwyn E.; Potgieter, Petrus H.
    Abstract: Artificial intelligence (AI) tools such as ChatGPT and GPT-3 have shot to prominence recently (Lin 2023), as dramatic advances have shown them to be capable of writing plausible output that is difficult to distinguish from human-authored content. Unsurprisingly, this has led to concerns about their use by students in tertiary education contexts (Swiecki et al. 2022), and it has led to them being banned in some school districts in the United States (e.g. Rosenblatt 2023; Clarridge 2023) and by at least one top-ranking international university (e.g. Reuters 2023). There are legitimate reasons for such fears, as it is difficult to differentiate students' own written work presented for assessment from that produced by AI tools. Successfully embedding these tools in educational contexts requires an understanding of what they are and what they can and cannot do. Despite their powerful modelling and description capabilities, these tools have (at least currently) significant issues and limitations (Zhang & Li 2021). As telecommunications policy academics charged with research-led teaching and with supervising both undergraduate and research students, we need to be certain that our graduates are capable of understanding the complexities of current issues in this incredibly dynamic field and of applying their learning appropriately in industry and policy environments. We must also be reasonably certain that the grades we assign are based on the students' own work and understanding. To this end, we experimented with the current (Q1 2023) version of the AI tool to assess how well it coped with questions on a core and current topic in telecommunications policy education: the effects of access regulation (local loop unbundling) on broadband investment and uptake. We found that while the outputs were well-written and appeared plausible, they contained significant systematic errors which, once academics are aware of them, can be exploited to avoid the risk that AI use severely undermines the credibility of our assessments of students' written work, at least for the time being and in respect of the version of the chatbot software we used.
    Keywords: Artificial Intelligence (AI), ChatGPT, GPT-3, Academia
    Date: 2023
    URL: http://d.repec.org/n?u=RePEc:zbw:itse23:277972&r=ain
  8. By: Hajkowicz, Stefan; Naughtin, Claire; Sanderson, Conrad; Schleiger, Emma; Karimi, Sarvnaz; Bratanova, Alexandra; Bednarz, Tomasz
    Abstract: This paper aims to inform researchers and research organisations within the spheres of government, industry, community and academia seeking to develop improved AI capabilities. The paper is focused on the use of AI for science, and it describes AI adoption trends in the physical, natural and social science fields. Using a bibliometric analysis of peer-reviewed publishing trends over 63 years (1960–2022), the paper demonstrates a surge in AI adoption across all fields over the past several years. The paper examines future development pathways and explores implications for science organisations.
    Keywords: Artificial intelligence; machine learning; science; AI capabilities; bibliometric analysis; Australia
    JEL: O32 O33 O38
    Date: 2022–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:115464&r=ain
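    [Editor's note] The adoption-trend statistic underlying a study like this can be computed in a few lines. The Python sketch below uses a made-up input format (columns "year", "field", "is_ai"); it is our illustration, not the authors' code or data.

    import pandas as pd

    # Hypothetical bibliometric records: one row per peer-reviewed publication.
    records = pd.DataFrame({
        "year":  [1990, 1990, 2020, 2020, 2020],
        "field": ["physics", "physics", "physics", "sociology", "sociology"],
        "is_ai": [False, True, True, True, False],
    })

    # Share of AI-related publications per field and year: the basic
    # adoption-trend statistic such a bibliometric analysis would track.
    ai_share = records.groupby(["field", "year"])["is_ai"].mean()
    print(ai_share)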
  9. By: Shahriar Akter (University of Wollongong [Australia]); Saida Sultana (University of Wollongong [Australia]); Marcello Mariani (Henley Business School [University of Reading] - UOR - University of Reading, University of Bologna); Samuel Fosso Wamba (TBS - Toulouse Business School); Konstantina Spanaki (Audencia Business School); Yogesh Dwivedi (School of Management [Swansea] - Swansea University, SIBM - Symbiosis Institute of Business Management Pune)
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04194438&r=ain
  10. By: Thomas Cantens (WCO - World Customs Organization, CERDI - Centre d'Études et de Recherches sur le Développement International - IRD - Institut de Recherche pour le Développement - CNRS - Centre National de la Recherche Scientifique - UCA - Université Clermont Auvergne)
    Abstract: The paper discusses the implications of generative artificial intelligence (GAI) in public administrations and the specific questions it raises compared to specialized and "numerical" AI, based on the example of Customs and the experience of the World Customs Organization in the field of AI and data-strategy implementation in Member countries. At the organizational level, the advantages of GAI include cost reduction through the internalization of tasks, uniformity and correctness of administrative language, access to broad knowledge, and potential paradigm shifts in fraud detection. At this level, the paper highlights three facts that distinguish GAI from specialized AI: i) so far, GAI has been less closely associated with decision-making processes in public administrations than specialized AI; ii) the risks usually associated with GAI are often similar to those previously associated with specialized AI, but, while certain risks remain pertinent, others lose significance due to the constraints imposed by the inherent limitations of GAI technology itself when implemented in public administrations; iii) the training data corpus for GAI becomes a strategic asset for public administrations, perhaps more so than the algorithms themselves, which was not the case for specialized AI. At the individual level, the paper emphasizes the "language-centric" nature of GAI in contrast to the "number-centric" AI systems implemented within public administrations up until now. It discusses the risks of civil servants being replaced by, or becoming subservient to, the machines by exploring the transformative impact of GAI on the intellectual production of the State. The paper calls for the development of critical vigilance and critical thinking as specific skills for civil servants, who are highly specialized and will have to think with the assistance of a machine that is eclectic by nature.
    Keywords: Generative artificial intelligence, Language, Critical thinking, Customs, Public administrations
    Date: 2023–09–29
    URL: http://d.repec.org/n?u=RePEc:hal:cdiwps:hal-04233370&r=ain
  11. By: Flavio Calvino; Luca Fontanelli
    Abstract: In this work we characterise French firms that use artificial intelligence (AI) and explore the link between AI use and productivity. We distinguish AI users that source AI from external providers (AI buyers) from those developing their own AI systems (AI developers). AI buyers tend to be larger than other firms, while AI developers are also younger. The share of firms using AI is highest in the ICT sector, which exhibits a particularly high share of developers. Complementary assets, including skills, digital capabilities and infrastructure, play a key role in AI use, with AI buyers and developers leveraging different types of human capital. Overall, AI users tend to be more productive; however, this appears largely related to the self-selection of more productive and digital-intensive firms into AI use. This is not the case for AI developers, for which the positive link between AI use and productivity remains evident beyond selection, suggesting a positive effect of AI on their productivity.
    Keywords: Technology Diffusion; Artificial Intelligence; Digitalisation; Productivity.
    Date: 2023–10–13
    URL: http://d.repec.org/n?u=RePEc:ssa:lemwps:2023/35&r=ain
  12. By: Paul Glasserman; Caden Lin
    Abstract: Large language models (LLMs), including ChatGPT, can extract profitable trading signals from the sentiment in news text. However, backtesting such strategies poses a challenge because LLMs are trained on many years of data, and backtesting produces biased results if the training and backtesting periods overlap. This bias can take two forms: a look-ahead bias, in which the LLM may have specific knowledge of the stock returns that followed a news article, and a distraction effect, in which general knowledge of the companies named interferes with the measurement of a text's sentiment. We investigate these sources of bias through trading strategies driven by the sentiment of financial news headlines. We compare trading performance based on the original headlines with de-biased strategies in which we remove the relevant company's identifiers from the text. In-sample (within the LLM training window), we find, surprisingly, that the anonymized headlines outperform, indicating that the distraction effect has a greater impact than look-ahead bias. This tendency is particularly strong for larger companies--companies about which we expect an LLM to have greater general knowledge. Out-of-sample, look-ahead bias is not a concern but distraction remains possible. Our proposed anonymization procedure is therefore potentially useful in out-of-sample implementation, as well as for de-biased backtesting.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.17322&r=ain
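    [Editor's note] The de-biasing idea above, stripping company identifiers from a headline before asking an LLM for its sentiment, is straightforward to sketch. The Python snippet below illustrates the general procedure only; the identifier list, placeholder token, and function name are hypothetical choices, not the authors' implementation.

    import re

    def anonymize_headline(headline: str, identifiers: list[str],
                           placeholder: str = "the company") -> str:
        """Replace known company names/tickers with a neutral placeholder so an
        LLM's sentiment score cannot draw on memorized firm-specific outcomes
        (look-ahead bias) or general firm knowledge (distraction effect)."""
        for ident in sorted(identifiers, key=len, reverse=True):
            headline = re.sub(re.escape(ident), placeholder, headline,
                              flags=re.IGNORECASE)
        return headline

    # Example with hypothetical identifiers:
    print(anonymize_headline("Acme Corp (ACME) beats earnings estimates",
                             identifiers=["Acme Corp", "ACME"]))
    # -> "the company (the company) beats earnings estimates"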
  13. By: Julian Ashwin; Aditya Chhabra; Vijayendra Rao
    Abstract: Large Language Models (LLMs) are quickly becoming ubiquitous, but the implications for social science research are not yet well understood. This paper asks whether LLMs can help us analyse large-N qualitative data from open-ended interviews, with an application to transcripts of interviews with Rohingya refugees in Cox's Bazaar, Bangladesh. We find that a great deal of caution is needed in using LLMs to annotate text, as there is a risk of introducing biases that can lead to misleading inferences. We mean bias here in the technical sense: the errors that LLMs make in annotating interview transcripts are not random with respect to the characteristics of the interview subjects. Training simpler supervised models on high-quality human annotations with flexible coding leads to less measurement error and bias than LLM annotations. Therefore, given that some high-quality annotations are necessary in order to assess whether an LLM introduces bias, we argue that it is probably preferable to train a bespoke model on these annotations than to use an LLM for annotation.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.17147&r=ain
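    [Editor's note] The recommendation above, training a bespoke supervised model on high-quality human annotations rather than annotating with an LLM, can be sketched with a standard text classifier. Everything below (texts, labels, model choice) is a hypothetical illustration, not the authors' pipeline.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Hypothetical human-annotated interview excerpts with a binary code,
    # e.g. whether the passage is displacement-related.
    texts = ["We had to leave everything behind.",
             "The market reopened last week.",
             "My children cannot attend school here.",
             "Prices for rice went up again."]
    labels = [1, 0, 1, 0]

    # A simple, auditable classifier trained on the human annotations.
    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(texts, labels)

    # Apply the trained model to the remaining, unannotated transcripts.
    print(model.predict(["The family was forced to flee the village."]))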
  14. By: Masanori Hirano; Ryosuke Takata; Kiyoshi Izumi
    Abstract: This paper presents a new artificial market simulation platform, PAMS: Platform for Artificial Market Simulations. PAMS is developed as a Python-based simulator that integrates easily with deep learning and enables a wide range of simulations that users can modify with little effort. In this paper, we demonstrate the effectiveness of PAMS through a study using agents that predict future prices by deep learning.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2309.10729&r=ain
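    [Editor's note] To give a flavor of what an artificial market simulation involves, here is a generic, self-contained Python sketch of zero-intelligence traders quoting around a clearing price. It deliberately does not use the PAMS API (consult the PAMS documentation for that); it only illustrates the kind of loop such platforms manage, and into which learning agents can be plugged.

    import random

    # Minimal artificial market: ten zero-intelligence buyers and sellers quote
    # within 1% of the last price; the market clears at the mid of the best quotes.
    price = 100.0
    for t in range(100):
        bids = [price * (1 - random.uniform(0.0, 0.01)) for _ in range(10)]
        asks = [price * (1 + random.uniform(0.0, 0.01)) for _ in range(10)]
        price = (max(bids) + min(asks)) / 2  # naive clearing at the inside quotes
    print(f"simulated price after 100 steps: {price:.2f}")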

This nep-ain issue is ©2023 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.