nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2023‒08‒14
thirteen papers chosen by
Ben Greiner
Wirtschaftsuniversität Wien

  1. Corrupted by Algorithms? How AI-Generated and Human-Written Advice Shape (Dis)Honesty By Leib, Margarita; Köbis, Nils; Rilke, Rainer Michael; Hagens, Marloes; Irlenbusch, Bernd
  2. Combining Human Expertise with Artificial Intelligence: Experimental Evidence from Radiology By Nikhil Agarwal; Alex Moehring; Pranav Rajpurkar; Tobias Salz
  3. Is This System Biased? - How Users React to Gender Bias in an Explainable AI System By Jussupow, Ekaterina; Meza Martinez, Miguel Angel; Maedche, Alexander; Heinzl, Armin
  4. When AI joins the Team: A Literature Review on Intragroup Processes and their Effect on Team Performance in Team-AI Collaboration By Zercher, Désirée; Jussupow, Ekaterina; Heinzl, Armin
  5. How the Application of Machine Learning Systems Changes Business Processes: A multiple Case Study By Kunz, Pascal Christoph; Jussupow, Ekaterina; Spohrer, Kai; Heinzl, Armin
  6. Uncovering the Semantics of Concepts Using GPT-4 and Other Recent Large Language Models By Gaël Le Mens; Balázs Kovács; Michael T. Hannan; Guillem Pros
  7. Improved Financial Forecasting via Quantum Machine Learning By Sohum Thakkar; Skander Kazdaghli; Natansh Mathur; Iordanis Kerenidis; André J. Ferreira-Martins; Samurai Brito
  8. Artificial Intelligence and Inflation Forecasts By Miguel Faria-e-Castro; Fernando Leibovici
  9. Algorithms, Incentives, and Democracy By Elizabeth Maggie Penn; John W. Patty
  10. Competition in generative artificial intelligence foundation models By Christophe Carugati
  11. New technologies and jobs in Europe By Albanesi, Stefania; Da Silva, António Dias; Jimeno, Juan F.; Lamo, Ana; Wabitsch, Alena
  12. The role of Artificial Intelligence (AI) in agriculture and its impact on economy By Wójcik-Czerniawska, Agnieszka
  13. Forging AI Pathways: Portugal's Journey within the EU Digital Landscape By Gabriel Osório de Barros

  1. By: Leib, Margarita (Tilburg University); Köbis, Nils (Max Planck Institute for Human Development); Rilke, Rainer Michael (WHU Vallendar); Hagens, Marloes (Erasmus University Rotterdam); Irlenbusch, Bernd (University of Cologne)
    Abstract: Artificial Intelligence (AI) is increasingly becoming an indispensable advisor. New ethical concerns arise if AI persuades people to behave dishonestly. In an experiment, we study how AI advice (generated by a natural-language-processing algorithm) affects (dis)honesty, compare it to equivalent human advice, and test whether transparency about the advice source matters. We find that dishonesty-promoting advice increases dishonesty, whereas honesty-promoting advice does not increase honesty. This is the case for both AI and human advice. Algorithmic transparency, a commonly proposed policy to mitigate AI risks, does not affect behaviour. The findings mark the first steps towards managing AI advice responsibly.
    Keywords: Artificial Intelligence, machine behaviour, behavioural ethics, advice
    JEL: C91 D90 D91
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:iza:izadps:dp16293&r=ain
  2. By: Nikhil Agarwal; Alex Moehring; Pranav Rajpurkar; Tobias Salz
    Abstract: While Artificial Intelligence (AI) algorithms have achieved performance levels comparable to human experts on various predictive tasks, human experts can still access valuable contextual information not yet incorporated into AI predictions. Humans assisted by AI predictions could outperform both humans alone and AI alone. We conduct an experiment with professional radiologists that varies the availability of AI assistance and contextual information to study the effectiveness of human-AI collaboration and to investigate how to optimize it. Our findings reveal that (i) providing AI predictions does not uniformly increase diagnostic quality, and (ii) providing contextual information does increase quality. Radiologists do not fully capitalize on the potential gains from AI assistance because of large deviations from the benchmark Bayesian model with correct belief updating. The observed errors in belief updating can be explained by radiologists partially underweighting the AI’s information relative to their own and not accounting for the correlation between their own information and the AI’s predictions. In light of these biases, we design a collaborative system between radiologists and AI. Our results demonstrate that, unless the documented mistakes can be corrected, the optimal solution involves assigning cases either to humans or to AI, but rarely to a human assisted by AI.
    JEL: C50 C90 D47 D83
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:31422&r=ain
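    The belief-updating biases this abstract describes can be made concrete with a small sketch. This is an illustrative model, not the authors' empirical specification: the weight `w_ai` (underweighting of the AI's information) and the overlap `rho` (correlation between the radiologist's and the AI's information sources) are hypothetical parameters introduced here for exposition.

```python
import math

def posterior_logodds(prior, llr_own, llr_ai, w_ai=1.0, rho=0.0):
    """Combine a reader's own signal with an AI signal in log-odds form.

    A correct Bayesian reader discounts the AI's log-likelihood ratio
    only by the overlap rho between the two information sources.
    The biases in the abstract correspond to w_ai < 1 (underweighting
    the AI) and to setting rho = 0 when the true overlap is positive
    (ignoring the correlation between own and AI information).
    """
    lo = math.log(prior / (1 - prior))          # prior in log-odds
    return lo + llr_own + w_ai * (1 - rho) * llr_ai

def prob(logodds):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-logodds))

# A positive AI signal moves the posterior up less when underweighted.
full = prob(posterior_logodds(0.5, 0.0, 1.0, w_ai=1.0))
under = prob(posterior_logodds(0.5, 0.0, 1.0, w_ai=0.5))
```

Under this toy model, underweighting leaves the posterior closer to the prior than the Bayesian benchmark, which is the flavor of deviation the experiment documents.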
  3. By: Jussupow, Ekaterina; Meza Martinez, Miguel Angel; Maedche, Alexander; Heinzl, Armin
    Date: 2021–12–12
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:138569&r=ain
  4. By: Zercher, Désirée; Jussupow, Ekaterina; Heinzl, Armin
    Date: 2023–05–11
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:138574&r=ain
  5. By: Kunz, Pascal Christoph; Jussupow, Ekaterina; Spohrer, Kai; Heinzl, Armin
    Date: 2022–05–11
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:138573&r=ain
  6. By: Gaël Le Mens; Balázs Kovács; Michael T. Hannan; Guillem Pros
    Abstract: Recently, the world’s attention has been captivated by Large Language Models (LLMs) thanks to OpenAI’s ChatGPT, which rapidly proliferated as an app powered by GPT-3 and now its successor, GPT-4. If these LLMs produce human-like text, the semantic spaces they construct likely align with those used by humans for interpreting and generating language. This suggests that social scientists could use these LLMs to construct measures of semantic similarity that match human judgment. In this article, we provide an empirical test of this intuition. We use GPT-4 to construct a new measure of typicality: the similarity of a text document to a concept or category. We evaluate its performance against other model-based typicality measures in terms of their correspondence with human typicality ratings. We conduct this comparative analysis in two domains: the typicality of books in literary genres (using an existing dataset of book descriptions) and the typicality of tweets authored by US Congress members in the Democratic and Republican parties (using a novel dataset). The GPT-4 typicality measure not only meets or exceeds the current state-of-the-art but accomplishes this without any model training. This is a breakthrough because the previous state-of-the-art measure required fine-tuning a model (a BERT text classifier) on hundreds of thousands of text documents to achieve its performance. Our comparative analysis emphasizes the need for systematic empirical validation of measures based on LLMs: several measures based on other recent LLMs achieve at best a moderate correspondence with human judgments.
    Keywords: categories, concepts, deep learning, typicality, GPT, ChatGPT, BERT, similarity
    JEL: C18 C52
    Date: 2023–06
    URL: http://d.repec.org/n?u=RePEc:bge:wpaper:1394&r=ain
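    The validation criterion in this abstract — correspondence between a model-based typicality measure and human typicality ratings — is typically quantified with a rank correlation. A minimal stdlib sketch of that final comparison step (the paper's own pipeline prompts GPT-4 to score documents; that part is not reproduced here, and the example scores below are made up):

```python
def rank(xs):
    """Assign ranks to a list of scores, averaging ranks over ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1                      # extend the tie group
        avg = (i + j) / 2 + 1           # average 1-based rank of the group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    ra, rb = rank(a), rank(b)
    ma, mb = sum(ra) / len(ra), sum(rb) / len(rb)
    cov = sum((x - ma) * (y - mb) for x, y in zip(ra, rb))
    va = sum((x - ma) ** 2 for x in ra)
    vb = sum((y - mb) ** 2 for y in rb)
    return cov / (va * vb) ** 0.5

# Hypothetical model scores vs. human ratings for five documents.
model_typicality = [0.9, 0.2, 0.6, 0.4, 0.8]
human_ratings = [5, 1, 3, 2, 4]
rho = spearman(model_typicality, human_ratings)
```

A measure that "meets or exceeds the state-of-the-art" in the abstract's sense is one whose `rho` against human ratings is at least as high as that of the fine-tuned BERT baseline.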
  7. By: Sohum Thakkar (QC Ware Corp); Skander Kazdaghli (QC Ware Corp); Natansh Mathur (QC Ware Corp; IRIF - Université Paris Cité and CNRS); Iordanis Kerenidis (QC Ware Corp; IRIF - Université Paris Cité and CNRS); André J. Ferreira-Martins (Itaú Unibanco); Samurai Brito (Itaú Unibanco)
    Abstract: Quantum algorithms have the potential to enhance machine learning across a variety of domains and applications. In this work, we show how quantum machine learning can be used to improve financial forecasting. First, we use classical and quantum Determinantal Point Processes to enhance Random Forest models for churn prediction, improving precision by almost 6%. Second, we design quantum neural network architectures with orthogonal and compound layers for credit risk assessment, which match classical performance with significantly fewer parameters. Our results demonstrate that leveraging quantum ideas can effectively enhance the performance of machine learning, both today as quantum-inspired classical ML solutions, and even more in the future, with the advent of better quantum hardware.
    Date: 2023–05
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2306.12965&r=ain
  8. By: Miguel Faria-e-Castro; Fernando Leibovici
    Abstract: We explore the ability of Large Language Models (LLMs) to produce conditional inflation forecasts during the 2019-2023 period. We use a leading LLM (Google AI's PaLM) to produce distributions of conditional forecasts at different horizons and compare these forecasts to those of a leading source, the Survey of Professional Forecasters (SPF). We find that LLM forecasts generate lower mean-squared errors overall in most years, and at almost all horizons. LLM forecasts exhibit slower reversion to the 2% inflation anchor. We argue that this method of generating forecasts is inexpensive and can be applied to other time series.
    Keywords: inflation forecasts; large language models; artificial intelligence
    JEL: E31 E37 C45 C53
    Date: 2023–07–14
    URL: http://d.repec.org/n?u=RePEc:fip:fedlwp:96478&r=ain
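    The comparison criterion in this abstract is mean-squared error by forecast horizon. A small sketch of that bookkeeping, with hypothetical illustrative numbers (none of the values below come from the paper):

```python
def mse(forecasts, actuals):
    """Mean-squared error of a list of point forecasts."""
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(forecasts)

# Hypothetical values, NOT figures from the paper.
horizons = [1, 4]                        # quarters ahead
llm = {1: [2.1, 2.4], 4: [2.0, 2.2]}     # LLM conditional point forecasts
spf = {1: [2.5, 2.9], 4: [2.6, 2.8]}     # SPF forecasts
actual = {1: [2.2, 2.5], 4: [2.1, 2.3]}  # realized inflation

for h in horizons:
    print(f"h={h}: MSE(LLM)={mse(llm[h], actual[h]):.3f}, "
          f"MSE(SPF)={mse(spf[h], actual[h]):.3f}")
```

The paper's finding is that this comparison favors the LLM forecasts in most years and at almost all horizons.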
  9. By: Elizabeth Maggie Penn; John W. Patty
    Abstract: Classification algorithms are increasingly used in areas such as housing, credit, and law enforcement in order to make decisions affecting people's lives. These algorithms can change individual behavior deliberately (a fraud prediction algorithm deterring fraud) or inadvertently (content sorting algorithms spreading misinformation), and they are increasingly facing public scrutiny and regulation. Some of these regulations, like the elimination of cash bail in some states, have focused on lowering the stakes of certain classifications. In this paper we characterize how optimal classification by an algorithm designer can affect the distribution of behavior in a population -- sometimes in surprising ways. We then look at the effect of democratizing the rewards and punishments, or stakes, to algorithmic classification to consider how a society can potentially stem (or facilitate!) predatory classification. Our results speak to questions of algorithmic fairness in settings where behavior and algorithms are interdependent, and where typical measures of fairness focusing on statistical accuracy across groups may not be appropriate.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2307.02319&r=ain
  10. By: Christophe Carugati
    Abstract: This working paper examines how competition in foundation models (FMs) works.
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:bre:wpaper:node_9258&r=ain
  11. By: Albanesi, Stefania; Da Silva, António Dias; Jimeno, Juan F.; Lamo, Ana; Wabitsch, Alena
    Abstract: We examine the link between labour market developments and new technologies such as artificial intelligence (AI) and software in 16 European countries over the period 2011-2019. Using data for occupations at the 3-digit level in Europe, we find that on average employment shares have increased in occupations more exposed to AI. This is particularly the case for occupations with a relatively higher proportion of younger and skilled workers. This evidence is in line with the Skill Biased Technological Change theory. While there exists heterogeneity across countries, only very few countries show a decline in employment shares of occupations more exposed to AI-enabled automation. Country heterogeneity for this result seems to be linked to the pace of technology diffusion and education, but also to the level of product market regulation (competition) and employment protection laws. In contrast to the findings for employment, we find little evidence for a relationship between wages and potential exposures to new technologies.
    Keywords: artificial intelligence, employment, occupations, skills
    JEL: J23 O33
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:ecb:ecbwps:20232831&r=ain
  12. By: Wójcik-Czerniawska, Agnieszka
    Abstract: Agriculture plays a significant role in the economy, and automation in agriculture has become a major concern and a hot topic around the world. Food and employment demand are rising as a result of a rapidly expanding population, and new methods have enabled billions of people to meet their dietary needs while also creating employment opportunities. Artificial intelligence has brought enormous change to farming: this technology has protected crop yields from a variety of threats, including climate change, population growth, labour shortages, and concerns about global food security. This paper examines in detail the many uses of artificial intelligence in agriculture, such as weeding, spraying, and irrigation, carried out with the help of sensors and other tools built into machines and drones. These new technologies reduce the use of water, pesticides, herbicides, and labour while preserving soil fertility, boosting output and improving product quality. Robots and drones are being used for weeding in agriculture, and this paper compiles the findings of numerous researchers to give readers an overview of the current state of automation in agriculture. Soil-water sensing techniques and two automated weeding methods are discussed. The paper also discusses how drones can be used for spraying and crop monitoring, and the various methods they can employ.
    Keywords: Production Economics, Research and Development/Tech Change/Emerging Technologies
    Date: 2022–09–23
    URL: http://d.repec.org/n?u=RePEc:ags:haaewp:337138&r=ain
  13. By: Gabriel Osório de Barros (GEE - Gabinete de Estratégia e Estudos do Ministério da Economia e do Mar (Office for Strategy and Studies of the Portuguese Ministry of Economy and Maritime Affairs))
    Abstract: This GEE paper provides a comprehensive assessment of the potential and challenges of Artificial Intelligence (AI), with a particular focus on Portugal in the context of the EU. Grounded in both global and EU contexts, the study identifies applications and transformative influence of AI across various sectors such as education, health, tourism, manufacturing, financial services or e-government. It also delves into the ethical, social and legal implications of widespread AI adoption, including data privacy concerns and the need for human oversight. The paper examines the EU's current stance and policies on AI. Recognizing Portugal's particular opportunities, the study provides strategic recommendations for fostering AI education and training, promoting research and development, supporting AI startups and businesses, ensuring the ethical use of AI and encouraging international collaboration. The implications of these strategies extend beyond technological advancement, touching upon broader societal, economic and philosophical issues. The future of AI is also approached, acknowledging both its potential and the inherent challenges of regulating this rapidly evolving field. While this paper provides an analysis of AI within Portugal's context, it is subject to certain limitations that future research should address. As the AI landscape continues to evolve, so will the opportunities and challenges it presents, requiring continuous study and proactive policymaking. The study concludes with a reflection on humanity's role in an increasingly automated world, underscoring the importance of balancing AI integration with the preservation of human values. As the architects of AI, mankind carries the responsibility to guide its path and its impact on our existence. We urge the application of this power with prudence, foresight and empathy, to envision a future where humans and AI may not only coexist but prosper together. Finally, the future of AI is not merely a technological evolution but a chapter in humanity's journey, echoing our choices. It is a future we create, a narrative we pen and a legacy we leave.
    Keywords: Artificial Intelligence, Digital Economy
    JEL: K20 L86 O33 Q55
    Date: 2023–07
    URL: http://d.repec.org/n?u=RePEc:mde:wpaper:177&r=ain

This nep-ain issue is ©2023 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.