nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024‒04‒01
sixteen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Reputational Algorithm Aversion By Gregory Weitzner
  2. Do people rely on ChatGPT more than their peers to detect fake news? By Yuhao Fu; Nobuyuki Hanaki
  3. How Will AI Steal Our Elections? By Yu, Chen
  4. Can AI Bridge the Gender Gap in Competitiveness? By Mourelatos, Evangelos; Zervas, Panagiotis; Lagios, Dimitris; Tzimas, Giannis
  5. Will Artificial Intelligence Get in the Way of Achieving Gender Equality? By Carvajal, Daniel; Franco, Catalina; Isaksson, Siri
  6. Bias in Generative AI By Mi Zhou; Vibhanshu Abhishek; Timothy Derdenger; Jaymo Kim; Kannan Srinivasan
  7. Experimenting with Generative AI: Does ChatGPT Really Increase Everyone's Productivity? By Voraprapa Nakavachara; Tanapong Potipiti; Thanee Chaiwat
  8. Exposure to generative artificial intelligence in the European labour market By Laura Nurski; Nina Ruer
  9. Optimal Liability Rules for Combined Human-AI Health Care Decisions By Bertrand Chopard; Olivier Musy
  10. Navigating autonomy and control in human-AI delegation: User responses to technology- versus user-invoked task allocation By Adam, Martin; Diebel, Christopher; Goutier, Marc; Benlian, Alexander
  11. Generative AI and Copyright: A Dynamic Perspective By S. Alex Yang; Angela Huyue Zhang
  12. Artificial Intelligence and Intellectual Property : An Economic Perspective By Alexander Cuntz; Carsten Fink; Hansueli Stamm
  13. Ploutos: Towards interpretable stock movement prediction with financial large language model By Hanshuang Tong; Jun Li; Ning Wu; Ming Gong; Dongmei Zhang; Qi Zhang
  14. Limit Order Book Simulations: A Review By Konark Jain; Nick Firoozye; Jonathan Kochems; Philip Treleaven
  15. A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist By Wentao Zhang; Lingxuan Zhao; Haochong Xia; Shuo Sun; Jiaze Sun; Molei Qin; Xinyi Li; Yuqing Zhao; Yilei Zhao; Xinyu Cai; Longtao Zheng; Xinrun Wang; Bo An
  16. The Random Forest Model for Analyzing and Forecasting the US Stock Market in the Context of Smart Finance By Jiajian Zheng; Duan Xin; Qishuo Cheng; Miao Tian; Le Yang

  1. By: Gregory Weitzner
    Abstract: People are often reluctant to incorporate information produced by algorithms into their decisions, a phenomenon called "algorithm aversion". This paper shows how algorithm aversion arises when the choice to follow an algorithm conveys information about a human's ability. I develop a model in which workers make forecasts of a random outcome based on their own private information and an algorithm's signal. Low-skill workers receive worse information than the algorithm and hence should always follow the algorithm's signal, while high-skill workers receive better information than the algorithm and should sometimes override it. However, due to reputational concerns, low-skill workers inefficiently override the algorithm to increase the likelihood they are perceived as high-skill. The model provides a fully rational microfoundation for algorithm aversion that aligns with the broad concern that AI systems will displace many types of workers.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.15418&r=ain
  2. By: Yuhao Fu; Nobuyuki Hanaki
    Abstract: In the era of rapidly advancing artificial intelligence (AI), understanding to what extent people rely on generative AI products (AI tools), such as ChatGPT, is crucial. This study experimentally investigates whether people rely more on AI tools than their human peers in assessing the authenticity of misinformation. We quantify participants’ degree of reliance using the weight of reference (WOR) and decompose it into two stages using the activation-integration model. Our results indicate that participants exhibit a higher reliance on ChatGPT than their peers, influenced significantly by the quality of the reference and their prior beliefs. The proportion of real parts did not impact the WOR. In addition, we found that the reference source affects both the activation and integration stages, but the quality of reference only influences the second stage.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:dpr:wpaper:1233&r=ain
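The abstract does not define the weight of reference (WOR). If it parallels the standard weight-of-advice measure from the advice-taking literature (an assumption of this sketch, not the paper's definition), it would look like:

```python
def weight_of_reference(prior, posterior, reference):
    """Hypothetical WOR: the share of the distance from one's prior
    judgment to the reference (a peer's or ChatGPT's answer) that the
    final judgment covers. 0 = reference ignored, 1 = fully adopted.
    Modeled on the weight-of-advice measure; NOT the paper's definition."""
    return (posterior - prior) / (reference - prior)

# A participant whose prior estimate is 0.5 and who moves to 0.7 after
# seeing a reference of 0.9 has covered half the distance to the reference.
print(weight_of_reference(0.5, 0.7, 0.9))
```

Under this reading, a higher WOR for ChatGPT than for peer references is what the abstract reports as "higher reliance on ChatGPT".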
  3. By: Yu, Chen
    Abstract: In the evolving landscape of digital technology, artificial intelligence (AI) has emerged as a transformative force with the potential to redefine the dynamics of political campaigns and elections. While AI offers unparalleled opportunities for enhancing the efficiency and effectiveness of political campaigning through data analysis, voter targeting, and personalized messaging, it also poses significant threats to the integrity of democratic processes. This article delves into the multifaceted role of AI in political campaigns, highlighting both its beneficial applications and its capacity for misuse in spreading misinformation, manipulating voter opinions, and exacerbating cybersecurity vulnerabilities. It further explores the challenges of AI-generated disinformation, the risks of cyber attacks on election infrastructure, and the ethical concerns surrounding voter manipulation through psychological profiling. Against the backdrop of these challenges, the article examines the current legal and regulatory landscape, identifying gaps that allow for the unchecked use of AI in political processes and discussing international perspectives on regulating AI in elections. Finally, it proposes a comprehensive framework for mitigating AI's negative impacts, emphasizing the importance of enhancing transparency, strengthening cybersecurity, fostering public education, and promoting international cooperation. By confronting the dual-edged nature of AI in elections, this article seeks to chart a path towards resilient democracy in the age of AI.
    Date: 2024–02–28
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:un7ev&r=ain
  4. By: Mourelatos, Evangelos; Zervas, Panagiotis; Lagios, Dimitris; Tzimas, Giannis
    Abstract: This paper employs an online real-effort experiment to investigate gender disparities in the selection of individuals into competitive working environments when assisted by artificial intelligence (AI). In contrast to previous research suggesting greater competitiveness among men, our findings reveal that both genders are equally likely to compete in the presence of AI assistance. Surprisingly, the introduction of AI eliminates an 11-percentage-point gender gap between men and women in our competitive scenario. We also discuss how the gender gap in tournament entry appears to be contingent on ChatGPT selection rather than being omnipresent. Notably, 47% of female participants independently chose to utilize ChatGPT, while 55% of males did the same. However, when ChatGPT was offered by the experimenter-employer, more than 53% of female participants opted for AI assistance, compared to 57% of males, in a gender-neutral online task. This shift prompts a reevaluation of gender gap trends in competition entry rates, particularly as women increasingly embrace generative AI tools, resulting in a boost in their confidence. We rule out differences in risk aversion. The discussion suggests that these behavioral patterns may have significant policy implications, as the introduction of generative AI tools in the workplace can be leveraged to rectify gender disparities.
    Keywords: Gender differences, ChatGPT, Competition, Economic experiments
    JEL: C90 J16 J71
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:zbw:glodps:1404&r=ain
  5. By: Carvajal, Daniel (Dept. of Economics, Norwegian School of Economics and Business Administration); Franco, Catalina (Center for Applied Research (SNF)); Isaksson, Siri (Dept. of Economics, Norwegian School of Economics and Business Administration)
    Abstract: The promise of generative AI to increase human productivity relies on developing skills to become proficient at it. There is reason to suspect that women and men use AI tools differently, which could result in productivity and payoff gaps in a labor market increasingly demanding knowledge in AI. Thus, it is important to understand whether there are gender differences in AI usage among current students. We conduct a survey at the Norwegian School of Economics collecting data on use of and attitudes towards ChatGPT, a measure of AI proficiency, and responses to policies allowing or forbidding ChatGPT use. Three key findings emerge: first, female students report significantly lower use of ChatGPT than their male counterparts. Second, male students are more skilled at writing successful prompts, even after accounting for higher ChatGPT usage. Third, imposing university bans on ChatGPT use widens the gender gap in intended use substantially. We provide insights into potential factors influencing the AI adoption gender gap and highlight the role of appropriate encouragement and policies in allowing female students to benefit from AI usage, thereby mitigating potential impacts on later labor market outcomes.
    Keywords: Artificial intelligence; ChatGPT; gender; education; technology adoption
    JEL: I24 J16 J24 O33
    Date: 2024–03–14
    URL: http://d.repec.org/n?u=RePEc:hhs:nhheco:2024_003&r=ain
  6. By: Mi Zhou; Vibhanshu Abhishek; Timothy Derdenger; Jaymo Kim; Kannan Srinivasan
    Abstract: This study analyzed images generated by three popular generative artificial intelligence (AI) tools - Midjourney, Stable Diffusion, and DALL-E 2 - representing various occupations to investigate potential bias in AI generators. Our analysis revealed two overarching areas of concern in these AI generators: (1) systematic gender and racial biases, and (2) subtle biases in facial expressions and appearances. Firstly, we found that all three AI generators exhibited bias against women and African Americans. Moreover, the gender and racial biases uncovered in our analysis were even more pronounced than the status quo reflected in labor force statistics or Google Images, intensifying the harmful biases we are actively striving to rectify in our society. Secondly, our study uncovered more nuanced prejudices in the portrayal of emotions and appearances. For example, women were depicted as younger, with more smiles and happiness, while men were depicted as older, with more neutral expressions and anger, posing a risk that generative AI models may unintentionally depict women as more submissive and less competent than men. Such nuanced biases, by their less overt nature, might be more problematic, as they can permeate perceptions unconsciously and may be more difficult to rectify. Although the extent of bias varied depending on the model, the direction of bias remained consistent in both commercial and open-source AI generators. As these tools become commonplace, our study highlights the urgency to identify and mitigate various biases in generative AI, reinforcing the commitment to ensuring that AI technologies benefit all of humanity for a more inclusive future.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2403.02726&r=ain
  7. By: Voraprapa Nakavachara; Tanapong Potipiti; Thanee Chaiwat
    Abstract: Generative AI technologies such as ChatGPT, Gemini, and MidJourney have made remarkable progress in recent years. Recent literature has documented ChatGPT's positive impact on productivity in areas where it has strong expertise, attributable to extensive training datasets, such as the English language and Python/SQL programming. However, there is still limited literature regarding ChatGPT's performance in areas where its capabilities could still be further enhanced. This paper aims to fill this gap. We conducted an experiment in which economics students were asked to perform writing analysis tasks in a non-English language (specifically, Thai) and math & data analysis tasks using a less frequently used programming package (specifically, Stata). The findings suggest that, on average, participants performed better using ChatGPT in terms of scores and time taken to complete the tasks. However, a detailed examination reveals that 34% of participants saw no improvement in writing analysis tasks, and 42% did not improve in math & data analysis tasks when employing ChatGPT. Further investigation indicated that higher-ability students, as proxied by their econometrics grades, were the ones who performed worse in writing analysis tasks when using ChatGPT. We also found evidence that students with better digital skills performed better with ChatGPT. This research provides insights on the impact of generative AI. Thus, stakeholders can make informed decisions to implement appropriate policy frameworks or redesign educational systems. It also highlights the critical role of human skills in addressing and complementing the limitations of technology.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2403.01770&r=ain
  8. By: Laura Nurski; Nina Ruer
    Abstract: In this paper, we take a dual approach to assess the impact of GenAI on the European labour market.
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:bre:wpaper:node_9794&r=ain
  9. By: Bertrand Chopard (Université Paris Cité, LIRAES, F-75006 Paris, France.); Olivier Musy (Université Paris Cité, LIRAES, F-75006 Paris, France.)
    Abstract: The integration of AI into healthcare redefines medical liability, turning decision-making into a collaborative process between a technological tool and its user. When harm is caused, both users and manufacturers of AI may be liable. The judicial system has yet to address claims of this nature. We build a model with bilateral care to study which combinations of liability rules are socially efficient. Both agents could face strict liability, be subject to negligence rules, or face hybrid regimes in which one agent faces a fault-based liability regime while the other operates under strict liability. We highlight two crucial elements: (i) how the payment of compensation is shared between users and producers, and (ii) whether their care choices are complements or substitutes. The latest AI Liability Directive from the European Parliament advocates, in the field of medicine, a strict liability regime for producers and a fault-based liability regime for users. We show that this regime is socially efficient; a novel framework is not necessary.
    Keywords: medical liability; joint liability; artificial intelligence; bilateral care; European regulation
    JEL: I11 L13 K13 K41
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:afd:wpaper:2404&r=ain
  10. By: Adam, Martin; Diebel, Christopher; Goutier, Marc; Benlian, Alexander
    Date: 2024
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:143282&r=ain
  11. By: S. Alex Yang; Angela Huyue Zhang
    Abstract: The rapid advancement of generative AI is poised to disrupt the creative industry. Amidst the immense excitement for this new technology, its future development and applications in the creative industry hinge crucially upon two copyright issues: 1) the compensation to creators whose content has been used to train generative AI models (the fair use standard); and 2) the eligibility of AI-generated content for copyright protection (AI-copyrightability). While both issues have ignited heated debates among academics and practitioners, most analysis has focused on the challenges they pose to existing copyright doctrines. In this paper, we aim to better understand the economic implications of these two regulatory issues and their interactions. By constructing a dynamic model with endogenous content creation and AI model development, we unravel the impacts of the fair use standard and AI-copyrightability on AI development, AI company profit, creators' income, and consumer welfare, and how these impacts are influenced by various economic and operational factors. For example, while generous fair use (using data for AI training without compensating the creator) benefits all parties when abundant training data exists, it can hurt creators and consumers when such data is scarce. Similarly, stronger AI-copyrightability (AI content enjoying more copyright protection) could hinder AI development and reduce social welfare. Our analysis also highlights the complex interplay between these two copyright issues. For instance, when existing training data is scarce, generous fair use may be preferred only when AI-copyrightability is weak. Our findings underscore the need for policymakers to embrace a dynamic, context-specific approach in making regulatory decisions and provide insights for business leaders navigating the complexities of the global regulatory environment.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.17801&r=ain
  12. By: Alexander Cuntz; Carsten Fink; Hansueli Stamm
    Abstract: The emergence of Artificial Intelligence (AI) has profound implications for intellectual property (IP) frameworks. While much of the discussion so far has focused on the legal implications, we focus on the economic dimension. We dissect AI's role as both a facilitator and disruptor of innovation and creativity. Recalling economic principles and reviewing relevant literature, we explore the evolving landscape of AI innovation incentives and the challenges it poses to existing IP frameworks. From patentability dilemmas to copyright conundrums, we find that there is a delicate balance between fostering innovation and safeguarding societal interests amidst rapid technological progress. We also point to areas where future economic research could offer valuable insights to policymakers.
    Keywords: Artificial Intelligence, Intellectual Property, Patents, Copyright
    Date: 2024–03
    URL: http://d.repec.org/n?u=RePEc:wip:wpaper:77&r=ain
  13. By: Hanshuang Tong; Jun Li; Ning Wu; Ming Gong; Dongmei Zhang; Qi Zhang
    Abstract: Recent advancements in large language models (LLMs) have opened new pathways for many domains. However, the full potential of LLMs in financial investments remains largely untapped. There are two main challenges for typical deep learning-based methods for quantitative finance. First, they struggle to fuse textual and numerical information flexibly for stock movement prediction. Second, traditional methods lack clarity and interpretability, which impedes their application in scenarios where the justification for predictions is essential. To solve these challenges, we propose Ploutos, a novel financial LLM framework that consists of PloutosGen and PloutosGPT. PloutosGen contains multiple primary experts that can analyze different modal data, such as text and numbers, and provide quantitative strategies from different perspectives. PloutosGPT then combines their insights and predictions and generates interpretable rationales. To generate accurate and faithful rationales, the training strategy of PloutosGPT leverages a rearview-mirror prompting mechanism to guide GPT-4 to generate rationales, and a dynamic token weighting mechanism to finetune the LLM by increasing the weight of key tokens. Extensive experiments show our framework outperforms the state-of-the-art methods on both prediction accuracy and interpretability.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2403.00782&r=ain
  14. By: Konark Jain; Nick Firoozye; Jonathan Kochems; Philip Treleaven
    Abstract: Limit Order Books (LOBs) serve as a mechanism for buyers and sellers to interact with each other in the financial markets. Modelling and simulating LOBs is quite often necessary for calibrating and fine-tuning the automated trading strategies developed in algorithmic trading research. The recent AI revolution and availability of faster and cheaper compute power has enabled the modelling and simulations to grow richer and even use modern AI techniques. In this review we examine the various kinds of LOB simulation models present in the current state of the art. We provide a classification of the models on the basis of their methodology and provide an aggregate view of the popular stylized facts used in the literature to test the models. We additionally provide a focused study of price impact's presence in the models since it is one of the more crucial phenomena to model in algorithmic trading. Finally, we conduct a comparative analysis of various qualities of fits of these models and how they perform when tested against empirical data.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.17359&r=ain
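As a point of reference for the simulation models the review surveys, the core matching mechanism of a limit order book can be sketched in a few lines. This is a toy price-time-priority book, far simpler than any of the reviewed models, and all names are illustrative:

```python
# Toy limit order book with price-time priority matching.
# A minimal illustration only; the surveyed simulation models are far richer.
import heapq

class LimitOrderBook:
    def __init__(self):
        self.bids = []  # max-heap via negated price: (-price, seq, qty)
        self.asks = []  # min-heap: (price, seq, qty)
        self.seq = 0    # arrival order breaks ties at equal prices

    def submit(self, side, price, qty):
        """Match an incoming limit order, then rest any remainder."""
        book, other = ((self.bids, self.asks) if side == "buy"
                       else (self.asks, self.bids))
        trades = []
        while qty > 0 and other:
            best = other[0]
            best_price = -best[0] if side == "sell" else best[0]
            crosses = (price >= best_price if side == "buy"
                       else price <= best_price)
            if not crosses:
                break
            key, s, avail = heapq.heappop(other)
            fill = min(qty, avail)
            trades.append((best_price, fill))  # executes at resting price
            qty -= fill
            if avail > fill:  # partially filled resting order keeps its slot
                heapq.heappush(other, (key, s, avail - fill))
        if qty > 0:  # unmatched remainder rests in the book
            self.seq += 1
            key = -price if side == "buy" else price
            heapq.heappush(book, (key, self.seq, qty))
        return trades

lob = LimitOrderBook()
lob.submit("buy", 100.0, 10)          # rests on the bid side
print(lob.submit("sell", 99.0, 4))    # crosses: fills against the resting bid
```

Simulation models of the kinds classified in the review (point processes, agent-based models, deep generative models) essentially generate the stream of `submit` events (plus cancellations) that drives such a book.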
  15. By: Wentao Zhang; Lingxuan Zhao; Haochong Xia; Shuo Sun; Jiaze Sun; Molei Qin; Xinyi Li; Yuqing Zhao; Yilei Zhao; Xinyu Cai; Longtao Zheng; Xinrun Wang; Bo An
    Abstract: Financial trading is a crucial component of the markets, informed by a multimodal information landscape encompassing news, prices, and Kline charts, and encompasses diverse tasks such as quantitative trading and high-frequency trading with various assets. While advanced AI techniques like deep learning and reinforcement learning are extensively utilized in finance, their application in financial trading tasks often faces challenges due to inadequate handling of multimodal data and limited generalizability across various tasks. To address these challenges, we present FinAgent, a multimodal foundation agent with tool augmentation for financial trading. FinAgent's market intelligence module processes a diverse range of data (numerical, textual, and visual) to accurately analyze the financial market. Its unique dual-level reflection module not only enables rapid adaptation to market dynamics but also incorporates a diversified memory retrieval system, enhancing the agent's ability to learn from historical data and improve decision-making processes. The agent's emphasis on reasoning for actions fosters trust in its financial decisions. Moreover, FinAgent integrates established trading strategies and expert insights, ensuring that its trading approaches are both data-driven and rooted in sound financial principles. With comprehensive experiments on 6 financial datasets, including stocks and crypto, FinAgent significantly outperforms 9 state-of-the-art baselines in terms of 6 financial metrics with over 36% average improvement on profit. Specifically, an 84.39% relative improvement (a 92.27% return) is achieved on one dataset. Notably, FinAgent is the first advanced multimodal foundation agent designed for financial trading tasks.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.18485&r=ain
  16. By: Jiajian Zheng; Duan Xin; Qishuo Cheng; Miao Tian; Le Yang
    Abstract: The stock market is a crucial component of the financial market, playing a vital role in wealth accumulation for investors, financing costs for listed companies, and the stable development of the national macroeconomy. Significant fluctuations in the stock market can damage the interests of stock investors and cause an imbalance in the industrial structure, which can interfere with the macro-level development of the national economy. The prediction of stock price trends is a popular research topic in academia. Predicting the three trends of stock prices (rising, sideways, and falling) can assist investors in making informed decisions about buying, holding, or selling stocks. Establishing an effective forecasting model for predicting these trends is of substantial practical importance. This paper evaluates the predictive performance of random forest models combined with artificial intelligence on a test set of four stocks using optimal parameters. The evaluation considers both predictive accuracy and time efficiency.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.17194&r=ain
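The three-way trend classification the abstract describes can be sketched with a random forest as follows. The features, labels, thresholds, and parameters here are illustrative choices on synthetic data, not those used by the authors:

```python
# Minimal sketch of three-way stock trend classification with a random
# forest. Synthetic data and arbitrary parameters; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic daily returns for one stock; a real study would use market data.
returns = rng.normal(0, 0.01, size=1000)

# Features: lagged returns over a 5-day window.
window = 5
X = np.lib.stride_tricks.sliding_window_view(returns[:-1], window)
next_ret = returns[window:]  # next-day return aligned with each window

# Label next-day movement: 0 = falling, 1 = sideways, 2 = rising.
band = 0.002  # width of the "sideways" band, an arbitrary choice here
y = np.where(next_ret > band, 2, np.where(next_ret < -band, 0, 1))

# Time-ordered split (no shuffling) to avoid look-ahead leakage.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, shuffle=False, test_size=0.2)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```

The paper's "optimal parameters" would correspond to tuning choices such as `n_estimators` and tree depth, which are fixed arbitrarily in this sketch.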

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.