nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024‒03‒18
ten papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. LLM Voting: Human Choices and AI Collective Decision Making By Joshua C. Yang; Marcin Korecki; Damian Dailisan; Carina I. Hausladen; Dirk Helbing
  2. Rationality Report Cards: Assessing the Economic Rationality of Large Language Models By Narun Raman; Taylor Lundy; Samuel Amouyal; Yoav Levine; Kevin Leyton-Brown; Moshe Tennenholtz
  3. LLM-driven Imitation of Subrational Behavior: Illusion or Reality? By Andrea Coletta; Kshama Dwarakanath; Penghang Liu; Svitlana Vyetrenko; Tucker Balch
  4. ABIDES-Economist: Agent-Based Simulation of Economic Systems with Learning Agents By Kshama Dwarakanath; Svitlana Vyetrenko; Peyman Tavallali; Tucker Balch
  5. Artificial intelligence and the transformation of higher education institutions By Evangelos Katsamakas; Oleg V. Pavlov; Ryan Saklad
  6. Student Reactions to AI-Replicant Professor in an Econ101 Teaching Video By Rosa-García, Alfonso
  7. A Hormetic Approach to the Value-Loading Problem: Preventing the Paperclip Apocalypse? By Nathan I. N. Henry; Mangor Pedersen; Matt Williams; Jamin L. B. Martin; Liesje Donkin
  8. Explainable Automated Machine Learning for Credit Decisions: Enhancing Human Artificial Intelligence Collaboration in Financial Engineering By Marc Schmitt
  9. ChatGPT and Corporate Policies By Manish Jha; Jialin Qian; Michael Weber; Baozhong Yang
  10. DiffsFormer: A Diffusion Transformer on Stock Factor Augmentation By Yuan Gao; Haokun Chen; Xiang Wang; Zhicai Wang; Xue Wang; Jinyang Gao; Bolin Ding

  1. By: Joshua C. Yang; Marcin Korecki; Damian Dailisan; Carina I. Hausladen; Dirk Helbing
    Abstract: This paper investigates the voting behaviors of Large Language Models (LLMs), particularly OpenAI's GPT-4 and LLaMA 2, and their alignment with human voting patterns. Our approach included a human voting experiment to establish a baseline for human preferences and a parallel experiment with LLM agents. The study focused on both collective outcomes and individual preferences, revealing differences in decision-making and inherent biases between humans and LLMs. We observed a trade-off between preference diversity and alignment in LLMs, which tend towards more uniform choices compared with the diverse preferences of human voters. This finding indicates that LLMs could lead to more homogenized collective outcomes when used in voting assistance, underscoring the need for cautious integration of LLMs into democratic processes.
    Date: 2024–01
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.01766&r=ain
  2. By: Narun Raman; Taylor Lundy; Samuel Amouyal; Yoav Levine; Kevin Leyton-Brown; Moshe Tennenholtz
    Abstract: There is increasing interest in using LLMs as decision-making "agents." Doing so involves many degrees of freedom: which model should be used; how should it be prompted; should it be asked to introspect, conduct chain-of-thought reasoning, etc.? Settling these questions -- and more broadly, determining whether an LLM agent is reliable enough to be trusted -- requires a methodology for assessing such an agent's economic rationality. In this paper, we provide one. We begin by surveying the economic literature on rational decision making, taxonomizing a large set of fine-grained "elements" that an agent should exhibit, along with dependencies between them. We then propose a benchmark distribution that quantitatively scores an LLM's performance on these elements and, combined with a user-provided rubric, produces a "rationality report card." Finally, we describe the results of a large-scale empirical experiment with 14 different LLMs, characterizing both the current state of the art and the impact of different model sizes on models' ability to exhibit rational behavior.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.09552&r=ain
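As a toy illustration of how rubric-weighted aggregation of element scores could produce a single report-card grade, here is a sketch in Python; the element names, scores, and grade cutoffs below are invented for illustration and are not from the paper:

```python
# Hypothetical element scores for one model (fraction of test items
# answered rationally); names and values are illustrative only.
element_scores = {
    "completeness_of_preferences": 0.92,
    "transitivity_of_preferences": 0.85,
    "expected_utility_maximization": 0.61,
    "bayesian_updating": 0.48,
}

def report_card(scores, rubric):
    """Aggregate fine-grained element scores into one grade using
    user-provided rubric weights (weights need not sum to 1)."""
    total_weight = sum(rubric.values())
    overall = sum(rubric[e] * scores[e] for e in rubric) / total_weight
    cutoffs = [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")]
    letter = next((g for cut, g in cutoffs if overall >= cut), "F")
    return overall, letter
```

A user who weights all elements equally gets one grade; a user whose application depends heavily on, say, Bayesian updating can up-weight that element and receive a different, task-appropriate grade for the same model.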
  3. By: Andrea Coletta; Kshama Dwarakanath; Penghang Liu; Svitlana Vyetrenko; Tucker Balch
    Abstract: Modeling subrational agents, such as humans or economic households, is inherently challenging due to the difficulty of calibrating reinforcement learning models or collecting data that involves human subjects. Existing work highlights the ability of Large Language Models (LLMs) to address complex reasoning tasks and mimic human communication, while simulations using LLMs as agents show emergent social behaviors, potentially improving our comprehension of human conduct. In this paper, we investigate the use of LLMs to generate synthetic human demonstrations, which are then used to learn subrational agent policies through Imitation Learning. We assume that LLMs can serve as implicit computational models of humans, and propose a framework that uses synthetic demonstrations derived from LLMs to model subrational behaviors characteristic of humans (e.g., myopic behavior or risk aversion). We experimentally evaluate the ability of our framework to model subrationality in four simple scenarios, including the well-researched ultimatum game and marshmallow experiment. To gain confidence in our framework, we replicate well-established findings from prior human studies associated with these scenarios. We conclude by discussing the potential benefits, challenges, and limitations of our framework.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.08755&r=ain
  4. By: Kshama Dwarakanath; Svitlana Vyetrenko; Peyman Tavallali; Tucker Balch
    Abstract: We introduce a multi-agent simulator for economic systems composed of heterogeneous Household and Firm agents alongside Central Bank and Government agents, all of which can be subjected to exogenous, stochastic shocks. The interactions between agents define the production and consumption of goods in the economy alongside the flow of money. Each agent can be designed to act according to fixed, rule-based strategies or to learn its strategy through interactions with others in the simulator. We ground our simulator by choosing agent heterogeneity parameters based on the economic literature, while designing agents' action spaces in accordance with real data from the United States. Our simulator facilitates the use of reinforcement learning strategies for the agents via an OpenAI Gym-style environment definition for the economic system. We demonstrate the utility of our simulator by simulating and analyzing two hypothetical (yet interesting) economic scenarios. The first investigates the impact of heterogeneous household skills on households' learned preferences to work at different firms. The second examines the impact of a positive production shock to one of two firms on its pricing strategy relative to the other firm. We hope that our platform sets the stage for subsequent research at the intersection of artificial intelligence and economics.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.09563&r=ain
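To make the Gym-style interface concrete, here is a minimal single-household environment sketch in Python; the dynamics, reward function, and parameters are illustrative assumptions, not the ABIDES-Economist implementation:

```python
import math
import random

class HouseholdEconomyEnv:
    """A minimal Gym-style sketch of one household agent: it chooses
    hours worked each step, earns a skill- and shock-dependent wage,
    and receives log-consumption utility minus a quadratic labor cost.
    Illustrative only -- not the ABIDES-Economist API."""

    def __init__(self, skill=1.0, wage=10.0, horizon=40, seed=0):
        self.skill, self.wage, self.horizon = skill, wage, horizon
        self.rng = random.Random(seed)
        self.savings, self.t = 0.0, 0

    def reset(self):
        self.savings, self.t = 0.0, 0
        return (self.savings, self.t)

    def step(self, hours_worked):
        shock = self.rng.uniform(0.9, 1.1)            # exogenous stochastic shock
        income = self.skill * self.wage * hours_worked * shock
        consumption = 0.5 * (self.savings + income)   # consume half of wealth
        self.savings += income - consumption
        reward = math.log(1.0 + consumption) - 0.01 * hours_worked ** 2
        self.t += 1
        done = self.t >= self.horizon
        return (self.savings, self.t), reward, done, {}
```

Because the class exposes the standard reset/step loop, any off-the-shelf reinforcement learning agent written against the Gym interface can be trained on it unchanged.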
  5. By: Evangelos Katsamakas; Oleg V. Pavlov; Ryan Saklad
    Abstract: Artificial intelligence (AI) advances and the rapid adoption of generative AI tools like ChatGPT present new opportunities and challenges for higher education. While substantial literature discusses AI in higher education, there is a lack of a systemic approach that captures a holistic view of the AI transformation of higher education institutions (HEIs). To fill this gap, this article, taking a complex systems approach, develops a causal loop diagram (CLD) to map the causal feedback mechanisms of AI transformation in a typical HEI. Our model accounts for the forces that drive the AI transformation and the consequences of the AI transformation on value creation in a typical HEI. The article identifies and analyzes several reinforcing and balancing feedback loops, showing how, motivated by AI technology advances, the HEI invests in AI to improve student learning, research, and administration. The HEI must take measures to deal with academic integrity problems and adapt to changes in available jobs due to AI, emphasizing AI-complementary skills for its students. However, HEIs face a competitive threat and several policy traps that may lead to decline. HEI leaders need to become systems thinkers to manage the complexity of the AI transformation and benefit from the AI feedback loops while avoiding the associated pitfalls. We also discuss long-term scenarios, the notion of HEIs influencing the direction of AI, and directions for future research on AI transformation.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.08143&r=ain
  6. By: Rosa-García, Alfonso
    Abstract: This study explores student responses to AI-generated educational content, specifically a teaching video delivered by an AI replicant of their professor. Using ChatGPT-4 for scripting and HeyGen technology for avatar creation, the research investigates whether students' awareness of the AI's involvement influences their perception of the content's utility. With 97 participants from first-year economics and business programs, the findings reveal a significant difference in valuation between students informed of the AI origin and those who were not, with the former group valuing the content less. This indicates a bias against AI-generated materials based on their origin. The paper discusses the implications of these findings for the adoption of AI in educational settings, highlighting the need to address student biases and ethical considerations in the deployment of AI-generated educational materials. This research contributes to the ongoing debate on the integration of AI tools in education and their potential to enhance learning experiences.
    Keywords: AI-Generated Content; Virtual Avatars; Student Perceptions; Technology Adoption
    JEL: I23 O33
    Date: 2024–02–11
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:120135&r=ain
  7. By: Nathan I. N. Henry; Mangor Pedersen; Matt Williams; Jamin L. B. Martin; Liesje Donkin
    Abstract: The value-loading problem is a significant challenge for researchers aiming to create artificial intelligence (AI) systems that align with human values and preferences. This problem requires a method to define and regulate safe and optimal limits of AI behaviors. In this work, we propose HALO (Hormetic ALignment via Opponent processes), a regulatory paradigm that uses hormetic analysis to regulate the behavioral patterns of AI. Behavioral hormesis is a phenomenon where low frequencies of a behavior have beneficial effects, while high frequencies are harmful. By modeling behaviors as allostatic opponent processes, we can use either Behavioral Frequency Response Analysis (BFRA) or Behavioral Count Response Analysis (BCRA) to quantify the hormetic limits of repeatable behaviors. We demonstrate how HALO can solve the 'paperclip maximizer' scenario, a thought experiment where an unregulated AI tasked with making paperclips could end up converting all matter in the universe into paperclips. Our approach may be used to help create an evolving database of 'values' based on the hedonic calculus of repeatable behaviors with decreasing marginal utility. This positions HALO as a promising solution for the value-loading problem, which involves embedding human-aligned values into an AI system, and the weak-to-strong generalization problem, which explores whether weak models can supervise stronger models as they become more intelligent. Hence, HALO opens several research avenues that may lead to the development of a computational value system that allows an AI algorithm to learn whether the decisions it makes are right or wrong.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.07462&r=ain
  8. By: Marc Schmitt
    Abstract: This paper explores the integration of Explainable Automated Machine Learning (AutoML) in the realm of financial engineering, specifically focusing on its application in credit decision-making. The rapid evolution of Artificial Intelligence (AI) in finance has necessitated a balance between sophisticated algorithmic decision-making and the need for transparency in these systems. The focus is on how AutoML can streamline the development of robust machine learning models for credit scoring, while Explainable AI (XAI) methods, particularly SHapley Additive exPlanations (SHAP), provide insights into the models' decision-making processes. This study demonstrates how the combination of AutoML and XAI not only enhances the efficiency and accuracy of credit decisions but also fosters trust and collaboration between humans and AI systems. The findings underscore the potential of explainable AutoML in improving the transparency and accountability of AI-driven financial decisions, aligning with regulatory requirements and ethical considerations.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.03806&r=ain
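The Shapley-value idea behind SHAP can be illustrated with a from-scratch computation on a toy credit scorer; the paper applies the shap library to full AutoML pipelines, and the scorer, features, and baseline below are hypothetical:

```python
import math
from itertools import permutations

# Hypothetical toy credit scorer; weights and features are illustrative,
# not the models or data used in the paper.
def credit_score(income, debt_ratio, late_payments):
    score = 600 + 0.001 * income - 200 * debt_ratio - 25 * late_payments
    if debt_ratio > 0.4 and late_payments > 2:    # interaction penalty
        score -= 50
    return score

# Reference applicant against which contributions are measured.
BASELINE = {"income": 40_000, "debt_ratio": 0.3, "late_payments": 1}

def shapley_values(applicant):
    """Exact Shapley values: average each feature's marginal contribution
    to the score over all feature orderings (tractable for 3 features;
    the shap library approximates this for real models)."""
    feats = list(applicant)
    phi = dict.fromkeys(feats, 0.0)
    for order in permutations(feats):
        current = dict(BASELINE)
        prev = credit_score(**current)
        for f in order:
            current[f] = applicant[f]   # switch feature to applicant's value
            new = credit_score(**current)
            phi[f] += new - prev
            prev = new
    n_orderings = math.factorial(len(feats))
    return {f: v / n_orderings for f, v in phi.items()}
```

By construction the attributions satisfy the efficiency property: they sum exactly to the difference between the applicant's score and the baseline score, which is what makes them usable as a per-decision explanation for a loan officer.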
  9. By: Manish Jha; Jialin Qian; Michael Weber; Baozhong Yang
    Abstract: We create a firm-level ChatGPT investment score, based on conference calls, that measures managers' anticipated changes in capital expenditures. We validate the score with interpretable textual content and its strong correlation with CFO survey responses. The investment score predicts future capital expenditure for up to nine quarters, controlling for Tobin's q and other determinants, implying the investment score provides incremental information about firms' future investment opportunities. The investment score also separately forecasts future total, intangible, and R&D investments. High-investment-score firms experience significant negative future abnormal returns. We demonstrate ChatGPT's applicability to measure other policies, such as dividends and employment.
    JEL: C81 E22 G14 G31 G32 O33
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:32161&r=ain
  10. By: Yuan Gao; Haokun Chen; Xiang Wang; Zhicai Wang; Xue Wang; Jinyang Gao; Bolin Ding
    Abstract: Machine learning models have demonstrated remarkable efficacy and efficiency in a wide range of stock forecasting tasks. However, the inherent challenges of data scarcity, including low signal-to-noise ratio (SNR) and data homogeneity, pose significant obstacles to accurate forecasting. To address this issue, we propose a novel approach that utilizes artificial intelligence-generated samples (AIGS) to enhance the training procedure. In our work, we introduce a Diffusion Model with a Transformer architecture (DiffsFormer) to generate stock factors. DiffsFormer is initially trained on a large-scale source domain, incorporating conditional guidance so as to capture the global joint distribution. When presented with a specific downstream task, we employ DiffsFormer to augment the training procedure by editing existing samples. This editing step allows us to control the strength of the editing process, determining the extent to which the generated data deviates from the target domain. To evaluate the effectiveness of DiffsFormer-augmented training, we conduct experiments on the CSI300 and CSI800 datasets, employing eight commonly used machine learning models. The proposed method achieves relative improvements of 7.2% and 27.8% in annualized return ratio for the respective datasets. Furthermore, we perform extensive experiments to gain insights into the functionality of DiffsFormer and its constituent components, elucidating how they address the challenges of data scarcity and enhance overall model performance. Our research demonstrates the efficacy of leveraging AIGS and the DiffsFormer architecture to mitigate data scarcity in stock forecasting tasks.
    Date: 2024–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2402.06656&r=ain
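The editing step can be sketched as partial forward diffusion: the editing strength sets how far a factor vector is noised before (in the paper) a trained Transformer denoises it back. The linear beta schedule below is a common default assumption, and the reverse denoising step is omitted here:

```python
import math
import random

def edit_sample(x, edit_strength, num_steps=1000, seed=0):
    """Sketch of diffusion-based sample editing: diffuse a stock-factor
    vector x part-way, computing
        x_t = sqrt(alpha_bar_t) * x + sqrt(1 - alpha_bar_t) * eps,
    where edit_strength in [0, 1] picks the timestep t. A trained
    denoiser (the paper's Transformer) would then map x_t back toward
    the data manifold; that reverse step is not shown."""
    rng = random.Random(seed)
    t = int(edit_strength * num_steps)
    # Linear beta schedule from 1e-4 to 0.02, a common default assumption.
    betas = [1e-4 + (0.02 - 1e-4) * i / (num_steps - 1) for i in range(num_steps)]
    alpha_bar = 1.0
    for b in betas[:t]:
        alpha_bar *= 1.0 - b
    return [math.sqrt(alpha_bar) * xi
            + math.sqrt(1.0 - alpha_bar) * rng.gauss(0.0, 1.0)
            for xi in x]
```

With edit_strength = 0 the sample is returned unchanged, while edit_strength = 1 replaces it with (almost) pure noise; intermediate values control how far augmented samples may drift from the target domain, which is the knob the abstract describes.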

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.