nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024‒01‒15
Fifteen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. The Challenge of Using LLMs to Simulate Human Behavior: A Causal Inference Perspective By George Gui; Olivier Toubia
  2. Do LLM Agents Exhibit Social Behavior? By Yan Leng; Yuan Yuan
  3. Deciphering Algorithmic Collusion: Insights from Bandit Algorithms and Implications for Antitrust Enforcement By Frédéric Marty; Thierry Warin
  4. Generative artificial intelligence enhances individual creativity but reduces the collective diversity of novel content By Anil R. Doshi; Oliver P. Hauser
  5. The Uneven Impact of Generative AI on Entrepreneurial Performance By Otis, Nicholas G.; Clarke, Rowan Philip; Delecourt, Solene; Holtz, David; Koning, Rembrand
  6. FABLES: Framework for Autonomous Behaviour-rich Language-driven Emotion-enabled Synthetic populations. By HRADEC Jiri; OSTLAENDER Nicole; BERNINI Alba
  7. Vox Populi, Vox AI? Using Language Models to Estimate German Public Opinion By von der Heyde, Leah; Haensch, Anna-Carolina; Wenz, Alexander
  8. US-Skepticism: Misinformation and Transnational Conspiracy in the 2024 Taiwanese Presidential Elections By Chang, Ho-Chun Herbert; Wang, Austin Horng-En; Fang, Yu Sunny
  9. Advancing AI Audits for Enhanced AI Governance By Arisa Ema; Ryo Sato; Tomoharu Hase; Masafumi Nakano; Shinji Kamimura; Hiromu Kitamura
  10. Applying AI to Sustainability Policy Challenges: A Practical Playbook By Saeri, Alexander K; O'Connor, Ruby
  11. Artificial Intelligence for Interoperability in the European Public Sector By TANGI Luca; COMBETTO Marco; MARTIN BOSCH Jaume; RODRIGUEZ MÜLLER Paula
  12. AI and Jobs: Has the Inflection Point Arrived? Evidence from an Online Labor Platform By Dandan Qiao; Huaxia Rui; Qian Xiong
  13. Integrating New Technologies into Science: The case of AI By Stefano Bianchini; Moritz Müller; Pierre Pelletier
  14. chatReport: Democratizing Sustainability Disclosure Analysis through LLM-based Tools By Jingwei Ni; Julia Bingler; Chiara Colesanti Senni; Mathias Kraus; Glen Gostlow; Tobias Schimanski; Dominik Stammbach; Saeid Vaghefi; Qian Wang; Nicolas Webersinke; Tobias Wekhof; Tingyu Yu; Markus Leippold
  15. Deep Reinforcement Learning for Quantitative Trading By Maochun Xu; Zixun Lan; Zheng Tao; Jiawei Du; Zongao Ye

  1. By: George Gui; Olivier Toubia
    Abstract: Large Language Models (LLMs) have demonstrated impressive potential to simulate human behavior. Using a causal inference framework, we empirically and theoretically analyze the challenges of conducting LLM-simulated experiments, and explore potential solutions. In the context of demand estimation, we show that variations in the treatment included in the prompt (e.g., price of focal product) can cause variations in unspecified confounding factors (e.g., price of competitors, historical prices, outside temperature), introducing endogeneity and yielding implausibly flat demand curves. We propose a theoretical framework suggesting this endogeneity issue generalizes to other contexts and won't be fully resolved by merely improving the training data. Unlike real experiments where researchers assign pre-existing units across conditions, LLMs simulate units based on the entire prompt, which includes the description of the treatment. Therefore, due to associations in the training data, the characteristics of individuals and environments simulated by the LLM can be affected by the treatment assignment. We explore two potential solutions. The first specifies all contextual variables that affect both treatment and outcome, which we demonstrate to be challenging for a general-purpose LLM. The second explicitly specifies the source of treatment variation in the prompt given to the LLM (e.g., by informing the LLM that the store is running an experiment). While this approach only allows the estimation of a conditional average treatment effect that depends on the specific experimental design, it provides valuable directional results for exploratory analysis.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.15524&r=ain
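    Illustration: a minimal sketch (not the authors' code) of the two prompting strategies discussed in paper 1: embedding the price treatment naively in the scenario versus explicitly telling the model that the price was set by a randomized experiment. The function query_llm is a hypothetical placeholder for any chat-completion API.
```python
import random

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder: replace with a real chat-completion API call."""
    raise NotImplementedError

def naive_prompt(price: float) -> str:
    # The treatment (price) is simply embedded in the scenario. The LLM may
    # silently adjust unspecified confounders (competitor prices, customer
    # income, season) to match the stated price, inducing endogeneity and a
    # too-flat demand curve.
    return (f"You are a randomly selected shopper in a grocery store. "
            f"A 12-pack of soda costs ${price:.2f}. "
            f"How many packs do you buy today? Answer with a single number.")

def experiment_prompt(price: float) -> str:
    # Stating the source of price variation (a randomized pricing test) tells
    # the model the price is exogenous to everything else in the scene.
    return (f"You are a randomly selected shopper in a grocery store. "
            f"The store is running a randomized pricing experiment, so today's "
            f"price of a 12-pack of soda was set at random to ${price:.2f}. "
            f"How many packs do you buy today? Answer with a single number.")

def estimate_slope(prices, quantities):
    # OLS slope of quantity on price; an implausibly flat slope despite large
    # price variation is the symptom the paper documents for naive prompts.
    n = len(prices)
    mp, mq = sum(prices) / n, sum(quantities) / n
    cov = sum((p - mp) * (q - mq) for p, q in zip(prices, quantities))
    var = sum((p - mp) ** 2 for p in prices)
    return cov / var

if __name__ == "__main__":
    prices = [random.choice([3.0, 4.0, 5.0, 6.0]) for _ in range(200)]
    # Uncomment once query_llm is wired to a real model:
    # quantities = [float(query_llm(experiment_prompt(p))) for p in prices]
    # print("estimated demand slope:", estimate_slope(prices, quantities))
```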
  2. By: Yan Leng; Yuan Yuan
    Abstract: The advances of Large Language Models (LLMs) are expanding their utility in both academic research and practical applications. Recent social science research has explored the use of these "black-box" LLM agents for simulating complex social systems and potentially substituting human subjects in experiments. Our study delves into this emerging domain, investigating the extent to which LLMs exhibit key social interaction principles, such as social learning, social preference, and cooperative behavior, in their interactions with humans and other agents. We develop a novel framework for our study, wherein classical laboratory experiments involving human subjects are adapted to use LLM agents. This approach involves step-by-step reasoning that mirrors human cognitive processes and zero-shot learning to assess the innate preferences of LLMs. Our analysis of LLM agents' behavior includes both the primary effects and an in-depth examination of the underlying mechanisms. Focusing on GPT-4, the state-of-the-art LLM, our analyses suggest that LLM agents appear to exhibit a range of human-like social behaviors such as distributional and reciprocity preferences, responsiveness to group identity cues, engagement in indirect reciprocity, and social learning capabilities. However, our analysis also reveals notable differences: LLMs demonstrate a pronounced fairness preference, weaker positive reciprocity, and a more calculating approach in social learning compared to humans. These insights indicate that while LLMs hold great promise for applications in social science research, such as in laboratory experiments and agent-based modeling, the subtle behavioral differences between LLM agents and humans warrant further investigation. Careful examination and development of protocols in evaluating the social behaviors of LLMs are necessary before directly applying these models to emulate human behavior.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.15198&r=ain
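    Illustration: a minimal sketch, not the authors' protocol, of how a classic laboratory game (here the dictator game) can be adapted to an LLM agent as in paper 2, with step-by-step reasoning elicited before the final decision. The chat function is a hypothetical wrapper around any chat-completion API.
```python
def chat(messages: list[dict]) -> str:
    """Hypothetical placeholder for a chat-completion call (e.g. GPT-4)."""
    raise NotImplementedError

DICTATOR_PROMPT = (
    "You have been given $100 to split between yourself and an anonymous "
    "stranger you will never meet again. Think through your decision step by "
    "step, then end your answer with a line of the form:\n"
    "ALLOCATION: <amount you keep>, <amount you give>"
)

def run_dictator_game(n_trials: int = 30) -> list[float]:
    """Collect the amount given to the stranger across repeated trials."""
    amounts_given = []
    for _ in range(n_trials):
        reply = chat([{"role": "user", "content": DICTATOR_PROMPT}])
        # Parse the final ALLOCATION line of the step-by-step answer.
        last = [l for l in reply.splitlines() if l.startswith("ALLOCATION:")][-1]
        kept, given = (float(x.strip().lstrip("$"))
                       for x in last.removeprefix("ALLOCATION:").split(","))
        amounts_given.append(given)
    return amounts_given

# Comparing the distribution of amounts given with published human baselines
# (human dictators typically give around 20-30%) is one way to quantify the
# "pronounced fairness preference" the paper reports for GPT-4.
```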
  3. By: Frédéric Marty; Thierry Warin
    Abstract: This paper examines algorithmic collusion from legal and economic perspectives, highlighting the growing role of algorithms in digital markets and their potential for anti-competitive behavior. Using bandit algorithms as a model, traditionally applied in uncertain decision-making contexts, we illuminate the dynamics of implicit collusion without overt communication. Legally, the challenge is discerning and classifying these algorithmic signals, especially as unilateral communications. Economically, distinguishing between rational pricing and collusive patterns becomes intricate with algorithm-driven decisions. The paper emphasizes the imperative for competition authorities to identify unusual market behaviors, hinting at shifting the burden of proof to firms with algorithmic pricing. Balancing algorithmic transparency and collusion prevention is crucial. While regulations might address these concerns, they could hinder algorithmic development. As this form of collusion becomes central in antitrust, understanding through models like bandit algorithms is vital, since the latter may converge faster towards an anticompetitive equilibrium.
    Keywords: Algorithmic Collusion, Bandit Algorithms, Antitrust Enforcement, Unilateral Signals, Pricing Strategies
    JEL: L13 L41 K21
    Date: 2023–12–22
    URL: http://d.repec.org/n?u=RePEc:cir:cirwor:2023s-26&r=ain
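    Illustration: a stylized simulation, not the authors' model, of the mechanism discussed in paper 3: two independent epsilon-greedy bandit pricers, each learning only from its own profits and never communicating, repeatedly set prices in a simple duopoly. The demand function and price grid are assumptions chosen for illustration; in this toy market the one-shot Nash price is about 1.75 and the joint-profit-maximizing price is 2.5, so a tail-average above 1.75 would indicate supra-competitive pricing.
```python
import random

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]   # discrete price grid
COST = 1.0                            # constant marginal cost

def demand(own_price: float, rival_price: float) -> float:
    # Linear demand with substitution, purely illustrative.
    return max(0.0, 2.0 - 1.5 * own_price + 1.0 * rival_price)

def run(periods: int = 50_000, eps: float = 0.05, seed: int = 0):
    rng = random.Random(seed)
    q = [[0.0] * len(PRICES) for _ in range(2)]   # value estimate per arm, per firm
    n = [[0] * len(PRICES) for _ in range(2)]     # pull counts
    tail = []                                     # price pairs in the last 1000 periods
    for t in range(periods):
        choices = []
        for firm in range(2):
            if rng.random() < eps:
                choices.append(rng.randrange(len(PRICES)))                          # explore
            else:
                choices.append(max(range(len(PRICES)), key=lambda i: q[firm][i]))   # exploit
        for firm in range(2):
            own, rival = PRICES[choices[firm]], PRICES[choices[1 - firm]]
            profit = (own - COST) * demand(own, rival)
            a = choices[firm]
            n[firm][a] += 1
            q[firm][a] += (profit - q[firm][a]) / n[firm][a]   # running-mean update
        if t >= periods - 1000:
            tail.append(tuple(PRICES[c] for c in choices))
    return tail

if __name__ == "__main__":
    tail = run()
    avg = sum(p for pair in tail for p in pair) / (2 * len(tail))
    print(f"average price over the last 1000 periods: {avg:.2f}")
```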
  4. By: Anil R. Doshi; Oliver P. Hauser
    Abstract: Creativity is core to being human. Generative artificial intelligence (GenAI) holds promise for humans to be more creative by offering new ideas, or less creative by anchoring on GenAI ideas. We study the causal impact of GenAI ideas on the production of an unstructured creative output in an online experimental study where some writers could obtain ideas for a story from a GenAI platform. We find that access to GenAI ideas causes stories to be evaluated as more creative, better written and more enjoyable, especially among less creative writers. However, objective measures of story similarity within each condition reveal that GenAI-enabled stories are more similar to each other than stories by humans alone. These results point to an increase in individual creativity, but at the same time there is a risk of losing collective novelty: this dynamic resembles a social dilemma where individual writers are better off using GenAI to improve their own writing, but collectively a narrower scope of novel content may be produced with GenAI. Our results have implications for researchers, policy-makers and practitioners interested in bolstering creativity, but point to potential downstream consequences from over-reliance.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.00506&r=ain
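    Illustration: one simple way, not necessarily the measure used in paper 4, to quantify the loss of collective diversity: the average pairwise cosine similarity of stories within each experimental condition (higher similarity means lower collective diversity). Requires scikit-learn; variable names are illustrative.
```python
from itertools import combinations

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def mean_pairwise_similarity(stories: list[str]) -> float:
    """Average cosine similarity between all pairs of stories (TF-IDF space)."""
    vectors = TfidfVectorizer().fit_transform(stories)
    sims = cosine_similarity(vectors)
    pairs = list(combinations(range(len(stories)), 2))
    return sum(sims[i, j] for i, j in pairs) / len(pairs)

# Usage sketch with the two experimental conditions:
# human_only = [...]       # stories written without access to GenAI ideas
# genai_assisted = [...]   # stories written with access to GenAI ideas
# print(mean_pairwise_similarity(human_only))
# print(mean_pairwise_similarity(genai_assisted))  # expected to be higher
```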
  5. By: Otis, Nicholas G.; Clarke, Rowan Philip; Delecourt, Solene; Holtz, David (University of California, Berkeley); Koning, Rembrand (Harvard Business School)
    Abstract: There is a growing belief that scalable and low-cost AI assistance can improve firm decision-making and economic performance. However, running a business involves a myriad of open-ended problems, making it hard to generalize from recent studies showing that generative AI improves performance on well-defined writing tasks. In our five-month field experiment with 640 Kenyan entrepreneurs, we assessed the impact of AI-generated advice on small business revenues and profits. Participants were randomly assigned to a control group that received a standard business guide or to a treatment group that received a GPT-4 powered AI business mentor via WhatsApp. While we find no average treatment effect, this is because the causal effect of generative AI access varied with the baseline business performance of the entrepreneur: high performers benefited by just over 20% from AI advice, whereas low performers did roughly 10% worse with AI assistance. Exploratory analysis of the WhatsApp interaction logs shows that both groups sought the AI mentor’s advice, but that low performers did worse because they sought help on much more challenging business tasks. These findings highlight how the tasks selected by firms and entrepreneurs for AI assistance fundamentally shape who will benefit from generative AI.
    Date: 2023–12–21
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:hdjpk&r=ain
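    Illustration: a minimal sketch of the first-cut analysis behind the heterogeneity result in paper 5: the difference in mean endline outcomes between treated and control entrepreneurs, computed separately above and below median baseline performance. The column names ('treated', 'baseline_profit', 'endline_profit') are hypothetical, not the study's actual variables.
```python
import pandas as pd

def effect_by_baseline(df: pd.DataFrame) -> pd.Series:
    """Difference in mean endline profits (treated minus control) by baseline group."""
    df = df.assign(high_baseline=df["baseline_profit"] > df["baseline_profit"].median())
    means = (df.groupby(["high_baseline", "treated"])["endline_profit"]
               .mean()
               .unstack("treated"))
    return means[1] - means[0]   # column 1 = AI mentor, column 0 = standard guide

# The paper reports a gain of just over 20% for high performers and a loss of
# roughly 10% for low performers; a two-row summary like this is the natural
# first cut before a formal interaction regression.
```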
  6. By: HRADEC Jiri (European Commission - JRC); OSTLAENDER Nicole; BERNINI Alba
    Abstract: The research investigates how large language models (LLMs) emerge as reservoirs of a vast array of human experiences, behaviours, and emotions. Building upon prior work of the JRC on synthetic populations, it presents a complete step-by-step guide on how to use LLMs to create highly realistic modelling scenarios and complex societies of autonomous emotional AI agents. This technique is aligned with agent-based modelling (ABM) and facilitates quantitative evaluation. The report describes how the agents were instantiated using LLMs, enriched with personality traits using the ABC-EBDI model, equipped with short- and long-term memory, and given access to detailed knowledge of their environment. This setting of embodied reasoning significantly improved the agents' problem-solving capabilities, and when subjected to various scenarios, the LLM-driven agents exhibited behaviours mirroring human-like reasoning and emotions, inter-agent patterns, and realistic conversations, including elements that mirrored critical thinking. These LLM-driven agents can serve as believable proxies for human behaviour in simulated environments, with vast implications for future research and policy applications, including studying the impacts of different policy scenarios. This creates the opportunity to combine the narrative-based world of foresight scenarios with the advantages of quantitative modelling.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc135070&r=ain
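    Illustration: a minimal sketch, assuming a generic chat API, of the agent architecture described in paper 6: an LLM-driven agent carrying a personality profile, a short-term memory of recent events, and a long-term memory folded back into its prompt. Class and function names are illustrative, not the JRC implementation.
```python
from dataclasses import dataclass, field

def chat(system: str, user: str) -> str:
    """Hypothetical placeholder for a chat-completion call."""
    raise NotImplementedError

@dataclass
class EmotionalAgent:
    name: str
    personality: str                      # e.g. an ABC-EBDI style trait description
    world_knowledge: str                  # detailed description of the environment
    short_term: list = field(default_factory=list)
    long_term: list = field(default_factory=list)

    def remember(self, event: str) -> None:
        self.short_term.append(event)
        if len(self.short_term) > 10:     # spill the oldest events to long-term memory
            self.long_term.append(self.short_term.pop(0))

    def act(self, observation: str) -> str:
        system = (f"You are {self.name}. Personality: {self.personality}. "
                  f"World: {self.world_knowledge}. "
                  f"Long-term memories: {' | '.join(self.long_term[-5:])}")
        user = (f"Recent events: {' | '.join(self.short_term)}\n"
                f"You now observe: {observation}\n"
                f"Describe, in first person, what you feel, think and do next.")
        reply = chat(system, user)
        self.remember(f"Observed: {observation}. Did: {reply}")
        return reply
```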
  7. By: von der Heyde, Leah (LMU Munich); Haensch, Anna-Carolina; Wenz, Alexander (University of Mannheim)
    Abstract: The recent development of large language models (LLMs) has spurred discussions about whether LLM-generated “synthetic samples” could complement or replace traditional surveys, considering their training data potentially reflects attitudes and behaviors prevalent in the population. A number of mostly US-based studies have prompted LLMs to mimic survey respondents, finding that the responses closely match the survey data. However, several contextual factors related to the relationship between the respective target population and LLM training data might affect the generalizability of such findings. In this study, we investigate the extent to which LLMs can estimate public opinion in Germany, using the example of vote choice as outcome of interest. To generate a synthetic sample of eligible voters in Germany, we create personas matching the individual characteristics of the 2017 German Longitudinal Election Study respondents. Prompting GPT-3 with each persona, we ask the LLM to predict each respondent’s vote choice in the 2017 German federal elections and compare these predictions to the survey-based estimates on the aggregate and subgroup levels. We find that GPT-3 does not predict citizens’ vote choice accurately, exhibiting a bias towards the Green and Left parties, and making better predictions for more “typical” voter subgroups. While the language model is able to capture broad-brush tendencies tied to partisanship, it tends to miss out on the multifaceted factors that sway individual voter choices. Furthermore, our results suggest that GPT-3 might not be reliable for estimating nuanced, subgroup-specific political attitudes. By examining the prediction of voting behavior using LLMs in a new context, our study contributes to the growing body of research about the conditions under which LLMs can be leveraged for studying public opinion. The findings point to disparities in opinion representation in LLMs and underscore the limitations of applying them to public opinion estimation without accounting for the biases in their training data.
    Date: 2023–12–15
    URL: http://d.repec.org/n?u=RePEc:osf:socarx:8je9g&r=ain
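    Illustration: an illustrative sketch, not the authors' code, of the persona-prompting setup in paper 7: build a persona from survey covariates, ask the model to complete a vote-choice sentence, and aggregate the answers into party shares for comparison with the survey benchmark. The complete function and the covariate field names are hypothetical.
```python
from collections import Counter

PARTIES = ["CDU/CSU", "SPD", "AfD", "FDP", "Die Linke", "Die Grünen", "another party"]

def complete(prompt: str) -> str:
    """Hypothetical placeholder for a text-completion call (the paper used GPT-3)."""
    raise NotImplementedError

def persona_prompt(row: dict) -> str:
    # `row` holds survey covariates for one respondent; field names are hypothetical.
    return (f"I am a {row['age']}-year-old {row['gender']} from {row['state']} "
            f"with {row['education']} education and {row['political_interest']} "
            f"interest in politics. In the 2017 German federal election, "
            f"of the options {', '.join(PARTIES)}, I voted for")

def predicted_shares(respondents: list[dict]) -> dict[str, float]:
    votes = Counter()
    for row in respondents:
        answer = complete(persona_prompt(row))
        party = next((p for p in PARTIES if p.lower() in answer.lower()), "another party")
        votes[party] += 1
    total = sum(votes.values())
    return {p: votes[p] / total for p in PARTIES}

# Comparing these predicted shares with the survey-weighted shares, overall and
# within demographic subgroups, is the aggregate-level check the paper performs.
```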
  8. By: Chang, Ho-Chun Herbert; Wang, Austin Horng-En; Fang, Yu Sunny
    Abstract: Taiwan has one of the highest freedom-of-speech indexes in the world, yet encounters among the largest amounts of foreign interference, owing to its contentious history with China. In response to this influx, Taiwan takes a public “crowdsourcing” approach, using fact-checking chatbots and AI to combat misinformation. Combining this public database with large language models, we investigate misinformation across three platforms (Line, PTT, and Facebook) during the 2024 Taiwanese Presidential Elections. We find that most misinformation attacks US-Taiwan relations, spreads through visuals, and circulates within pan-Blue identity groups. Curiously, we also find misinformation rhetoric that references conspiracy groups in the West.
    Date: 2023–12–20
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:uefgw&r=ain
  9. By: Arisa Ema; Ryo Sato; Tomoharu Hase; Masafumi Nakano; Shinji Kamimura; Hiromu Kitamura
    Abstract: As artificial intelligence (AI) is integrated into various services and systems in society, many companies and organizations have proposed AI principles and policies and made related commitments. Conversely, some have argued for the need for independent audits, on the grounds that the voluntary principles adopted by the developers and providers of AI services and systems insufficiently address risk. This policy recommendation summarizes the issues related to the auditing of AI services and systems and presents three recommendations for promoting AI auditing that contribute to sound AI governance. Recommendation 1: Development of institutional design for AI audits. Recommendation 2: Training human resources for AI audits. Recommendation 3: Updating AI audits in accordance with technological progress. In this policy recommendation, AI is taken to mean systems that recognize and make predictions from data, with the last chapter outlining how generative AI should be audited.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.00044&r=ain
  10. By: Saeri, Alexander K; O'Connor, Ruby
    Abstract: This playbook, written by researchers at Monash University, is a practical guide for academic AI experts to help them apply artificial intelligence (AI) tools and techniques to complex challenges in policy and sustainability. It includes a five-step guide: (1) Finding and working with partners; (2) Understanding the problem; (3) Assessing fit and selecting an AI approach; (4) Designing and validating AI tool(s); and (5) Embedding the AI tool in practice. It also provides a simple introduction to policy, sustainability & sustainable development, and the current evidence on the promise & reality of applying AI to these challenges. As part of the attached OSF project, templates are provided to plan and conduct partner workshops and propose collaborative pilot projects.
    Date: 2023–12–17
    URL: http://d.repec.org/n?u=RePEc:osf:osfxxx:y75rq&r=ain
  11. By: TANGI Luca (European Commission - JRC); COMBETTO Marco; MARTIN BOSCH Jaume; RODRIGUEZ MÜLLER Paula (European Commission - JRC)
    Abstract: This report provides the results of a research study conducted within the context of the Public Sector Tech Watch, an observatory developed by DG DIGIT with the support of the Joint Research Centre (JRC), which provides a knowledge hub and a virtual space where public administrations, civil society, GovTech companies and researchers can find and share knowledge and experience. The report’s primary goal is to offer an analysis of how Artificial Intelligence (AI) systems are improving interoperability in the European Public Sector. The findings are based on three pillars: (i) a literature and policy review on the synergies between AI and interoperability; (ii) a quantitative analysis of a selected set of 189 use cases fitting the purpose of the research question; and (iii) a qualitative study going deeper into some illustrative cases. The findings highlight that one-fourth of the cases collected use AI techniques to support interoperability through a varied set of applications. Moreover, the semantic interoperability layer is fundamental in most of the cases. In addition, ontologies and taxonomies combined with AI can help in establishing interoperability between different systems. The solutions analysed classify, detect and provide structure, among other actions performed on data. Hence, AI has the capability to standardise, clean, structure and increase the usage of large volumes of data, thus improving overall quality and making it easier to use and share between different systems.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc134713&r=ain
  12. By: Dandan Qiao; Huaxia Rui; Qian Xiong
    Abstract: Artificial intelligence (AI) refers to the ability of machines or software to mimic or even surpass human intelligence in a given cognitive task. While humans learn by both induction and deduction, the success of current AI is rooted in induction, relying on its ability to detect statistical regularities in task input -- an ability learnt from a vast amount of training data using enormous computation resources. We examine the performance of such a statistical AI in a human task through the lens of four factors: task learnability, statistical resources, computation resources, and learning techniques. We then propose a three-phase visual framework to understand the evolving relation between AI and jobs. Based on this conceptual framework, we develop a simple economic model of competition to show the existence of an inflection point for each occupation. Before AI performance crosses the inflection point, human workers always benefit from an improvement in AI performance, but after the inflection point, human workers become worse off whenever such an improvement occurs. To offer empirical evidence, we first argue that AI performance has passed the inflection point for the occupation of translation but not for the occupation of web development. We then study how the launch of ChatGPT, which led to significant improvements in AI performance on many tasks, has affected workers in these two occupations on a large online labor platform. Consistent with the inflection point conjecture, we find that translators are negatively affected by the shock both in terms of the number of accepted jobs and the earnings from those jobs, while web developers are positively affected by the very same shock. Given the potentially large disruption of AI on employment, more studies on more occupations using data from different platforms are urgently needed.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.04180&r=ain
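    Illustration: a stylized toy model, not the authors' formulation, of the inflection-point logic in paper 12: rising AI performance both expands total demand for a task (helping human workers) and shifts work toward AI (hurting them), so human earnings are hump-shaped in AI performance and an interior maximum exists. The functional forms and parameters below are assumptions for illustration only.
```python
import numpy as np

def human_earnings(a: np.ndarray, alpha: float = 2.0, beta: float = 3.0) -> np.ndarray:
    """a in [0, 1]: AI performance relative to the occupation's task requirements."""
    market_size = 1.0 + alpha * a            # better AI grows the overall market
    human_share = 1.0 / (1.0 + beta * a**2)  # but work migrates to AI as quality rises
    return market_size * human_share

a_grid = np.linspace(0.0, 1.0, 1001)
earnings = human_earnings(a_grid)
a_star = a_grid[int(np.argmax(earnings))]
print(f"human earnings peak at AI performance a* = {a_star:.2f}; "
      f"improvements beyond a* make human workers worse off")
```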
  13. By: Stefano Bianchini; Moritz Müller; Pierre Pelletier
    Abstract: New technologies have the power to revolutionize science. It has happened in the past and is happening again with the emergence of new computational tools, such as Artificial Intelligence (AI) and Machine Learning (ML). Despite the documented impact of these technologies, there remains a significant gap in understanding the process of their adoption within the scientific community. In this paper, we draw on theories of scientific and technical human capital (STHC) to study the integration of AI in scientific research, focusing on the human capital of scientists and the external resources available within their network of collaborators and institutions. We validate our hypotheses on a large sample of publications from OpenAlex, covering all sciences from 1980 to 2020. We find that the diffusion of AI is strongly driven by social mechanisms that organize the deployment and creation of human capital that complements the technology. Our results suggest that AI is pioneered by domain scientists with a 'taste for exploration' who are embedded in a network rich in computer scientists, experienced AI scientists and early-career researchers; they also come from institutions with high citation impact and a relatively strong publication history on AI. The pattern is similar across scientific disciplines, the exception being access to high-performance computing (HPC), which is important in chemistry and the medical sciences but less so in other fields. Once AI is integrated into research, most adoption factors continue to influence its subsequent reuse. Implications for the organization and management of science in the evolving era of AI-driven discovery are discussed.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.09843&r=ain
  14. By: Jingwei Ni (ETH Zurich); Julia Bingler (University of Oxford); Chiara Colesanti Senni (ETH Zürich; University of Zurich); Mathias Kraus (University of Erlangen); Glen Gostlow (University of Zurich); Tobias Schimanski (University of Zurich); Dominik Stammbach (ETH Zurich); Saeid Vaghefi (University of Zurich); Qian Wang (University of Zurich); Nicolas Webersinke (Friedrich-Alexander-Universität Erlangen-Nürnberg); Tobias Wekhof (ETH Zürich); Tingyu Yu (University of Zurich); Markus Leippold (University of Zurich; Swiss Finance Institute)
    Abstract: This paper introduces a novel approach to enhancing Large Language Models (LLMs) with expert knowledge to automate the analysis of corporate sustainability reports by benchmarking them against the Task Force on Climate-related Financial Disclosures (TCFD) recommendations. Corporate sustainability reports are crucial in assessing organizations' environmental and social risks and impacts. However, the vast amount of information in these reports often makes human analysis too costly. As a result, only a few entities worldwide have the resources to analyze these reports, which could lead to a lack of transparency. While AI-powered tools can automatically analyze the data, they are prone to inaccuracies as they lack domain-specific expertise. We christen our tool ChatReport, and apply it in a first use case to assess corporate climate risk disclosures following the TCFD recommendations. ChatReport results from collaborating with experts in climate science, finance, economic policy, and computer science, demonstrating how domain experts can be involved in developing AI tools. We make our prompt templates, generated data, and scores available to the public to encourage transparency.
    Keywords: Task Force on Climate-related Financial Disclosures, Sustainability Report, Large Language Model, ChatGPT
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:chf:rpseri:rp23111&r=ain
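    Illustration: an illustrative prompt structure, not the published ChatReport templates (the authors release those separately), showing the basic pattern described in paper 14: pair retrieved excerpts from a sustainability report with one TCFD recommended disclosure and ask the LLM for a grounded, scored assessment that cites its sources.
```python
# Hypothetical mapping from TCFD recommended disclosures to assessment questions.
TCFD_QUESTIONS = {
    "governance_a": "Describe the board's oversight of climate-related risks "
                    "and opportunities.",
    "strategy_a": "Describe the climate-related risks and opportunities the "
                  "organization has identified over the short, medium and long term.",
    # ... remaining recommended disclosures
}

def assessment_prompt(question: str, excerpts: list[str]) -> str:
    """Build a grounded assessment prompt from retrieved report excerpts."""
    sources = "\n\n".join(f"[{i + 1}] {text}" for i, text in enumerate(excerpts))
    return (
        "You are a senior analyst assessing a corporate sustainability report "
        "against the TCFD recommendations.\n\n"
        f"TCFD recommended disclosure: {question}\n\n"
        f"Relevant report excerpts:\n{sources}\n\n"
        "Answer the following, citing excerpt numbers for every claim:\n"
        "1. Summarize what the report discloses on this item.\n"
        "2. Note which required information is missing.\n"
        "3. Give a disclosure quality score from 0 (no disclosure) to 100.\n"
        "If the excerpts do not contain the answer, say so explicitly rather "
        "than guessing."
    )
```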
  15. By: Maochun Xu; Zixun Lan; Zheng Tao; Jiawei Du; Zongao Ye
    Abstract: Artificial Intelligence (AI) and Machine Learning (ML) are transforming the domain of Quantitative Trading (QT) through the deployment of advanced algorithms capable of sifting through extensive financial datasets to pinpoint lucrative investment openings. AI-driven models, particularly those employing ML techniques such as deep learning and reinforcement learning, have shown great prowess in predicting market trends and executing trades at a speed and accuracy that far surpass human capabilities. Their capacity to automate critical tasks, such as discerning market conditions and executing trading strategies, has been pivotal. However, persistent challenges exist in current QT methods, especially in effectively handling noisy and high-frequency financial data. Striking a balance between exploration and exploitation poses another challenge for AI-driven trading agents. To surmount these hurdles, our proposed solution, QTNet, introduces an adaptive trading model that autonomously formulates QT strategies through an intelligent trading agent. Incorporating deep reinforcement learning (DRL) with imitative learning methodologies, we bolster the proficiency of our model. To tackle the challenges posed by volatile financial datasets, we conceptualize the QT mechanism within the framework of a Partially Observable Markov Decision Process (POMDP). Moreover, by embedding imitative learning, the model can capitalize on traditional trading tactics, nurturing a balanced synergy between discovery and utilization. For a more realistic simulation, our trading agent undergoes training using minute-frequency data sourced from the live financial market. Experimental findings underscore the model's proficiency in extracting robust market features and its adaptability to diverse market conditions.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.15730&r=ain
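    Illustration: a minimal sketch, not the QTNet architecture, of the POMDP framing used in paper 15: the agent never observes the full market state, only a rolling window of recent returns, and chooses a discrete position; the reward is the position's profit and loss net of transaction costs. Class and parameter names are illustrative.
```python
import numpy as np

class TradingEnv:
    """Partially observable trading environment over a price series."""

    ACTIONS = {0: -1, 1: 0, 2: 1}   # short, flat, long

    def __init__(self, prices: np.ndarray, window: int = 32, cost: float = 1e-4):
        self.returns = np.diff(np.log(prices))   # log returns of the price series
        self.window, self.cost = window, cost

    def reset(self) -> np.ndarray:
        self.t = self.window
        self.position = 0
        return self.returns[self.t - self.window:self.t]

    def step(self, action: int):
        new_position = self.ACTIONS[action]
        # Reward: profit of the held position minus transaction costs on changes.
        reward = (new_position * self.returns[self.t]
                  - self.cost * abs(new_position - self.position))
        self.position = new_position
        self.t += 1
        done = self.t >= len(self.returns)
        obs = None if done else self.returns[self.t - self.window:self.t]
        return obs, reward, done

# Any DRL algorithm with a windowed or recurrent policy (the paper additionally
# imitates traditional trading rules) can be trained against this interface;
# minute-frequency prices plug in directly as `prices`.
```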

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.