on Artificial Intelligence |
By: | Nir Chemaya; Daniel Martin |
Abstract: | The emergent abilities of Large Language Models (LLMs), which power tools like ChatGPT and Bard, have produced both excitement and worry about how AI will impact academic writing. In response to rising concerns about AI use, authors of academic publications may decide to voluntarily disclose any AI tools they use to revise their manuscripts, and journals and conferences could begin mandating disclosure and/or turn to using detection services, as many teachers have done with student writing in class settings. Given these looming possibilities, we investigate whether academics view it as necessary to report AI use in manuscript preparation and how detectors react to the use of AI in academic writing. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.14720&r=ain |
By: | Eric Schniter (Chapman University) |
Abstract: | "Experimental research in the realm of human-robot interactions has focused on the behavioral and psychological influences affecting human interaction and cooperation with robots. A robot is loosely defined as a device designed to perform agentic tasks autonomously or under remote control, often replicating or assisting human actions. Robots can vary widely in form, ranging from simple assembly line machines performing repetitive actions to advanced systems with no moving parts but with artificial intelligence (AI) capable of learning, problem-solving, communicating, and adapting to diverse environments and human interactions. Applications of experimental human-robot interaction research include the design, development, and implementation of robotic technologies that better align with human preferences, behaviors, and societal needs. As such, a central goal of experimental research on human-robot interactions is to better understand how trust is developed and maintained. A number of studies suggest that humans trust and act toward robots as they do towards humans, applying social norms and inferring agentic intent (Rai and Diermeier, 2015). While many robots are harmless and even helpful, some robots may reduce their human partner’s wages, security, or welfare and should not be trusted (Taddeo, McCutcheon and Floridi, 2019; Acemoglu and Restrepo, 2020; Alekseev, 2020). For example, more than half of all internet traffic is generated by bots, the majority of which are 'bad bots' (Imperva, 2016). Despite the hazards, robotic technologies are already transforming our everyday lives and finding their way into important domains such as healthcare, transportation, manufacturing, customer service, education, and disaster relief (Meyerson et al., 2023)." |
Keywords: | Trust, Robots, AI, Experiments, Evolution |
JEL: | B52 C72 C90 D63 D64 L5 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:chu:wpaper:23-14&r=ain |
By: | Keegan Harris; Nicole Immorlica; Brendan Lucier; Aleksandrs Slivkins |
Abstract: | How can an informed sender persuade a receiver, having only limited information about the receiver's beliefs? Motivated by research showing generative AI can simulate economic agents, we initiate the study of information design with an oracle. We assume the sender can learn more about the receiver by querying this oracle, e.g., by simulating the receiver's behavior. Aside from AI motivations such as general-purpose Large Language Models (LLMs) and problem-specific machine learning models, alternate motivations include customer surveys and querying a small pool of live users. Specifically, we study Bayesian Persuasion where the sender has a second-order prior over the receiver's beliefs. After a fixed number of queries to an oracle to refine this prior, the sender commits to an information structure. Upon receiving the message, the receiver takes a payoff-relevant action maximizing her expected utility given her posterior beliefs. We design polynomial-time querying algorithms that optimize the sender's expected utility in this Bayesian Persuasion game. As a technical contribution, we show that queries form partitions of the space of receiver beliefs that can be used to quantify the sender's knowledge. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.18138&r=ain |
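To fix ideas, the sender's commitment problem in the abstract above can be illustrated with the standard binary Bayesian-persuasion benchmark (a textbook example, not the paper's oracle-query model; the prior and threshold values below are hypothetical):

```python
# Binary Bayesian persuasion: the sender commits to a recommendation policy,
# and the receiver acts only if her posterior that the state is good clears
# her action threshold t. Assumes prior mu < t, so persuasion is needed.

def optimal_signal(mu, t):
    """Probability of recommending 'act' in the bad state that leaves the
    receiver's posterior exactly at her threshold t."""
    return mu * (1 - t) / ((1 - mu) * t)

def sender_payoff(mu, t):
    """Probability the receiver acts under the sender-optimal signal:
    always recommend in the good state, mix in the bad state."""
    return mu + (1 - mu) * optimal_signal(mu, t)

# Prior 0.3 that the state is good; receiver acts iff posterior >= 0.5:
print(round(sender_payoff(0.3, 0.5), 2))  # 0.6
```

In the paper's setting the sender does not know `t` exactly; oracle queries refine a second-order prior over receiver beliefs before the sender commits to the information structure.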
By: | von der Heyde, Leah (LMU Munich); Haensch, Anna-Carolina; Wenz, Alexander (University of Mannheim) |
Abstract: | The rise of large language models (LLMs) like GPT-3 has sparked interest in their potential for creating synthetic datasets, particularly in the realm of privacy research. This study critically evaluates the use of LLMs in generating synthetic public opinion data, pointing out the biases inherent in the data generation process. While LLMs, trained on vast internet datasets, can mimic societal attitudes and behaviors, their application in synthesizing data poses significant privacy and accuracy challenges. We investigate these issues using the case of vote choice prediction in the 2017 German federal elections. Employing GPT-3, we construct synthetic personas based on the German Longitudinal Election Study, prompting the LLM to predict voting behavior. Our analysis compares these LLM-generated predictions with actual survey data, focusing on the implications of using such synthetic data and the biases it may contain. The results demonstrate GPT-3’s propensity to inaccurately predict voter choices, with biases favoring certain political groups and more predictable voter profiles. This outcome raises critical questions about the reliability and ethical use of LLMs in generating synthetic data. |
Date: | 2023–12–01 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:97r8s&r=ain |
By: | Natvik, Gisle J. (Department of Economics); Tangerås, Thomas (Research Institute of Industrial Economics (IFN)) |
Abstract: | We study commercialization of personal data through personalized advertising by a content platform. Content consumption generates productive data about consumer preferences. The firm invests in artificial intelligence (AI) to improve analytical power and in quality to stimulate content consumption. The profit-maximizing tariff is zero if productive data are highly valuable. Subsidization of usage would generate nonproductive data and be unprofitable. Data provision is efficient when users pay entirely with personal data because then content consumption optimally trades off improvements in user experience against losses in privacy rent. Still, privacy protection is inefficient because of distorted incentives to invest in AI. |
Keywords: | Artificial intelligence; Content platform; Personalized advertising; Privacy; Quality |
JEL: | D82 L12 L15 L81 M37 |
Date: | 2023–12–07 |
URL: | http://d.repec.org/n?u=RePEc:hhs:iuiwop:1481&r=ain |
By: | Joao Guerreiro; Sergio Rebelo; Pedro Teles |
Abstract: | We consider an environment in which there is substantial uncertainty about the potential adverse external effects of AI algorithms. We find that subjecting algorithm implementation to regulatory approval or mandating testing is insufficient to implement the social optimum. When testing costs are low, a combination of mandatory testing for external effects and making developers liable for the adverse external effects of their algorithms comes close to implementing the social optimum even when developers have limited liability. |
JEL: | H21 O33 |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:31921&r=ain |
By: | Osea Giuntella; Johannes König; Luca Stella |
Abstract: | This study explores the relationship between artificial intelligence (AI) and workers’ well-being and mental health using longitudinal survey data from Germany (2000-2020). We construct a measure of individual exposure to AI technology based on the occupation in which workers in our sample were first employed and explore an event study design and a difference-in-differences approach to compare AI-exposed and non-exposed workers. Before AI became widely available, there is no evidence of differential pre-trends in workers’ well-being and concerns about their economic futures. Since 2015, however, with the increasing adoption of AI in firms across Germany, we find that AI-exposed workers have become less satisfied with their life and job and more concerned about job security and their personal economic situation. However, we find no evidence of a significant impact of AI on workers’ mental health, anxiety, or depression. |
Keywords: | Artificial Intelligence, Future of Work, Well-being, Mental Health |
JEL: | I10 J28 O30 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:diw:diwsop:diw_sp1194&r=ain |
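The difference-in-differences comparison described in the abstract above can be sketched in its simplest two-group, two-period form (the satisfaction scores below are invented, not the German survey data):

```python
# Toy difference-in-differences: the AI effect is the change in outcomes for
# AI-exposed workers net of the change for non-exposed workers over the
# same period (here, before vs. after 2015).

def did(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """Two-group, two-period difference-in-differences estimate."""
    return (treat_post - treat_pre) - (ctrl_post - ctrl_pre)

# Hypothetical mean life-satisfaction scores (0-10 scale):
effect = did(treat_pre=7.2, treat_post=6.8, ctrl_pre=7.0, ctrl_post=6.9)
print(round(effect, 2))  # -0.3: exposed workers' satisfaction fell by more
```

The paper's "no differential pre-trends" check is the identifying assumption here: absent AI adoption, both groups would have moved in parallel.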
By: | Goller, Daniel (University of Bern); Gschwendt, Christian (University of Bern); Wolter, Stefan C. (University of Bern) |
Abstract: | In this paper, we show the causal influence of the launch of generative AI in the form of ChatGPT on the search behavior of young people for apprenticeship vacancies. There is a strong and long-lasting decline in the intensity of searches for vacancies, which suggests great uncertainty among the affected cohort. Analyses based on the classification of occupations according to tasks, type of cognitive requirements, and the expected risk of automation to date show significant differences in the extent to which specific occupations are affected. Occupations with a high proportion of cognitive tasks, with high demands on language skills, and those whose automation risk had previously been assessed by experts as lower are significantly more affected by the decline. However, no differences can be found with regard to the proportion of routine vs. non-routine tasks. |
Keywords: | artificial intelligence, occupational choice, labor supply, technological change |
JEL: | J24 O33 |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp16638&r=ain |
By: | Eugenia Gonzalez Ehlinger; Fabian Stephany |
Abstract: | For emerging professions, such as jobs in the field of Artificial Intelligence (AI) or sustainability (green), labour supply does not meet industry demand. In this scenario of labour shortages, our work aims to understand whether employers have started focusing on individual skills rather than on formal qualifications in their recruiting. By analysing a large time series dataset of around one million online job vacancies between 2019 and 2022 from the UK and drawing on diverse literature on technological change and labour market signalling, we provide evidence that employers have started so-called “skill-based hiring” for AI and green roles, as more flexible hiring practices allow them to increase the available talent pool. In our observation period the demand for AI roles grew twice as much as average labour demand. At the same time, the mention of university education for AI roles declined by 23%, while AI roles advertise five times as many skills as job postings on average. Our analysis also shows that university degrees no longer show an educational premium for AI roles, while for green positions the educational premium persists. In contrast, AI skills have a wage premium of 16%, similar to having a PhD (17%). Our work recommends making use of alternative skill building formats such as apprenticeships, on-the-job training, MOOCs, vocational education and training, micro-certificates, and online bootcamps to use human capital to its full potential and to tackle talent shortages. |
Keywords: | future of work, labour markets, skills, education, AI, sustainability |
JEL: | C55 I23 J23 J24 J31 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_10817&r=ain |
By: | Fabien Petit (UCL Centre for Education Policy & Equalising Opportunities) |
Abstract: | The 2023 AI Safety Summit brought attention to the profound changes AI can bring to our working lives. While we have heard predictions of a work-free future before, it is crucial to focus on what is happening now. AI is reshaping jobs, affecting which skills are required and who gets hired, and can create inequality. To thrive in this changing landscape, current and future generations need to learn skills that are key to the future, such as maths, programming, and social skills. The UK Government must make sure people are ready for the future of work, so that we can have a prosperous and fair society as AI technology advances.
Keywords: | AI, Employment, Skill Development |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:ucl:cepeob:27&r=ain |
By: | Jingyi Tian; Jun Nagayasu |
Abstract: | As an important domain of information technology development, artificial intelligence (AI) has gained significant traction in the financial sector. While AI offers numerous advantages, investigating the potential risks associated with its widespread use has become a critical focus for researchers. We examine the impact of AI technologies on systemic risk within China’s financial industry. Our findings suggest that AI helps curb increases in systemic risk. However, the impact of AI differs across financial sectors and is more pronounced during crisis periods. Our study also suggests that AI can decrease systemic risk by enhancing the human capital of financial firms. Moreover, the theoretical framework presented in this paper provides insights into the notion that imprudent allocation of AI-related investment could contribute to an increase in systemic risk.
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:toh:tupdaa:44&r=ain |
By: | Mohammad Rasouli; Ravi Chiruvolu; Ali Risheh |
Abstract: | With the investment landscape becoming more competitive, efficiently scaling deal sourcing and improving deal insights have become a dominant strategy for funds. While funds already spend significant effort on these two tasks, they cannot be scaled with traditional approaches; hence, there is a surge in automating them. Many third-party software providers have emerged recently to address this need with productivity solutions, but they fall short due to a lack of personalization for the fund, privacy constraints, and the natural limits of software use cases. Therefore, most major funds and many smaller funds have started developing their in-house AI platforms: a game changer for the industry. These platforms grow smarter through direct interactions with the fund and can be used to provide personalized use cases. Recent developments in large language models, e.g. ChatGPT, have given other funds the opportunity to develop their own AI platforms as well. While not having an AI platform is not a competitive disadvantage today, it will be within two years. Funds therefore require a practical plan and corresponding risk assessments for such AI platforms.
Date: | 2023–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.06251&r=ain |
By: | Vincent Gurgul; Stefan Lessmann; Wolfgang Karl Härdle
Abstract: | This paper explores the application of Machine Learning (ML) and Natural Language Processing (NLP) techniques in cryptocurrency price forecasting, specifically Bitcoin (BTC) and Ethereum (ETH). Focusing on news and social media data, primarily from Twitter and Reddit, we analyse the influence of public sentiment on cryptocurrency valuations using advanced deep learning NLP methods. Alongside conventional price regression, we treat cryptocurrency price forecasting as a classification problem. This includes both the prediction of price movements (up or down) and the identification of local extrema. We compare the performance of various ML models, both with and without NLP data integration. Our findings reveal that incorporating NLP data significantly enhances the forecasting performance of our models. We discover that pre-trained models, such as Twitter-RoBERTa and BART MNLI, are highly effective in capturing market sentiment, and that fine-tuning Large Language Models (LLMs) also yields substantial forecasting improvements. Notably, the BART MNLI zero-shot classification model shows considerable proficiency in extracting bullish and bearish signals from textual data. All of our models consistently generate profit across different validation scenarios, with no observed decline in profits or reduction in the impact of NLP data over time. The study highlights the potential of text analysis in improving financial forecasts and demonstrates the effectiveness of various NLP techniques in capturing nuanced market sentiment. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.14759&r=ain |
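The two classification targets the abstract above describes, price movements (up or down) and local extrema, can be constructed from a price series as follows (an illustrative sketch, not the authors' code; the toy series is hypothetical):

```python
# Turning a price series into classification labels for supervised learning.

def direction_labels(prices):
    """Label each step 1 if the next price is higher, else 0."""
    return [1 if nxt > cur else 0 for cur, nxt in zip(prices, prices[1:])]

def local_extrema(prices):
    """Mark interior points that are strict local maxima or minima."""
    labels = []
    for i in range(1, len(prices) - 1):
        if prices[i] > prices[i - 1] and prices[i] > prices[i + 1]:
            labels.append((i, "max"))
        elif prices[i] < prices[i - 1] and prices[i] < prices[i + 1]:
            labels.append((i, "min"))
    return labels

btc = [100.0, 103.0, 101.0, 98.0, 102.0, 105.0]
print(direction_labels(btc))  # [1, 0, 0, 1, 1]
print(local_extrema(btc))     # [(1, 'max'), (3, 'min')]
```

In the paper these labels become prediction targets for ML models, with and without sentiment features derived from Twitter and Reddit text.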
By: | Haoqiang Kang; Xiao-Yang Liu |
Abstract: | The hallucination issue is recognized as a fundamental deficiency of large language models (LLMs), especially when applied to fields such as finance, education, and law. Despite the growing concerns, there has been a lack of empirical investigation. In this paper, we provide an empirical examination of LLMs' hallucination behaviors in financial tasks. First, we empirically investigate LLMs' ability to explain financial concepts and terminology. Second, we assess LLMs' capacity to query historical stock prices. Third, to alleviate the hallucination issue, we evaluate the efficacy of four practical methods: few-shot learning, Decoding by Contrasting Layers (DoLa), Retrieval-Augmented Generation (RAG), and prompt-based tool learning, in which the model is prompted to generate a query command for an external function. Finally, our major finding is that off-the-shelf LLMs exhibit serious hallucination behaviors in financial tasks. There is therefore an urgent need for research efforts to mitigate LLM hallucinations.
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.15548&r=ain |
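One of the mitigation methods the abstract above evaluates, retrieval-augmented generation (RAG), can be sketched minimally as follows. The corpus, query, and word-overlap scoring are hypothetical stand-ins; production systems use dense embeddings rather than token overlap:

```python
# Minimal RAG sketch: retrieve relevant documents, then ground the model's
# answer in them to curb hallucination.

def retrieve(query, corpus, k=1):
    """Rank documents by word overlap with the query and keep the top k."""
    q = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(query, corpus):
    """Assemble a prompt that constrains the answer to retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return (f"Context:\n{context}\n\n"
            f"Question: {query}\nAnswer using only the context.")

corpus = [
    "An ETF is an exchange-traded fund holding a basket of assets.",
    "A call option gives the right to buy at a strike price.",
]
print(build_prompt("What is an ETF fund?", corpus))
```

The prompt would then be passed to the LLM; the retrieved context anchors the model's answer in verified financial text instead of its parametric memory.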
By: | Yangyang Yu; Haohang Li; Zhi Chen; Yuechen Jiang; Yang Li; Denghui Zhang; Rong Liu; Jordan W. Suchow; Khaldoun Khashanah |
Abstract: | Recent advancements in Large Language Models (LLMs) have exhibited notable efficacy in question-answering (QA) tasks across diverse domains. Their prowess in integrating extensive web knowledge has fueled interest in developing LLM autonomous agents. While LLMs are efficient in decoding human instructions and deriving solutions by holistically processing historical inputs, transitioning to purpose-driven agents requires a supplementary rational architecture to process multi-source information, establish reasoning chains, and prioritize critical tasks. Addressing this, we introduce FinMe, a novel LLM-based agent framework devised for financial decision-making, encompassing three core modules: Profiling, to outline the agent's characteristics; Memory, with layered processing, to aid the agent in assimilating realistic hierarchical financial data; and Decision-making, to convert insights gained from memories into investment decisions. Notably, FinMe's memory module aligns closely with the cognitive structure of human traders, offering robust interpretability and real-time tuning. Its adjustable cognitive span allows for the retention of critical information beyond human perceptual limits, thereby enhancing trading outcomes. This framework enables the agent to self-evolve its professional knowledge, react agilely to new investment cues, and continuously refine trading decisions in the volatile financial environment. We first compare FinMe with various algorithmic agents on a scalable real-world financial dataset, underscoring its leading trading performance in stocks and funds. We then fine-tune the agent's perceptual span, further improving trading performance. Collectively, FinMe presents a cutting-edge LLM agent framework for automated trading, boosting cumulative investment returns.
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.13743&r=ain |
By: | Jai Pal |
Abstract: | This research paper focuses on the integration of Artificial Intelligence (AI) into the currency trading landscape, positing the development of personalized AI models that function, in essence, as intelligent personal assistants tailored to the idiosyncrasies of individual traders. The paper posits that AI models are capable of identifying nuanced patterns within the trader's historical data, facilitating a more accurate and insightful assessment of psychological risk dynamics in currency trading. The resulting psychological risk index (PRI) is a dynamic metric that fluctuates in response to market conditions that foster psychological fragility among traders. Employing sophisticated techniques, a classification decision tree is crafted, enabling clearer decision-making boundaries within the tree structure. By incorporating the user's chronological trade entries, the model becomes adept at identifying critical junctures when psychological risks are heightened. The real-time nature of the calculations enhances the model's utility as a proactive tool, offering timely alerts to traders about impending moments of psychological risk. The implications of this research extend beyond currency trading into other industries where the judicious application of personalized modeling emerges as an efficient and strategic approach. This paper positions itself at the intersection of cutting-edge technology and the intricate nuances of human psychology, offering a transformative paradigm for decision-making support in dynamic and high-pressure environments.
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.15222&r=ain |
By: | Boyang Yu |
Abstract: | The impact of non-deterministic outputs from Large Language Models (LLMs) is not well examined for financial text understanding tasks. Through a compelling case study on investing in the US equity market via news sentiment analysis, we uncover substantial variability in sentence-level sentiment classification results, underscoring the innate volatility of LLM outputs. These uncertainties cascade downstream, leading to more significant variations in portfolio construction and return. While tweaking the temperature parameter in the language model decoder presents a potential remedy, it comes at the expense of stifled creativity. Similarly, while ensembling multiple outputs mitigates the effect of volatile outputs, it demands a notable computational investment. This work furnishes practitioners with invaluable insights for adeptly navigating uncertainty in the integration of LLMs into financial decision-making, particularly in scenarios dictated by non-deterministic information. |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.15180&r=ain |
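The ensembling remedy the abstract above mentions can be sketched as a majority vote over repeated model calls (the sampled labels below are hypothetical stand-ins for multiple runs of the same sentiment prompt at nonzero temperature):

```python
# Damping non-deterministic LLM sentiment outputs by majority-voting over
# repeated calls on the same input.
from collections import Counter

def majority_vote(labels):
    """Return the most common label across repeated model calls."""
    return Counter(labels).most_common(1)[0][0]

# Five runs of the same headline might disagree at nonzero temperature:
runs = ["positive", "positive", "neutral", "positive", "negative"]
print(majority_vote(runs))  # positive
```

This is where the abstract's "notable computational investment" bites: a k-run ensemble multiplies inference cost by k, while lowering the temperature instead avoids that cost at the expense of output diversity.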
By: | Ummara Mumtaz; Summaya Mumtaz |
Abstract: | The rise of ChatGPT has brought a notable shift to the AI sector, with its exceptional conversational skills and deep grasp of language. Recognizing its value across different areas, our study investigates ChatGPT's capacity to predict stock market movements using only social media tweets and sentiment analysis. We aim to see if ChatGPT can tap into the vast sentiment data on platforms like Twitter to offer insightful predictions about stock trends. We focus on determining whether a tweet has a positive, negative, or neutral effect on the stock value of two big tech giants, Microsoft and Google. Our findings highlight a positive link between ChatGPT's evaluations and the following day's stock results for both tech companies. This research enriches our view of ChatGPT's adaptability and emphasizes the growing importance of AI in shaping financial market forecasts.
Date: | 2023–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2311.06273&r=ain |
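The evaluation the abstract above describes, relating a tweet-level sentiment label to the next day's stock return, can be sketched as a sign-agreement check (the labels and returns below are invented, not the paper's data):

```python
# Toy check of whether tweet sentiment labels agree in sign with next-day
# stock returns; neutral labels are excluded from the comparison.

def score(label):
    """Map a sentiment label to a numeric sign."""
    return {"positive": 1, "neutral": 0, "negative": -1}[label]

def sign_agreement(labels, next_day_returns):
    """Share of non-neutral labels whose sign matches the next-day return."""
    pairs = [(score(l), r) for l, r in zip(labels, next_day_returns)
             if l != "neutral"]
    hits = sum(1 for s, r in pairs if s * r > 0)
    return hits / len(pairs)

labels = ["positive", "negative", "neutral", "positive"]
rets = [0.012, -0.004, 0.001, 0.006]
print(sign_agreement(labels, rets))  # 1.0
```

A value well above 0.5 on real data would correspond to the "positive link" between ChatGPT's evaluations and next-day results that the paper reports.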
By: | Kim Nguyen (Reserve Bank of Australia); Jonathan Hambur (Reserve Bank of Australia) |
Abstract: | This paper examines the factors associated with the adoption of cloud computing and artificial intelligence/machine learning, two emerging digital general-purpose technologies (GPT), as well as firms' post-adoption outcomes. To do so we identify adoption of GPT based on references to these technologies in listed company reports, and merge this with data on their Board of Directors, their hiring activities and their financial performance. We find that firms that have directors with relevant technological backgrounds, or female representation on their Board, are more likely to profitably adopt GPT, with the former being particularly important. Worker skills also appear important, with firms that adopt GPT, particularly those that do so profitably, being more likely to hire skilled staff following adoption. Finally, while early adopters of GPT experience a dip in profitability following adoption, this is not evident for more recent adopters. This suggests that GPT may have become easier to adopt over time, potentially due to changes in the technologies or the availability of relevant skills, which is encouraging in terms of future productivity outcomes. |
Keywords: | technology adoption; productivity; management capability; skills |
JEL: | J2 O32 O33 |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:rba:rbardp:rdp2023-10&r=ain |
By: | Christophe Carugati |
Date: | 2023–12 |
URL: | http://d.repec.org/n?u=RePEc:bre:wpaper:node_9593&r=ain |
By: | Ruiqi Sun; Daniel Trefler |
Abstract: | The rise of artificial intelligence (AI) and of cross-border restrictions on data flows has created a host of new questions and related policy dilemmas. This paper addresses two questions: How is digital service trade shaped by (1) AI algorithms and (2) by the interplay between AI algorithms and cross-border restrictions on data flows? Answers lie in the palm of your hand: From London to Lagos, mobile app users trigger international transactions when they open AI-powered foreign apps. We have 2015-2020 usage data for the 35,575 most popular mobile apps and, to quantify the AI deployed in each of these apps, we use a large language model (LLM) to link each app to each of the app developer's AI patents. (This linkage of specific products to specific patents is a methodological innovation.) Armed with data on app usage by country, with AI deployed in each app, and with an instrument for AI (a Heckscher-Ohlin cost-shifter), we answer our two questions. (1) On average, AI causally raises an app's number of foreign users by 2.67 log points or by more than 10-fold. (2) The impact of AI on foreign users is halved if the foreign users are in a country with strong restrictions on cross-border data flows. These countries are usually autocracies. We also provide a new way of measuring AI knowledge spillovers across firms and find large spillovers. Finally, our work suggests numerous ways in which LLMs such as ChatGPT can be used in other applications.
JEL: | F12 F13 F14 F23 |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:31925&r=ain |
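The abstract's conversion from log points to a multiplicative effect follows directly from exponentiation: a 2.67 log-point increase means the outcome is multiplied by exp(2.67), which is indeed "more than 10-fold".

```python
# Converting a log-point effect into a multiplicative factor.
import math

log_point_effect = 2.67
factor = math.exp(log_point_effect)
print(round(factor, 1))  # 14.4, i.e. more than a 10-fold increase
```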