By: | Melissa Dell |
Abstract: | Deep learning provides powerful methods to impute structured information from large-scale, unstructured text and image datasets. For example, economists might wish to detect the presence of economic activity in satellite images, or to measure the topics or entities mentioned in social media, the congressional record, or firm filings. This review introduces deep neural networks, covering methods such as classifiers, regression models, generative AI, and embedding models. Applications include classification, document digitization, record linkage, and methods for data exploration in massive scale text and image corpora. When suitable methods are used, deep learning models can be cheap to tune and can scale affordably to problems involving millions or billions of data points. The review is accompanied by a companion website, EconDL, with user-friendly demo notebooks, software resources, and a knowledge base that provides technical details and additional applications. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.15339 |
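To make the review's core building blocks concrete, here is a minimal, illustrative sketch (not taken from the paper) of the embedding-plus-classifier pattern it surveys: documents are encoded into dense vectors by a pretrained embedding model, and a simple linear classifier imputes a structured label. The model name, texts, and labels are placeholder assumptions.

```python
# Minimal sketch: classify short texts by topic using a pretrained embedding
# model plus a linear classifier. The model name and toy labels are
# illustrative, not taken from the review.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

texts = [
    "The central bank raised interest rates by 50 basis points.",
    "The new factory will create thousands of manufacturing jobs.",
    "Senators debated the farm subsidy bill late into the night.",
    "Quarterly earnings beat analyst expectations on strong demand.",
]
labels = ["monetary_policy", "industry", "politics", "firms"]

# Encode each document into a dense vector (embedding model).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(texts)

# Fit a simple classifier on top of the embeddings.
clf = LogisticRegression(max_iter=1000).fit(X, labels)

new_doc = ["The committee voted to cut the policy rate."]
print(clf.predict(encoder.encode(new_doc)))
```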
By: | Hill, Brian (HEC Paris) |
Abstract: | There is increasing speculation about the future role of ChatGPT and other artificial intelligence (AI) chatbots in aiding humans with a variety of tasks. But do people do better when aided by these tools than when they complete tasks on their own? Can they properly evaluate and, where necessary, correct the responses provided by ChatGPT to enhance their performance? To investigate these questions, this study gives university-level students class assignments involving both answering questions and correcting answers provided by ChatGPT. It finds a significant reduction in student performance when correcting a provided response compared to when they produce an answer from scratch. One possible explanation for this discrepancy is confirmation bias. Beyond emphasising the need for continued research into human interaction with AI chatbots, this study exemplifies one potential way of bringing them into the classroom: to raise awareness of the pitfalls of their improper use. |
Keywords: | ChatGPT; Human-AI chatbot interaction; Confirmation bias; Class assignments; AI in education; Future of work |
JEL: | A20 D90 |
Date: | 2023–06–01 |
URL: | https://d.repec.org/n?u=RePEc:ebg:heccah:1473 |
By: | Alex Kim; Maximilian Muhn; Valeri Nikolaev |
Abstract: | We investigate whether an LLM can successfully perform financial statement analysis in a way similar to a professional human analyst. We provide standardized and anonymous financial statements to GPT-4 and instruct the model to analyze them to determine the direction of future earnings. Even without any narrative or industry-specific information, the LLM outperforms financial analysts in its ability to predict earnings changes. The LLM exhibits a relative advantage over human analysts in situations where analysts tend to struggle. Furthermore, we find that the prediction accuracy of the LLM is on par with the performance of a narrowly trained state-of-the-art ML model. The LLM's predictions do not stem from its training memory. Instead, we find that the LLM generates useful narrative insights about a company's future performance. Lastly, our trading strategies based on GPT's predictions yield higher Sharpe ratios and alphas than strategies based on other models. Taken together, our results suggest that LLMs may take a central role in decision-making. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.17866 |
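As a rough illustration of the kind of workflow the abstract describes, supplying an anonymized financial statement to a chat model and asking for the direction of future earnings, here is a minimal sketch using the OpenAI Python client. The prompt wording and the toy statement are assumptions, not the authors' actual protocol.

```python
# Minimal sketch of prompting a chat model to judge the direction of future
# earnings from an anonymized financial statement. The prompt and the toy
# statement below are illustrative placeholders, not the paper's protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

statement = """Revenue: 120 (prior year 100); COGS: 70 (prior 60);
SG&A: 25 (prior 24); Net income: 18 (prior 12); Total assets: 300 (prior 280)."""

resp = client.chat.completions.create(
    model="gpt-4",
    temperature=0,
    messages=[
        {"role": "system",
         "content": "You are a financial analyst. Reason step by step, then "
                    "answer INCREASE or DECREASE for next year's earnings."},
        {"role": "user", "content": statement},
    ],
)
print(resp.choices[0].message.content)
```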
By: | Felix Drinkall; Janet B. Pierrehumbert; Stefan Zohren |
Abstract: | Large Language Models (LLMs) have been shown to perform well for many downstream tasks. Transfer learning can enable LLMs to acquire skills that were not targeted during pre-training. In financial contexts, LLMs can sometimes beat well-established benchmarks. This paper investigates how well LLMs perform in the task of forecasting corporate credit ratings. We show that while LLMs are very good at encoding textual information, traditional methods are still very competitive when it comes to encoding numeric and multimodal data. For our task, current LLMs perform worse than a more traditional XGBoost architecture that combines fundamental and macroeconomic data with high-density text-based embedding features. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.17624 |
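A minimal sketch of the baseline the abstract refers to: a gradient-boosted tree model over tabular fundamental and macroeconomic features concatenated with dense text embeddings. All feature blocks and data here are synthetic placeholders rather than the paper's dataset.

```python
# Minimal sketch: gradient boosting over tabular fundamentals/macro features
# concatenated with dense text embeddings. Features and labels are synthetic.
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
fundamentals = rng.normal(size=(n, 6))     # e.g. leverage, coverage, margins
macro = rng.normal(size=(n, 3))            # e.g. GDP growth, rates, spreads
text_emb = rng.normal(size=(n, 64))        # embeddings of filings/news text
X = np.hstack([fundamentals, macro, text_emb])
y = rng.integers(0, 2, size=n)             # 1 = rating downgrade, 0 = stable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```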
By: | Fernando Berzal; Alberto Garcia |
Abstract: | Trend following and momentum investing are common strategies employed by asset managers. Even though they can be helpful in the proper situations, they are limited in that they work only by looking at the past, as if we were driving while focusing on the rearview mirror. In this paper, we advocate for the use of Artificial Intelligence and Machine Learning techniques to predict future market trends. These predictions, when done properly, can improve the performance of asset managers by increasing returns and reducing drawdowns. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.13685 |
By: | Xiaohui Victor Li; Francesco Sanna Passino |
Abstract: | Dynamic knowledge graphs (DKGs) are popular structures to express different types of connections between objects over time. They can also serve as an efficient mathematical tool to represent information extracted from complex unstructured data sources, such as text or images. Within financial applications, DKGs could be used to detect trends for strategic thematic investing, based on information obtained from financial news articles. In this work, we explore the properties of large language models (LLMs) as dynamic knowledge graph generators, proposing a novel open-source fine-tuned LLM for this purpose, called the Integrated Contextual Knowledge Graph Generator (ICKG). We use ICKG to produce a novel open-source DKG from a corpus of financial news articles, called FinDKG, and we propose an attention-based graph neural network (GNN) architecture for analysing it, called KGTransformer. We test the performance of the proposed model on benchmark datasets and FinDKG, demonstrating superior performance on link prediction tasks. Additionally, we evaluate the performance of the KGTransformer on FinDKG for thematic investing, showing it can outperform existing thematic ETFs. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.10909 |
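The paper's KGTransformer is an attention-based graph neural network; as a generic stand-in for the link-prediction task it is evaluated on, the sketch below scores knowledge-graph triples with a simple DistMult model trained on random toy triples. The entity and relation counts are arbitrary assumptions, and DistMult is a deliberately simpler substitute, not the paper's architecture.

```python
# Minimal sketch: score knowledge-graph triples (head, relation, tail) with a
# DistMult embedding model, the standard setup for link prediction. Data and
# sizes are synthetic; this is not the paper's KGTransformer.
import torch
import torch.nn as nn

class DistMult(nn.Module):
    def __init__(self, n_entities, n_relations, dim=32):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)

    def forward(self, h, r, t):
        # score(h, r, t) = sum_k e_h[k] * w_r[k] * e_t[k]
        return (self.ent(h) * self.rel(r) * self.ent(t)).sum(dim=-1)

model = DistMult(n_entities=100, n_relations=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# One toy training step with random positives and corrupted-tail negatives.
h = torch.randint(0, 100, (64,))
r = torch.randint(0, 10, (64,))
t = torch.randint(0, 100, (64,))
t_neg = torch.randint(0, 100, (64,))
loss = -torch.log(torch.sigmoid(model(h, r, t))).mean() \
       - torch.log(torch.sigmoid(-model(h, r, t_neg))).mean()
opt.zero_grad()
loss.backward()
opt.step()
print("toy loss:", float(loss))
```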
By: | Qiqin Zhou |
Abstract: | In the contemporary financial landscape, accurately predicting the probability of filling a Request-For-Quote (RFQ) is crucial for improving market efficiency for less liquid asset classes. This paper explores the application of explainable AI (XAI) models to forecast the likelihood of RFQ fulfillment. By leveraging advanced algorithms including Logistic Regression, Random Forest, XGBoost and Bayesian Neural Tree, we are able to improve the accuracy of RFQ fill rate predictions and generate the most efficient quote price for market makers. XAI serves as a robust and transparent tool for market participants to navigate the complexities of RFQs with greater precision. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.15038 |
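A minimal sketch of the prediction task described above: estimate the probability that an RFQ is filled from quote and order features, then read off which features drive the prediction. The feature names and synthetic data are assumptions, and the simple permutation-importance readout stands in for the richer XAI tooling the paper uses.

```python
# Minimal sketch: predict RFQ fill probability and inspect feature importance.
# Features and the data-generating rule below are hypothetical.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
n = 2000
X = pd.DataFrame({
    "quote_spread_bps": rng.gamma(2.0, 5.0, n),   # distance from mid, in bps
    "trade_size_mm": rng.lognormal(0, 1, n),      # notional in millions
    "num_dealers": rng.integers(1, 8, n),         # dealers in competition
    "time_to_expiry_s": rng.uniform(5, 120, n),   # RFQ window length
})
# Synthetic rule: tighter quotes and fewer competitors get filled more often.
p = 1 / (1 + np.exp(0.15 * X["quote_spread_bps"] + 0.4 * X["num_dealers"] - 3))
y = rng.binomial(1, p)

clf = GradientBoostingClassifier().fit(X, y)
imp = permutation_importance(clf, X, y, n_repeats=5, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean),
                          key=lambda kv: -kv[1]):
    print(f"{name:>18}: {score:.3f}")
```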
By: | Yuan Li; Bingqiao Luo; Qian Wang; Nuo Chen; Xu Liu; Bingsheng He |
Abstract: | The utilization of Large Language Models (LLMs) in financial trading has primarily been concentrated within the stock market, aiding in economic and financial decisions. Yet, the unique opportunities presented by the cryptocurrency market, noted for its on-chain data's transparency and the critical influence of off-chain signals like news, remain largely untapped by LLMs. This work aims to bridge the gap by developing an LLM-based trading agent, CryptoTrade, which uniquely combines the analysis of on-chain and off-chain data. This approach leverages the transparency and immutability of on-chain data, as well as the timeliness and influence of off-chain signals, providing a comprehensive overview of the cryptocurrency market. CryptoTrade incorporates a reflective mechanism specifically engineered to refine its daily trading decisions by analyzing the outcomes of prior trading decisions. This research makes two significant contributions. Firstly, it broadens the applicability of LLMs to the domain of cryptocurrency trading. Secondly, it establishes a benchmark for cryptocurrency trading strategies. Through extensive experiments, CryptoTrade has demonstrated superior performance in maximizing returns compared to traditional trading strategies and time-series baselines across various cryptocurrencies and market conditions. Our code and data are available at https://anonymous.4open.science/r/CryptoTrade-Public-92FC/. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.09546 |
By: | Bonelli, Maxime (HEC Paris) |
Abstract: | Using data technologies, like machine learning, investors can gain a comparative advantage in forecasting outcomes frequently observed in historical data. I investigate the implications for capital allocation using venture capitalists (VCs) as a laboratory. VCs adopting data technologies tilt their investments towards startups developing businesses similar to those already explored, and become better at avoiding failures within this pool. However, these VCs concurrently become less likely to pick startups that achieve rare major successes. Plausibly exogenous variation in VCs' screening automation suggests a causal link between the adoption of data technologies and these effects. These findings highlight potential downsides of investors embracing data technologies. |
Keywords: | big data; machine learning; artificial intelligence; venture capital; entrepreneurship; innovation; capital allocation |
JEL: | G24 L26 O30 |
Date: | 2023–02–22 |
URL: | https://d.repec.org/n?u=RePEc:ebg:heccah:1470 |
By: | Hurlin, Christophe (University of Orleans); Pérignon, Christophe (HEC Paris) |
Abstract: | This survey proposes a theoretical and practical reflection on the use of machine learning methods in the context of the Internal Ratings Based (IRB) approach to banks' capital requirements. While machine learning is still rarely used in the regulatory domain (IRB, IFRS 9, stress tests), recent discussions initiated by the European Banking Authority suggest that this may change in the near future. While technically complex, this subject is crucial given growing concerns about the potential financial instability caused by banks' use of opaque internal models. Conversely, for their proponents, machine learning models offer the prospect of better credit risk measurement and enhanced financial inclusion. This survey yields several conclusions and recommendations regarding (i) the accuracy of risk parameter estimations, (ii) the level of regulatory capital, (iii) the trade-off between performance and interpretability, (iv) international banking competition, and (v) the governance and operational risks of machine learning models. |
Keywords: | Banking; Machine Learning; Artificial Intelligence; Internal models; Prudential regulation; Regulatory capital |
JEL: | C10 C38 C55 G21 G29 |
Date: | 2023–06–25 |
URL: | https://d.repec.org/n?u=RePEc:ebg:heccah:1480 |
By: | D'Alessandro, Francesco; Santarelli, Enrico; Vivarelli, Marco |
Abstract: | In this paper we integrate the insights of the Knowledge Spillover Theory of Entrepreneurship and Innovation (KSTE+I) with Schumpeter's idea that innovative entrepreneurs creatively apply available local knowledge, possibly mediated by Marshallian, Jacobian and Porter spillovers. In more detail, in this study we assess the degree of pervasiveness and the level of opportunities brought about by AI technologies by testing the possible correlation between the regional AI knowledge stock and the number of new innovative ventures (that is, startups patenting in any technological field in the year of their foundation). Empirically, focusing on 287 NUTS-2 European regions, we test whether the local AI knowledge stock exerts an enabling role in fostering innovative entry within AI-related local industries (AI technologies as focused enablers) as well as within non-AI-related local industries (AI technologies as generalised enablers). Results from Negative Binomial fixed-effect and Poisson fixed-effect regressions (controlling for a variety of concurrent drivers of entrepreneurship) reveal that the local AI knowledge stock does promote the spread of innovative startups, thus supporting both the KSTE+I approach and the enabling role of AI technologies; however, this relationship is confirmed only for high-tech/AI-related industries. |
Keywords: | KSTE+I, Artificial Intelligence, innovative entry, enabling technologies |
JEL: | O33 L26 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:zbw:glodps:1473 |
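A minimal sketch of the count-data setup named in the abstract: a Poisson regression of regional innovative-startup counts on an AI knowledge stock with region and year fixed effects, here fit on synthetic data with statsmodels. Variable names and the data-generating process are placeholders, not the paper's covariates.

```python
# Minimal sketch: Poisson regression of startup counts on an AI knowledge
# stock with region and year fixed effects, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
regions, years = range(30), range(2010, 2020)
df = pd.DataFrame([(r, y) for r in regions for y in years],
                  columns=["region", "year"])
df["ai_knowledge_stock"] = rng.gamma(2.0, 1.0, len(df))
df["startups"] = rng.poisson(np.exp(0.3 * df["ai_knowledge_stock"]))

# Fixed effects entered as dummies via the formula interface.
model = smf.poisson("startups ~ ai_knowledge_stock + C(region) + C(year)",
                    data=df).fit(disp=0)
print("estimated elasticity-like coefficient:",
      model.params["ai_knowledge_stock"])
```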
By: | Sándor Juhász; Johannes Wachs; Jermain Kaminski; César A. Hidalgo |
Abstract: | Despite the growing importance of the digital sector, research on economic complexity and its implications continues to rely mostly on administrative records, e.g. data on exports, patents, and employment, that fail to capture the nuances of the digital economy. In this paper we use data on the geography of programming languages used in open-source software projects to extend economic complexity ideas to the digital economy. We estimate a country's software economic complexity and show that it complements the ability of measures of complexity based on trade, patents, and research papers to account for international differences in GDP per capita, income inequality, and emissions. We also show that open-source software follows the principle of relatedness, meaning that a country's software entries and exits are explained by specialization in related programming languages. We conclude by exploring the diversification and development of countries in open-source software in the context of large language models. Together, these findings help extend economic complexity methods and their policy considerations to the digital sector. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.13880 |
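A minimal sketch of two building blocks the abstract relies on: revealed comparative advantage (RCA) in programming languages and a relatedness density measuring how close a country's existing specializations are to each language. The toy country-language activity counts are assumptions.

```python
# Minimal sketch: revealed comparative advantage (RCA) and relatedness
# density over a toy country-by-programming-language activity matrix.
import numpy as np
import pandas as pd

# Rows: countries, columns: programming languages, values: activity counts.
counts = pd.DataFrame(
    [[100, 90, 5], [20, 60, 50], [80, 10, 5], [5, 10, 70]],
    index=["A", "B", "C", "D"],
    columns=["python", "javascript", "fortran"],
)

# RCA: a country's share in a language relative to the language's global share.
share_cl = counts.div(counts.sum(axis=1), axis=0)
share_l = counts.sum(axis=0) / counts.values.sum()
rca = share_cl / share_l
M = (rca >= 1).astype(int)                     # binary specialization matrix

# Language proximity: co-occurrence of specializations across countries,
# normalized by the larger ubiquity (min-conditional-probability style).
co = M.T @ M
diag = np.diag(co).astype(float)
proximity = co / np.maximum.outer(diag, diag)
np.fill_diagonal(proximity.values, 0)

# Relatedness density: how close each country's portfolio is to each language.
density = (M @ proximity) / proximity.sum(axis=0)
print(density.round(2))
```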
By: | Ding, Jeffrey |
Abstract: | Scholars connect China’s technology policy to government interventions that target particular industrial sectors. But not all sectors are created equal. Relying on evidence from China’s Artificial Intelligence (AI) policies, this paper develops a framework for assessing China’s approach toward promoting a technological domain that permeates across many industrial sectors: general-purpose technologies. It shows that China’s AI strategy diverges from expectations derived from typical characterizations of China’s industrial policy, which stress an emphasis on self-sufficiency, support for a limited number of national champions, and the essential role of military investment and demand for progress in dual-use domains. |
Keywords: | Social and Behavioral Sciences, Emerging technology, geopolitics, economic security, artificial intelligence |
Date: | 2022–12–09 |
URL: | https://d.repec.org/n?u=RePEc:cdl:globco:qt1sb844ws |
By: | Si, Yafei; Yang, Yuyi; Wang, Xi; An, Ruopeng; Zu, Jiaqi; Chen, Xi; Fan, Xiaojing; Gong, Sen |
Abstract: | Using simulated patients to mimic nine established non-communicable and infectious diseases over 27 trials, we assess ChatGPT's effectiveness and reliability in diagnosing and treating common diseases in low- and middle-income countries. We find that ChatGPT's performance varied within a single disease, despite a high level of accuracy in both diagnosis (74.1%) and medication prescription (84.5%). Additionally, ChatGPT recommended unnecessary or harmful medications at a concerning rate (85.2%), even when the diagnosis was correct. Finally, ChatGPT performed better in managing non-communicable diseases than infectious ones. These results highlight the need for cautious AI integration into healthcare systems to ensure quality and safety. |
Keywords: | ChatGPT, Large Language Models, Generative AI, Simulated Patient, Healthcare, Quality, Safety, Low- and Middle-Income Countries |
JEL: | C0 I10 I11 C90 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:zbw:glodps:1472 |