New Economics Papers on Artificial Intelligence |
By: | OECD |
Abstract: | This report focuses on regulatory sandboxes in artificial intelligence (AI), in which authorities engage firms to test innovative products or services that challenge existing legal frameworks. Participating firms obtain a waiver from specific legal provisions or compliance processes in order to innovate. The report highlights positive impacts, such as increased venture capital investment in fintech start-ups, and points out challenges, risks, and policy considerations for AI sandboxes, emphasizing interdisciplinary cooperation, the building of AI expertise, regulatory interoperability, and trade policy. It also addresses the importance of comprehensive criteria for determining eligibility and assessing trials, as well as the impact on innovation and competition. |
Date: | 2023–07–13 |
URL: | http://d.repec.org/n?u=RePEc:oec:stiaab:356-en&r=ain |
By: | Stefania Albanesi; António Dias da Silva; Juan F. Jimeno; Ana Lamo; Alena Wabitsch |
Abstract: | We examine the link between labour market developments and new technologies such as artificial intelligence (AI) and software in 16 European countries over the period 2011-2019. Using data for occupations at the 3-digit level in Europe, we find that, on average, employment shares have increased in occupations more exposed to AI. This is particularly the case for occupations with a relatively higher proportion of younger and skilled workers, in line with the theory of skill-biased technological change. While there is heterogeneity across countries, only a few show a decline in the employment shares of occupations more exposed to AI-enabled automation. This cross-country heterogeneity appears linked to the pace of technology diffusion and education, but also to the level of product market regulation (competition) and employment protection legislation. In contrast to the findings for employment, we find little evidence of a relationship between wages and potential exposure to new technologies. [An illustrative regression sketch follows this entry.] |
JEL: | E24 J2 J21 J31 O30 O33 |
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:31357&r=ain |
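The Albanesi et al. abstract above relates occupation-level changes in employment shares to measures of AI exposure across European countries. The following is a minimal, purely illustrative sketch of that kind of regression; the dataset, column names, and specification are hypothetical and do not reproduce the authors' data or methodology.

```python
# Illustrative sketch only: a stylized occupation-level regression of the kind
# described in the abstract. Data, columns, and specification are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per (country, 3-digit occupation).
df = pd.DataFrame({
    "country": ["AT", "AT", "BE", "BE", "DE", "DE"],
    "occupation": ["251", "341", "251", "341", "251", "341"],
    "emp_share_change": [0.4, -0.1, 0.3, 0.0, 0.5, -0.2],  # pp change, 2011-2019
    "ai_exposure": [0.8, 0.2, 0.8, 0.2, 0.8, 0.2],          # AI exposure score
    "share_young_skilled": [0.6, 0.3, 0.55, 0.35, 0.65, 0.3],
})

# OLS of employment-share changes on AI exposure, with country fixed effects
# and heteroskedasticity-robust standard errors.
model = smf.ols(
    "emp_share_change ~ ai_exposure + share_young_skilled + C(country)", data=df
).fit(cov_type="HC1")
print(model.summary())
```

A positive coefficient on ai_exposure would correspond to the pattern the authors report: employment shares rising in occupations more exposed to AI.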
By: | Astrid Bertrand (IP Paris - Institut Polytechnique de Paris, DIVA - Design, Interaction, Visualization & Applications - LTCI - Laboratoire Traitement et Communication de l'Information - IMT - Institut Mines-Télécom [Paris] - Télécom Paris); James Eagan (DIVA - Design, Interaction, Visualization & Applications - LTCI - Laboratoire Traitement et Communication de l'Information - IMT - Institut Mines-Télécom [Paris] - Télécom Paris, IP Paris - Institut Polytechnique de Paris, INFRES - Département Informatique et Réseaux - Télécom ParisTech); Winston Maxwell (SES - Département Sciences Economiques et Sociales - Télécom ParisTech, ECOGE - Economie Gestion - I3 SES - Institut interdisciplinaire de l’innovation de Telecom Paris - Télécom ParisTech - I3 - Institut interdisciplinaire de l’innovation - CNRS - Centre National de la Recherche Scientifique, IP Paris - Institut Polytechnique de Paris, Télécom ParisTech, I3 SES - Institut interdisciplinaire de l’innovation de Telecom Paris - Télécom ParisTech - I3 - Institut interdisciplinaire de l’innovation - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | Robo-advisors are democratizing access to life insurance by enabling fully online underwriting. In Europe, financial legislation requires that the reasons for recommending a life insurance plan be explained according to the characteristics of the client, in order to empower the client to make a "fully informed decision". In this study, conducted in France, we seek to understand whether legal requirements for feature-based explanations actually help users in their decision-making. We conduct a qualitative study to characterize the explainability needs formulated by non-expert users and by regulators with expertise in customer protection. We then run a large-scale quantitative study using Robex, a simplified robo-advisor built using ecological interface design that delivers recommendations with explanations in hybrid textual and visual formats: either "dialogic" (more textual) or "graphical" (more visual). We find that providing feature-based explanations does not improve appropriate reliance or understanding compared to providing no explanation at all. In addition, dialogic explanations increase users' trust in the recommendations of the robo-advisor, sometimes to the users' detriment. This real-world scenario illustrates how XAI can address information asymmetry in complex areas such as finance. This work has implications for other critical, AI-based recommender systems, where the General Data Protection Regulation (GDPR) may require similar provisions for feature-based explanations. |
Keywords: | explainability, intelligibility, AI regulation, financial inclusion |
Date: | 2023–06–12 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-04125939&r=ain |
By: | Alex Kim; Maximilian Muhn; Valeri Nikolaev |
Abstract: | Generative AI tools such as ChatGPT can fundamentally change the way investors process information. We probe the economic usefulness of these tools in summarizing complex corporate disclosures, using the stock market as a laboratory. The unconstrained summaries are dramatically shorter, often by more than 70% relative to the originals, while their information content is amplified. When a document has a positive (negative) sentiment, its summary becomes more positive (negative). More importantly, the summaries are more effective at explaining stock market reactions to the disclosed information. Motivated by these findings, we propose a measure of information "bloat". We show that bloated disclosure is associated with adverse capital market consequences, such as lower price efficiency and higher information asymmetry. Finally, we show that the model is effective at constructing targeted summaries that identify firms' (non-)financial performance and risks. Collectively, our results indicate that generative language modeling adds considerable value for investors with information processing constraints. [A sketch of one possible bloat measure follows this entry.] |
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2306.10224&r=ain |
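The abstract above proposes a measure of information "bloat" tied to how much an unconstrained LLM summary compresses the original disclosure. The sketch below shows one plausible operationalization, bloat as the relative length reduction of the summary; the summarize() stand-in and this exact definition are assumptions for illustration, not necessarily the paper's measure.

```python
# Illustrative sketch: "bloat" as the relative length reduction achieved by an
# unconstrained summary. summarize() is a stand-in for a generative LLM call,
# and this definition is an assumption rather than the paper's exact measure.
def summarize(document: str) -> str:
    # Stand-in for an LLM summary: keep only the first two sentences so the
    # example runs without any API access.
    sentences = document.split(". ")
    return ". ".join(sentences[:2])

def bloat(document: str) -> float:
    """Fraction of the original text that the summarizer judges redundant."""
    summary = summarize(document)
    return 1.0 - len(summary) / len(document)

filing = "Revenue grew modestly. Costs were stable. " + "Boilerplate language. " * 50
print(f"bloat = {bloat(filing):.2f}")  # close to 1 for a highly repetitive filing
```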
By: | Xinli Yu; Zheng Chen; Yuan Ling; Shujing Dong; Zongyi Liu; Yanbin Lu |
Abstract: | This paper presents a novel study on harnessing the knowledge and reasoning abilities of Large Language Models (LLMs) for explainable financial time series forecasting. Applying machine learning models to financial time series raises several challenges, including the difficulty of cross-sequence reasoning and inference, the hurdle of incorporating multi-modal signals from historical news, financial knowledge graphs, and the like, and the issue of interpreting and explaining model results. In this paper, we focus on NASDAQ-100 stocks, making use of publicly accessible historical stock price data, company metadata, and historical economic/financial news. We conduct experiments to illustrate the potential of LLMs in offering a unified solution to these challenges. Our experiments include zero-shot/few-shot inference with GPT-4 and instruction-based fine-tuning of a public LLM, Open LLaMA. We demonstrate that our approach outperforms several baselines, including the widely used classic ARMA-GARCH model and a gradient-boosting tree model. Through the performance comparisons and a few examples, we find that LLMs can make well-reasoned decisions by reasoning over information from both textual news and price time series, extracting insights, leveraging cross-sequence information, and drawing on the knowledge embedded within the model. Additionally, we show that a publicly available LLM such as Open LLaMA, after fine-tuning, can comprehend the instruction to generate explainable forecasts and achieve reasonable performance, albeit relatively inferior to GPT-4. [A sketch of the ARMA-GARCH baseline follows this entry.] |
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2306.11025&r=ain |
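The abstract above cites the classic ARMA-GARCH model as one of the baselines the LLM approach is compared against. Below is a minimal sketch of fitting a model in that family (an AR(1) mean with GARCH(1, 1) volatility) using the arch package; the simulated return series is a stand-in for actual NASDAQ-100 data and nothing here reproduces the paper's setup.

```python
# Illustrative ARMA-GARCH-family baseline: AR(1) mean + GARCH(1,1) volatility
# fitted to (simulated) daily percentage returns.
import numpy as np
from arch import arch_model

rng = np.random.default_rng(0)
returns = 100 * rng.normal(0.0, 0.01, size=1000)  # stand-in for real return data

model = arch_model(returns, mean="AR", lags=1, vol="GARCH", p=1, q=1)
res = model.fit(disp="off")
print(res.summary())

# One-step-ahead forecasts of the conditional mean and variance.
forecast = res.forecast(horizon=1)
print(forecast.mean.iloc[-1], forecast.variance.iloc[-1])
```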
By: | Thibault Collin (Université Paris Dauphine-PSL - PSL - Université Paris sciences et lettres) |
Abstract: | The general scope of this thesis is to further study the application of artificial neural networks to hedging rainbow options. Because of their inherently complex features, such as the correlated paths of their underlying assets' prices or their absence from traded markets, finding an optimal hedging strategy for rainbow options is difficult, and traders usually have to resort to models and methods they know are inaccurate. An alternative approach involving deep learning, however, recently surfaced in the context of hedging vanilla options [6], and researchers have started to see potential in the use of neural networks for options endowed with exotic features [5], [12], [22]. The key to a near-perfect hedge for contingent claims might be hidden behind the training of neural network algorithms [6], and the scope of this research is to investigate how those innovative hedging techniques can be extended to rainbow options [22], building on recent research [21], and to compare our results with those of the models and techniques currently used by traders, such as Monte Carlo path simulations. To accomplish that, we develop an algorithm capable of designing an innovative and optimal hedging strategy for rainbow options, using intuition developed for hedging vanilla options [21] and pricing exotics [5]. Although past literature suggests the approach can be efficient and cost-effective, the opaque nature of artificial neural networks makes it difficult for a deep learning algorithm to be fully trusted as a sole hedging method; it is better seen as an additional technique alongside more established models. [A simplified deep-hedging sketch follows this entry.] |
Keywords: | Quantitative finance, deep hedging, deep learning, machine learning, rainbow options, call options, worst-of call options, Black-Scholes, geometric Brownian motion |
Date: | 2023–06–04 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-04060013&r=ain |
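The thesis abstract above concerns neural-network hedging of rainbow options such as worst-of calls. The PyTorch sketch below is a deliberately simplified, hypothetical illustration of the deep-hedging idea, not the thesis's actual architecture: simulate two correlated geometric Brownian motion paths, let a small network map the current state to hedge ratios in the two assets, and train it to minimize the variance of the terminal hedging error for a worst-of call.

```python
# Simplified deep-hedging sketch for a worst-of call on two correlated GBM
# assets. Hypothetical illustration only; parameters and architecture are
# arbitrary and do not reproduce the thesis.
import torch

torch.manual_seed(0)
n_paths, n_steps, dt = 5_000, 30, 1 / 365
s0, sigma, rho, strike = 100.0, 0.2, 0.5, 95.0

def simulate_paths():
    # Correlated Brownian increments, then exact GBM log-price updates.
    z = torch.randn(n_paths, n_steps, 2)
    z[:, :, 1] = rho * z[:, :, 0] + (1 - rho ** 2) ** 0.5 * z[:, :, 1]
    log_increments = -0.5 * sigma ** 2 * dt + sigma * dt ** 0.5 * z
    return s0 * torch.exp(torch.cumsum(log_increments, dim=1))  # (paths, steps, 2)

net = torch.nn.Sequential(            # maps (t, S1/S0, S2/S0) -> two hedge ratios
    torch.nn.Linear(3, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 32), torch.nn.ReLU(),
    torch.nn.Linear(32, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for epoch in range(100):
    s = simulate_paths()
    pnl = torch.zeros(n_paths)
    for t in range(n_steps - 1):
        state = torch.cat([torch.full((n_paths, 1), t * dt), s[:, t, :] / s0], dim=1)
        deltas = net(state)            # positions held over (t, t+1)
        pnl = pnl + (deltas * (s[:, t + 1, :] - s[:, t, :])).sum(dim=1)
    payoff = torch.clamp(s[:, -1, :].min(dim=1).values - strike, min=0.0)
    loss = (payoff - pnl).var()        # variance of the hedging error
    opt.zero_grad()
    loss.backward()
    opt.step()

print("residual hedge-error std:", loss.sqrt().item())
```

The variance-of-hedging-error objective is one common choice in the deep-hedging literature; quantile- or utility-based objectives could be substituted in the same training loop.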
By: | Kai Feng; Han Hong; Ke Tang; Jingyuan Wang |
Abstract: | This paper proposes a statistical framework with which artificial intelligence can improve human decision making. The performance of each human decision maker is first benchmarked against machine predictions; we then replace the decisions made by a subset of the decision makers with the recommendations of the proposed artificial intelligence algorithm. Using a large nationwide dataset of pregnancy outcomes and doctor diagnoses from pre-pregnancy checkups of reproductive-age couples, we experiment with both a heuristic frequentist approach and a Bayesian posterior loss function approach, with an application to abnormal birth detection. We find that, on a test dataset, our algorithm achieves a higher overall true positive rate and a lower false positive rate than the diagnoses made by doctors alone. We also find that the diagnoses of doctors from rural areas are more frequently replaceable, suggesting that artificial intelligence-assisted decision making tends to improve precision more in less developed regions. [An illustrative sketch of the replacement logic follows this entry.] |
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2306.11689&r=ain |
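The abstract above describes benchmarking each human decision maker against machine predictions and replacing the decisions of a subset of them with the algorithm's recommendation. The sketch below illustrates one simple, hypothetical version of that replacement logic (substitute the model for a decision maker when it attains a higher true-positive rate and a lower false-positive rate on that decision maker's own cases); the paper's heuristic-frequentist and Bayesian posterior-loss criteria are more involved, and all data here are simulated.

```python
# Hypothetical sketch of the replacement logic described in the abstract.
# Data are simulated; the paper's actual criteria are more involved.
import numpy as np
import pandas as pd

def rates(pred, truth):
    """Return (true positive rate, false positive rate)."""
    tpr = ((pred == 1) & (truth == 1)).sum() / max((truth == 1).sum(), 1)
    fpr = ((pred == 1) & (truth == 0)).sum() / max((truth == 0).sum(), 1)
    return tpr, fpr

rng = np.random.default_rng(0)
n = 3000
df = pd.DataFrame({
    "doctor": rng.integers(0, 20, n),   # hypothetical decision makers
    "truth": rng.integers(0, 2, n),     # abnormal-outcome indicator
})
# In this toy world doctors are right 70% of the time, the model 80%.
df["doctor_pred"] = np.where(rng.random(n) < 0.7, df["truth"], 1 - df["truth"])
df["model_pred"] = np.where(rng.random(n) < 0.8, df["truth"], 1 - df["truth"])

df["final_pred"] = df["doctor_pred"]
for doc, cases in df.groupby("doctor"):
    d_tpr, d_fpr = rates(cases["doctor_pred"], cases["truth"])
    m_tpr, m_fpr = rates(cases["model_pred"], cases["truth"])
    if m_tpr >= d_tpr and m_fpr <= d_fpr:   # model dominates this decision maker
        df.loc[cases.index, "final_pred"] = cases["model_pred"]

print("doctor-only (TPR, FPR):", rates(df["doctor_pred"], df["truth"]))
print("with replacement      :", rates(df["final_pred"], df["truth"]))
```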
By: | Gaël Le Mens; Balázs Kovács; Michael T. Hannan; Guillem Pros |
Abstract: | Recently, the world's attention has been captivated by Large Language Models (LLMs) thanks to OpenAI's ChatGPT, which rapidly proliferated as an app powered by GPT-3 and now its successor, GPT-4. If these LLMs produce human-like text, the semantic spaces they construct likely align with those used by humans for interpreting and generating language. This suggests that social scientists could use these LLMs to construct measures of semantic similarity that match human judgment. In this article, we provide an empirical test of this intuition. We use GPT-4 to construct a new measure of typicality: the similarity of a text document to a concept or category. We evaluate its performance against other model-based typicality measures in terms of their correspondence with human typicality ratings. We conduct this comparative analysis in two domains: the typicality of books in literary genres (using an existing dataset of book descriptions) and the typicality of tweets authored by US Congress members in the Democratic and Republican parties (using a novel dataset). The GPT-4 typicality measure not only meets or exceeds the current state of the art but does so without any model training. This is a breakthrough, because the previous state-of-the-art measure required fine-tuning a model (a BERT text classifier) on hundreds of thousands of text documents to achieve its performance. Our comparative analysis underscores the need for systematic empirical validation of measures based on LLMs: several measures based on other recent LLMs achieve at best a moderate correspondence with human judgments. [A prompt-based sketch follows this entry.] |
Keywords: | categories, concepts, deep learning, typicality, GPT, ChatGPT, BERT, similarity |
JEL: | C18 C52 |
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:upf:upfgen:1864&r=ain |
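The abstract above describes using GPT-4, without any fine-tuning, to score how typical a text is of a concept or category. The prompt-based sketch below conveys the general idea; the prompt wording, the 0-100 scale, and the query_llm() placeholder are assumptions for illustration, not the authors' exact protocol.

```python
# Illustrative prompt-based typicality measure. query_llm() is a placeholder
# for a GPT-4 (or other LLM) call; the prompt wording and 0-100 scale are
# assumptions, not the paper's exact protocol.
PROMPT_TEMPLATE = (
    "On a scale from 0 (not at all typical) to 100 (extremely typical), "
    "how typical is the following text of the category '{category}'?\n\n"
    "Text: {text}\n\n"
    "Answer with a single integer."
)

def query_llm(prompt: str) -> str:
    """Placeholder: send the prompt to GPT-4 via the API client of your choice."""
    raise NotImplementedError

def typicality(text: str, category: str) -> float:
    """Score the typicality of `text` for `category` on a 0-100 scale."""
    reply = query_llm(PROMPT_TEMPLATE.format(category=category, text=text))
    return float(reply.strip())

# Such scores could then be compared with human typicality ratings,
# e.g. via rank correlation, mirroring the validation exercise in the paper.
```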
By: | Károly Fazekas (Centre for Economic and Regional Studies – Institute of Economics) |
Abstract: | This paper provides a summary of the latest advancements in generative artificial intelligence using large language models over the past six months. The impact of this breakthrough remains uncertain, but it is evident that GPT is a general-purpose technology (GPT) that will significantly alter various aspects of our economy and society in ways that are yet to be fully comprehended. While it is essential for governments to regulate GPT technology, it is inevitable that the technology will continue to expand and evolve at a rapid pace. There is no doubt that every corner of the new world, if it exists at all, will be covered by millions of forms of artificial intelligence. The taming of AIs, and successful social and personal cooperation with domesticated AIs, could ensure our survival and prosperity in that world. Whether AIs capable of and willing to cooperate will populate the new world is neither an individual nor a national matter. But how a country and its people fare in that new world is much more so. |
Keywords: | innovation and invention: processes and incentives; technological change: choices and consequences; diffusion processes; technological innovation |
JEL: | O31 O33 Q55 |
Date: | 2023–06 |
URL: | http://d.repec.org/n?u=RePEc:has:discpr:2314&r=ain |