on Computational Economics
Issue of 2024‒03‒11
28 papers chosen by
By: | Mestiri, Sami |
Abstract: | In recent years, the financial sector has seen an increase in the use of machine learning models in banking and insurance contexts, and advanced analytics teams in the financial community implement these models regularly. In this paper, we analyse the limitations of machine learning methods and then provide some suggestions on the choice of methods in financial applications. We refer the reader to the R libraries that can be used to implement these machine learning methods. |
Keywords: | Financial applications; Machine learning; R software. |
JEL: | C45 C5 G23 |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:119998&r=cmp |
By: | Magassouba, Aboubacar Sidiki; Diallo, Abdourahmane; Nkurunziza, Armel; Tchole, Ali Issakou Malam; Touré, Almamy Amara; Magassouba, Mamoudou; Sylla, Younoussa; diallo, Mamadou Abdoulaye R; Nabé, Aly Badara |
Abstract: | Background: Contraceptive continuation is crucial for assessing the quality and effectiveness of family planning programs, yet it remains challenging, particularly in sub-Saharan Africa. Traditional statistical methods may only partially capture complex relationships and interactions among variables. Machine learning, an artificial intelligence domain, offers the potential to handle large and intricate datasets, uncover hidden patterns, make accurate predictions, and provide interpretable results. Objective: Using data from the last four Demographic and Health Surveys, our study utilised a machine learning model to predict contraceptive continuation among women aged 15-49 in West Africa. Additionally, we employed SHAP (SHapley Additive exPlanations) analysis to identify and rank the most influential features for the prediction. Methods: We employed LightGBM, a gradient-boosting framework that uses tree-based learning algorithms, to construct our predictive model. Our multilevel LightGBM model accounted for country-level variations while controlling for individual variables. Furthermore, optimization techniques were utilized to enhance performance and computational efficiency. Hyperparameter tuning was conducted using Optuna, and model performance was evaluated with accuracy and area under the curve (AUC) metrics. SHAP analysis was employed to elucidate the model's predictions and feature impacts. Results: Our final model achieved an accuracy of 74% and an AUC of 82%, highlighting its effectiveness in predicting contraceptive continuation among women aged 15-49. The most influential features for prediction were the number of children under 5 in the household, age, desire for more children, current family planning method type, total children ever born, household relationship structure, recent health facility visits, country, and husband's desire for children. Conclusion: Machine learning is a valuable tool for accurately predicting and interpreting contraceptive continuation among women in sub-Saharan Africa. The identified influential features offer insights for designing interventions tailored to different groups, catering to their specific needs and preferences, and ultimately improving reproductive health outcomes. (A code sketch of this modelling pipeline follows this entry.) |
Keywords: | Contraceptive continuation, Machine learning, Sub-Saharan Africa, Predictive model |
Date: | 2024–01–26 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:u38sh&r=cmp |
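A rough sketch of the LightGBM + Optuna + SHAP pipeline described in the entry above. The synthetic data stand in for DHS covariates and the binary continuation outcome, and the search space is an illustrative assumption, not the authors' specification.

```python
# Hedged sketch: LightGBM classifier, Optuna tuning, SHAP feature ranking.
import lightgbm as lgb
import numpy as np
import optuna
import shap
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 9))                      # stand-in for survey features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=2000) > 0).astype(int)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)

def objective(trial):
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
        "num_leaves": trial.suggest_int("num_leaves", 16, 256),
        "min_child_samples": trial.suggest_int("min_child_samples", 5, 100),
    }
    model = lgb.LGBMClassifier(n_estimators=300, **params).fit(X_train, y_train)
    return roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)

best = lgb.LGBMClassifier(n_estimators=300, **study.best_params).fit(X_train, y_train)
print("AUC:", round(roc_auc_score(y_val, best.predict_proba(X_val)[:, 1]), 3))

explainer = shap.TreeExplainer(best)        # rank the features driving predictions
shap.summary_plot(explainer.shap_values(X_val), X_val)
```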
By: | Yulu Gong; Mengran Zhu; Shuning Huo; Yafei Xiang; Hanyi Yu |
Abstract: | In the age of the Internet, people's lives increasingly depend on today's network technology. However, network technology is a double-edged sword: it brings convenience but also poses many security challenges. Maintaining network security and protecting the legitimate interests of users is at the heart of network construction, and threat detection is an important part of a complete and effective defense system. In the field of network information security, network attack and network protection techniques evolve in a spiral of mutual escalation, and effectively detecting unknown threats is one of the central concerns of network protection. Currently, network threat detection is usually based on rules and traditional machine learning methods that rely on hand-crafted rules or common spatiotemporal features; these approaches do not scale to large data volumes, and the emergence of unknown threats degrades the detection accuracy of the original model. With this in mind, this paper uses deep learning for advanced threat detection to improve cybersecurity resilience in the financial industry. Many network security researchers have shifted their focus to exception-based intrusion detection techniques. These techniques mainly use statistical machine learning methods: collecting normal program and network behavior data, extracting multidimensional features, and training machine learning models on this basis (commonly naive Bayes, decision trees, support vector machines, random forests, etc.). In the detection phase, program code or network behavior that deviates from normal values beyond a tolerance is considered malicious code or attack behavior. (A code sketch of this detection scheme follows this entry.) |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.09820&r=cmp |
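A minimal sketch of the exception-based detection scheme described above: fit a model on normal-behavior features only, then flag observations whose anomaly score falls beyond a tolerance. The isolation-forest detector, synthetic features, and threshold are illustrative assumptions rather than the paper's implementation.

```python
# Hedged sketch: train on normal behavior, flag deviations beyond a tolerance.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal = rng.normal(size=(1000, 8))            # stand-in for normal-behavior features
detector = IsolationForest(random_state=0).fit(normal)

new_traffic = rng.normal(size=(5, 8))
new_traffic[0] += 6.0                          # one clearly deviant observation

scores = detector.score_samples(new_traffic)   # lower = more anomalous
tolerance = np.quantile(detector.score_samples(normal), 0.01)
is_malicious = scores < tolerance              # deviation beyond the tolerance
print(is_malicious)
```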
By: | Kelvin J. L. Koa; Yunshan Ma; Ritchie Ng; Tat-Seng Chua |
Abstract: | Explaining stock predictions is generally a difficult task for traditional non-generative deep learning models, where explanations are limited to visualizing the attention weights on important texts. Today, Large Language Models (LLMs) present a solution to this problem, given their known capabilities to generate human-readable explanations for their decision-making process. However, the task of stock prediction remains challenging for LLMs, as it requires the ability to weigh the varying impacts of chaotic social texts on stock prices. The problem becomes progressively harder with the introduction of the explanation component, which requires LLMs to explain verbally why certain factors are more important than others. On the other hand, to fine-tune LLMs for such a task, one would need expert-annotated samples of explanations for every stock movement in the training set, which is expensive and impractical to scale. To tackle these issues, we propose our Summarize-Explain-Predict (SEP) framework, which utilizes a self-reflective agent and Proximal Policy Optimization (PPO) to let an LLM teach itself how to generate explainable stock predictions in a fully autonomous manner. The reflective agent learns how to explain past stock movements through self-reasoning, while the PPO trainer trains the model to generate the most likely explanations from input texts. The training samples for the PPO trainer are the responses generated during the reflective process, which eliminates the need for human annotators. Using our SEP framework, we fine-tune an LLM that can outperform both traditional deep-learning and LLM methods in prediction accuracy and Matthews correlation coefficient for the stock classification task. To demonstrate the generalization capability of our framework, we further test it on the portfolio construction task, and show its effectiveness through various portfolio metrics. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.03659&r=cmp |
By: | Phoebe Koundouri; Panagiotis Stavros Aslanidis; Konstantinos Dellis; Georgios Feretzakis; Angelos Plataniotis |
Abstract: | This paper introduces a machine learning (ML) based approach for integrating Human Security (HS) and Sustainable Development Goals (SDGs). Originating in the 1990s, HS focuses on strategic, people-centric interventions for ensuring comprehensive welfare and resilience. It closely aligns with the SDGs, together forming the foundation for global sustainable development initiatives. Our methodology involves mapping 44 reports to the 17 SDGs using expert-annotated keywords and advanced ML techniques, resulting in a web-based SDG mapping tool. This tool is specifically tailored for the HS-SDG nexus, enabling the analysis of 13 new reports and their connections to the SDGs. Through this, we uncover detailed insights and establish strong links between the reports and global objectives, offering a nuanced understanding of the interplay between HS and sustainable development. This research provides a scalable framework to explore the relationship between HS and the Paris Agenda, offering a practical, efficient resource for scholars and policymakers. |
Keywords: | Artificial Intelligence in Policy Making, Data Mining, Human-Centric Governance Strategies, Human Security, Machine Learning, Sustainable Development Goals |
Date: | 2024–02–20 |
URL: | http://d.repec.org/n?u=RePEc:aue:wpaper:2406&r=cmp |
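The mapping step described above (matching reports to SDGs via expert-annotated keywords) can be approximated with a simple TF-IDF similarity baseline. The keyword lists and report snippets below are illustrative stand-ins; the authors' tool presumably uses more advanced ML techniques.

```python
# Hedged baseline: score each report against per-SDG keyword lists.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sdg_keywords = {                      # illustrative stand-ins for expert annotations
    "SDG 1": "poverty social protection vulnerable",
    "SDG 13": "climate change mitigation adaptation emissions",
}
reports = ["Report text on climate adaptation finance ...",
           "Report text on poverty reduction programmes ..."]

vec = TfidfVectorizer()
matrix = vec.fit_transform(list(sdg_keywords.values()) + reports)
goals, docs = matrix[: len(sdg_keywords)], matrix[len(sdg_keywords):]
scores = cosine_similarity(docs, goals)   # report-by-goal relevance scores
for text, row in zip(reports, scores):
    print(text[:40], dict(zip(sdg_keywords, row.round(2))))
```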
By: | Sven Klaassen; Jan Teichert-Kluge; Philipp Bach; Victor Chernozhukov; Martin Spindler; Suhas Vijaykumar |
Abstract: | This paper explores the use of unstructured, multimodal data, namely text and images, in causal inference and treatment effect estimation. We propose a neural network architecture that is adapted to the double machine learning (DML) framework, specifically the partially linear model. An additional contribution of our paper is a new method to generate a semi-synthetic dataset which can be used to evaluate the performance of causal effect estimation in the presence of text and images as confounders. The proposed methods and architectures are evaluated on the semi-synthetic dataset and compared to standard approaches, highlighting the potential benefit of using text and images directly in causal studies. Our findings have implications for researchers and practitioners in economics, marketing, finance, medicine and data science in general who are interested in estimating causal quantities using non-traditional data. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.01785&r=cmp |
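For context, a minimal sketch of the partially linear double machine learning (DML) estimator that the proposed architecture plugs into, with a generic random-forest learner standing in for the paper's text/image networks (an assumption) and synthetic tabular data in place of multimodal confounders.

```python
# Hedged sketch: cross-fitted, Neyman-orthogonal partially linear DML.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 10))                 # stand-in for embedded text/image features
D = X[:, 0] + rng.normal(size=n)             # treatment, confounded by X
Y = 0.5 * D + np.sin(X[:, 1]) + rng.normal(size=n)

# Cross-fitted nuisance estimates: E[Y|X] and E[D|X].
m_hat = cross_val_predict(RandomForestRegressor(random_state=0), X, Y, cv=5)
e_hat = cross_val_predict(RandomForestRegressor(random_state=0), X, D, cv=5)

# Orthogonalized residual-on-residual estimate of the treatment effect theta.
Y_res, D_res = Y - m_hat, D - e_hat
theta_hat = (D_res @ Y_res) / (D_res @ D_res)
print(f"theta_hat = {theta_hat:.3f} (true 0.5)")
```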
By: | Johann Lussange; Stefano Vrizzi; Stefano Palminteri; Boris Gutkin |
Abstract: | Building on previous foundational work (Lussange et al. 2020), this study introduces a multi-agent reinforcement learning (MARL) model simulating crypto markets, calibrated to Binance's daily closing prices of 153 cryptocurrencies that were continuously traded between 2018 and 2022. Unlike previous agent-based models (ABM) or multi-agent systems (MAS), which relied on zero-intelligence agents or single-autonomous-agent methodologies, our approach endows agents with reinforcement learning (RL) techniques in order to model crypto markets. This integration is designed to emulate, with a bottom-up approach to complexity inference, both individual and collective agents, ensuring robustness in the recent volatile conditions of such markets and during the COVID-19 era. A key feature of our model is that its autonomous agents perform asset price valuation based on two sources of information: the market prices themselves, and approximations of the crypto assets' fundamental values beyond those market prices. Our MAS calibration against real market data allows for an accurate emulation of crypto market microstructure and the probing of key market behaviors, in both the bearish and bullish regimes of that particular time period. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.10803&r=cmp |
By: | Benjamin Patrick Evans; Sumitra Ganesh |
Abstract: | Agent-based models (ABMs) have shown promise for modelling various real-world phenomena incompatible with traditional equilibrium analysis. However, a critical concern is the manual definition of behavioural rules in ABMs. Recent developments in multi-agent reinforcement learning (MARL) offer a way to address this issue from an optimisation perspective, where agents strive to maximise their utility, eliminating the need for manual rule specification. This learning-focused approach aligns with established economic and financial models through the use of rational utility-maximising agents. However, this representation departs from the fundamental motivation for ABMs: that realistic dynamics emerging from bounded rationality and agent heterogeneity can be modelled. To resolve this apparent disparity between the two approaches, we propose a novel technique for representing heterogeneous processing-constrained agents within a MARL framework. The proposed approach treats agents as constrained optimisers with varying degrees of strategic skill, permitting departure from strict utility maximisation. Behaviour is learnt through repeated simulations with policy gradients to adjust action likelihoods. To allow efficient computation, we use parameterised shared policy learning with distributions of agent skill levels. Shared policy learning avoids the need for agents to learn individual policies yet still enables a spectrum of bounded rational behaviours. We validate our model's effectiveness using real-world data on a range of canonical n-agent settings, demonstrating significantly improved predictive capability. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.00787&r=cmp |
By: | Teodoro Baldazzi (Università Roma Tre); Luigi Bellomarini (Bank of Italy); Stefano Ceri (Politecnico di Milano); Andrea Colombo (Politecnico di Milano); Andrea Gentili (Bank of Italy); Emanuel Sallinger (TU Wien; University of Oxford) |
Abstract: | Large Language Models (LLMs) usually undergo a pre-training process on extensive collections of generic textual data, which are often publicly accessible. Pre-training enables LLMs to grasp language grammar, understand context, and convey a sense of common knowledge. Pre-training can be likened to machine learning training: the LLM is trained to predict the next basic text unit (e.g., a word or a sequence of words) based on the sequence of previously observed units. However, despite the impressive generalization and human-like interaction capabilities shown in Natural Language Processing (NLP) tasks, pre-trained LLMs exhibit significant limitations and provide poor accuracy when applied in specialized domains. Their main limitation stems from the fact that data used in generic pre-training often lacks knowledge related to the specific domain. To address these limitations, fine-tuning techniques are often employed to refine pre-trained models using domain-specific data. Factual information is extracted from company databases to create text collections for fine-tuning purposes. However, even in this case, results tend to be unsatisfactory in complex domains, such as financial markets and finance in general. Examining the issue from a different perspective, the Knowledge Representation and Reasoning (KRR) community has focused on producing formalisms, methods, and systems for representing complex Enterprise Knowledge. In particular, Enterprise Knowledge Graphs (EKGs) can leverage a combination of factual information in databases and business knowledge specified in a compact and formal fashion. EKGs serve the purpose of answering specific domain queries through established techniques such as ontological reasoning. Domain knowledge is represented in symbolic forms, e.g., logic-based languages, and used to draw consequential conclusions from the available data. However, while EKGs are applied successfully in many financial scenarios, they lack the flexibility, common sense, and linguistic orientation essential for NLP. This paper proposes an approach aimed at enhancing the utility of LLMs for specific applications, such as those related to financial markets. The approach involves guiding the fine-tuning process of LLMs through ontological reasoning on EKGs. In particular, we exploit the Vadalog system and its language, a state-of-the-art automated reasoning framework, to synthesize an extensive fine-tuning corpus from a logical formalization of domain knowledge in an EKG. Our contribution consists of a technique called verbalization, which transforms the set of inferences determined by ontological reasoning into a corpus for fine-tuning. We present a complete software architecture that applies verbalization to four NLP tasks: question answering, i.e., providing accurate responses in a specific domain in good prose; explanation, i.e., systematically justifying the conclusions drawn; translation, i.e., converting domain specifications into logical formalization; and description, i.e., explaining formal specifications in prose. We apply the approach and our architecture in the context of financial markets, presenting a proof of concept that highlights their advantages. (A toy sketch of the verbalization step follows this entry.) |
Keywords: | Ontological reasoning, Large language models, Knowledge graphs |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:bdi:wpmisp:mip_044_24&r=cmp |
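A toy sketch of the verbalization idea: facts inferred by ontological reasoning are rendered through natural-language templates into prompt/completion pairs for fine-tuning. The fact schema, templates, and file format below are illustrative assumptions, not the Vadalog system's actual output.

```python
# Hedged sketch: turn inferred facts into a fine-tuning corpus (JSONL).
import json

inferred_facts = [                     # stand-ins for reasoner-derived atoms
    ("Controls", "CompanyA", "CompanyB"),
    ("Controls", "CompanyB", "CompanyC"),
]
templates = {"Controls": "{0} controls {1} through its shareholding structure."}

corpus = []
for predicate, *args in inferred_facts:
    text = templates[predicate].format(*args)   # the "verbalized" inference
    corpus.append({"prompt": f"Who controls {args[1]}?", "completion": text})

with open("finetune_corpus.jsonl", "w") as f:
    for example in corpus:
        f.write(json.dumps(example) + "\n")
```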
By: | Cai, Hengrui; Shi, Chengchun; Song, Rui; Lu, Wenbin |
Abstract: | An individualized decision rule (IDR) is a decision function that assigns each individual a given treatment based on his/her observed characteristics. Most existing works in the literature consider settings with binary or finitely many treatment options. In this paper, we focus on the continuous treatment setting and propose jump interval-learning to develop an individualized interval-valued decision rule (I2DR) that maximizes the expected outcome. Unlike IDRs that recommend a single treatment, the proposed I2DR yields an interval of treatment options for each individual, making it more flexible to implement in practice. To derive an optimal I2DR, our jump interval-learning method estimates the conditional mean of the outcome given the treatment and the covariates via jump penalized regression, and derives the corresponding optimal I2DR based on the estimated outcome regression function. The regressor is allowed to be either linear, for clear interpretation, or a deep neural network, to model complex treatment-covariate interactions. To implement jump interval-learning, we develop a searching algorithm based on dynamic programming that efficiently computes the outcome regression function. Statistical properties of the resulting I2DR are established when the outcome regression function is either a piecewise or continuous function over the treatment space. We further develop a procedure to infer the mean outcome under the (estimated) optimal policy. Extensive simulations and a real data application to a Warfarin study demonstrate the empirical validity of the proposed I2DR. (A code sketch of the dynamic program follows this entry.) |
Keywords: | continuous treatment; dynamic programming; individualized interval-valued decision rule; jump interval-learning; precision medicine |
JEL: | C1 |
Date: | 2023–02–13 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:118231&r=cmp |
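The dynamic program at the core of jump penalized regression can be illustrated in stripped-down form: an optimal-partitioning recursion that fits a piecewise-constant function over a sorted treatment grid, trading squared error against a per-segment penalty. This is a simplification under stated assumptions (no covariates, constant segments), not the paper's full algorithm.

```python
# Hedged sketch: optimal partitioning for jump-penalized piecewise-constant fits.
import numpy as np

def jump_fit(t, y, gamma):
    """Fit piecewise-constant f(t) minimizing SSE + gamma * (number of segments)."""
    y = y[np.argsort(t)]
    n = len(y)
    cs, cq = np.cumsum(y), np.cumsum(y ** 2)

    def sse(i, j):                      # squared error of y[i:j] around its mean
        s = cs[j - 1] - (cs[i - 1] if i else 0.0)
        q = cq[j - 1] - (cq[i - 1] if i else 0.0)
        return q - s * s / (j - i)

    cost = np.full(n + 1, np.inf)       # cost[j]: best objective on first j points
    cost[0], prev = 0.0, np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):
            c = cost[i] + sse(i, j) + gamma
            if c < cost[j]:
                cost[j], prev[j] = c, i
    cuts, j = [], n                     # backtrack to recover the jump locations
    while j:
        cuts.append(j)
        j = prev[j]
    return sorted(cuts)

rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1.0, 200))
y = np.where(t < 0.5, 1.0, 3.0) + rng.normal(0.0, 0.3, 200)
print(jump_fit(t, y, gamma=2.0))        # expect one interior jump near t = 0.5
```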
By: | Ozili, Peterson K |
Abstract: | Artificial intelligence (AI) is a topic of interest in the finance literature. However, its role and implications for central banks have not received much attention. Using a discourse analysis method, this article identifies the benefits and risks of artificial intelligence in central banking. The benefits for central banks are that deploying artificial intelligence systems will encourage them to develop information technology (IT) and data science capabilities; assist them in detecting financial stability risks; aid the search for granular microeconomic/non-economic data from the internet so that such data can support policy decisions; enable the use of AI-generated synthetic data; and enable task automation in central banking operations. However, the use of artificial intelligence in central banking poses some risks: data privacy risk, the risk that using synthetic data could lead to false positives, a high risk of embedded bias, the difficulty for central banks of explaining AI-based policy decisions, and cybersecurity risk. The article also offers some considerations for the responsible use of artificial intelligence in central banking. |
Keywords: | central bank, artificial intelligence, financial stability, responsible AI, artificial intelligence model. |
JEL: | E51 E52 E58 |
Date: | 2024 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:120151&r=cmp |
By: | J. van den Berg, Gerard (IFAU and University of Groningen); Kunaschk, Max (IAB Nuremberg); Lang, Julia (IAB Nuremberg); Stephan, Gesine (IAB Nuremberg); Uhlendorff, Arne (CNRS and CREST, Paris) |
Abstract: | We analyze unique data on three sources of information on the probability of re-employment within 6 months (RE6) for the same individuals sampled from the inflow into unemployment. First, individuals were asked for their perceived probability of RE6. Second, their caseworkers revealed whether they expected RE6. Third, random-forest machine learning methods were trained on administrative data on the full inflow to predict individual RE6. We compare the predictive performance of these measures and consider how combinations improve this performance. We show that self-reported (and to a lesser extent caseworker) assessments sometimes contain information not captured by the machine learning algorithm. (A code sketch of such a comparison follows this entry.) |
Keywords: | Unemployment; expectations; prediction; random forest; unemployment insurance; information |
JEL: | C21 C41 C53 C55 J64 J65 |
Date: | 2023–11–10 |
URL: | http://d.repec.org/n?u=RePEc:hhs:ifauwp:2023_022&r=cmp |
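A minimal sketch of how the three predictors of RE6 might be compared on a common metric. The simulated covariates, self-reports, and caseworker assessments are stand-ins, and the combination rule is an illustrative choice, not the paper's.

```python
# Hedged sketch: compare random-forest, self-reported, and caseworker RE6 predictors.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 3000
X = rng.normal(size=(n, 12))                        # administrative covariates
latent = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
re6 = (latent > 0).astype(int)                      # re-employed within 6 months
self_report = 1 / (1 + np.exp(-(latent + rng.normal(size=n))))   # elicited prob.
caseworker = (latent + rng.normal(size=n) > 0).astype(int)       # binary expectation

rf_prob = cross_val_predict(RandomForestClassifier(random_state=0),
                            X, re6, cv=5, method="predict_proba")[:, 1]

for name, score in [("random forest", rf_prob),
                    ("self-report", self_report),
                    ("caseworker", caseworker),
                    ("rf + self-report", 0.5 * rf_prob + 0.5 * self_report)]:
    print(f"{name:16s} AUC = {roc_auc_score(re6, score):.3f}")
```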
By: | Joshua S. Gans |
Abstract: | New generative artificial intelligence (AI) models, including large language models and image generators, have created new challenges for copyright policy as such models may be trained on data that includes copy-protected content. This paper examines this issue from an economics perspective and analyses how different copyright regimes for generative AI will impact the quality of content generated as well as the quality of AI training. A key factor is whether generative AI models are small (with content providers capable of negotiations with AI providers) or large (where negotiations are prohibitive). For small AI models, it is found that giving original content providers copyright protection leads to superior social welfare outcomes compared to having no copyright protection. For large AI models, this comparison is ambiguous and depends on the level of potential harm to original content providers and the importance of content for AI training quality. However, it is demonstrated that an ex-post `fair use' type mechanism can lead to higher expected social welfare than traditional copyright regimes. |
JEL: | K20 O34 |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:32106&r=cmp |
By: | Jean Lee; Nicholas Stevens; Soyeon Caren Han; Minseok Song |
Abstract: | Large Language Models (LLMs) have shown remarkable capabilities across a wide variety of Natural Language Processing (NLP) tasks and have attracted attention from multiple domains, including financial services. Despite the extensive research into general-domain LLMs, and their immense potential in finance, Financial LLM (FinLLM) research remains limited. This survey provides a comprehensive overview of FinLLMs, including their history, techniques, performance, and opportunities and challenges. Firstly, we present a chronological overview of general-domain Pre-trained Language Models (PLMs) through to current FinLLMs, including the GPT-series, selected open-source LLMs, and financial LMs. Secondly, we compare five techniques used across financial PLMs and FinLLMs, including training methods, training data, and fine-tuning methods. Thirdly, we summarize the performance evaluations of six benchmark tasks and datasets. In addition, we provide eight advanced financial NLP tasks and datasets for developing more sophisticated FinLLMs. Finally, we discuss the opportunities and the challenges facing FinLLMs, such as hallucination, privacy, and efficiency. To support AI research in finance, we compile a collection of accessible datasets and evaluation benchmarks on GitHub. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.02315&r=cmp |
By: | Takuji Arai; Yuto Imai |
Abstract: | This paper develops a supervised deep-learning scheme to compute call option prices for the Barndorff-Nielsen and Shephard (BNS) model with a non-martingale asset price process having infinite active jumps. In our deep learning scheme, training data are generated through the Monte Carlo method developed by Arai and Imai (2024). Moreover, the BNS model includes many variables, which degrades deep-learning accuracy. Therefore, we create an additional input variable using the Black-Scholes formula, which improves the accuracy dramatically. (A code sketch of this feature construction follows this entry.) |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.00445&r=cmp |
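The feature-engineering idea above can be sketched as follows: compute the Black-Scholes call price and append it to the raw option parameters as an extra network input. The parameter grid is illustrative, and the downstream regressor and the BNS Monte Carlo targets are assumptions.

```python
# Hedged sketch: Black-Scholes price as an auxiliary input feature.
import numpy as np
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def make_features(S, K, T, r, sigma):
    bs = black_scholes_call(S, K, T, r, sigma)   # the extra input variable
    return np.column_stack([S, K, T, r, sigma, bs])

K = np.linspace(80.0, 120.0, 5)
X = make_features(np.full(5, 100.0), K, np.full(5, 1.0), np.full(5, 0.01), np.full(5, 0.2))
print(X.round(3))
# Targets y would be Monte Carlo BNS prices (Arai and Imai, 2024); a standard
# feed-forward regressor is then trained on (X, y).
```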
By: | Zhenglong Li; Vincent Tam; Kwan L. Yeung |
Abstract: | Deep learning and reinforcement learning (RL) approaches have in recent years been adopted as reactive agents that quickly learn and respond with new investment strategies for portfolio management under highly turbulent financial market environments. In many cases, due to the very complex correlations among various financial sectors and the fluctuating trends in different financial markets, a deep or reinforcement learning based agent can be biased towards maximising the total returns of the newly formulated investment portfolio while neglecting its potential risks under the turmoil of various market conditions in the global or regional sectors. Accordingly, a multi-agent and self-adaptive framework named MASA is proposed, in which a sophisticated multi-agent reinforcement learning approach is adopted through two cooperating and reactive agents to carefully and dynamically balance the trade-off between the overall portfolio returns and their potential risks. In addition, a flexible and proactive agent serving as a market observer is integrated into the MASA framework to provide estimated market trends as valuable feedback, letting the multi-agent RL approach quickly adapt to ever-changing market conditions. The empirical results clearly reveal the potential strengths of the proposed MASA framework against many well-known RL-based approaches on the challenging data sets of the CSI 300, Dow Jones Industrial Average and S&P 500 indexes over the past 10 years. More importantly, the proposed MASA framework sheds light on many possible directions for future investigation. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.00515&r=cmp |
By: | Bernhard Hientzsch |
Abstract: | We consider two data-driven approaches, Reinforcement Learning (RL) and Deep Trajectory-based Stochastic Optimal Control (DTSOC), for hedging a European call option without and with transaction costs according to a quadratic hedging P&L objective at maturity ("variance-optimal hedging" or "final quadratic hedging"). We study the performance of the two approaches under various market environments (modeled via the Black-Scholes and/or the log-normal SABR model) to understand their advantages and limitations. Without transaction costs and in the Black-Scholes model, both approaches match the performance of the variance-optimal Delta hedge. In the log-normal SABR model without transaction costs, they match the performance of the variance-optimal Bartlett's Delta hedge. Agents trained on Black-Scholes trajectories with matching initial volatility but used on SABR trajectories match the performance of Bartlett's Delta hedge in average cost, but show substantially wider variance. To apply RL approaches to these problems, P&L at maturity is written as a sum of step-wise contributions, and variants of RL algorithms are implemented that minimize the expectation of the second moments of such sums. (A code sketch of this decomposition follows this entry.) |
Date: | 2023–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2401.08600&r=cmp |
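The step-wise P&L decomposition mentioned above can be illustrated with a plain Black-Scholes delta hedge on simulated risk-neutral paths: the terminal hedging error accumulates as a sum of per-step contributions, and its second moment is the quadratic hedging objective. Parameters are illustrative, the interest rate is zero for simplicity, and transaction costs are omitted.

```python
# Hedged sketch: terminal P&L of a short call as a sum of step-wise contributions.
import numpy as np
from scipy.stats import norm

S0, K, T, sigma, n_steps, n_paths = 100.0, 100.0, 1.0, 0.2, 52, 10000
dt = T / n_steps                      # zero interest rate, for simplicity
rng = np.random.default_rng(0)

def bs_call(S, tau):
    d1 = (np.log(S / K) + 0.5 * sigma ** 2 * tau) / (sigma * np.sqrt(tau))
    return S * norm.cdf(d1) - K * norm.cdf(d1 - sigma * np.sqrt(tau))

S = np.full(n_paths, S0)
pnl = bs_call(S, T)                   # premium received for the short call
for k in range(n_steps):
    tau = T - k * dt
    d1 = (np.log(S / K) + 0.5 * sigma ** 2 * tau) / (sigma * np.sqrt(tau))
    delta = norm.cdf(d1)              # Black-Scholes hedge ratio
    S_next = S * np.exp(-0.5 * sigma ** 2 * dt + sigma * np.sqrt(dt) * rng.normal(size=n_paths))
    pnl += delta * (S_next - S)       # step-wise contribution to terminal P&L
    S = S_next
pnl -= np.maximum(S - K, 0.0)         # option payoff paid at maturity
print("mean P&L:", round(pnl.mean(), 4), " second moment:", round((pnl ** 2).mean(), 4))
```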
By: | Saizhuo Wang; Hang Yuan; Lionel M. Ni; Jian Guo |
Abstract: | Autonomous agents based on Large Language Models (LLMs) that devise plans and tackle real-world challenges have gained prominence. However, tailoring these agents for specialized domains like quantitative investment remains a formidable task. The core challenge involves efficiently building and integrating a domain-specific knowledge base for the agent's learning process. This paper introduces a principled framework to address this challenge, comprising a two-layer loop. In the inner loop, the agent refines its responses by drawing from its knowledge base, while in the outer loop, these responses are tested in real-world scenarios to automatically enhance the knowledge base with new insights. We demonstrate that our approach enables the agent to progressively approximate optimal behavior with provable efficiency. Furthermore, we instantiate this framework through an autonomous agent for mining trading signals named QuantAgent. Empirical results showcase QuantAgent's capability in uncovering viable financial signals and enhancing the accuracy of financial forecasts. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.03755&r=cmp |
By: | Matteo Coronese; Martina Occelli; Francesco Lamperti; Andrea Roventini |
Abstract: | Economic and population growth increasingly pressure the Earth system. Fertile soils are essential to ensure global food security, requiring high-yielding agro-technological regimes to cope with rising soil degradation and macro-nutrient deficiencies, which may be further exacerbated by climate change. In this work, we extend the AgriLOVE land-use agent-based model (Coronese et al., 2023) to investigate trade-offs in the transition between conventional and sustainable farming regimes in a smallholder economy exposed to explicit environmental boundaries. We investigate the ability of the system to favor a sustainable transition when prolonged conventional farming leads to soil depletion. First, we showcase the emergence of three endogenous scenarios of transition and lock-in. Then, we analyze transition dynamics under several behavioral, environmental and policy scenarios. Our results highlight a strong path-dependence of the agricultural sector, with scarce capacity to foster successful transitions to a sustainable regime in the absence of external interventions. The role of behavioral changes is limited, and we find evidence of negative tipping points induced by mismanagement of grassland and forests. These findings call for policies strongly supporting sustainable agriculture. We test regulatory measures aimed at protecting common environmental goods and public incentives to encourage the search for novel production techniques targeted at closing the sustainable-conventional yield gap. We find that their effectiveness is highly time-dependent, with rapidly closing windows of opportunity. |
Keywords: | Agriculture; Land use; Agent-based model; Technological change; Transition; Environmental boundaries; Sustainability |
Date: | 2024–02–20 |
URL: | http://d.repec.org/n?u=RePEc:ssa:lemwps:2024/05&r=cmp |
By: | Daniel Borup; Philippe Goulet Coulombe; Erik Christian Montes Schütte; David E. Rapach; Sander Schwenk-Nebbe |
Abstract: | We introduce the performance-based Shapley value (PBSV) to measure the contributions of individual predictors to the out-of-sample loss of time-series forecasting models. Our new metric allows a researcher to anatomize out-of-sample forecasting accuracy, thereby providing valuable information for interpreting time-series forecasting models. The PBSV is model agnostic, so it can be applied to any forecasting model, including "black box" models in machine learning, and it can be used with any loss function. We also develop the TS-Shapley-VI, a version of the conventional Shapley value that gauges the importance of predictors for explaining the in-sample predictions in the entire sequence of fitted models that generates the time series of out-of-sample forecasts. We then propose the model accordance score to compare predictor ranks based on the TS-Shapley-VI and PBSV, thereby linking the predictors' in-sample importance to their contributions to out-of-sample forecasting accuracy. We illustrate our metrics in an application forecasting US inflation. (A toy code sketch of a performance-based Shapley computation follows this entry.) |
Keywords: | model interpretation; Shapley value; predictor importance; loss function; machine learning; inflation |
JEL: | C22 C45 C52 C53 E31 E37 |
Date: | 2024–02–21 |
URL: | http://d.repec.org/n?u=RePEc:fip:fedawp:97785&r=cmp |
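A toy sketch of the idea behind a performance-based Shapley value: each predictor's exact Shapley contribution to out-of-sample loss, computed by enumerating predictor subsets. Exhaustive enumeration is feasible only for a handful of predictors, and the linear learner and simulated data are illustrative; the paper's estimator is more general.

```python
# Hedged sketch: exact Shapley decomposition of out-of-sample MSE over predictors.
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n, p = 400, 4
X = rng.normal(size=(n, p))
y = 1.5 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)
train, test = slice(0, 300), slice(300, n)

def oos_loss(subset):
    if not subset:                                   # empty model: forecast the mean
        return mean_squared_error(y[test], np.full_like(y[test], y[train].mean()))
    model = LinearRegression().fit(X[train][:, subset], y[train])
    return mean_squared_error(y[test], model.predict(X[test][:, subset]))

shapley = np.zeros(p)
for j in range(p):
    others = [k for k in range(p) if k != j]
    for size in range(p):
        for S in combinations(others, size):
            w = factorial(size) * factorial(p - size - 1) / factorial(p)
            shapley[j] += w * (oos_loss(list(S) + [j]) - oos_loss(list(S)))
print(shapley.round(4))    # negative = predictor lowers out-of-sample loss
```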
By: | Joshua S. Gans |
Abstract: | This paper examines recent proposals and research suggesting that AI adoption should be delayed until its potential harms are properly understood. It is shown that conclusions regarding the social optimality of delayed AI adoption are sensitive to assumptions regarding the process by which regulators learn about the salience of particular harms. When such learning is by doing, based on the real-world adoption of AI, this generally favours acceleration of AI adoption to surface and react to potential harms more quickly. This case is strengthened when AI adoption is potentially reversible. The paper examines how different conclusions regarding the optimality of accelerated or delayed AI adoption influence and are influenced by other policies that may moderate AI harm. |
JEL: | L51 O33 |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:32105&r=cmp |
By: | Yike Wang; Chris Gu; Taisuke Otsu |
Abstract: | This paper presents a novel application of graph neural networks for modeling and estimating network heterogeneity. Network heterogeneity is characterized by variations in a unit's decisions or outcomes that depend not only on its own attributes but also on the conditions of its surrounding neighborhood. We delineate the convergence rate of the graph neural network estimator, as well as its applicability in semiparametric causal inference with heterogeneous treatment effects. The finite-sample performance of our estimator is evaluated through Monte Carlo simulations. In an empirical setting related to microfinance program participation, we apply the new estimator to examine the average treatment effects and outcomes of counterfactual policies, and to propose an enhanced strategy for selecting the initial recipients of program information in social networks. (A minimal message-passing sketch follows this entry.) |
Date: | 2024–01 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2401.16275&r=cmp |
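For intuition, a minimal numpy sketch of one message-passing (graph convolution) layer of the kind underlying graph neural networks: each unit's representation combines its own attributes with an aggregate of its neighbors'. Network size, dimensions, and weights are illustrative.

```python
# Hedged sketch: a single graph-convolution step over a random undirected network.
import numpy as np

rng = np.random.default_rng(0)
n, d = 6, 3
A = (rng.random((n, n)) < 0.4).astype(float)    # adjacency: who neighbors whom
np.fill_diagonal(A, 0.0)
A = np.maximum(A, A.T)                          # make the network undirected
X = rng.normal(size=(n, d))                     # unit attributes

deg = A.sum(axis=1, keepdims=True).clip(min=1)  # degrees, for mean aggregation
W_self, W_nbr = rng.normal(size=(d, d)), rng.normal(size=(d, d))
H = np.tanh(X @ W_self + (A @ X / deg) @ W_nbr) # own effect + neighborhood effect
print(H.round(2))
```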
By: | Marcos Lacasa Cazcarra |
Abstract: | This paper analyzes the impact of the Spanish National Minimum Wage from 2001 to 2021, over which it increased from €505.7/month (2001) to €1,108.3/month (2021). Using the data provided by the Spanish Tax Administration Agency, databases that represent the entire population studied can be analyzed; such complete counts yield more accurate results and more efficient predictive models. This work is characterized by the database used, which is a national census and not a sample or projection. Therefore, the study reflects results and analyses based on historical data from the Spanish Salary Census 2001-2021. Various machine-learning models show that income inequality has been reduced by raising the minimum wage. Raising the minimum wage has not led to inflation or increased unemployment. On the contrary, it has been consistent with increased net employment, contained prices, and increased corporate profit margins. The most important conclusion is that the increase in the minimum wage over the period analyzed has increased the wealth of the country, raising employment and company profits, and is postulated, under the conditions analyzed, as an effective method for the redistribution of wealth. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.02402&r=cmp |
By: | Johannes Carow (Johannes Gutenberg University Mainz); Niklas M. Witzig (Johannes Gutenberg University Mainz) |
Abstract: | We study the impact of time pressure on strategic risk-taking by professional chess players. We propose a novel machine-learning-based measure of the degree of strategic risk of a single chess move and apply this measure to the 2013-2023 FIDE Chess World Cups, which allow for plausibly exogenous variation in thinking time. Our results indicate that time pressure leads chess players to opt for more risk-averse moves. We additionally provide correlational evidence for strategic loss aversion, a tendency towards risky moves after a mistake or in a disadvantageous position. This suggests that high-proficiency decision-makers in high-stakes situations react to time pressure and, more broadly, to contextual factors. We discuss the origins and implications of this finding in our setting. |
Keywords: | Chess, Risk, Time Pressure, Loss Aversion, Machine Learning |
JEL: | C26 C45 D91 |
Date: | 2024–02–22 |
URL: | http://d.repec.org/n?u=RePEc:jgu:wpaper:2404&r=cmp |
By: | Xiaorui Zuo; Yao-Tsung Chen; Wolfgang Karl Härdle |
Abstract: | In the burgeoning realm of cryptocurrency, social media platforms like Twitter have become pivotal in influencing market trends and investor sentiments. In our study, we leverage GPT-4 and a fine-tuned transformer-based BERT model for a multimodal sentiment analysis, focusing on the impact of emoji sentiment on cryptocurrency markets. By translating emojis into quantifiable sentiment data, we correlate these insights with key market indicators like BTC Price and the VCRIX index. This approach may inform the development of trading strategies aimed at utilizing social media elements to identify and forecast market trends. Crucially, our findings suggest that strategies based on emoji sentiment can facilitate the avoidance of significant market downturns and contribute to the stabilization of returns. This research underscores the practical benefits of integrating advanced AI-driven analyses into financial strategies, offering a nuanced perspective on the interplay between digital communication and market dynamics in an academic context. |
Date: | 2024–02 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2402.10481&r=cmp |
By: | Mateo Seré |
Abstract: | This study, utilizing a novel dataset from economic seminar audio recordings, investigates gender-based peer interactions, structured around five key findings: (i) Female speakers are interrupted more frequently, earlier, and differently than males; (ii) the extra interruptions largely stem from female, not male, audience members; (iii) male participants pose fewer questions but more comments to female presenters; (iv) audience members of both genders interrupt female speakers with a more negative tone; (v) less senior female presenters receive more interruptions from women. Control variables include seminar series, presentation topic, and factors like presenter affiliation, seniority, and department ranking. |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:hdl:wpaper:2309&r=cmp |
By: | Matthews, Ben |
Abstract: | The paper demonstrates simulating Victimization Divides (Hunter and Tseloni, 2016) using an example from the Crime Survey for England and Wales. The simulation method is based on King et al. (2000). (A minimal sketch of this simulation approach follows this entry.) |
Date: | 2024–02–02 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:k8j9e&r=cmp |
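A minimal sketch of the King et al. (2000) statistical-simulation approach: draw parameter vectors from the estimated sampling distribution of a fitted model and propagate them into predicted victimization probabilities for contrasting profiles. The logit coefficients, covariance matrix, and profiles below are illustrative stand-ins, not estimates from the Crime Survey.

```python
# Hedged sketch: simulate quantities of interest from estimated logit parameters.
import numpy as np

rng = np.random.default_rng(0)
beta_hat = np.array([-1.0, 0.8, -0.3])         # stand-in logit estimates
vcov = np.diag([0.04, 0.02, 0.01])             # stand-in covariance matrix

draws = rng.multivariate_normal(beta_hat, vcov, size=5000)
profile_a = np.array([1.0, 1.0, 0.0])          # e.g., a high-risk household profile
profile_b = np.array([1.0, 0.0, 1.0])          # e.g., a low-risk household profile

p_a = 1 / (1 + np.exp(-draws @ profile_a))     # simulated probabilities, profile A
p_b = 1 / (1 + np.exp(-draws @ profile_b))     # simulated probabilities, profile B
divide = p_a - p_b                             # the "victimization divide"
print(np.percentile(divide, [2.5, 50, 97.5]).round(3))
```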
By: | Leogrande, Angelo; Costantiello, Alberto; Leogrande, Domenico |
Abstract: | In the following article, we analyse the determinants of the number of physicians in the context of ISTAT BES-Benessere Equo Sostenibile data across twenty Italian regions in the period 2004-2022. We apply Panel Data with Random Effects, Panel Data with Fixed Effects, and Pooled OLS-Ordinary Least Squares. We find that the number of physicians among Italian regions is positively associated with, among other variables, “Trust in the Police and Firefighters” and “Net Income Inequality”, and negatively associated with, among other variables, “Research and Development Intensity” and “Soil waterproofing by artificial cover”. Furthermore, we apply the k-Means algorithm optimized with the Silhouette Coefficient and find the presence of two clusters. Finally, we compare eight different machine-learning algorithms for predicting the future number of physicians and find that the PNN-Probabilistic Neural Network is the best predictive algorithm. (A code sketch of the clustering step follows this entry.) |
Date: | 2023–11–14 |
URL: | http://d.repec.org/n?u=RePEc:osf:socarx:92wnh&r=cmp |
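The clustering step above (k-Means with the number of clusters chosen by the silhouette coefficient) can be sketched as follows. The synthetic data stand in for the regions-by-indicators matrix, which is an assumption.

```python
# Hedged sketch: choose k for k-Means by maximizing the silhouette coefficient.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (10, 5)),     # stand-in for the 20 regions x
               rng.normal(4, 1, (10, 5))])    # BES-indicators matrix

best_k, best_score = None, -1.0
for k in range(2, 8):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    score = silhouette_score(X, labels)       # higher = better-separated clusters
    if score > best_score:
        best_k, best_score = k, score
print(best_k, round(best_score, 3))           # two clusters, as in the paper
```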