NEP: New Economics Papers on Computational Economics
By: | Kumar, Deepak; Weissenberger-Eibl, Marion |
Abstract: | In the fast-paced realm of technological evolution, accurately forecasting emerging trends is critical for both academic inquiry and industry application. Traditional trend analysis methodologies, while valuable, struggle to efficiently process and interpret the vast datasets of today's information age. This paper introduces a novel approach that synergizes Generative AI and Bidirectional Encoder Representations from Transformers (BERT) for semantic insights and trend forecasting, leveraging the power of Retrieval-Augmented Generation (RAG) and the analytical prowess of BERT topic modeling. By automating the analysis of extensive datasets from publications and patents, the presented methodology not only expedites the discovery of emergent trends but also enhances the precision of these findings by generating a short summary for each emergent trend found. For validation, three technologies - reinforcement learning, quantum machine learning, and cryptocurrencies - were analysed prior to their first appearance in the Gartner Hype Cycle. This research highlights the integration of advanced AI techniques in trend forecasting, providing a scalable and accurate tool for strategic planning and innovation management. Results demonstrated a significant correlation between the model's predictions and the technologies' appearances in the Hype Cycle, underscoring the potential of this methodology in anticipating technological shifts across various sectors. |
Keywords: | BERT, Topic modelling, RAG, Gartner Hype Cycle, LLM, BERTopic |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:zbw:esconf:300545 |
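A minimal sketch of the topic-modeling step described above, using the open-source bertopic package. The 20-newsgroups corpus is only a stand-in for the paper's publication and patent abstracts, and the RAG summarization step is omitted:

```python
# Topic discovery with BERTopic on a placeholder corpus. The paper's
# pipeline runs on publication/patent abstracts and adds RAG-generated
# summaries of emergent topics; neither is reproduced here.
from sklearn.datasets import fetch_20newsgroups
from bertopic import BERTopic

docs = fetch_20newsgroups(subset="train",
                          remove=("headers", "footers", "quotes")).data[:2000]

topic_model = BERTopic()
topics, probs = topic_model.fit_transform(docs)

# Emergent trends would be topics whose document share grows over time;
# the temporal bookkeeping is omitted in this sketch.
print(topic_model.get_topic_info().head())
```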
By: | Jaydip Sen; Hetvi Waghela; Sneha Rakshit |
Abstract: | This paper explores using a deep learning Long Short-Term Memory (LSTM) model for accurate stock price prediction and its implications for portfolio design. Despite the efficient market hypothesis suggesting that predicting stock prices is impossible, recent research has shown the potential of advanced algorithms and predictive models. The study builds upon existing literature on stock price prediction methods, emphasizing the shift toward machine learning and deep learning approaches. Using historical stock prices of 180 stocks across 18 sectors listed on the NSE, India, the LSTM model predicts future prices. These predictions guide buy/sell decisions for each stock and inform an analysis of sector profitability. The study's main contributions are threefold: introducing an optimized LSTM model for robust portfolio design, utilizing LSTM predictions for buy/sell transactions, and providing insights into sector profitability and volatility. Results demonstrate the efficacy of the LSTM model in accurately predicting stock prices and informing investment decisions. By comparing sector profitability and prediction accuracy, the work provides valuable insights into the dynamics of the current financial markets in India. |
Date: | 2024–05 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.01572 |
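The core of such an LSTM forecaster can be sketched in a few lines of Keras. The synthetic price series, window length, and architecture below are placeholder assumptions, not the paper's optimized model:

```python
# Minimal sketch of an LSTM next-day price predictor, assuming a 1-D
# array `prices` of historical closes (the paper uses 180 NSE stocks).
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0.1, 1.0, 500)) + 100.0  # synthetic stand-in series

window = 30
X = np.array([prices[i:i + window] for i in range(len(prices) - window)])
y = prices[window:]
X = X[..., None]  # (samples, timesteps, features)

model = keras.Sequential([
    keras.layers.LSTM(64, input_shape=(window, 1)),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)

next_price = model.predict(X[-1:], verbose=0)[0, 0]  # would inform a buy/sell rule
```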
By: | GOMEZ Emilia (European Commission - JRC); PORCARO Lorenzo (European Commission - JRC); FRAU AMAR Pedro; VINAGRE Joao (European Commission - JRC) |
Abstract: | This report provides an overview of the divinAI project and provides a set of diversity indicators for seven core artificial intelligence (AI) conferences from 2007 to 2023: the International Joint Conference on Artificial Intelligence (IJCAI), the Annual Association for the Advancement of Artificial Intelligence (AAAI) Conference, the International Conference on Machine Learning (ICML), Neural Information Processing Systems (NeurIPS) Conference, the Association for Computing Machinery (ACM) Recommender Systems (RecSys) Conference, the European Conference on Artificial Intelligence (ECAI) and the European Conference on Machine Learning/Practice of Knowledge Discovery in Databases (ECML/PKDD). We observe that, in general, Conference Diversity Index (CDI) values are still low for the selected conferences, although showing a slight temporal improvement thanks to diversity initiatives in the AI field. We also note slight differences between conferences, with RecSys showing the highest comparative diversity indicators, followed by general AI conferences (IJCAI, ECAI and AAAI). The selected machine learning conferences, NeurIPS and ICML, seem to provide lower values for diversity indicators. Regarding the different dimensions of diversity, gender diversity reflects a low proportion of female authors in all considered conferences, even given current gender diversity efforts in the field, which is in line with the low presence of women in technological fields. In terms of country distribution, we observe a notable presence of researchers from the EU, US and China in the selected conferences, where the presence of Chinese authors has increased in the last few years. Regarding institutions, universities and research centers or institutes play a central role in the AI scientific conferences under analysis, and the presence of industry seems to be more notable in machine learning conferences. An online dashboard that allows exploration and reproducibility complements the report. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc137550 |
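The report's CDI formula is not reproduced here; as an illustrative stand-in, a normalized Shannon entropy captures the same intuition for a single diversity dimension:

```python
# Stand-in for a diversity indicator: normalized Shannon entropy over one
# dimension (e.g., author country shares). Illustrative proxy only; not
# the report's actual Conference Diversity Index.
import numpy as np

def shannon_diversity(counts):
    """Normalized Shannon entropy: 0 = one group only, 1 = uniform spread."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0] / p.sum()
    return float(-(p * np.log(p)).sum() / np.log(len(p))) if len(p) > 1 else 0.0

# Hypothetical author counts by country for one conference edition.
print(shannon_diversity([120, 80, 60, 15, 5]))  # ~0.80
```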
By: | Ozili, Peterson K |
Abstract: | The purpose of this article is to explore the role of artificial intelligence, or AI, in a central bank digital currency project and its challenges. Artificial intelligence is transforming the digital finance landscape. Central bank digital currency is also transforming the nature of central bank money. This study also suggests some considerations which central banks should be aware of when deploying artificial intelligence in their central bank digital currency project. The study concludes by acknowledging that artificial intelligence will continue to evolve, and its role in developing a sustainable CBDC will expand. While AI will be useful in many CBDC projects, ethical concerns will emerge about the use of AI in a CBDC project. When such concerns arise, central banks should be prepared to have open discussions about how they are using, or intend to use, AI in their CBDC projects. |
Keywords: | artificial intelligence, central bank digital currency, CBDC, machine learning, deep learning, cryptocurrency, CBDC project, CBDC pilot, blockchain |
JEL: | E50 E51 E52 E58 O31 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:121567 |
By: | Zhang, Michael PhD; Gao, Hang PhD; Chen, Di; Qi, Yanlin |
Abstract: | Managing traffic flow in high-occupancy toll (HOT) lanes is a tough balancing act, and current tolling schemes often lead to either under- or over-utilization of HOT lane capacity. The inherent linear/nonlinear relationship between flow and tolls in HOT lanes suggests that recent advances in machine learning and the use of a data-driven model may help set toll rates for optimal flow and lane use. In this research project, a data-driven model was developed, using long short-term memory (LSTM) neural networks to capture the underlying flow-toll pattern on both HOT and general-purpose lanes. Then, a dynamic control strategy, using a linear quadratic regulator (LQR) feedback controller, was implemented to fully utilize the HOT lane capacity while maintaining congestion-free conditions. A case study of the I-580 freeway in Alameda County, California was carried out. The control system was evaluated in terms of vehicle hours traveled and person hours traveled for solo drivers and carpoolers. Results show that the tolling strategy helps to mitigate congestion in HOT and general-purpose lanes, benefiting every traveler on I-580. |
Keywords: | Engineering, High occupancy toll lanes, traffic flow, traffic models, highway traffic control systems |
Date: | 2024–06–01 |
URL: | https://d.repec.org/n?u=RePEc:cdl:itsdav:qt71d0h6hz |
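A minimal sketch of the LQR feedback idea, in the spirit of the project above. The one-state model (HOT-lane density error) and all matrix values are invented for illustration, not the study's calibration:

```python
# LQR toll controller sketch: penalize deviation from a target HOT-lane
# density and large toll adjustments; compute the optimal feedback gain.
import numpy as np
from scipy.linalg import solve_discrete_are

A = np.array([[0.95]])   # density-error persistence (assumed)
B = np.array([[-0.40]])  # a toll increase lowers HOT-lane inflow (assumed)
Q = np.array([[1.0]])    # weight on density deviation
R = np.array([[0.1]])    # weight on toll adjustments

P = solve_discrete_are(A, B, Q, R)
K = np.linalg.inv(R + B.T @ P @ B) @ (B.T @ P @ A)  # optimal gain, u = -K x

density_error = 12.0  # veh/km above the congestion-free target (hypothetical)
toll_adjustment = (-K @ np.array([density_error])).item()
print(f"toll change: {toll_adjustment:+.2f}")
```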
By: | Calil, Yuri Clements Daglia |
Keywords: | Livestock Production/Industries, Marketing, Agricultural Finance |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:ags:aaea22:343928 |
By: | Hendriks, Patrick; Sturm, Timo; Geis, Maximilian; Grimminger, Till; Mast, Benedikt |
Abstract: | Research has long underscored the critical role of effective team collaboration in surpassing the limits of individual members’ capabilities. With organizations now increasingly integrating artificial intelligence (AI) as quasi-team members to enhance learning, problem-solving, and decision-making in teams, there is a pressing need to understand how to foster effective collaboration between teams and AI systems (i.e., team-AI collaboration). By adopting a design science approach and conducting nine semi-structured interviews with knowledge workers, we identify design requirements and principles for effective team-AI collaboration systems from an end-user perspective. We then develop a team-AI collaboration system within Discord (a voice, video, and text chat application) and evaluate its design through five laboratory experiments with human-AI teams. Our results show that introducing configurable roles and personalities for AI team members prompts humans to reconsider their own biases. However, human preconceptions still play a dominant role in shaping team performance. |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:dar:wpaper:146863 |
By: | Sankalp Srivastava (Department of Information Systems and Analytics, IIM Shillong, India); Dr. Parijat Upadhyay (Department of Information Systems and Analytics, IIM Shillong, India) |
Abstract: | " Objective - Indigenous communities face various challenges, including marginalization, loss of cultural heritage, language endangerment, health disparities, and economic inequities. Digitalization, empowered by Artificial Intelligence (AI), offers transformative solutions for preserving and revitalizing indigenous knowledge systems and improving the quality of life for these communities. Methodology/Technique - This review critically examines the impact of digitalization and AI on indigenous populations, focusing on culture, language, health, and economic status. It evaluates both the positive outcomes and the potential biases introduced by AI technologies. Finding - By exploring the application of Generative AI, this review extends existing studies and demonstrates its capability to mitigate biases and enrich our understanding of Indigenous cultures. The review identifies the dual narrative present in existing research, the beneficial effects of digitalization and AI, and the potential for bias. Novelty - This study uniquely focuses on the dual narrative of AI impacts, particularly the potential for Generative AI to mitigate biases, offering new insights into the intersection of digitalization and Indigenous knowledge systems. Type of Paper - Empirical" |
Keywords: | indigenous communities, artificial intelligence, deep learning, large language models, digitalization, decolonial AI, ethical artificial intelligence |
JEL: | O33 I15 Z13 L86 |
Date: | 2024–06–30 |
URL: | https://d.repec.org/n?u=RePEc:gtr:gatrjs:gjbssr649 |
By: | Xi Cheng; Jinghao Zhang; Yunan Zeng; Wenfang Xue |
Abstract: | Algorithmic trading refers to executing buy and sell orders for specific assets based on automatically identified trading opportunities. Strategies based on reinforcement learning (RL) have demonstrated remarkable capabilities in addressing algorithmic trading problems. However, trading patterns differ across market conditions due to distribution shifts in the data. Ignoring the multiple patterns in the data will undermine the performance of RL. In this paper, we propose MOT, which designs multiple actors with disentangled representation learning to model the different patterns of the market. Furthermore, we incorporate the Optimal Transport (OT) algorithm to allocate samples to the appropriate actor by introducing a regularization loss term. Additionally, we propose a Pretrain Module to facilitate imitation learning by aligning the outputs of actors with an expert strategy and to better balance the exploration and exploitation of RL. Experimental results on real futures market data demonstrate that MOT exhibits excellent profit capabilities while balancing risks. Ablation studies validate the effectiveness of the components of MOT. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.01577 |
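A sketch of allocating market samples to actors with optimal transport, loosely following the MOT idea above. The cost is the distance between sample representations and per-actor prototypes; all data are synthetic stand-ins and the RL training loop is omitted:

```python
# Entropic OT allocation of samples to actors using the POT library.
import numpy as np
import ot  # POT: Python Optimal Transport

rng = np.random.default_rng(1)
samples = rng.normal(size=(64, 8))   # market-state representations (assumed)
actors = rng.normal(size=(3, 8))     # one prototype per actor (assumed)

M = ot.dist(samples, actors)         # squared-Euclidean cost matrix
a = np.full(64, 1 / 64)              # uniform mass on samples
b = np.full(3, 1 / 3)                # balanced load across actors

plan = ot.sinkhorn(a, b, M / M.max(), reg=0.05)  # entropic OT coupling
assignment = plan.argmax(axis=1)     # soft plan -> hard actor choice
print(np.bincount(assignment, minlength=3))
```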
By: | Yuji Shinozaki (Deputy Director, Institute for Monetary and Economic Studies, Bank of Japan (currently, Associate Professor, Musashino University, E-mail:y-shino@musashino-u.ac.jp)) |
Abstract: | The application of machine learning to the field of finance has recently become the subject of active discussion. In particular, deep learning is expected to significantly advance the techniques of hedging and calibration. As these two techniques play a central role in financial engineering and mathematical finance, their treatment with deep learning attracts the attention of both practitioners and researchers. Deep hedging, which applies deep learning to hedging, is expected to make it possible to analyze how factors such as transaction costs affect hedging strategies. Since the impact of these factors has been difficult to assess quantitatively due to computational costs, deep hedging opens possibilities not only for refining and automating the hedging of derivatives but also for broader applications in risk management. Deep calibration, which applies deep learning to calibration, is expected to make parameter optimization, an essential procedure in derivative pricing and risk management, faster and more stable. This paper provides an overview of the existing literature and suggests future research directions from both practical and academic perspectives. Specifically, the paper draws out the implications of deep learning for existing theoretical frameworks and practical motivations in finance, and identifies the potential future developments deep learning can bring about as well as the practical challenges. |
Keywords: | Financial engineering, Mathematical finance, Derivatives, Hedging, Calibration, Numerical optimization |
JEL: | C63 G12 G13 |
Date: | 2024–04 |
URL: | https://d.repec.org/n?u=RePEc:ime:imedps:24-e-02 |
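A minimal deep-hedging sketch: a network maps (time, price) to a hedge ratio and is trained to minimize the variance of terminal P&L on simulated paths. Purely illustrative; toy Gaussian dynamics, no transaction costs, and not any specific model from the surveyed literature:

```python
# Variance-optimal deep hedging of a short call on simulated paths.
import torch

torch.manual_seed(0)
n_paths, n_steps, sigma, strike = 2048, 30, 0.02, 1.0
dS = sigma * torch.randn(n_paths, n_steps)
S = 1.0 + dS.cumsum(dim=1)                       # simulated price paths

net = torch.nn.Sequential(
    torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1)
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(200):
    pnl = -torch.clamp(S[:, -1] - strike, min=0.0)   # short call payoff
    for t in range(n_steps - 1):
        x = torch.stack([torch.full((n_paths,), t / n_steps), S[:, t]], dim=1)
        delta = net(x).squeeze(1)                    # hedge ratio at time t
        pnl = pnl + delta * (S[:, t + 1] - S[:, t])  # hedging gains
    loss = pnl.var()                                 # variance-optimal criterion
    opt.zero_grad(); loss.backward(); opt.step()
```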
By: | Duwe, Daniel; Weissenberger-Eibl, Marion A. |
Abstract: | Megatrends such as sustainability force companies to adjust their business models or even to create entirely new ones. However, many companies struggle to do so because of multiple reasons, such as lacking creativity and capacity. The advent of Generative AI tools such as ChatGPT can help to overcome this challenge. This paper analyzes whether and how Generative AI can be used to develop innovative business models and whether the results match the quality of those of human origin. |
Keywords: | Business model innovation, Sustainability, Generative artificial intelligence |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:zbw:esconf:300532 |
By: | Ellenrieder, Sara; Ellenrieder, Nils; Hendriks, Patrick; Mehler, Maren F. |
Abstract: | Despite immense improvements in machine learning (ML)-based decision support systems (DSSs), these systems are still prone to errors. For use in high-risk environments such as aviation, it is critical to find out what costs the different types of ML errors impose on decision makers. Thus, we provide pilots holding a valid flight license with explainable and non-explainable ML-based DSSs that output different types of ML errors while supporting the visual detection of other aircraft in the vicinity in 222 recorded scenes of flight simulations. The study reveals that both false positives (FPs) and false negatives (FNs) detrimentally affect pilot trust and performance, with a more pronounced effect observed for FNs. While explainable ML output design mitigates some negative effects, it significantly increases the mental workload for pilots when dealing with FPs. These findings inform the development of ML-based DSSs aligned with Error Management Theory to enhance applications in high-stakes environments. |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:dar:wpaper:146775 |
By: | Andrea Carriero; Davide Pettenuzzo; Shubhranshu Shekhar |
Abstract: | This paper presents a comparative analysis evaluating the accuracy of Large Language Models (LLMs) against traditional macro time series forecasting approaches. In recent times, LLMs have surged in popularity for forecasting due to their ability to capture intricate patterns in data and quickly adapt across very different domains. However, their effectiveness in forecasting macroeconomic time series data compared to conventional methods remains an area of interest. To address this, we conduct a rigorous evaluation of LLMs against traditional macro forecasting methods, using as common ground the FRED-MD database. Our findings provide valuable insights into the strengths and limitations of LLMs in forecasting macroeconomic time series, shedding light on their applicability in real-world scenarios. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.00890 |
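A traditional baseline of the kind such LLM forecasts are compared against is an AR(p) model on a single macro series. The data below are synthetic; the actual FRED-MD series and their standard transformations are not reproduced here:

```python
# AR(4) out-of-sample forecast on a synthetic stationary series.
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(0)
y = np.zeros(300)
for t in range(1, 300):                    # stand-in AR(1) data
    y[t] = 0.6 * y[t - 1] + rng.normal(scale=0.5)

train, test = y[:240], y[240:]
fit = AutoReg(train, lags=4).fit()
pred = fit.predict(start=240, end=299)     # 60 out-of-sample forecasts
rmse = np.sqrt(np.mean((pred - test) ** 2))
print(f"AR(4) out-of-sample RMSE: {rmse:.3f}")
```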
By: | Lippens, Louis (Ghent University) |
Abstract: | The advent of large language models (LLMs) may reshape hiring in the labour market. This paper investigates how generative pre-trained transformers (GPTs)—i.e. OpenAI’s GPT-3.5, GPT-4, and GPT-4o—can aid hiring decisions. In a direct comparison between humans and GPTs on an identical hiring task, I show that GPTs tend to select candidates more liberally than humans but exhibit less ethnic bias. GPT-4 even slightly favours certain ethnic minorities. While LLMs may complement humans in hiring by making a (relatively extensive) pre-selection of job candidates, the findings suggest that they may mis-select due to a lack of contextual understanding and may reproduce pre-trained human bias at scale. |
Date: | 2024–07–11 |
URL: | https://d.repec.org/n?u=RePEc:osf:osfxxx:zxf5y |
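The shape of such a GPT-based screening query can be sketched with the OpenAI Python client. The prompt, candidate summary, and model choice are illustrative assumptions, not the paper's actual protocol:

```python
# Illustrative hiring pre-selection query; requires OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()
cv_summary = "5 years of accounting experience; fluent Dutch; BSc in finance."  # hypothetical

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You screen candidates for an accounting "
         "role. Answer INVITE or REJECT with one sentence of reasoning."},
        {"role": "user", "content": f"Candidate: {cv_summary}"},
    ],
)
print(resp.choices[0].message.content)
# Auditing for bias would repeat this over CVs that differ only in
# ethnicity-signalling names and compare invitation rates.
```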
By: | Gräf, Miriam; Mehler, Maren F.; Ellenrieder, Sara |
Abstract: | Organizations integrating artificial intelligence (AI) services to optimize their performance regularly face a make-or-buy (MoB) decision: whether to develop AI services internally, purchase them from external providers, or combine both. The literature lacks detailed knowledge on how MoB decisions for AI-based services are uniquely influenced. We conducted a case study in a large mobility and logistics organization to derive these factors. We interviewed 16 experts on AI and MoB decisions to validate existing factors regarding MoB for IT and identify additional factors specific to AI services. Our findings suggest that strategic importance and data security are particularly relevant when making MoB decisions for AI services. Data security and case differentiation, moreover, emerged as factors specific to AI services. The guidelines derived from our study provide a framework for organizations and scholars to evaluate MoB for AI services and reach informed decisions that align with their strategy. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:dar:wpaper:146709 |
By: | Trinath Sai Subhash Reddy Pittala; Uma Maheswara R Meleti; Hemanth Vasireddy |
Abstract: | In the burgeoning market of short-term rentals, understanding pricing dynamics is crucial for a range of stakeholders. This study delves into the factors influencing Airbnb pricing in major European cities, employing a comprehensive dataset sourced from Kaggle. We utilize advanced regression techniques, including linear, polynomial, and random forest models, to analyze a diverse array of determinants, such as location characteristics, property types, and host-related factors. Our findings reveal nuanced insights into the variables most significantly impacting pricing, highlighting the varying roles of geographical, structural, and host-specific attributes. This research not only sheds light on the complex pricing landscape of Airbnb accommodations in Europe but also offers valuable implications for hosts seeking to optimize pricing strategies and for travelers aiming to understand pricing trends. Furthermore, the study contributes to the broader discourse on pricing mechanisms in the shared economy, suggesting avenues for future research in this rapidly evolving sector. |
Date: | 2024–04 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2407.01555 |
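A sketch of the random-forest price model described above, on synthetic stand-in features (the study uses a Kaggle dataset of European cities; the feature set and coefficients below are invented):

```python
# Random-forest price regression with feature importances.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 10, n),     # distance to city centre in km (assumed)
    rng.integers(1, 5, n),     # number of guests
    rng.integers(0, 2, n),     # superhost flag
])
price = 120 - 6 * X[:, 0] + 25 * X[:, 1] + 10 * X[:, 2] + rng.normal(0, 15, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, price, random_state=0)
rf = RandomForestRegressor(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("R^2:", round(rf.score(X_te, y_te), 3))
print("feature importances:", rf.feature_importances_.round(3))
```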
By: | Zi Wang; Xingcheng Xu; Yanqing Yang; Xiaodong Zhu |
Abstract: | We propose a deep learning framework, DL-opt, designed to efficiently solve for optimal policies in quantifiable general equilibrium trade models. DL-opt integrates (i) a nested fixed point (NFXP) formulation of the optimization problem, (ii) automatic implicit differentiation to enhance gradient descent for solving unilateral optimal policies, and (iii) a best-response dynamics approach for finding Nash equilibria. Utilizing DL-opt, we solve for non-cooperative tariffs and industrial subsidies across 7 economies and 44 sectors, incorporating sectoral external economies of scale. Our quantitative analysis reveals significant sectoral heterogeneity in Nash policies: Nash industrial subsidies increase with scale elasticities, whereas Nash tariffs decrease with trade elasticities. Moreover, we show that global dual competition, involving both tariffs and industrial subsidies, results in lower tariffs and higher welfare outcomes compared to a global tariff war. These findings highlight the importance of considering sectoral heterogeneity and policy combinations in understanding global economic competition. |
Keywords: | Deep Learning; Tariff Wars; Industrial Policies; Optimal Policies; Nash Equilibria; Best-response dynamics; Quantitative Trade Models |
JEL: | F12 F51 C61 C63 |
Date: | 2024–07–24 |
URL: | https://d.repec.org/n?u=RePEc:tor:tecipa:tecipa-781 |
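The best-response dynamics DL-opt uses to find Nash equilibria can be illustrated with a toy two-country tariff game. The quadratic payoff is invented for illustration; the paper solves a full quantitative trade model with neural-network policy solvers:

```python
# Iterated best responses converging to Nash tariffs in a toy game.
import numpy as np
from scipy.optimize import minimize_scalar

def payoff(t_own, t_other):
    # Terms-of-trade gain minus efficiency loss, rising in the rival's tariff.
    return 0.5 * t_own - t_own**2 + 0.3 * t_own * t_other

t = np.array([0.0, 0.0])
for _ in range(50):                     # iterate best responses to a fixed point
    for i in range(2):
        res = minimize_scalar(lambda x: -payoff(x, t[1 - i]),
                              bounds=(0, 1), method="bounded")
        t[i] = res.x
print("Nash tariffs:", t.round(3))      # ~0.294 each for these parameters
```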
By: | Neugart, Michael |
Abstract: | The matching function has become a popular tool in labor economics. It relates job creation (a flow variable) to two stock variables: vacancies and job searchers. In most studies the matching function is considered to be exogenous and assumed to have certain properties. The present study, instead, looks at the properties of an endogenous matching function. For this purpose we have programmed an agent-based computational labor market model with endogenous job creation and endogenous job search behavior. Our simulations suggest that the endogenous matching technology is subject to decreasing returns to scale. The Beveridge curve reveals substitutability of job searchers and vacancies for a small range of inputs, but is flat for relatively high numbers of job searchers and vertical for relatively high numbers of vacancies. Moreover, the matching technology changes with labor market policies. This raises concerns about the validity of labor market policy evaluations conducted with flow models of the labor market that employ exogenous matching functions. |
Date: | 2024–06–24 |
URL: | https://d.repec.org/n?u=RePEc:dar:wpaper:146279 |
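How returns to scale of a matching function m = A·u^a·v^b can be checked from simulated data: regress log matches on log searchers and log vacancies and inspect whether a + b < 1. The data below are synthetic, with decreasing returns (a + b = 0.85) built in:

```python
# Estimating matching-function elasticities by log-linear least squares.
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(100, 1000, 400)                   # job searchers
v = rng.uniform(100, 1000, 400)                   # vacancies
m = 0.7 * u**0.5 * v**0.35 * np.exp(rng.normal(0, 0.05, 400))

X = np.column_stack([np.ones(400), np.log(u), np.log(v)])
coef, *_ = np.linalg.lstsq(X, np.log(m), rcond=None)
print("elasticities:", coef[1:].round(3), "sum:", coef[1:].sum().round(3))
```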
By: | HRADEC Jiri (European Commission - JRC); DI LEO Margherita; KOTSEV Alexander (European Commission - JRC) |
Abstract: | This policy brief explores the potential of three distinct levels of data to inform policy-making: original datasets, synthetic replicas, and fully AI-generated data. Original datasets: These are the foundation of data-driven policy-making, providing authentic insights into real-world phenomena. However, original datasets often come with limitations, including privacy concerns, accessibility issues, and utility constraints. Synthetic replicas: To address these limitations, synthetic replicas of original datasets can be created. These replicas mimic the statistical properties of the original data, offering a privacy-safe alternative for analysis and research. Synthetic data can facilitate the integration of siloed data, enhancing data-driven decision-making without compromising sensitive information. Fully AI-generated data: The latest advancement in data synthesis is the use of artificial intelligence (AI) to generate fully synthetic data. This technology has the potential to revolutionize policy-making by providing detailed and context-rich data that can support groundbreaking research and product development. AI-generated data can be particularly valuable in sectors like healthcare and AI, where data privacy concerns are paramount. However, the adoption of synthetic and AI-generated data also introduces challenges, including data quality, biases, and ethical considerations. To address these challenges, rigorous quality controls and robust governance frameworks are necessary. This policy brief advocates for a unified approach towards the responsible use and governance of AI-generated data, ensuring its effective integration into policy-making frameworks within the European Union. This approach promises not only to enhance the precision of policy outcomes but also to democratize data access, fostering a more inclusive and insightful policy-making process. By recognizing the distinct characteristics and potential of each level of data, policymakers can harness the power of AI-generated data to inform more effective and responsible decision-making. |
Date: | 2024–07 |
URL: | https://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc138521 |
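The "synthetic replica" idea can be sketched minimally: draw new records from a distribution fitted to the original data so that aggregate statistics survive while no real record is reproduced. Real pipelines, and the dedicated tools the brief alludes to, are far more careful about utility and disclosure risk:

```python
# Toy synthetic replica via a fitted multivariate normal (age, income).
import numpy as np

rng = np.random.default_rng(0)
original = rng.multivariate_normal([40, 30000],
                                   [[100, 9000], [9000, 4e6]], size=500)

mu, cov = original.mean(axis=0), np.cov(original, rowvar=False)
synthetic = rng.multivariate_normal(mu, cov, size=500)  # privacy-safe stand-in

print("original means: ", original.mean(axis=0).round(1))
print("synthetic means:", synthetic.mean(axis=0).round(1))
```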
By: | Patrick Mellacher (University of Graz, Austria); Gernot Lechner (University of Graz, Austria) |
Abstract: | Many surveys require respondents to place themselves on a left-right ideology scale. However, non-experts may not understand the scale or their “objective” position. Furthermore, a uni-dimensional approach may not suffice to describe ideology coherently. We thus develop a novel way to measure voter ideology: Combining expert and voter survey data, we use classification models to infer how experts would place voters based on their policy stances on three axes: general left-right, economic left-right and libertarian-authoritarian. We validate our approach by finding i) a strong connection between policies and ideology using data-driven approaches, ii) a strong predictive power of our models in cross-validation exercises, and iii) that “objective” ideology as predicted by our models significantly explains the vote choice in simple spatial voting models even after accounting for the subjective ideological distance between voters and parties as perceived by the voters. Our results shed new light on debates around mass polarization. |
Keywords: | machine learning, random forest, voter ideology, political economy, spatial voting. |
JEL: | C38 D70 D72 |
Date: | 2024–01 |
URL: | https://d.repec.org/n?u=RePEc:grz:wpaper:2024-03 |
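A sketch of the paper's core move: train a classifier on policy stances labeled with expert placements, then apply it to voters' stances. All data below are randomly generated stand-ins, and the toy labeling rule merely stands in for real expert survey placements:

```python
# Inferring "objective" left-right placement with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
party_stances = rng.uniform(0, 10, size=(200, 12))        # 12 policy items (assumed)
expert_lr = (party_stances.mean(axis=1) > 5).astype(int)  # 0 = left, 1 = right (toy rule)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(party_stances, expert_lr)

voter_stances = rng.uniform(0, 10, size=(5, 12))
print(model.predict_proba(voter_stances)[:, 1])  # inferred right-placement probability
```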
By: | Asatryan, Zareh; Birkholz, Carlo; Heinemann, Friedrich |
Abstract: | Independent and high-quality evaluations of government policies are an important input for designing evidence-based policy. Lack of incentives and institutions to write such evaluations, on the other hand, carries the risk of turning the system into a costly beauty contest. We study one of the most advanced markets of policy evaluations in the world, the evaluations of EU Cohesion Policies by its Member States (MS). We use large language models to quantify the findings of about 2,300 evaluations, and complement these data with our own survey of the authors. We show that the findings of evaluations are inconsistent with those of the academic literature on the output impacts of Cohesion Policy. Using further variation across MS, our analysis suggests that the market of evaluations is rather oligopolistic within MS, that it is very fragmented across the EU, and that there is often a strong involvement of managing authorities in the work of formally independent evaluators. These factors contribute to making the findings of the evaluations overly optimistic (beautiful), risking their overall usefulness (evidence-based policy). We conclude by discussing reform options to make the evaluations of EU Cohesion Policies more unbiased and effective. |
Keywords: | Policy Evaluation, EU Cohesion Policy, Large Language Model |
JEL: | A11 C45 D83 H43 H54 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:zbw:zewdip:300241 |
By: | Nawrath, Marcel; Nowak, Agnieszka Wiktoria; Ratz, Tristan; Walenta, Danilo Constantin; Opitz, Juri; Ribeiro, Leonardo F. R.; Sedoc, João; Deutsch, Daniel; Mille, Simon; Liu, Yixin; Gehrmann, Sebastian; Zhang, Lining; Mahamood, Saad; Clinciu, Miruna; Chandu, Khyathi; Hou, Yufang |
Abstract: | At the heart of the Pyramid evaluation method for text summarization lie human-written summary content units (SCUs). These SCUs are concise sentences that decompose a summary into small facts. Such SCUs can be used to judge the quality of a candidate summary, possibly partially automated via natural language inference (NLI) systems. Interestingly, with the aim to fully automate the Pyramid evaluation, Zhang and Bansal (2021) show that SCUs can be approximated by automatically generated semantic role triplets (STUs). However, several questions currently lack answers, in particular: i) Are there other ways of approximating SCUs that can offer advantages? ii) Under which conditions are SCUs (or their approximations) offering the most value? In this work, we examine two novel strategies to approximate SCUs: generating SCU approximations from AMR meaning representations (SMUs) and from large language models (SGUs), respectively. We find that while STUs and SMUs are competitive, the best approximation quality is achieved by SGUs. We also show through a simple sentence-decomposition baseline (SSUs) that SCUs (and their approximations) offer the most value when ranking short summaries, but may not help as much when ranking systems or longer summaries. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:dar:wpaper:146677 |
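The partial automation mentioned above, scoring a candidate summary against content units with an off-the-shelf NLI model, can be sketched as follows. The model choice and entailment-probability readout are illustrative assumptions, not the paper's exact setup:

```python
# NLI-based check of whether a summary entails each content unit.
from transformers import pipeline

nli = pipeline("text-classification", model="roberta-large-mnli")

summary = "The company reported record revenue and opened two offices abroad."
scus = [
    "Revenue reached a record level.",
    "Two foreign offices were opened.",
    "The CEO resigned.",
]

for scu in scus:
    scores = nli({"text": summary, "text_pair": scu}, top_k=None)
    entail = next(s["score"] for s in scores if s["label"] == "ENTAILMENT")
    print(f"{entail:.2f}  {scu}")
```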
By: | Andrea Baronchelli |
Abstract: | As Artificial Intelligence (AI) becomes increasingly integrated into our lives, the need for new norms is urgent. However, AI evolves at a much faster pace than the characteristic time of norm formation, posing an unprecedented challenge to our societies. This paper examines possible criticalities of the processes of norm formation surrounding AI. Thus, it focuses on how new norms can be established, rather than on what these norms should be. It distinguishes different scenarios based on the centralisation or decentralisation of the norm formation process, analysing the cases where new norms are shaped by formal authorities, informal institutions, or emerge spontaneously in a bottom-up fashion. On the latter point, the paper reports a conversation with ChatGPT in which the LLM discusses some of the emerging norms it has observed. Far from seeking exhaustiveness, this article aims to offer readers interpretive tools to understand society's response to the growing pervasiveness of AI. An outlook on how AI could influence the formation of future social norms emphasises the importance for open societies to anchor their formal deliberation process in an open, inclusive, and transparent public discourse. |
Date: | 2023–07 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2307.08564 |
By: | Eric Langlais; Nanxi Li |
Abstract: | This paper studies how the combination of Product Liability and Tort Law shapes a monopoly's incentives to invest in R&D for developing risky AI-based technologies ("robots") that may accidentally induce harm to third-party victims. We assume that at the engineering stage, robots are designed to have two alternative modes of motion (fully autonomous vs human-driven), corresponding to optimized performances in predefined circumstances. In the autonomous mode, the monopoly (i.e. AI designer) faces Product Liability and undertakes maintenance expenditures to mitigate victims' expected harm. In the human-driven mode, AI users face Tort Law and exert a level of care to reduce victims' expected harm. In this set-up, efficient maintenance by the AI designer and efficient care by AI users result whatever the liability rule enforced in each area of law (strict liability, or negligence). However, overinvestment as well as underinvestment in R&D may occur at equilibrium, whether liability laws rely on strict liability or negligence, and whether the monopoly uses or does not use price discrimination. The first best level of R&D investments is reached at equilibrium only if simultaneously the monopoly uses (perfect) price discrimination, a regulator sets the output at the socially optimal level, and Courts implement strict liability in Tort Law and Product Liability. |
Keywords: | Artificial Intelligence, Algorithms, Tort Law, Product Liability, Strict Liability, Negligence |
JEL: | K13 K2 L1 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:drm:wpaper:2024-22 |
By: | BORNUKOVA Kateryna (European Commission - JRC); PICOS Fidel (European Commission - JRC); AMORES Antonio F (European Commission - JRC); BELOUSOVA Irina (European Commission - JRC); CRUCES Hugo (European Commission - JRC); DE AGOSTINI Paola (European Commission - JRC); DE POLI Silvia (European Commission - JRC); DREONI Ilda (European Commission - JRC); GRUNBERGER Klaus (European Commission - JRC); HERNANDEZ MARTIN Adrian (European Commission - JRC); JEDRYCH VILLA Marta (European Commission - JRC); LEVENTI Chrysa (European Commission - JRC); MAIER Sofia (European Commission - JRC); MANIOS Kostas (European Commission - JRC); MANSO Luis (European Commission - JRC); MAZZON Alberto (European Commission - JRC); NAVARRO BERDEAL Silvia; PALMA FERNANDEZ Bianey (European Commission - JRC); PAPINI Andrea (European Commission - JRC); RICCI Mattia (European Commission - JRC); SERRUYS Hannes (European Commission - JRC) |
Abstract: | This report provides a selection of baseline simulation results and headline indicators from the latest public version (I6.0+) of EUROMOD, the tax-benefit microsimulation model for the EU. We begin by presenting indicators for income inequality and at-risk-of-poverty and how they are affected by the tax-benefit system. We then provide a comparative decomposition of the redistributive effect of the tax-benefit systems across the EU. We study how Member States achieve various degrees of redistribution through different combinations of progressivity and size of their tax-benefit system and each of its components. We then analyse various work incentive indicators affecting both the decision whether to work and that of how much to work, discussing how effective marginal rates of taxation and net replacement rates of going into unemployment vary across countries. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:ipt:taxref:202403 |
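The effective marginal tax rate (EMTR) indicator discussed above has a simple generic form: the share of an extra euro of gross earnings that is taxed away or withdrawn. The stylized tax-benefit rule below is invented for illustration; EUROMOD encodes each Member State's actual rules:

```python
# Generic EMTR calculation under a toy tax-benefit system.
def net_income(gross):
    tax = 0.30 * max(gross - 10000, 0)       # flat tax above an allowance (assumed)
    benefit = max(5000 - 0.60 * gross, 0)    # benefit withdrawn at 60% (assumed)
    return gross - tax + benefit

def emtr(gross, delta=100.0):
    return 1 - (net_income(gross + delta) - net_income(gross)) / delta

for g in (5000, 15000, 40000):
    print(g, round(emtr(g), 2))  # 0.6 in the withdrawal range, 0.3 above
```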
By: | Francisco Peñaranda (Queens College CUNY); Enrique Sentana (CEMFI, Centro de Estudios Monetarios y Financieros) |
Abstract: | The purpose of this survey is to summarize the academic literature that studies some of the ways in which portfolio management has been affected in recent years by the availability of big datasets: many assets, many characteristics for each of them, many macro predictors, and various sources of unstructured data. Thus, we deliberately focus on applications rather than methods. We also include brief reviews of the financial theories underlying asset management, which provide the relevant background to assess the plethora of recent contributions to such an active research field. |
Keywords: | Conditioning information, intertemporal portfolio decisions, machine learning, mean-variance analysis, stochastic discount factors. |
JEL: | G11 G12 C55 G17 |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:cmf:wpaper:wp2024_2411 |
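The mean-variance building block underlying much of the surveyed work gives unconstrained tangency-portfolio weights proportional to the inverse covariance times expected excess returns. Estimating those inputs from big datasets is exactly where the surveyed ML methods enter; the numbers below are invented:

```python
# Tangency-portfolio weights w proportional to inv(Sigma) @ mu.
import numpy as np

mu = np.array([0.06, 0.04, 0.05])                 # expected excess returns (assumed)
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.025, 0.005],
                  [0.004, 0.005, 0.030]])          # return covariance (assumed)

w = np.linalg.solve(Sigma, mu)
w /= w.sum()                                       # normalize to full investment
print(w.round(3))
```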