on Artificial Intelligence |
By: | Steve Phelps; Yvan I. Russell |
Abstract: | In this study, we investigate the capacity of large language models (LLMs), specifically GPT-3.5, to operationalise natural language descriptions of cooperative, competitive, altruistic, and self-interested behaviour in social dilemmas. Our focus is on the iterated Prisoner's Dilemma, a classic example of a non-zero-sum interaction, but our broader research program encompasses a range of experimental economics scenarios, including the ultimatum game, dictator game, and public goods game. Using a within-subject experimental design, we instantiated LLM-generated agents with various prompts that conveyed different cooperative and competitive stances. We then assessed the agents' level of cooperation in the iterated Prisoner's Dilemma, taking into account their responsiveness to the cooperation or defection of their partners. Our results provide evidence that LLMs can translate natural language descriptions of altruism and selfishness into appropriate behaviour to some extent, but exhibit limitations in adapting their behaviour based on conditioned reciprocity. The observed pattern of increased cooperation with defectors and decreased cooperation with cooperators highlights potential constraints in the LLM's ability to generalise its knowledge about human behaviour in social dilemmas. We call upon the research community to further explore the factors contributing to the emergent behaviour of LLM-generated agents in a wider array of social dilemmas, examining the impact of model architecture, training parameters, and various partner strategies on agent behaviour. As more advanced LLMs like GPT-4 become available, it is crucial to investigate whether they exhibit similar limitations or are capable of more nuanced cooperative behaviours, ultimately fostering the development of AI systems that better align with human values and social norms. |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2305.07970&r=cbe |
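The experimental setup described in the abstract above (prompt-conditioned LLM agents playing the iterated Prisoner's Dilemma against fixed partner strategies) can be pictured with a minimal sketch. The payoff values, the `query_llm` stub, the `partner_always_defect` strategy, and the example prompt are illustrative assumptions, not the authors' actual prompts, payoffs, or API calls.

```python
# Minimal sketch of an iterated Prisoner's Dilemma between a prompted LLM agent
# and a fixed partner strategy. `query_llm` is a hypothetical stub standing in
# for a real chat-completion call; payoffs follow the canonical T > R > P > S ordering.

PAYOFFS = {  # (my_move, partner_move) -> (my_points, partner_points)
    ("C", "C"): (3, 3), ("C", "D"): (0, 5),
    ("D", "C"): (5, 0), ("D", "D"): (1, 1),
}

def query_llm(system_prompt: str, history: list) -> str:
    """Placeholder for an LLM call: return 'C' (cooperate) or 'D' (defect)."""
    raise NotImplementedError("wire up a chat-completion API here")

def partner_always_defect(history):
    """One of several canned partner strategies the agent could face."""
    return "D"

def play_ipd(system_prompt, partner_strategy, rounds=10):
    history, score = [], 0
    for _ in range(rounds):
        llm_move = query_llm(system_prompt, history)
        partner_move = partner_strategy(history)
        score += PAYOFFS[(llm_move, partner_move)][0]
        history.append((llm_move, partner_move))
    return score, history

# Example: an "altruistic" persona facing an unconditional defector.
altruistic_prompt = "You are a cooperative agent who values the joint payoff."
# score, history = play_ipd(altruistic_prompt, partner_always_defect)
```

Varying the system prompt (altruistic, selfish, competitive) while holding the partner strategy fixed is the within-subject manipulation the abstract refers to.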
By: | Nikodem Tomczak |
Abstract: | Adam Smith developed a version of moral philosophy where better decisions are made by interrogating an impartial spectator within us. We discuss the possibility of using an external, non-human substitute tool that would augment our internal mental processes and play the role of the impartial spectator. Such a tool would have more knowledge about the world, be more impartial, and would provide a more encompassing perspective on moral assessment. |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2305.11519&r=cbe |
By: | Agam Shah; Sudheer Chava |
Abstract: | Recently, large language models (LLMs) like ChatGPT have shown impressive zero-shot performance on many natural language processing tasks. In this paper, we investigate the effectiveness of zero-shot LLMs in the financial domain. We compare the performance of ChatGPT and several open-source generative LLMs in zero-shot mode with that of RoBERTa fine-tuned on annotated data. We address three inter-related research questions on data annotation, performance gaps, and the feasibility of employing generative models in the finance domain. Our findings demonstrate that ChatGPT performs well even without labeled data, but fine-tuned models generally outperform it. Our research also highlights how annotating with generative models can be time-intensive. Our codebase is publicly available on GitHub under the CC BY-NC 4.0 license. |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2305.16633&r=cbe |
By: | Jonas Tallberg; Eva Erman; Markus Furendal; Johannes Geith; Mark Klamberg; Magnus Lundgren |
Abstract: | Artificial intelligence (AI) represents a technological upheaval with the potential to change human society. Because of its transformative potential, AI is increasingly becoming subject to regulatory initiatives at the global level. Yet, so far, scholarship in political science and international relations has focused more on AI applications than on the emerging architecture of global AI regulation. The purpose of this article is to outline an agenda for research into the global governance of AI. The article distinguishes between two broad perspectives: an empirical approach, aimed at mapping and explaining global AI governance; and a normative approach, aimed at developing and applying standards for appropriate global AI governance. The two approaches offer questions, concepts, and theories that are helpful in gaining an understanding of the emerging global governance of AI. Conversely, exploring AI as a regulatory issue offers a critical opportunity to refine existing general approaches to the study of global governance. |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2305.11528&r=cbe |
By: | Alexandra Brintrup; George Baryannis; Ashutosh Tiwari; Svetan Ratchev; Giovanna Martinez-Arellano; Jatinder Singh |
Abstract: | While the increased use of AI in the manufacturing sector has been widely noted, there is little understanding of the risks it may raise within a manufacturing organisation. Although various high-level frameworks and definitions have been proposed to consolidate potential risks, practitioners struggle to understand and implement them. This lack of understanding exposes manufacturing organisations, their workers, and their suppliers and clients to a multitude of risks. In this paper, we explore and interpret the applicability of responsible, ethical, and trustworthy AI within the context of manufacturing. We then use a broadened adaptation of a machine learning lifecycle to discuss, through illustrative examples, how each step may result in particular AI trustworthiness concerns. We additionally propose a number of research questions for the manufacturing research community, in order to help guide future research so that the economic and societal benefits envisaged by AI in manufacturing are delivered safely and responsibly. |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2305.11581&r=cbe |
By: | Jonas Tallberg; Magnus Lundgren; Johannes Geith |
Abstract: | As the development and use of artificial intelligence (AI) continues to grow, policymakers are increasingly grappling with the question of how to regulate this technology. The most far-reaching international initiative is the European Union (EU) AI Act, which aims to establish the first comprehensive framework for regulating AI. In this article, we offer the first systematic analysis of non-state actor preferences toward international regulation of AI, focusing on the case of the EU AI Act. Theoretically, we develop an argument about the regulatory preferences of business actors and other non-state actors under varying conditions of AI sector competitiveness. Empirically, we test these expectations using data on non-state actor preferences from public consultations on European AI regulation. Our findings are threefold. First, all types of non-state actors express concerns about AI and support regulation in some form. Second, there are nonetheless significant differences across actor types, with business actors being less concerned about the downsides of AI and more in favor of lax regulation than other non-state actors. Third, these differences are more pronounced in countries with stronger commercial AI sectors than in countries with less developed AI sectors. Our findings shed new light on non-state actor preferences toward AI regulation and point to challenges for policymakers who must balance competing interests. |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2305.11523&r=cbe |
By: | Sarita Rosenstock |
Abstract: | This paper motivates and develops a framework for understanding how the socio-technical systems surrounding AI development interact with social welfare. It introduces the concept of ``signaling'' from evolutionary game theory and demonstrates how it can enhance existing theory and practice surrounding the evaluation and governance of AI systems. |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2305.02561&r=cbe |
By: | Zhuang Liu; Michael Sockin; Wei Xiong |
Abstract: | This paper develops a foundation for a consumer's preference for data privacy by linking it to the desire to hide behavioral vulnerabilities. Data sharing with digital platforms enhances the matching efficiency for standard consumption goods, but also exposes individuals with self-control issues to temptation goods. This creates a new form of inequality in the digital era—algorithmic inequality. Although data privacy regulations provide consumers with the option to opt out of data sharing, these regulations cannot fully protect vulnerable consumers because of data-sharing externalities. The coordination problem among consumers may also lead to multiple equilibria with drastically different levels of data sharing by consumers. Our quantitative analysis further illustrates that although data is non-rival and beneficial to social welfare, it can also exacerbate algorithmic inequality. |
JEL: | D0 E0 |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:31250&r=cbe |
By: | Kim, Minho; Han, Jaepil |
Abstract: | Despite high hopes that artificial intelligence (AI), with its strong predictive capabilities, will generate powerful innovations across the public sphere, Korea has not fully brought the technology into the public sector for tasks such as identifying policy target groups and managing follow-up tasks in line with policy objectives. Recent cases of AI-applied public services in Korea show limited usage, mainly replacing simple repetitive tasks. A few leading countries are trying to apply AI-based analysis to select promising policy target groups to effectively achieve policy goals and to follow up on the performance of public projects. While the existing management system for policy performance is mostly about ex-post assessment of project outcomes, the application of AI technologies signifies a shift to data-driven decision-making that uses ex-ante forecasts of policy effects. An analysis of AI-applied recipient selection for small and medium enterprise (SME) policy support programs demonstrated the efficiency of AI in predicting the post-program performance of beneficiary firms and its potential to significantly improve the effectiveness of public support by providing helpful information for screening out unfit SMEs. Using firm-level data, this study applies machine learning to various public financing programs (subsidies or loans for SMEs) funded by the Ministry of SMEs and Startups and finds that AI helps predict the growth of recipient firms in the years following policy support. Applying AI to identify recipients likely to achieve the intended objectives may therefore increase project effectiveness. In a 2020 KDI survey, respondents pointed out that the main obstacles to transitioning into a system of AI-applied, data-driven policymaking in the public sector are: 1) incomplete standardization and linkage of policy information between governmental ministries and 2) a lack of expertise in technology utilization in the public sector. In developing a strategy to propel a transition toward data-driven policymaking in the public sector, coordinated national-level efforts must be made to heighten policy effectiveness across different public fields, including education, health care, public safety, national defense, and business support. One way to adopt AI technologies in the public sector is to design a policy that supports technology adoption by competent public institutions. Support measures may cover systems, data platforms, security, organizational consulting, training, etc. Detailed strategies are: 1) unifying existing data management systems into a single platform, 2) reorganizing the way government work gets done to enable efficient exchange of policy information, and 3) building a trust-based public-private partnership. Examining the policy cycle from planning and implementation to evaluation, it is important to clarify the areas where AI can contribute to policy decision-making. The government also needs step-by-step strategies toward data-driven policymaking, such as setting clear project objectives, selecting and sharing data, establishing systems and security, and promoting operational transparency. |
Keywords: | Artificial intelligence, Public sector, SME policy, South Korea |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:zbw:kdifor:288&r=cbe |
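The kind of AI-assisted recipient screening discussed in the abstract above can be sketched as a simple supervised-learning pipeline. The file name `sme_applicants.csv`, the feature columns, and the target column are hypothetical placeholders; the study's actual data, features, and model are not reproduced here.

```python
# Hypothetical sketch: predict post-support growth of SME applicants from
# firm-level features, then rank applicants to inform (not replace) selection.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

firms = pd.read_csv("sme_applicants.csv")               # hypothetical firm-level data
features = ["employees", "sales", "rnd_intensity", "firm_age", "industry_code"]
X, y = firms[features], firms["sales_growth_t_plus_2"]  # growth after support (assumed label)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))

# Rank new applicants by predicted growth as one input into the screening decision.
firms["predicted_growth"] = model.predict(X)
shortlist = firms.sort_values("predicted_growth", ascending=False).head(100)
```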
By: | Aliyev, Nihad; Huseynov, Fariz; Rzayev, Khaladdin |
Abstract: | Does the increased prevalence of algorithmic trading (AT) produce real economic effects? We find that AT contributes to managerial learning by fostering the production of new information and thereby increases firms' investment-to-price sensitivity. We link AT's impact on investment-to-price sensitivity to revelatory price efficiency, the extent to which stock prices reveal information relevant for real efficiency. AT-driven investment-to-price sensitivity helps managers make better investment decisions, leading to improved firm performance. While in aggregate AT contributes positively to managerial learning, we also show that a subset of AT strategies, namely opportunistic AT, is harmful to managerial learning. |
Keywords: | algorithmic trading; real effects of algorithmic trading; revelatory price efficiency; investment-to-price sensitivity |
JEL: | G20 G30 |
Date: | 2022–09–02 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:118844&r=cbe |
By: | Corrocher, Nicoletta; Moschella, Daniele; Staccioli, Jacopo; Vivarelli, Marco |
Abstract: | This paper deals with the complex relationship between innovation and the labor market, analyzing the impact of new technological advancements on overall employment, skills and wages. After a critical review of the extant literature and the available empirical studies, novel evidence is presented on the distribution of labor-saving automation (namely robotics and AI), based on natural language processing of US patents. This mapping shows that both upstream high-tech providers and downstream users of new technologies - such as Boeing and Amazon - lead the underlying innovative effort. |
Keywords: | Innovation, Technological Change, Skills, Wages, Technological Unemployment |
JEL: | O33 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:zbw:glodps:1284&r=cbe |
By: | Deng, Liuchun (Yale-NUS College); Müller, Steffen (IWH Halle); Plümpe, Verena (IWH Halle); Stegmaier, Jens (Institute for Employment Research (IAB), Nuremberg) |
Abstract: | We analyze the impact of robot adoption on employment composition using novel micro data on robot use of German manufacturing plants linked with social security records and data on job tasks. Our task-based model predicts more favorable employment effects for the least routine-task intensive occupations and for young workers, the latter being better at adapting to change. An event-study analysis for robot adoption confirms both predictions. We do not find decreasing employment for any occupational or age group but churning among low-skilled workers rises sharply. We conclude that the displacement effect of robots is occupation-biased but age neutral whereas the reinstatement effect is age-biased and benefits young workers most. |
Keywords: | robots, jobs, occupation, worker age |
JEL: | J23 |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:iza:izadps:dp16128&r=cbe |
By: | Eric Budish; Ruiquan Gao; Abraham Othman; Aviad Rubinstein; Qianfan Zhang |
Abstract: | Approximate Competitive Equilibrium from Equal Incomes (A-CEEI) is an equilibrium-based solution concept for fair division of discrete items to agents with combinatorial demands. In theory, it is known that in asymptotically large markets: 1. For incentives, the A-CEEI mechanism is Envy-Free-but-for-Tie-Breaking (EF-TB), which implies that it is Strategyproof-in-the-Large (SP-L). 2. From a computational perspective, computing the equilibrium solution is unfortunately a computationally intractable problem (in the worst case, assuming $\textsf{PPAD}\ne \textsf{FP}$). We develop a new heuristic algorithm that outperforms the previous state-of-the-art by multiple orders of magnitude. This new, faster algorithm lets us perform experiments on real-world inputs for the first time. We discover that with real-world preferences, even in a realistic implementation that satisfies the EF-TB and SP-L properties, agents may have surprisingly simple and plausible deviations from truthful reporting of preferences. To this end, we propose a novel strengthening of EF-TB, which dramatically reduces the potential for strategic deviations from truthful reporting in our experiments. A (variant of) our algorithm is now in production: on real course allocation problems it is much faster, has zero clearing error, and has stronger incentive properties than the prior state-of-the-art implementation. |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2305.11406&r=cbe |
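To make the notions of item prices, demanded bundles, and clearing error in the abstract above concrete, the sketch below runs a textbook-style tâtonnement loop on a toy market with additive preferences and near-equal budgets. It is emphatically not the authors' heuristic; the bundle search, the absolute-deviation clearing-error proxy, and the step size are all illustrative assumptions.

```python
# Toy tatonnement sketch for an A-CEEI-style allocation problem.
# Agents demand the affordable bundle (up to `max_items`) maximizing additive value
# at current prices; prices move with the sign of excess demand.
from itertools import combinations

def best_bundle(values, prices, budget, max_items=2):
    """Brute-force the agent's favorite affordable bundle in a tiny market."""
    items = range(len(values))
    best, best_val = (), 0.0
    for k in range(1, max_items + 1):
        for bundle in combinations(items, k):
            cost = sum(prices[i] for i in bundle)
            val = sum(values[i] for i in bundle)
            if cost <= budget and val > best_val:
                best, best_val = bundle, val
    return best

def clearing_error(demands, capacities):
    """Total absolute gap between demand counts and capacities (a simple proxy)."""
    counts = [sum(i in d for d in demands) for i in range(len(capacities))]
    return sum(abs(c - cap) for c, cap in zip(counts, capacities)), counts

def tatonnement(valuations, budgets, capacities, steps=200, lr=0.05):
    prices = [1.0] * len(capacities)
    for _ in range(steps):
        demands = [best_bundle(v, prices, b) for v, b in zip(valuations, budgets)]
        err, counts = clearing_error(demands, capacities)
        if err == 0:
            break
        # Raise prices on over-demanded items, lower them on under-demanded ones.
        prices = [max(0.01, p + lr * (c - cap))
                  for p, c, cap in zip(prices, counts, capacities)]
    return prices, demands, err
```

The paper's contribution lies in a far more sophisticated search that scales to real course-allocation instances; this loop only illustrates why driving clearing error to zero is the computational core of the problem.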
By: | Linyi Yang; Yingpeng Ma; Yue Zhang |
Abstract: | Financial forecasting has been an important and active area of machine learning research, as even the most modest advantage in predictive accuracy can be parlayed into significant financial gains. Recent advances in natural language processing (NLP) bring the opportunity to leverage textual data, such as earnings reports of publicly traded companies, to predict the return rate for an asset. However, when dealing with such a sensitive task, the consistency of models -- their invariance under meaning-preserving alterations in input -- is a crucial property for building user trust. Despite this, current financial forecasting methods do not consider consistency. To address this problem, we propose FinTrust, an evaluation tool that assesses logical consistency in financial text. Using FinTrust, we show that the consistency of state-of-the-art NLP models for financial forecasting is poor. Our analysis of the performance degradation caused by meaning-preserving alterations suggests that current text-based methods are not suitable for robustly predicting market information. All resources are available on GitHub. |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2305.08524&r=cbe |
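The consistency property discussed in the abstract above (invariance under meaning-preserving alterations of the input) can be checked with a small harness like the one below. The `predict_direction` stub and the paraphrase pairs are hypothetical; FinTrust itself defines richer transformations and metrics.

```python
# Minimal consistency harness: a text-based forecaster should give the same
# directional prediction for a sentence and for a meaning-preserving paraphrase.

def predict_direction(text: str) -> int:
    """Return +1 (price up) or -1 (price down) for an earnings-report snippet."""
    raise NotImplementedError("plug in a fine-tuned classifier or an LLM prompt")

def consistency_rate(pairs):
    """Fraction of (original, paraphrase) pairs receiving identical predictions."""
    agree = sum(predict_direction(a) == predict_direction(b) for a, b in pairs)
    return agree / len(pairs)

# Hypothetical meaning-preserving pairs.
pairs = [
    ("Quarterly revenue rose 12% year over year.",
     "Year-over-year, quarterly revenue increased by 12%."),
    ("The company lowered its full-year guidance.",
     "Full-year guidance was revised downward by the company."),
]
```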
By: | Lin An; Andrew A. Li; Benjamin Moseley; R. Ravi |
Abstract: | The standard newsvendor model assumes a stochastic demand distribution as well as costs for overages and underages. The celebrated critical fractile formula can be used to determine the optimal inventory level. While the model has been leveraged in numerous applications, in practice more characteristics and features of the problem are often known. Using these features, it is common to employ machine learning to predict inventory levels rather than the classic newsvendor approach. An emerging line of work has shown how to incorporate machine-learned predictions into models to circumvent lower bounds and give improved performance. This paper develops the first newsvendor model that incorporates machine-learned predictions. The paper considers a repeated newsvendor setting with nonstationary demand. There is a prediction for each period's demand and, as is the case in machine learning, the prediction can be noisy. The goal is for an inventory management algorithm to take advantage of the prediction when it is of high quality and to have performance bounded by the best possible algorithm without a prediction when the prediction is highly inaccurate. This paper proposes a generic model of a nonstationary newsvendor without predictions and develops optimal upper and lower bounds on the regret. The paper then proposes an algorithm that takes a prediction as advice and, without a priori knowledge of the accuracy of the advice, achieves nearly optimal minimax regret. The performance matches the best possible had the accuracy been known in advance. We show the theory is predictive of practice on real data and demonstrate empirically that our algorithm has a 14% to 19% lower cost than a clairvoyant who knows the quality of the advice beforehand. |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2305.07993&r=cbe |
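For reference, the critical fractile formula mentioned in the abstract above sets the order quantity at the demand quantile c_u / (c_u + c_o), where c_u is the per-unit underage (shortage) cost and c_o the per-unit overage (leftover) cost. A minimal sketch, assuming a known normal demand distribution plus an empirical-sample variant; the function names and the numbers in the example are illustrative only:

```python
# Critical fractile for the classic newsvendor: order the q* such that
# P(D <= q*) = c_u / (c_u + c_o).
import numpy as np
from scipy.stats import norm

def newsvendor_quantity_normal(mu, sigma, c_u, c_o):
    """Optimal order quantity when demand ~ Normal(mu, sigma)."""
    fractile = c_u / (c_u + c_o)
    return norm.ppf(fractile, loc=mu, scale=sigma)

def newsvendor_quantity_empirical(demand_samples, c_u, c_o):
    """Sample-quantile analogue when only historical demand draws are available."""
    fractile = c_u / (c_u + c_o)
    return np.quantile(demand_samples, fractile)

# Example: a lost margin of 4 per unit short and a holding cost of 1 per leftover unit
# give a fractile of 0.8, so the order sits above mean demand.
print(newsvendor_quantity_normal(mu=100, sigma=20, c_u=4.0, c_o=1.0))  # about 116.8
```

The paper's contribution is to handle the repeated, nonstationary case with a possibly noisy demand prediction per period; the static formula above is only the baseline it builds on.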