nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2024‒01‒22
ten papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Algorithmic Fairness with Feedback By John W. Patty; Elizabeth Maggie Penn
  2. The Economics of Human Oversight: How Norms and Incentives Affect Costs and Performance of AI Workers By Johann Laux; Fabian Stephany; Alice Liefgreen
  3. The New Age of Collusion? An Empirical Study into Airbnb's Pricing Dynamics and Market Behavior By Richeng Piao
  4. Artificial Intelligence in the Knowledge Economy By Enrique Ide; Eduard Talamas
  5. Artificial Intelligence, Tasks, Skills and Wages: Worker-Level Evidence from Germany By Engberg, Erik; Koch, Michael; Lodefalk, Magnus; Schroeder, Sarah
  6. Skills or Degree? The Rise of Skill-Based Hiring for AI and Green Jobs By Eugenia Gonzalez Ehlinger; Fabian Stephany
  7. AI Unboxed and Jobs: A Novel Measure and Firm-Level Evidence from Three Countries By Engberg, Erik; Görg, Holger; Lodefalk, Magnus; Javed, Farrukh; Längkvist, Martin; Monteiro, Natália; Kyvik Nordås, Hildegunn; Pulito, Giuseppe; Schroeder, Sarah; Tang, Aili
  8. How Generative-AI can be Effectively used in Government Chatbots By Zeteng Lin
  9. The Transformative Effects of AI on International Economics By Rafael Andersson Lipcsey
  10. Shai: A large language model for asset management By Zhongyang Guo; Guanran Jiang; Zhongdan Zhang; Peng Li; Zhefeng Wang; Yinchun Wang

  1. By: John W. Patty; Elizabeth Maggie Penn
    Abstract: The field of algorithmic fairness has emerged rapidly over the past 15 years as algorithms have become ubiquitous in everyday life. Algorithmic fairness traditionally considers statistical notions of fairness that algorithms might satisfy in decisions based on noisy data. We first show that these are theoretically disconnected from welfare-based notions of fairness. We then discuss two individual welfare-based notions of fairness, envy freeness and prejudice freeness, and establish conditions under which they are equivalent to error rate balance and predictive parity, respectively. We discuss the implications of these findings in light of the recently discovered impossibility theorem in algorithmic fairness (Kleinberg, Mullainathan, & Raghavan (2016); Chouldechova (2017)).
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.03155&r=ain
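    The two statistical criteria the abstract relates to welfare notions, error rate balance and predictive parity, can be checked directly from group-wise confusion matrices. A minimal sketch, using toy labels and predictions that are illustrative only and not data from the paper:

```python
# Sketch of the two fairness criteria named in the abstract.
# Error rate balance: equal false positive and false negative rates across groups.
# Predictive parity: equal positive predictive value across groups.
# The labels below are invented toy data for illustration.

def group_rates(y_true, y_pred):
    """Return (FPR, FNR, PPV) for one group's true labels and predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fpr = fp / (fp + tn) if (fp + tn) else 0.0
    fnr = fn / (fn + tp) if (fn + tp) else 0.0
    ppv = tp / (tp + fp) if (tp + fp) else 0.0
    return fpr, fnr, ppv

# Two demographic groups scored by the same classifier (toy data).
group_a = ([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
group_b = ([1, 0, 0, 1, 1, 0], [1, 0, 1, 1, 0, 0])

fpr_a, fnr_a, ppv_a = group_rates(*group_a)
fpr_b, fnr_b, ppv_b = group_rates(*group_b)

error_rate_balanced = (fpr_a == fpr_b) and (fnr_a == fnr_b)
predictive_parity = (ppv_a == ppv_b)
```

    The impossibility results cited in the abstract show that, outside of degenerate cases, these criteria cannot all hold simultaneously when base rates differ across groups.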
  2. By: Johann Laux; Fabian Stephany; Alice Liefgreen
    Abstract: The global surge in AI applications is transforming industries, leading to displacement and complementation of existing jobs, while also giving rise to new employment opportunities. Human oversight of AI is an emerging task in which human workers interact with an AI model to improve its performance, safety, and compliance with normative principles. Data annotation, encompassing the labelling of images or annotation of texts, serves as a critical human oversight process, as the quality of a dataset directly influences the quality of AI models trained on it. Therefore, the efficiency of human oversight work stands as an important competitive advantage for AI developers. This paper delves into the foundational economics of human oversight, with a specific focus on the impact of norm design and monetary incentives on data quality and costs. An experimental study involving 307 data annotators examines six groups with varying task instructions (norms) and monetary incentives. Results reveal that annotators provided with clear rules exhibit higher accuracy rates, outperforming those with vague standards by 14%. Similarly, annotators receiving an additional monetary incentive perform significantly better, with the highest accuracy rate recorded in the group working with both clear rules and incentives (87.5% accuracy). However, both groups require more time to complete tasks, with a 31% increase in average task completion time compared to those working with standards and no incentives. These empirical findings underscore the trade-off between data quality and efficiency in data curation, shedding light on the nuanced impact of norm design and incentives on the economics of AI development. The paper contributes experimental insights to discussions on the economic, ethical, and legal considerations of AI technologies.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.14565&r=ain
  3. By: Richeng Piao
    Abstract: This study investigates the implications of algorithmic pricing in digital marketplaces, focusing on Airbnb's pricing dynamics. With the advent of Airbnb's new pricing tool, this research explores how digital tools influence hosts' pricing strategies, potentially leading to market dynamics that straddle the line between efficiency and collusion. Utilizing a Regression Discontinuity Design (RDD) and Propensity Score Matching (PSM), the study examines the causal effects of the pricing tool on pricing behavior among hosts with different operational strategies. The findings aim to provide insights into the evolving landscape of digital economies, examining the balance between competitive market practices and the risk of tacit collusion facilitated by algorithmic pricing. This study contributes to the discourse on digital market regulation, offering a nuanced understanding of the implications of AI-driven tools in market dynamics and antitrust analysis.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.05633&r=ain
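    The core of the Regression Discontinuity Design the study employs is a local linear fit with an intercept shift at a cutoff. The sketch below illustrates the idea on synthetic data; the variable names, cutoff, and effect size are assumptions for illustration, not taken from the Airbnb study.

```python
# Minimal RDD sketch: estimate the jump in an outcome at a cutoff by fitting
# a linear model with separate slopes on each side and a treatment intercept.
# All data below is synthetic; "running" stands in for a hypothetical
# eligibility variable for the pricing tool.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
running = rng.uniform(-1, 1, n)         # distance to the (hypothetical) cutoff
treated = (running >= 0).astype(float)  # hosts past the cutoff get the tool
true_jump = 0.5
price = 1.0 + 0.8 * running + true_jump * treated + rng.normal(0, 0.1, n)

# Design matrix: intercept, slope, treatment jump, and slope change at cutoff.
X = np.column_stack([np.ones(n), running, treated, treated * running])
beta, *_ = np.linalg.lstsq(X, price, rcond=None)
estimated_jump = beta[2]  # discontinuity in price at the cutoff
```

    An actual application would add bandwidth selection around the cutoff and robustness checks; the study's Propensity Score Matching step additionally addresses selection into tool adoption, which this sketch ignores.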
  4. By: Enrique Ide; Eduard Talamas
    Abstract: How does Artificial Intelligence (AI) affect the organization of work and the structure of wages? We study this question in a model where heterogeneous agents in terms of knowledge--humans and machines--endogenously sort into hierarchical teams: Less knowledgeable agents become "workers" (i.e., execute routine tasks), while more knowledgeable agents become "managers" (i.e., specialize in problem solving). When AI's knowledge is equivalent to that of a pre-AI worker, AI displaces humans from routine work into managerial work compared to the pre-AI outcome. In contrast, when AI's knowledge is that of a pre-AI manager, it shifts humans from managerial work to routine work. AI increases total human labor income, but it necessarily creates winners and losers: When AI's knowledge is low, only the most knowledgeable humans experience income gains. In contrast, when AI's knowledge is high, both extremes of the knowledge distribution benefit. In any case, the introduction of AI harms the middle class.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.05481&r=ain
  5. By: Engberg, Erik (Örebro University School of Business); Koch, Michael (Aarhus University); Lodefalk, Magnus (Örebro University School of Business); Schroeder, Sarah (Aarhus University)
    Abstract: This paper documents novel facts on within-occupation task and skill changes over the past two decades in Germany. In a second step, it reveals a distinct relationship between occupational work content and exposure to artificial intelligence (AI) and automation (robots). Workers in occupations with high AI exposure perform different activities and face different skill requirements compared to workers in occupations exposed to robots. In a third step, the study uses individual labour market biographies to investigate the impact on wages between 2010 and 2017. Results indicate a wage growth premium in occupations more exposed to AI, contrasting with a wage growth discount in occupations exposed to robots. Finally, the study further explores the dynamic influence of AI exposure on individual wages over time, uncovering positive associations with wages, with nuanced variations across occupational groups.
    Keywords: Artificial intelligence technologies; Task content; Skills; Wages
    JEL: J23 J24 J44 N34 O33
    Date: 2023–12–27
    URL: http://d.repec.org/n?u=RePEc:hhs:oruesi:2023_012&r=ain
  6. By: Eugenia Gonzalez Ehlinger; Fabian Stephany
    Abstract: For emerging professions, such as jobs in the field of Artificial Intelligence (AI) or sustainability (green), labour supply does not meet industry demand. In this scenario of labour shortages, our work aims to understand whether employers have started focusing on individual skills rather than on formal qualifications in their recruiting. By analysing a large time series dataset of around one million online job vacancies between 2019 and 2022 from the UK and drawing on diverse literature on technological change and labour market signalling, we provide evidence that employers have started so-called "skill-based hiring" for AI and green roles, as more flexible hiring practices allow them to increase the available talent pool. In our observation period, the demand for AI roles grew twice as much as average labour demand. At the same time, the mention of university education for AI roles declined by 23%, while AI roles advertise five times as many skills as job postings on average. Our regression analysis also shows that university degrees no longer carry an educational premium for AI roles, while for green positions the educational premium persists. In contrast, AI skills carry a wage premium of 16%, similar to having a PhD (17%). Our work recommends making use of alternative skill-building formats such as apprenticeships, on-the-job training, MOOCs, vocational education and training, micro-certificates, and online bootcamps to use human capital to its full potential and to tackle talent shortages.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.11942&r=ain
  7. By: Engberg, Erik (Örebro University School of Business); Görg, Holger (University of Kiel); Lodefalk, Magnus (Örebro University School of Business); Javed, Farrukh (Lund University); Längkvist, Martin (Örebro University); Monteiro, Natália (University of Minho); Kyvik Nordås, Hildegunn (Örebro University School of Business); Pulito, Giuseppe (Humboldt University); Schroeder, Sarah (Aarhus University); Tang, Aili (Örebro University School of Business)
    Abstract: We unbox developments in artificial intelligence (AI) to estimate how exposure to these developments affects firm-level labour demand, using detailed register data from Denmark, Portugal and Sweden over two decades. Based on data on AI capabilities and occupational work content, we develop and validate a time-variant measure of occupational exposure to AI across subdomains of AI, including language modelling. According to our measure, white-collar occupations are most exposed to AI, especially white-collar work that entails relatively little social interaction. We illustrate the measure's usefulness by applying it to near-universal data on firms and individuals from Sweden, Denmark, and Portugal and estimating firm-level labour demand regressions. We find a positive (negative) association between AI exposure and labour demand for high-skilled white-collar (blue-collar) work. Overall, there is an up-skilling effect, with the share of white-collar to blue-collar workers increasing with AI exposure. Exposure to AI within the subdomains of image and language is positively (negatively) linked to demand for high-skilled white-collar (blue-collar) work, whereas other AI areas are heterogeneously linked to groups of workers.
    Keywords: Artificial intelligence; Labour demand; Multi-country firm-level evidence
    JEL: E24 J23 J24 N34 O33
    Date: 2023–12–27
    URL: http://d.repec.org/n?u=RePEc:hhs:oruesi:2023_013&r=ain
  8. By: Zeteng Lin
    Abstract: With the rapid development of artificial intelligence and breakthroughs in machine learning and natural language processing, intelligent question-answering robots have become widely used in government affairs. This paper conducts a horizontal comparison between Guangdong Province's government chatbots and two large language models, ChatGPT and Wenxin Ernie, to analyze the strengths and weaknesses of existing government chatbots and AIGC technology. The study finds significant differences between government chatbots and large language models. China's government chatbots are still in an exploratory stage and have a gap to close to achieve "intelligence." To explore the future direction of government chatbots more deeply, this research proposes targeted optimization paths to help generative AI be effectively applied in government chatbot conversations.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.02181&r=ain
  9. By: Rafael Andersson Lipcsey
    Abstract: As AI adoption accelerates, research on its economic impacts becomes a salient input for stakeholders in AI policy. Such research is, however, still in its infancy and in need of review. This paper aims to provide that review and is structured around two main themes: first, the path towards transformative AI, and second, the wealth created by it. It is found that the sectors most embedded in global value chains will drive economic impacts; hence special attention is paid to the international trade perspective. On the path towards transformative AI, research is heterogeneous in its predictions, with some predicting rapid, unhindered adoption and others taking a more conservative view based on potential bottlenecks and comparisons to past disruptive technologies. As for wealth creation, while there is some agreement on AI's growth-boosting abilities, predictions on timelines are lacking. Consensus exists, however, around the dispersion of AI-induced wealth, which is heavily biased towards developed countries due to phenomena such as anchoring and the reduced bargaining power of developing countries. Finally, the review identifies a shortcoming of economic growth models: their failure to consider AI risk. Based on the review, a calculated, slower adoption rate of AI technologies is recommended.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.06679&r=ain
  10. By: Zhongyang Guo; Guanran Jiang; Zhongdan Zhang; Peng Li; Zhefeng Wang; Yinchun Wang
    Abstract: This paper introduces "Shai", a 10B-level large language model specifically designed for the asset management industry, built upon an open-source foundational model. With continuous pre-training and fine-tuning using a targeted corpus, Shai demonstrates enhanced performance in tasks relevant to its domain, outperforming baseline models. Our research includes the development of an innovative evaluation framework, which integrates professional qualification exams, tailored tasks, open-ended question answering, and safety assessments, to comprehensively assess Shai's capabilities. Furthermore, we discuss the challenges and implications of utilizing large language models like GPT-4 for performance assessment in asset management, suggesting a combination of automated evaluation and human judgment. Shai's development showcases the potential and versatility of 10B-level large language models in the financial sector, with significant performance and modest computational requirements, and aims to provide practical insights and methodologies to assist industry peers in similar endeavors.
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2312.14203&r=ain

This nep-ain issue is ©2024 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.