nep-ain New Economics Papers
on Artificial Intelligence
Issue of 2023‒12‒18
fourteen papers chosen by
Ben Greiner, Wirtschaftsuniversität Wien


  1. Humanization of Virtual Assistants and Delegation Choices By Yang, Nanyin; Palma, Marco; Drichoutis, Andreas C.
  2. The Impact of Generative Artificial Intelligence By Kaichen Zhang; Ohchan Kwon; Hui Xiong
  3. AI-powered decision-making in facilitating insurance claim dispute resolution By Zhang, Wen; Shi, Jingwen; Wang, Xiaojun; Wynn, Henry
  4. Artificial intelligence and the skill premium By David E. Bloom; Klaus Prettner; Jamel Saadaoui; Mario Veruete
  5. "This time it's different" Generative Artificial Intelligence and Occupational Choice By Daniel Goller; Christian Gschwends; Stefan C. Wolter
  6. First, Do No Harm: Algorithms, AI, and Digital Product Liability By Marc J. Pfeiffer
  7. Critical AI Challenges in Legal Practice : An application to French Administrative Decisions By Khaoula Naili
  8. Large Language Models in Finance: A Survey By Yinheng Li; Shaofei Wang; Han Ding; Hang Chen
  9. Natural Language Processing for Financial Regulation By Ixandra Achitouv; Dragos Gorduza; Antoine Jacquier
  10. Predictive AI for SME and Large Enterprise Financial Performance Management By Ricardo Cuervo
  11. The Power of Trust: Designing Trustworthy Machine Learning Systems in Healthcare By Fecho, Mariska; Zöll, Anne
  12. Predicting Patient Length of Stay Using Artificial Intelligence to Assist Healthcare Professionals in Resource Planning and Scheduling Decisions By Yazan Alnsour; Marina Johnson; Abdullah Albizri; Antoine Harfouche
  13. AI Ethics and Ordoliberalism 2.0: Towards A 'Digital Bill of Rights' By Manuel Woersdoerfer
  14. A Hypothesis on Good Practices for AI-based Systems for Financial Time Series Forecasting: Towards Domain-Driven XAI Methods By Branka Hadji Misheva; Joerg Osterrieder

  1. By: Yang, Nanyin; Palma, Marco; Drichoutis, Andreas C.
    Abstract: Virtual assistants powered by artificial intelligence are present in virtually every aspect of daily life. Although they are computer algorithms, most are represented with humanized personal characteristics. We study whether assigning them a gender affects the propensity to delegate a search in two online experiments, and compare virtual assistants with human counterparts of identical characteristics. Virtual assistants generally receive higher delegation than humans. Gender has differential effects on delegation rates, impacting the user's welfare. The results are entirely driven by female subjects. We find mild spillover effects, primarily a decreased selection of male humans after interacting with low-quality male virtual assistants.
    Keywords: anthropomorphic features; artificial intelligence; autonomy; delegation; gender;
    JEL: C90 D23 D82 O33
    Date: 2023–11–26
    URL: http://d.repec.org/n?u=RePEc:pra:mprapa:119275&r=ain
  2. By: Kaichen Zhang; Ohchan Kwon; Hui Xiong
    Abstract: The rise of generative artificial intelligence (AI) has sparked concerns about its potential influence on unemployment and market depression. This study addresses this concern by examining the impact of generative AI on product markets. To overcome the challenge of causal inference, given the inherent limitations of conducting controlled experiments, this paper identifies an unanticipated and sudden leak of a highly proficient image-generative AI as a novel instance of a "natural experiment". This AI leak spread rapidly, significantly reducing the cost of generating anime-style images compared to other styles, creating an opportunity for comparative assessment. We collect real-world data from an artwork outsourcing platform. Surprisingly, our results show that while generative AI lowers average prices, it substantially boosts order volume and overall revenue. This counterintuitive finding suggests that generative AI confers benefits upon artists rather than detriments. The study further offers theoretical economic explanations to elucidate this unexpected phenomenon. By furnishing empirical evidence, this paper dispels the notion that generative AI might engender depression, instead underscoring its potential to foster market prosperity. These findings carry significant implications for practitioners, policymakers, and the broader AI community.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.07071&r=ain
  3. By: Zhang, Wen; Shi, Jingwen; Wang, Xiaojun; Wynn, Henry
    Abstract: Leveraging Artificial Intelligence (AI) techniques to empower decision-making can promote social welfare by generating significant cost savings and promoting efficient utilization of public resources, in addition to revolutionizing commercial operations. This study investigates how AI can expedite dispute resolution in road traffic accident (RTA) insurance claims, benefiting all parties involved. Specifically, we devise and implement a disciplined AI-driven approach to derive cost estimates and inform negotiation decision-making, compared to conventional practices that draw upon official guidance and lawyer experience. We build the investigation on 88 real-life RTA cases and detect an asymptotic relationship between the final judicial cost and the duration of the most severe injury, marked by a notable predicted R2 value of 0.527. Further, we illustrate how various AI-powered toolkits can facilitate information processing and outcome prediction: (1) how regular expressions (RegEx) collate precise injury information for subsequent predictive analysis; (2) how alternative natural language processing (NLP) techniques construct predictions directly from narratives. Our proposed RegEx framework enables automated information extraction that accommodates diverse report formats; the different NLP methods deliver comparably plausible performance. This research unleashes AI’s untapped potential for social good to reinvent legal-related decision-making processes, support litigation efforts, and aid in the optimization of legal resource consumption.
    Keywords: professional service operation; insurance claim; civil litigation; AI; natural language processing
    JEL: C1
    Date: 2023–10–23
    URL: http://d.repec.org/n?u=RePEc:ehl:lserod:120649&r=ain
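The entry above describes extracting precise injury information with regular expressions before predictive modeling. The paper's actual patterns are not reproduced in this digest; the following is only a minimal, hypothetical sketch of that kind of extraction, assuming claim reports state an injury duration in weeks or months:

```python
import re
from typing import Optional

# Hypothetical pattern (illustrative only): capture a stated duration,
# in weeks or months, near a prognosis/recovery/symptoms mention in a
# free-text medical report attached to a claim.
DURATION_RE = re.compile(
    r"(?:recovery|symptoms|prognosis)[^.]*?(\d+)\s*(week|month)s?",
    re.IGNORECASE,
)

def extract_duration_weeks(report: str) -> Optional[int]:
    """Return the injury duration in weeks, or None if nothing matches."""
    match = DURATION_RE.search(report)
    if match is None:
        return None
    value, unit = int(match.group(1)), match.group(2).lower()
    return value * 4 if unit == "month" else value  # crude month-to-week conversion
```

A pattern like this tolerates varied report formats because it anchors on a few keywords rather than a fixed layout, which is the property the abstract highlights.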
  4. By: David E. Bloom; Klaus Prettner; Jamel Saadaoui; Mario Veruete
    Abstract: What will likely be the effect of the emergence of ChatGPT and other forms of artificial intelligence (AI) on the skill premium? To address this question, we develop a nested constant elasticity of substitution production function that distinguishes between industrial robots and AI. Industrial robots predominantly substitute for low-skill workers, whereas AI mainly helps to perform the tasks of high-skill workers. We show that AI reduces the skill premium as long as it is more substitutable for high-skill workers than low-skill workers are for high-skill workers.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.09255&r=ain
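The mechanism in the abstract above can be illustrated with a standard nested CES specification; the notation below is illustrative and not necessarily the paper's exact functional form:

```latex
% Inner nests: robots R substitute for low-skill labor L; AI A for high-skill labor H.
X_L = \left( L^{\rho_L} + R^{\rho_L} \right)^{1/\rho_L},
\qquad
X_H = \left( H^{\rho_H} + A^{\rho_H} \right)^{1/\rho_H}

% Outer nest combines the two aggregates:
Y = \left[ \theta X_L^{\rho} + (1-\theta) X_H^{\rho} \right]^{1/\rho}

% Skill premium as the ratio of marginal products:
\frac{w_H}{w_L} = \frac{\partial Y / \partial H}{\partial Y / \partial L}
```

In this illustrative setup, an increase in A lowers the skill premium whenever the within-nest elasticity 1/(1-\rho_H) exceeds the across-nest elasticity 1/(1-\rho), i.e., when AI is more substitutable for high-skill workers than low-skill workers are for high-skill workers, matching the abstract's condition.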
  5. By: Daniel Goller; Christian Gschwends; Stefan C. Wolter
    Abstract: In this paper, we show the causal influence of the launch of generative AI in the form of ChatGPT on the search behavior of young people for apprenticeship vacancies. There is a strong and long-lasting decline in the intensity of searches for vacancies, which suggests great uncertainty among the affected cohort. Analyses based on the classification of occupations according to tasks, type of cognitive requirements, and the expected risk of automation to date show significant differences in the extent to which specific occupations are affected. Occupations with a high proportion of cognitive tasks, with high demands on language skills, and those whose automation risk had previously been assessed by experts as lower are significantly more affected by the decline. However, no differences can be found with regard to the proportion of routine vs. non-routine tasks.
    Keywords: Artificial intelligence, occupational choice, labor supply, technological change
    JEL: J24 O33
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:iso:educat:0209&r=ain
  6. By: Marc J. Pfeiffer
    Abstract: The ethical imperative for technology should be: first, do no harm. But digital innovations like AI and social media increasingly enable societal harms, from bias to misinformation. As these technologies grow ubiquitous, we need solutions to address unintended consequences. This report proposes a model to incentivize developers to prevent foreseeable algorithmic harms. It does this by expanding negligence and product liability laws. Digital product developers would be incentivized to mitigate potential algorithmic risks before deployment to protect themselves and investors. Standards and penalties would be set proportional to harm. Insurers would require harm mitigation during development in order to obtain coverage. This shifts tech ethics from "move fast and break things" to "first, do no harm." Details would need careful refinement between stakeholders to enact reasonable guardrails without stifling innovation. Policy and harm prevention frameworks would likely evolve over time. Similar accountability schemes have helped address workplace, environmental, and product safety. Introducing algorithmic harm negligence liability would acknowledge the real societal costs of unethical tech. The timing is right for reform. This proposal provides a model to steer the digital revolution toward human rights and dignity. Harm prevention must be prioritized over reckless growth. Vigorous liability policies are essential to stop technologists from breaking things.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.10861&r=ain
  7. By: Khaoula Naili (Université de Franche-Comté, CRESE, Besançon, France)
    Abstract: We use AI methods to evaluate the accuracy of several standard machine learning models for predicting judicial decision outcomes. We highlight the key steps and challenges in predicting judicial outcomes by applying these models to a database of administrative court decisions. These findings significantly contribute to our understanding of the potential advantages of AI in the context of predictive justice. We utilize AI methods to analyze administrative court decisions sourced from the database provided by the French Council of State. This analysis has been made possible by the Council of State’s decision to make its rulings publicly accessible since March 2022. Our innovative approach pioneers the use of prediction models on the open data of the French Council of State, addressing the complexities associated with data analysis. Our primary objective is to assess the accuracy of these models in predicting outcomes in French administrative tribunals and to identify the most effective model for forecasting administrative tribunal decisions. The selected models are trained and evaluated on multi-class datasets, where decisions are traditionally categorized into various classes.
    Keywords: Artificial intelligence, Machine learning, Natural language processing, Predictive justice, Legal text
    JEL: K4
    Date: 2023–12
    URL: http://d.repec.org/n?u=RePEc:afd:wpaper:2304&r=ain
  8. By: Yinheng Li; Shaofei Wang; Han Ding; Hang Chen
    Abstract: Recent advances in large language models (LLMs) have opened new possibilities for artificial intelligence applications in finance. In this paper, we provide a practical survey focused on two key aspects of utilizing LLMs for financial tasks: existing solutions and guidance for adoption. First, we review current approaches employing LLMs in finance, including leveraging pretrained models via zero-shot or few-shot learning, fine-tuning on domain-specific data, and training custom LLMs from scratch. We summarize key models and evaluate their performance improvements on financial natural language processing tasks. Second, we propose a decision framework to guide financial professionals in selecting the appropriate LLM solution based on their use case constraints around data, compute, and performance needs. The framework provides a pathway from lightweight experimentation to heavy investment in customized LLMs. Lastly, we discuss limitations and challenges around leveraging LLMs in financial applications. Overall, this survey aims to synthesize the state-of-the-art and provide a roadmap for responsibly applying LLMs to advance financial AI.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.10723&r=ain
  9. By: Ixandra Achitouv; Dragos Gorduza; Antoine Jacquier
    Abstract: This article provides an understanding of Natural Language Processing techniques in the framework of financial regulation, more specifically in order to perform semantic matching search between rules and policies when no dataset is available for supervised learning. We outline how to outperform simple pre-trained sentence-transformer models using freely available resources, and explain the mathematical concepts behind the key building blocks of Natural Language Processing.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.08533&r=ain
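The paper above performs semantic matching between regulatory rules and policy text via sentence embeddings. As a self-contained stand-in (a crude bag-of-words model, not the transformer embeddings the paper builds on), the core search step can be sketched as:

```python
import math
from collections import Counter

def bow_vector(text: str) -> Counter:
    """Lowercase bag-of-words counts: a crude stand-in for sentence embeddings."""
    return Counter(text.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[t] * v[t] for t in u)
    norm = (math.sqrt(sum(c * c for c in u.values()))
            * math.sqrt(sum(c * c for c in v.values())))
    return dot / norm if norm else 0.0

def best_match(rule: str, policies: list[str]) -> str:
    """Return the policy passage most similar to the rule text."""
    rv = bow_vector(rule)
    return max(policies, key=lambda p: cosine(rv, bow_vector(p)))
```

Replacing `bow_vector` with a learned sentence encoder, while keeping the same cosine-ranked search, is the unsupervised setup the abstract describes.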
  10. By: Ricardo Cuervo
    Abstract: Financial performance management is at the core of business management and has historically relied on financial ratio analysis, using Balance Sheet and Income Statement data to assess company performance relative to competitors. Little progress has been made in predicting how a company will perform or in assessing the risks (probabilities) of financial underperformance. In this study I introduce a new set of financial and macroeconomic ratios that supplement standard Balance Sheet and Income Statement ratios. I also provide a set of supervised learning models (ML Regressors and Neural Networks) and Bayesian models to predict company performance. I conclude that the new proposed variables improve model accuracy when used in tandem with standard industry ratios. I also conclude that Feedforward Neural Networks (FNN) are simpler to implement and perform best across 6 predictive tasks (ROA, ROE, Net Margin, Op Margin, Cash Ratio and Op Cash Generation), although Bayesian Networks (BN) can outperform FNN under very specific conditions. BNs have the additional benefit of providing a probability density function alongside the predicted (expected) value. The study's findings can help CFOs and CEOs assess the risks of financial underperformance and steer companies in more profitable directions, support lenders in better assessing the condition of a company, and provide investors with tools to dissect the financial statements of public companies more accurately.
    Date: 2023–09
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.05840&r=ain
  11. By: Fecho, Mariska; Zöll, Anne
    Abstract: Machine Learning (ML) systems have enormous potential to improve medical care, but skepticism about their use persists. Their inscrutability is a major concern: it can foster negative attitudes, reduce end users' trust, and lead to rejection. Consequently, many ML systems in healthcare suffer from a lack of user-centricity. To overcome these challenges, we designed a user-centered, trustworthy ML system by applying design science research. The design includes meta-requirements and design principles instantiated by mockups, and is grounded in our kernel theory, the Trustworthy Artificial Intelligence principles. In three design cycles, we refined the design through focus group discussions (N1=8), evaluation of existing applications, and an online survey (N2=40). Finally, an effectiveness test was conducted with end users (N3=80) to assess the perceived trustworthiness of our design. The results demonstrated that end users did indeed perceive our design as more trustworthy.
    Date: 2023–12–10
    URL: http://d.repec.org/n?u=RePEc:dar:wpaper:138903&r=ain
  12. By: Yazan Alnsour (UWO - University of Wisconsin Oshkosh); Marina Johnson (MSU - Montclair State University [USA]); Abdullah Albizri (MSU - Montclair State University [USA]); Antoine Harfouche (UPN - Université Paris Nanterre)
    Abstract: Artificial intelligence (AI) is significantly transforming the global healthcare industry by improving outcomes, increasing efficiency, and enhancing resource utilization. The applications of AI impact every aspect of healthcare operations, particularly resource allocation and capacity planning. This study proposes a multi-step AI-based framework and applies it to a real dataset to predict the length of stay (LOS) for hospitalized patients. The results show that the proposed framework can predict LOS categories with an AUC of 0.85 and actual LOS with a mean absolute error of 0.85 days. This framework can support decision-makers in healthcare facilities providing inpatient care to make better front-end operational decisions, such as resource capacity planning and scheduling. Predicting LOS is pivotal in today's healthcare supply chain (HSC) systems, where resources are scarce and demand is abundant due to various global crises and pandemics. Thus, this research's findings have practical and theoretical implications for AI and HSC management.
    Keywords: Artificial Intelligence, Predictive Analytics, Length of Stay, Healthcare Supply Chain, Clinical Decision Support
    Date: 2023–01
    URL: http://d.repec.org/n?u=RePEc:hal:journl:hal-04263512&r=ain
  13. By: Manuel Woersdoerfer
    Abstract: This article analyzes AI ethics from a distinct business ethics perspective, i.e., 'ordoliberalism 2.0.' It argues that the ongoing discourse on (generative) AI relies too much on corporate self-regulation and voluntary codes of conduct and thus lacks adequate governance mechanisms. To address these issues, the paper suggests not only introducing hard-law legislation with a more effective oversight structure but also merging already existing AI guidelines with an ordoliberal-inspired regulatory and competition policy. However, this link between AI ethics, regulation, and antitrust is not yet adequately discussed in the academic literature and beyond. The paper thus closes a significant gap in the academic literature and adds to the predominantly legal-political and philosophical discourse on AI governance. The paper's research questions and goals are twofold: First, it identifies ordoliberal-inspired AI ethics principles that could serve as the foundation for a 'digital bill of rights.' Second, it shows how those principles could be implemented at the macro level with the help of ordoliberal competition and regulatory policy.
    Date: 2023–10
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.10742&r=ain
  14. By: Branka Hadji Misheva; Joerg Osterrieder
    Abstract: Machine learning and deep learning have become increasingly prevalent in financial prediction and forecasting tasks, offering advantages such as enhanced customer experience, democratising financial services, improving consumer protection, and enhancing risk management. However, these complex models often lack transparency and interpretability, making them challenging to use in sensitive domains like finance. This has led to the rise of eXplainable Artificial Intelligence (XAI) methods aimed at creating models that are easily understood by humans. Classical XAI methods, such as LIME and SHAP, have been developed to provide explanations for complex models. While these methods have made significant contributions, they also have limitations, including computational complexity, inherent model bias, sensitivity to data sampling, and challenges in dealing with feature dependence. In this context, this paper explores good practices for deploying explainability in AI-based systems for finance, emphasising the importance of data quality, audience-specific methods, consideration of data properties, and the stability of explanations. These practices aim to address the unique challenges and requirements of the financial industry and guide the development of effective XAI tools.
    Date: 2023–11
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2311.07513&r=ain
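The paper above surveys post-hoc explanation methods such as LIME and SHAP. A much simpler member of the same model-agnostic family, permutation importance, conveys the core idea in a few lines; this sketch is offered only to illustrate the class of methods discussed, not anything proposed in the paper:

```python
import random

def _mse(model, X, y):
    """Mean squared error of `model` over rows X against targets y."""
    return sum((model(row) - t) ** 2 for row, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, n_repeats=10, seed=0):
    """Mean increase in squared error when one feature column is shuffled.

    `model` is any callable mapping a feature list to a prediction, so the
    method is model-agnostic; a larger return value means the model relies
    more heavily on that feature.
    """
    rng = random.Random(seed)
    baseline = _mse(model, X, y)
    increases = []
    for _ in range(n_repeats):
        col = [row[feature] for row in X]
        rng.shuffle(col)  # break the feature's link to the target
        X_perm = [row[:feature] + [v] + row[feature + 1:]
                  for row, v in zip(X, col)]
        increases.append(_mse(model, X_perm, y) - baseline)
    return sum(increases) / n_repeats
```

The limitations the abstract lists, such as sensitivity to data sampling and trouble with dependent features, are visible even here: shuffled values are drawn from the observed sample, and shuffling one of two correlated features produces unrealistic rows.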

This nep-ain issue is ©2023 by Ben Greiner. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at https://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.