New Economics Papers on Evolutionary Economics
By: Fabian Dvorak; Regina Stumpf; Sebastian Fehrler; Urs Fischbacher
Abstract: Generative artificial intelligence (AI) is poised to reshape the way individuals communicate and interact. While this form of AI has the potential to make numerous human decisions efficiently, there is limited understanding of how individuals respond to its use in social interaction. In particular, it remains unclear how individuals engage with algorithms when the interaction has consequences for other people. Here, we report the results of a large-scale pre-registered online experiment (N = 3,552) indicating diminished fairness, trust, trustworthiness, cooperation, and coordination by human players in economic two-player games when the decision of the interaction partner is taken over by ChatGPT. In contrast, we observe no adverse welfare effects when individuals are uncertain about whether they are interacting with a human or with generative AI. The promotion of AI transparency, often suggested as a remedy for the negative impacts of generative AI on society, therefore has a detrimental effect on welfare in our study. At the same time, participants frequently delegate decisions to ChatGPT, particularly when the AI's involvement is undisclosed, and individuals struggle to discern between AI and human decisions.
Date: 2024-01
URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.12773&r=evo
By: Scott E. Allen; René F. Kizilcec; A. David Redish
Abstract: More than 30 years of research has firmly established the vital role of trust in human organizations and relationships, but the underlying mechanisms by which people build, lose, and rebuild trust remain incompletely understood. We propose a mechanistic model of trust that is grounded in the modern neuroscience of decision making. Since trust requires anticipating the future actions of others, any mechanistic model must be built upon up-to-date theories of how the brain learns, represents, and processes information about the future within its decision-making systems. Contemporary neuroscience has revealed that decision making arises from multiple parallel systems that perform distinct, complementary information processing. Each system represents information in a different form and therefore learns via different mechanisms. When an act of trust is reciprocated or violated, this provides new information that can be used to anticipate future actions. The taxonomy of neural information representations that defines the boundaries between neural decision-making systems thus also provides a taxonomy for categorizing different forms of trust and for generating mechanistic predictions about how these forms of trust are learned and manifested in human behavior. Three key predictions arising from our model are that (1) strategic risk-taking can reveal how best to proceed in a relationship, (2) human organizations and environments can be intentionally designed to encourage trust among their members, and (3) violations of trust need not always degrade trust but can also provide opportunities to build it.
Date: 2024-01
URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.08064&r=evo
By: Houda Nait El Barj; Theophile Sautory
Abstract: We study the dynamics of opinions in a setting where a leader's payoff depends on agents' beliefs and where agents derive psychological utility from their beliefs. Agents sample a signal that maximises their utility and then communicate with each other through a network formed by disjoint social groups. The leader can target a finite set of social groups with a specific signal to influence their beliefs and maximise his returns. Heterogeneity in agents' preferences allows us to analyse the evolution of opinions as a dynamical system with asymmetric forces. We apply our model to explain the emergence of hatred and the spread of racism in a society. We show that when information is restricted, the equilibrium level of hatred is determined solely by the belief of the most extremist agent in the group, regardless of the inherent structure of the network. By contrast, when information is dense, the belief space is completely polarised in equilibrium, with multiple "local truths" that oscillate in periodic cycles. We find that when preferences are uniformly distributed, the equilibrium level of hatred depends solely on the value of the practical punishment associated with holding a hate belief. Our findings suggest that an optimal policy to reduce hatred should focus on increasing the cost associated with holding a racist belief.
Date: 2024-01
URL: http://d.repec.org/n?u=RePEc:arx:papers:2401.07178&r=evo
By: David Echeverry Pérez; María Cristina Figueroa; Sandra Polanía-Reyes
Keywords: Reciprocity, altruism, inequity aversion, latent class models, policy intervention
JEL: C51 C93 D63 H41 Q20
Date: 2023-11
URL: http://d.repec.org/n?u=RePEc:nva:unnvaa:wp02-2023&r=evo