NEP: New Economics Papers
on Cognitive and Behavioural Economics
Issue of 2024‒07‒22
three papers chosen by
By: Paul Heidhues (Heinrich Heine University Düsseldorf & DICE); Botond Kőszegi (University of Bonn); Philipp Strack (Yale University)
Abstract: We model an agent who stubbornly underestimates how much his behavior is driven by undesirable motives, and, attributing his behavior to other considerations, updates his views about those considerations. We study general properties of the model, and then apply the framework to identify novel implications of partially naive present bias. In many stable situations, the agent appears realistic in that he eventually predicts his behavior well. His unrealistic self-view does, however, manifest itself in several other ways. First, in basic settings he always comes to act in a more present-biased manner than a sophisticated agent. Second, he systematically mispredicts how he will react when circumstances change, such as when incentives for forward-looking behavior increase or he is placed in a new, ex-ante identical environment. Third, even for physically non-addictive products, he follows empirically realistic addiction-like consumption dynamics that he does not anticipate. Fourth, he holds beliefs that, when compared to those of other agents, display puzzling correlations between logically unrelated issues. Our model implies that existing empirical tests of sophistication in intertemporal choice can reach incorrect conclusions. Indeed, we argue that some previous findings are more consistent with our model than with a model of correctly specified learning. (A brief formal sketch of the present-bias setup referenced here follows this entry.)
Keywords: Present bias, naivete, sophistication, misspecified learning, apparent sophistication, implicit bias, prejudice
JEL: D91 D83 D11
Date: 2024–06
URL: https://d.repec.org/n?u=RePEc:ajk:ajkdps:317&r=
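For readers new to the terminology, here is a minimal sketch of the standard quasi-hyperbolic (beta-delta) model of present bias with partial naivete, which the abstract's language suggests is the underlying framework. The notation is the textbook O'Donoghue-Rabin formulation, not necessarily the paper's own.

    % At time t the agent values the utility stream (u_t, u_{t+1}, ...) as
    U_t = u_t + \beta \sum_{k=1}^{\infty} \delta^{k} u_{t+k},
    \qquad 0 < \beta < 1, \quad 0 < \delta \le 1 .
    % Partial naivete: the agent believes his future selves discount
    % with \hat{\beta} rather than the true \beta, where
    \beta < \hat{\beta} \le 1 .
    % \hat{\beta} = \beta is full sophistication; \hat{\beta} = 1 is full naivete.

A partially naive agent thus overestimates his own future patience, which is exactly the kind of unrealistic self-view the paper's misattribution mechanism operates on.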
By: Rojas Méndez, Ana María; Scartascini, Carlos
Abstract: Behavioral biases often lead to suboptimal decisions, a vulnerability that extends to policymakers, who operate under fatigue, stress, and time constraints, with significant implications for public welfare. While behavioral economics offers strategies such as default adjustments to mitigate decision-making costs, deploying these policy interventions is not always feasible. Enhancing the quality of policy decision-making is therefore crucial, and evidence suggests that targeted training can boost job performance among policymakers. This study evaluates the impact of a behavioral training course on policy decision-making through a randomized experiment and a survey test incorporating problem-solving and decision-making tasks among approximately 25,000 participants enrolled in the course. Our findings reveal a significant improvement in the treated group, with responses averaging 0.6 standard deviations better than those in the control group. Given the increasing prevalence of such courses, this paper underscores the potential of behavioral training to improve policy decisions and advocates further research through additional experimental studies. (A sketch of the standardized effect-size comparison behind the 0.6 figure follows this entry.)
Keywords: Experimental design; behavioral economics; training; public policy; government officials
JEL: H83 Z18
Date: 2024–04
URL: https://d.repec.org/n?u=RePEc:idb:brikps:13476&r=
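To make the headline result concrete, the following is a minimal sketch, on simulated data, of how a "0.6 standard deviations" treatment effect is typically computed for a two-group randomized experiment (Cohen's d with a pooled standard deviation). This is not the authors' code; all names and values are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    control = rng.normal(loc=0.0, scale=1.0, size=5000)  # test scores, control group
    treated = rng.normal(loc=0.6, scale=1.0, size=5000)  # test scores, treated group

    # Standardized effect size: difference in means over the pooled SD.
    n_t, n_c = len(treated), len(control)
    pooled_sd = np.sqrt(((n_t - 1) * treated.var(ddof=1) +
                         (n_c - 1) * control.var(ddof=1)) / (n_t + n_c - 2))
    d = (treated.mean() - control.mean()) / pooled_sd
    print(f"standardized effect: {d:.2f} SD")  # ~0.6 by construction

Expressing the effect in standard-deviation units is what lets a result like this be compared across different tests and scoring scales.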
By: Hein, Ilka; Cecil, Julia (Ludwig-Maximilians-Universität München); Lermer, Eva (LMU Munich)
Abstract: Artificial intelligence (AI) is increasingly taking over leadership tasks in companies, including the provision of feedback. However, the effect of AI-driven feedback on employees and its theoretical foundations are poorly understood. We aimed to reduce this research gap by comparing perceptions of AI and human feedback based on construal level theory and the feedback process model. A 2 × 2 between-subjects design with vignettes was applied to manipulate feedback source (human vs. AI) and valence (negative vs. positive). In a preregistered experimental study (S1) and a subsequent direct replication (S2), responses from N = 263 (S1) and N = 449 (S2) participants who completed a German online questionnaire were studied. Regression analyses showed that AI feedback was rated as less accurate and led to lower performance motivation, lower acceptance of the feedback provider, and lower intention to seek further feedback. These effects were mediated by perceived social distance. Moreover, for feedback acceptance and performance motivation, the differences were found only for positive, not negative, feedback in the first study. This implies that AI feedback is not inherently perceived more negatively than human feedback; the evaluation depends on the feedback's valence. Furthermore, the mediation effects indicate that the observed negative evaluations of the AI can be explained by higher social distance, and that increased social closeness to feedback providers may improve appraisals of them and of their feedback. Theoretical contributions of the studies and implications for the use of AI for providing feedback in the workplace are discussed, emphasizing the influence of effects related to construal level theory. (A sketch of the kind of mediation analysis described here follows this entry.)
Date: 2024–06–06
URL: https://d.repec.org/n?u=RePEc:osf:osfxxx:uczaw&r=
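The mediation result lends itself to a short illustration. Below is a minimal sketch, on simulated data, of the Baron-Kenny-style regression steps such an analysis typically involves ("feedback source -> perceived social distance -> feedback acceptance"). The variable names, effect sizes, and the statsmodels-based approach are our assumptions, not the authors' actual pipeline.

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(1)
    n = 400
    ai_source = rng.integers(0, 2, size=n)               # 0 = human, 1 = AI
    distance = 0.8 * ai_source + rng.normal(size=n)      # mediator: perceived social distance
    acceptance = -0.5 * distance + rng.normal(size=n)    # outcome: feedback acceptance

    # Step 1: total effect of feedback source on acceptance.
    total = sm.OLS(acceptance, sm.add_constant(ai_source)).fit()
    # Step 2: effect of feedback source on the mediator.
    a_path = sm.OLS(distance, sm.add_constant(ai_source)).fit()
    # Step 3: source and mediator together; the direct source effect
    # should shrink toward zero if social distance carries the effect.
    X = sm.add_constant(np.column_stack([ai_source, distance]))
    b_path = sm.OLS(acceptance, X).fit()

    print(total.params, a_path.params, b_path.params)

In practice the indirect effect (the product of the source-to-mediator and mediator-to-outcome coefficients) would be tested with a bootstrap rather than read off the coefficient attenuation alone.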