on Utility Models and Prospect Theory
Issue of 2024‒07‒22
seventeen papers chosen by
By: | Whelan, Karl
Abstract: | Samuelson (1963) conjectured that accepting multiple independent gambles you would reject on a stand-alone basis violated expected utility theory. Ross (1999) and others presented examples where expected utility maximizers would accept multiple gambles that would be rejected on a stand-alone basis once the number of gambles gets large enough. We show that a stronger result than Samuelson's conjecture applies for DARA preferences over wealth. Expected utility maximizers with DARA preferences have threshold levels of wealth such that those above the threshold will accept N positive-expected-value gambles while those below will not, and these thresholds are increasing in N.
Keywords: | Risk aversion; Paul Samuelson; Law of large numbers |
JEL: | D81 |
Date: | 2024–07–04 |
URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:121384&r= |
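A minimal numerical sketch of the threshold result described in the abstract above, under assumptions of my own choosing: CRRA utility (one DARA family) and an illustrative 50/50 gamble that wins $200 or loses $100. It scans for the smallest wealth at which accepting all N gambles beats rejecting them; per the abstract, these thresholds should rise with N.

```python
# Sketch only: CRRA utility (a DARA family) and a 50/50 win-$200/lose-$100 gamble,
# both chosen by me for illustration; not the paper's calibration.
from math import comb

def crra(x, g=2.0):
    """CRRA utility; g > 1 keeps it DARA and defined for positive wealth."""
    return x ** (1.0 - g) / (1.0 - g)

def eu_accepting_n(wealth, n, g=2.0, win=200.0, loss=100.0):
    """E[u(wealth + S_n)], with S_n the sum of n i.i.d. 50/50 gambles."""
    return sum(comb(n, k) / 2 ** n * crra(wealth + win * k - loss * (n - k), g)
               for k in range(n + 1))

def acceptance_threshold(n, g=2.0, win=200.0, loss=100.0, step=1.0):
    """Smallest wealth (on a grid) at which accepting all n gambles beats rejecting them."""
    w = loss * n + step                       # keep wealth positive in the worst case
    while eu_accepting_n(w, n, g, win, loss) < crra(w, g):
        w += step
    return w

if __name__ == "__main__":
    for n in (1, 2, 5, 10, 20):
        # Per the abstract's result, these thresholds should be increasing in n.
        print(n, acceptance_threshold(n))
```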
By: | René Van Den Brink (Department of Economics and Tinbergen Institute, VU University, Amsterdam, The Netherlands); Agnieszka Rusinowska (Centre d'Economie de la Sorbonne, CNRS, Université Paris 1 Panthéon-Sorbonne, Paris School of Economics) |
Abstract: | This paper aims to connect the social network literature on centrality measures with the economic literature on von Neumann-Morgenstern expected utility functions using cooperative game theory. The social network literature studies various concepts of network centrality, such as degree, betweenness, connectedness, and so on. This has resulted in a great number of network centrality measures, each measuring centrality in a different way. In this paper, we explore which centrality measures can be supported as von Neumann-Morgenstern expected utility functions, reflecting preferences over different network positions in different networks. Besides standard axioms on lotteries and preference relations, we consider neutrality to ordinary risk. We show that this leads to a class of centrality measures that is fully determined by the degrees (i.e. the numbers of neighbours) of the positions in a network. Although this allows for externalities, in the sense that the preferences of a position might depend on the way other positions are connected, these externalities can be taken into account only by considering the degrees of the network positions. Besides bilateral networks, we extend our result to general cooperative TU-games to give a utility foundation of a class of TU-game solutions containing the Shapley value.
Keywords: | group decisions and negotiations; weighted graph; degree centrality; von Neumann-Morgenstern expected utility function; cooperative game |
JEL: | D85 D81 C02 |
Date: | 2023–08 |
URL: | https://d.repec.org/n?u=RePEc:mse:cesdoc:23012r&r= |
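A toy illustration of the paper's conclusion that, under neutrality to ordinary risk, preferences over network positions reduce to degree comparisons. The networks, positions, and lottery below are invented; the only point is that ranking lotteries over positions needs nothing beyond degrees.

```python
# Toy sketch of the degree-based utility class described in the preceding abstract.
# Networks and lottery probabilities are invented for illustration.

def degree(network, position):
    """Degree of a position, with the network given as an adjacency dictionary."""
    return len(network[position])

def expected_degree_utility(lottery):
    """Expected utility of a lottery over (probability, network, position) outcomes,
    using the degree itself as the von Neumann-Morgenstern utility index."""
    return sum(p * degree(net, pos) for p, net, pos in lottery)

if __name__ == "__main__":
    line = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}                     # path a-b-c
    star = {"hub": {"x", "y", "z"}, "x": {"hub"}, "y": {"hub"}, "z": {"hub"}}
    # 50/50 lottery between the middle of the line and a spoke of the star,
    # versus getting the hub of the star for sure:
    risky = [(0.5, line, "b"), (0.5, star, "x")]
    safe = [(1.0, star, "hub")]
    print(expected_degree_utility(risky), expected_degree_utility(safe))  # 1.5 vs 3.0
```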
By: | David Gunawan; William Griffiths; Duangkamon Chotikapanich |
Abstract: | Using both single-index measures and stochastic dominance concepts, we show how Bayesian inference can be used to make multivariate welfare comparisons. A four-dimensional distribution for the well-being attributes income, mental health, education, and happiness is estimated via Bayesian Markov chain Monte Carlo using unit-record data taken from the Household, Income and Labour Dynamics in Australia survey. Marginal distributions of beta and gamma mixtures and discrete ordinal distributions are combined using a copula. Improvements in both well-being generally and poverty magnitude are assessed using posterior means of single-index measures and posterior probabilities of stochastic dominance. The conditions for stochastic dominance depend on the class of utility functions that is assumed to define a social welfare function and the number of attributes in the utility function. Three classes of utility functions are considered, and posterior probabilities of dominance are computed for one-, two-, and four-attribute utility functions for three time intervals within the period 2001 to 2019.
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.13395&r= |
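A stylized sketch of the copula construction the abstract describes: marginals are tied together with a Gaussian copula and a single-index welfare measure is computed by simulation. The marginals (gamma income, beta mental-health score), the correlation, and the welfare index are illustrative choices of mine, not the paper's estimates.

```python
# Stylized copula sketch: the marginals, correlation, and utility index are my own
# illustrative choices, not the paper's mixture estimates.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000

# Step 1: draw correlated uniforms from a Gaussian copula.
corr = np.array([[1.0, 0.4],
                 [0.4, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=corr, size=n)
u = stats.norm.cdf(z)                                    # uniform margins, dependence kept

# Step 2: push the uniforms through the marginal quantile functions.
income = stats.gamma(a=2.0, scale=25_000).ppf(u[:, 0])   # right-skewed income
mental_health = stats.beta(a=5.0, b=2.0).ppf(u[:, 1])    # bounded score in (0, 1)

# Step 3: a simple additive single-index welfare measure and a joint-shortfall poverty rate.
welfare = np.mean(np.log(income) + mental_health)
poverty_rate = np.mean((income < 20_000) & (mental_health < 0.5))
print(round(welfare, 3), round(poverty_rate, 4))
```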
By: | Mario Ghossoub; Michael Boyuan Zhu |
Abstract: | We study Pareto efficiency in a pure-exchange economy where agents' preferences are represented by risk-averse monetary utilities. These coincide with law-invariant monetary utilities, and they can be shown to correspond to the class of monotone, (quasi-)concave, Schur concave, and translation-invariant utility functionals. This covers a large class of utility functionals, including a variety of law-invariant robust utilities. We show that Pareto optima exist and are comonotone, and we provide a crisp characterization thereof in the case of law-invariant positively homogeneous monetary utilities. This characterization provides an easily implementable algorithm that fully determines the shape of Pareto-optimal allocations. In the special case of law-invariant comonotone-additive monetary utility functionals (concave Yaari-Dual utilities), we provide a closed-form characterization of Pareto optima. As an application, we examine risk-sharing markets where all agents evaluate risk through law-invariant coherent risk measures, a widely popular class of risk measures. In a numerical illustration, we characterize Pareto-optimal risk-sharing for some special types of coherent risk measures. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.02712&r= |
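A small sketch of two ingredients the abstract works with: Expected Shortfall as a law-invariant coherent risk measure, computed from a sample, and a comonotone allocation of an aggregate loss, in which each agent's share is a non-decreasing function of the total. The tranche split below is only an example of a comonotone allocation, not a claim about the Pareto optimum.

```python
# Sketch only: the lognormal aggregate loss and the attachment point are illustrative.
import numpy as np

rng = np.random.default_rng(1)
total_loss = rng.lognormal(mean=0.0, sigma=1.0, size=200_000)   # aggregate loss X >= 0

def expected_shortfall(losses, alpha=0.95):
    """Average loss beyond the alpha-quantile (a coherent, law-invariant risk measure)."""
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

# A comonotone split: agent 1 takes losses up to an attachment point, agent 2 the excess.
attachment = 2.0
share_1 = np.minimum(total_loss, attachment)        # non-decreasing in total_loss
share_2 = np.maximum(total_loss - attachment, 0.0)  # non-decreasing in total_loss
assert np.allclose(share_1 + share_2, total_loss)   # shares add up to the aggregate

for name, share in [("agent 1", share_1), ("agent 2", share_2), ("total", total_loss)]:
    print(name, round(expected_shortfall(share), 3))
```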
By: | Hiroki Saruya; Masayuki Yagasaki |
Abstract: | Social norms are an important determinant of behavior, but the behavioral and welfare effects of norms are not well understood. We propose and axiomatize a decision-theoretic model in which a reference point is formed by the decision maker's perceptions of which actions are admired (prescriptive norms) and which are prevalent (descriptive norms), and utility depends on the pride of exceeding the reference point or the shame of falling below it. The model is simple, yet provides a unified explanation for previous empirical findings, and is useful for welfare analysis of norm-evoking policies with a revealed preference approach. |
Date: | 2024–06–24 |
URL: | https://d.repec.org/n?u=RePEc:toh:tupdaa:50&r= |
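A back-of-the-envelope sketch of the pride/shame mechanism the abstract describes, not the authors' axiomatized model: the reference point blends a prescriptive norm (the admired action) and a descriptive norm (the prevalent action), and utility adds pride above the reference and shame below it. All functional forms and parameters are invented.

```python
# Illustrative only: functional forms and parameters are my own, not the paper's axioms.

def reference_point(admired, prevalent, weight_on_admired=0.5):
    """Reference action formed from prescriptive and descriptive norms."""
    return weight_on_admired * admired + (1.0 - weight_on_admired) * prevalent

def utility(action, admired, prevalent, cost=0.3, pride=1.0, shame=2.0):
    """Effort cost plus pride above the reference and shame below it."""
    ref = reference_point(admired, prevalent)
    gain_loss = pride * max(action - ref, 0.0) - shame * max(ref - action, 0.0)
    return -cost * action ** 2 + gain_loss

if __name__ == "__main__":
    # Example: contributions on a grid. Raising the descriptive norm shifts behaviour.
    actions = [i * 0.5 for i in range(11)]                 # 0.0, 0.5, ..., 5.0
    for prevalent in (1.0, 3.0):
        best = max(actions, key=lambda a: utility(a, admired=4.0, prevalent=prevalent))
        print(f"prevalent norm {prevalent}: chosen action {best}")
```

In this toy calibration the chosen action tracks the reference point (bunching at the norm), which is the kind of behaviour reference-dependent models of norms are meant to capture.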
By: | Marco Cozzi (Department of Economics, University of Victoria) |
Abstract: | A vast experimental literature in both psychology and economics documents that individuals exhibit rank-dependent probability weighting in economic decisions characterized by risk. I incorporate this well-known behavioral bias in a rank-dependent expected utility (RDEU) model. I develop a dynamic general equilibrium model with heterogeneous agents, labor market risk, and aggregate fluctuations, featuring households that are RDEU maximizers in an environment characterized by realistic labor market dynamics. I use the model to quantify the importance of RDEU for a number of macroeconomic outcomes. In a calibration of the model exploiting U.S. data, I find that RDEU plays a quantitatively important role for both the amount of aggregate wealth and the degree of wealth inequality, which are affected by the increased importance of precautionary saving driven by households' pessimism. As for the outcomes routinely studied in business-cycle analysis, such as the volatility of both consumption and investment (relative to income), I find that RDEU improves the fit of the model. Overall, in terms of the discrepancy between model-generated business-cycle statistics and the data, the RDEU models attain lower root mean squared errors than the EU one.
Keywords: | Rank-dependent Probability Weighting, Rank-dependent Expected Utility, Heterogeneous Agents, Incomplete Markets, Labor Market Risk, Business Cycles
JEL: | C63 D15 D52 D58 D90 E32 E71 J64
Date: | 2024–05–26 |
URL: | https://d.repec.org/n?u=RePEc:vic:vicddp:2402&r= |
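A minimal sketch of the behavioural ingredient the abstract builds on: rank-dependent expected utility of a discrete lottery, with a Prelec probability weighting function generating the pessimism that drives the precautionary-saving channel. The lottery, utility function, and weighting parameter are illustrative choices, not the paper's calibration.

```python
# Sketch only: the lottery, CRRA utility, and Prelec parameter are illustrative.
import math

def prelec(p, a=0.65):
    """Prelec (1998) weighting function; a < 1 gives the usual inverse-S shape."""
    if p <= 0.0:
        return 0.0
    return math.exp(-((-math.log(p)) ** a))

def crra(x, g=2.0):
    return x ** (1.0 - g) / (1.0 - g)

def rdeu(outcomes, probs, u=crra, w=prelec):
    """Quiggin-style RDEU: weights are differences of the transformed decumulative
    distribution, so worse-ranked outcomes receive inflated decision weight."""
    pairs = sorted(zip(outcomes, probs))            # ascending in the outcome
    total, tail = 0.0, 1.0                          # tail = P(X >= current outcome)
    for x, p in pairs:
        total += u(x) * (w(tail) - w(tail - p))
        tail -= p
    return total

def eu(outcomes, probs, u=crra):
    return sum(p * u(x) for x, p in zip(outcomes, probs))

if __name__ == "__main__":
    # Employed-vs-unemployed income lottery: the small unemployment risk is overweighted.
    outcomes, probs = [10_000.0, 50_000.0], [0.05, 0.95]
    print("EU  :", eu(outcomes, probs))
    print("RDEU:", rdeu(outcomes, probs))           # lower: pessimistic distortion
```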
By: | Matthew Polisson; John K. -H. Quah |
Abstract: | This note explains the equivalence between approximate rationalizability and approximate cost-rationalizability within the context of consumer demand. In connection with these results, we interpret Afriat's (1973) critical cost-efficiency index (CCEI) as a measure of approximate rationalizability through cost inefficiency, in the sense that an agent is spending more money than is required to achieve her utility targets. |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.10136&r= |
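A sketch of the index the note interprets, written from the standard Afriat/Varian construction rather than taken from the note itself: the CCEI is the largest efficiency level e at which the data satisfy GARP once every revealed-preference comparison is relaxed by e. Function names and the toy data below are mine.

```python
# Standard CCEI construction (Afriat 1973 / Varian), not code from the note itself.
import numpy as np

def satisfies_garp(prices, quantities, e=1.0):
    """GARP at efficiency level e for observed price/quantity arrays of shape (T, n)."""
    T = len(prices)
    own = np.array([prices[t] @ quantities[t] for t in range(T)])      # p_t . x_t
    cross = np.array([[prices[t] @ quantities[s] for s in range(T)]    # p_t . x_s
                      for t in range(T)])
    directly_preferred = e * own[:, None] >= cross                     # x_t R0(e) x_s
    preferred = directly_preferred.copy()
    for k in range(T):                                                  # Warshall closure
        preferred |= preferred[:, k][:, None] & preferred[k, :][None, :]
    strict = e * own[:, None] > cross        # strict[s, t]: x_s strictly e-preferred to x_t
    for t in range(T):
        for s in range(T):
            if preferred[t, s] and strict[s, t]:                        # GARP violation
                return False
    return True

def ccei(prices, quantities, tol=1e-6):
    """Largest e in [0, 1] passing GARP(e), found by bisection."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if satisfies_garp(prices, quantities, mid):
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    # Two-good example with a revealed-preference cycle, so CCEI = 5/7 < 1.
    prices = np.array([[3.0, 1.0], [1.0, 3.0]])
    quantities = np.array([[2.0, 1.0], [1.0, 2.0]])
    print(round(ccei(prices, quantities), 4))
```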
By: | David Baqaee; Ariel Burstein; Yasutaka Koike-Mori |
Abstract: | We provide a method to measure welfare, in money-metric terms, taking into account expectations about the future. Our two key assumptions are that (1) the expenditure function is separable between the present and the future, and (2) there are some households that do not face idiosyncratic undiversifiable risk. Our sufficient statistics methodology allows for incomplete markets, lifecycle motives, non-rational expectations, non-exponential time discounting, and arbitrary functional forms. To apply our formulas, we require estimates of the elasticity of intertemporal substitution, goods and services’ prices over time, and repeated cross-sectional information on households’ income, balance sheets, and expenditures. We illustrate our method using the PSID from the United States. We find that static measures overstate cost-of-living increases for most households, particularly younger and poorer households. Our estimates can be used to study the welfare consequences of dynamic stochastic shocks that affect households along different margins and time horizons. For example, we find that involuntary job loss is associated with a 20% reduction in money-metric utility for households younger than 60 years old. |
JEL: | C14 D0 D10 D11 E0 E01 E20 I3 J6 |
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:nbr:nberwo:32567&r= |
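To fix ideas about what a money-metric comparison over present and future consumption means, here is a generic, deterministic two-period textbook illustration with an intertemporally separable utility. It is emphatically not the paper's sufficient-statistics method (no risk, no expectations data, invented parameters); it only shows the equivalent-income calculation that the abstract's welfare measure generalizes.

```python
# Generic textbook illustration; parameters and functional forms are my own.
import numpy as np
from scipy.optimize import minimize

beta = 0.96                      # discount factor (illustrative)

def utility(c):
    c1, c2 = c
    return np.log(c1) + beta * np.log(c2)

def chosen_bundle(prices, income):
    """Utility-maximizing (c1, c2) on the present-value budget p1*c1 + p2*c2 = income."""
    res = minimize(lambda c: -utility(c), x0=[income / 2, income / 2],
                   constraints={"type": "eq",
                                "fun": lambda c: prices @ c - income},
                   bounds=[(1e-6, None), (1e-6, None)], method="SLSQP")
    return res.x

def money_metric(reference_prices, achieved_utility):
    """Minimum expenditure at reference prices needed to reach the achieved utility."""
    res = minimize(lambda c: reference_prices @ c, x0=[50.0, 50.0],
                   constraints={"type": "ineq",
                                "fun": lambda c: utility(c) - achieved_utility},
                   bounds=[(1e-6, None), (1e-6, None)], method="SLSQP")
    return res.fun

if __name__ == "__main__":
    reference = np.array([1.0, 1.0])          # base-period present-value prices
    # Same income, but future consumption becomes dearer (e.g. expected inflation):
    for prices in (np.array([1.0, 1.0]), np.array([1.0, 1.2])):
        bundle = chosen_bundle(prices, income=100.0)
        print(prices, round(money_metric(reference, utility(bundle)), 2))
```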
By: | Duy Khanh Lam |
Abstract: | The standard approach to constructing a Mean-Variance portfolio involves estimating the model's parameters from collected samples. However, since the distribution of future data may not resemble that of the training set, the out-of-sample performance of the estimated portfolio is worse than that of a portfolio derived with the true parameters, which has prompted several innovations for better estimation. Instead of treating the data as timeless, as in the common training-backtest approach, this paper adopts a perspective in which data are revealed gradually and continuously over time. The original model is recast into an online learning framework, free from any statistical assumptions, to propose a dynamic strategy of sequential portfolios whose empirical utility, Sharpe ratio, and growth rate asymptotically achieve those of the true portfolio derived with perfect knowledge of the future data. When the distribution of future data is normal, the growth rate of wealth is shown to increase by lifting the portfolio along the efficient frontier through the calibration of risk aversion. Since risk aversion cannot be appropriately predetermined, another proposed algorithm, which updates this coefficient over time, forms a dynamic strategy approaching the optimal empirical Sharpe ratio or growth rate associated with the true coefficient. The performance of these proposed strategies is universally guaranteed under specific stochastic markets. Furthermore, in stationary and ergodic markets, the so-called Bayesian strategy, which utilizes true conditional distributions based on observed past market information during investment, almost surely does not perform better than the proposed strategies (which, in contrast, do not rely on conditional distributions) in terms of empirical utility, Sharpe ratio, or growth rate.
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.13486&r= |
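An illustrative sketch of the setting the abstract starts from: data arrive sequentially and a plug-in mean-variance portfolio is re-estimated each period from the sample observed so far, then evaluated out of sample against the portfolio built with the true parameters. This is the baseline the paper improves upon, not its online-learning algorithm, and the return process below is simulated.

```python
# Baseline plug-in sketch only; not the paper's online-learning strategy.
import numpy as np

rng = np.random.default_rng(42)
T, n_assets, risk_aversion = 1_000, 3, 5.0
true_mu = np.array([0.03, 0.05, 0.02])
true_cov = np.array([[0.04, 0.01, 0.00],
                     [0.01, 0.09, 0.02],
                     [0.00, 0.02, 0.02]])
returns = rng.multivariate_normal(true_mu, true_cov, size=T)

def mean_variance_weights(mu, cov, gamma):
    """Unconstrained mean-variance solution w = (1/gamma) * cov^{-1} mu."""
    return np.linalg.solve(cov, mu) / gamma

realized = []
for t in range(30, T):                         # short burn-in before the first estimate
    mu_hat = returns[:t].mean(axis=0)          # estimates use only data revealed so far
    cov_hat = np.cov(returns[:t], rowvar=False)
    w = mean_variance_weights(mu_hat, cov_hat, risk_aversion)
    realized.append(w @ returns[t])            # out-of-sample portfolio return at t

realized = np.array(realized)
oracle = np.array([mean_variance_weights(true_mu, true_cov, risk_aversion) @ r
                   for r in returns[30:]])
for name, r in [("plug-in (sequential)", realized), ("true-parameter portfolio", oracle)]:
    print(name, "empirical Sharpe:", round(r.mean() / r.std(), 3))
```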
By: | Umutcan Salman |
Abstract: | Echenique et al. (2013) established testable revealed preference restrictions for stable aggregate matching with transferable (TU) and non-transferable (NTU) utility and for extremal stable matchings. In this paper, we rephrase their restrictions in terms of properties of a corresponding bipartite graph. From this, we obtain a simple condition that verifies whether a given aggregate matching is rationalisable. For matchings that are not rationalisable, we provide a simple greedy algorithm that computes the minimum number of matches that need to be removed to obtain a rationalisable matching. We also show that the related problem of finding the minimum number of types that need to be removed in order to obtain a rationalisable matching is NP-complete.
Abstract: | We introduce a notion of fairness, inspired by the equality of opportunity literature, into the centralized school choice setting, endowed with a measure of the quality of matches between students and schools. In this framework, fairness considerations are made by a social evaluator based on the match quality distribution. We impose the standard notion of stability as a minimal desideratum and study matchings that satisfy our notion of fairness and an efficiency requirement based on aggregate match quality. To overcome some of the identified incompatibilities, we propose two alternative approaches. The first is a linear programming solution that maximizes fairness under stability constraints. The second weakens fairness and efficiency to define a class of opportunity-egalitarian social welfare functions that evaluate stable matchings. We then describe an algorithm to find the stable matching that maximizes social welfare. We conclude with an illustration on the allocation of Italian high school students in 2021/2022.
Abstract: | This paper studies the object allocation problem, which involves assigning objects to agents while taking into account both object capacities and agents' preferences. I focus on two classes of allocations: Pareto-efficient and individually rational allocations, and Pareto-efficient and weak-core stable allocations. The goal is to examine two optimality criteria within these classes: one that maximizes the number of individuals improving upon their initial endowment (MAXDIST), and one that minimizes the number of individuals who need to change from their initial allocation to the final one (MINDIST). I present an efficient algorithm for the MAXDIST problem for the first class of allocations (Pareto-efficient and individually rational). Next, I study a special case of this problem in which priority is given to the most disadvantaged individuals. I establish NP-completeness results for the other problems. I also examine how the results change when individual preferences are restricted to be dichotomous. Finally, I present an integer programming formulation to solve small to moderately sized instances of the NP-hard problems.
Keywords: | Revealed preference theory, two-sided matching markets, stability, computational complexity, matroid; Many-to-one matching, equality of opportunity, rotation, stability; Object allocation, Pareto-efficiency, Weak-core stability, Individual Rationality, Computational complexity |
Date: | 2024–06–24 |
URL: | https://d.repec.org/n?u=RePEc:ulb:ulbeco:2013/375261&r= |
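A toy brute-force illustration of the MAXDIST criterion from the third abstract above: among Pareto-efficient and individually rational allocations, pick one maximizing the number of agents who strictly improve on their endowment. The instance and preferences are invented, and brute force only works for tiny examples; the paper provides an efficient algorithm for this class.

```python
# Toy brute-force MAXDIST check; the paper's efficient algorithm is not reproduced here.
from itertools import product

agents = ["a1", "a2", "a3"]
objects = {"x": 1, "y": 1, "z": 1}                  # object -> capacity
endowment = {"a1": "x", "a2": "y", "a3": "z"}
preferences = {"a1": ["y", "x", "z"],               # best to worst for each agent
               "a2": ["x", "y", "z"],
               "a3": ["x", "y", "z"]}

def rank(agent, obj):
    return preferences[agent].index(obj)            # smaller rank = better

def feasible(alloc):
    counts = {o: 0 for o in objects}
    for o in alloc.values():
        counts[o] += 1
    return all(counts[o] <= cap for o, cap in objects.items())

def individually_rational(alloc):
    return all(rank(i, alloc[i]) <= rank(i, endowment[i]) for i in agents)

def pareto_dominates(a, b):
    at_least = all(rank(i, a[i]) <= rank(i, b[i]) for i in agents)
    strictly = any(rank(i, a[i]) < rank(i, b[i]) for i in agents)
    return at_least and strictly

all_allocs = [dict(zip(agents, combo)) for combo in product(objects, repeat=len(agents))]
feasible_allocs = [a for a in all_allocs if feasible(a)]
candidates = [a for a in feasible_allocs if individually_rational(a)]
efficient = [a for a in candidates
             if not any(pareto_dominates(b, a) for b in feasible_allocs)]

def improvers(alloc):
    return sum(rank(i, alloc[i]) < rank(i, endowment[i]) for i in agents)

best = max(efficient, key=improvers)
print(best, "agents improving on their endowment:", improvers(best))
```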
By: | Zerbe, Richard |
Abstract: | This paper considers the evolution of cost-benefit analysis (CBA) and proposes a foundation for its current use and continued development, to be called benefit-cost analysis (BCA). In the trajectory from CBA to BCA, elements of a new foundation include, first, a recognition that there is a Pareto justification for its use, not just a potential Pareto or Kaldor-Hicks (KH) justification. The Pareto justification applies to the use of BCA as a whole, rather than, as with KH, to individual projects. Second, BCA recognizes, to a greater extent than CBA, its reliance on law to determine rights and the reference points from which BCA determines gains or losses. Thus, in considering the role of law, BCA recognizes the gain-loss disparity as well as the role of law in determining rights. Third, BCA recognizes behavioral economics, which was essentially unknown to CBA. Fourth, the proposed foundation for BCA recognizes that illegal goods or actions are not to be given standing, so that the value of stolen goods to the criminal is zero, except in cases in which the legitimacy of the law itself is at issue. Fifth, BCA recognizes, from a theoretical viewpoint, that moral and ethical sentiments should, aside from data limitations, be treated like other goods for which there is a willingness to pay or to accept. This applies both to utility weights relying on declining marginal utility of income and to equity weights relying on WTP or WTA measures. Sixth, BCA recognizes that actual compensation can improve welfare. Seventh, I suggest that a discount rate combining the social rate of time preference and the opportunity cost of capital constitutes an appropriate discounting procedure. Eighth, the suggested foundation for BCA will reduce many existing criticisms of CBA.
Keywords: | cost-benefit analysis; transition; history; moral sentiments; equity; immoral sentiments; discount rate; criticisms |
JEL: | D6 D61 D63 K0 N0 N01 N20 |
Date: | 2023–04 |
URL: | https://d.repec.org/n?u=RePEc:pra:mprapa:121294&r= |
By: | Sommervoll, Dag Einar (Centre for Land Tenure Studies, Norwegian University of Life Sciences); Holden, Stein T. (Centre for Land Tenure Studies, Norwegian University of Life Sciences) |
Abstract: | Many risk and time elicitation designs rely on choice lists that aim to capture a switch point. A respondent's choice list typically contains two switch-point-defining choices; the other responses are dominated, in the sense that the preferred option can be inferred from the switch point. While these dominated choices may be argued to be necessary in the data collection process, it is less evident that they should be included on an equal footing with switch-point-defining choices in the subsequent analysis. We illustrate this using the same data set and model framework as the seminal paper of Andersen et al. (2008). The inclusion of dominated choices has a significant effect on both discount rate and risk aversion estimates. In the case of discount rate estimation, including the near (far) future-dominated choices gives higher (lower) discount rates. In the case of risk aversion estimates, including more dominated safe-option choices tends to give higher risk aversion, but the picture is more mixed than in the discount rate case.
Keywords: | choice lists; preference elicitation; maximum likelihood estimation; time preference; risk preference |
JEL: | C13 C81 C93 D91 |
Date: | 2024–07–01 |
URL: | https://d.repec.org/n?u=RePEc:hhs:nlsclt:2024_001&r= |
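A stylized sketch of the issue the abstract raises: when a respondent reports a single switch point on a choice list, every other row is implied (dominated), and whether those implied rows enter the likelihood changes the estimates. The code below uses a simplified exponential-discounting logit on simulated data, not the Andersen et al. (2008) joint specification; the list, noise, and parameters are invented.

```python
# Simplified discounting logit on simulated choice lists; not the paper's specification.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(7)
later_amounts = np.arange(105, 165, 5, dtype=float)    # larger-later payments on the list
sooner, horizon = 100.0, 1.0                           # $100 today vs $X in one year

# Simulate respondents with heterogeneous discount rates; each reports one switch point.
true_delta = rng.normal(0.25, 0.08, size=300).clip(0.01, 0.45)
switch = np.array([np.argmax(later_amounts * np.exp(-d * horizon) > sooner)
                   for d in true_delta])               # first row where "later" is chosen

def rows(include_dominated):
    """Build (amount, chose_later) pairs, with or without the implied (dominated) rows."""
    xs, ys = [], []
    for s in switch:
        idx = range(len(later_amounts)) if include_dominated else [max(s - 1, 0), s]
        for j in idx:
            xs.append(later_amounts[j])
            ys.append(1.0 if j >= s else 0.0)
    return np.array(xs), np.array(ys)

def neg_loglik(params, xs, ys):
    delta, scale = params
    value_gap = xs * np.exp(-delta * horizon) - sooner
    p_later = expit(value_gap / scale).clip(1e-9, 1 - 1e-9)   # logit choice probability
    return -np.sum(ys * np.log(p_later) + (1 - ys) * np.log(1 - p_later))

for include in (True, False):
    xs, ys = rows(include)
    fit = minimize(neg_loglik, x0=[0.1, 5.0], args=(xs, ys),
                   bounds=[(0.001, 1.0), (0.1, 50.0)], method="L-BFGS-B")
    label = "all rows        " if include else "switch rows only"
    print(label, "estimated annual discount rate:", round(fit.x[0], 3))
```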
By: | Clément Staner |
Abstract: | This thesis studies, develops, and tests theories of decision-making under uncertainty. I focus my analysis on two different situations. The first concerns situations where the decision-maker might be incapable of assigning precise probabilities to some outcomes because she lacks the information to do so. In such settings, the decision-maker is said to face ambiguity. This distinction is essential, as many people exhibit ambiguity aversion and prefer choice situations with known probabilities (risk) over choice situations where probabilities are not known (ambiguity). The models that try to capture those behaviours will be the focus of Chapter 1. The second type of problem this thesis studies concerns situations where the decision-maker has to make an investment decision after experiencing an event whose outcome is below what she expected. I specifically focus on how negative emotions stemming from such events influence people's subsequent decisions. These dynamics will be the focus of Chapters 2 and 3.
Keywords: | Behavioural economics; Revealed Preference; Emotion; Decision making under uncertainty |
Date: | 2024–06–26 |
URL: | https://d.repec.org/n?u=RePEc:ulb:ulbeco:2013/375383&r= |
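Generic background for the risk-versus-ambiguity distinction the abstract draws, using the textbook maxmin expected utility (Gilboa-Schmeidler) criterion; this is not the specific model developed in the thesis, and the bets and priors are invented.

```python
# Textbook maxmin-EU illustration; bets, priors, and utility are invented.

def expected_utility(p_win, prize=100.0, utility=lambda x: x ** 0.5):
    return p_win * utility(prize) + (1.0 - p_win) * utility(0.0)

# Risky urn: the probability of winning is known to be 1/2.
risky_value = expected_utility(0.5)

# Ambiguous urn: the probability is only known to lie in an interval; a maxmin
# decision-maker evaluates the bet at the worst probability in the set of priors.
priors = [0.3, 0.4, 0.5, 0.6, 0.7]
ambiguous_value = min(expected_utility(p) for p in priors)

print("risky bet     :", risky_value)      # 5.0
print("ambiguous bet :", ambiguous_value)  # 3.0 -> the ambiguity-averse agent prefers risk
```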
By: | Fausto Gozzi; Marta Leocata; Giulia Pucci |
Abstract: | This paper studies a model for the optimal control (by a centralized economic agent, which we call the planner) of pollution diffusion over time and space. The controls are the investments in production and depollution, and the goal is to maximize an intertemporal utility function. The main novelty is that the spatial component has a network structure. Moreover, in such a time-space setting we also analyze the trade-off between the use of green and non-green technologies, which also seems to be a novelty in this setting. Extending methods of previous papers, we can solve the problem explicitly in the case of linear costs of pollution.
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.15338&r= |
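A stylized simulation of the kind of state equation behind the abstract: pollution diffuses across a network (here via the graph Laplacian) while each node emits in proportion to production and removes pollution in proportion to depollution spending. The network, parameters, natural-decay term, and the fixed (non-optimized) policies are my own; the paper solves the planner's optimal control problem, which this sketch does not.

```python
# Stylized network-diffusion simulation with fixed policies; not the paper's optimal control.
import numpy as np

adjacency = np.array([[0, 1, 0, 0],        # a small line network of 4 locations
                      [1, 0, 1, 0],
                      [0, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
laplacian = np.diag(adjacency.sum(axis=1)) - adjacency

production = np.array([1.0, 0.2, 0.2, 0.2])     # node 0 is the industrial hub
depollution = np.array([0.0, 0.0, 0.0, 0.3])    # only node 3 invests in clean-up
emission_rate, removal_rate = 0.5, 0.8
diffusion_rate, decay_rate = 0.4, 0.1

pollution = np.zeros(4)
dt, horizon = 0.01, 50.0
for _ in range(int(horizon / dt)):               # explicit Euler time-stepping
    flow = -diffusion_rate * laplacian @ pollution - decay_rate * pollution
    sources = emission_rate * production - removal_rate * depollution
    pollution = np.maximum(pollution + dt * (flow + sources), 0.0)

print(np.round(pollution, 2))   # long-run pollution profile across the network
```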
By: | Kexin Chen; Kyunghyun Park; Hoi Ying Wong |
Abstract: | In a continuous-time economy, this study formulates the Epstein-Zin (EZ) preference for the discounted dividend (or cash payouts) of stockholders as an EZ singular control utility. We show that such a problem is well-defined and equivalent to the robust dividend policy set by the firm's executive in the sense of Maenhout's ambiguity-averse preference. While the firm's executive announces the expected future earnings in financial reports, they also signal the firm's confidence in the expected earnings through dividend or cash payouts. The robust dividend policy can then be characterized by a Hamilton-Jacobi-Bellman (HJB) variational inequality (VI). By constructing a novel shooting method for the HJB-VI, we theoretically prove that the robust dividend policy is a threshold strategy on the firm's surplus process. Therefore, dividend-caring investors can choose firms that match their preferences by examining stocks' dividend policies and financial statements, whereas executives can use dividends to signal their confidence, in the form of ambiguity aversion, about realizing the earnings implied by their financial statements.
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.12305&r= |
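A toy Monte Carlo of a threshold (barrier) dividend policy on a drifted Brownian surplus process: whenever the surplus exceeds the barrier, the excess is paid out immediately. This only illustrates the shape of the policy the abstract characterizes; it is not the paper's Epstein-Zin or ambiguity-averse formulation, and the barrier is fixed by hand rather than derived from the HJB variational inequality.

```python
# Toy barrier-dividend simulation; barrier and dynamics are illustrative choices.
import numpy as np

rng = np.random.default_rng(3)
drift, vol, surplus0 = 0.08, 0.25, 1.0
barrier, discount = 1.5, 0.05
dt, horizon, n_paths = 0.01, 30.0, 20_000
steps = int(horizon / dt)

surplus = np.full(n_paths, surplus0)
alive = np.ones(n_paths, dtype=bool)               # firm is ruined once surplus hits 0
discounted_dividends = np.zeros(n_paths)

for step in range(steps):
    shock = rng.standard_normal(n_paths)
    surplus[alive] += drift * dt + vol * np.sqrt(dt) * shock[alive]
    alive &= surplus > 0.0
    excess = np.where(alive, np.maximum(surplus - barrier, 0.0), 0.0)
    discounted_dividends += np.exp(-discount * step * dt) * excess   # pay out the excess
    surplus = np.where(alive, surplus - excess, surplus)

print("mean discounted dividends:", round(discounted_dividends.mean(), 3))
print("ruin probability by horizon:", round(1.0 - alive.mean(), 3))
```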
By: | Francesca Mariani; Maria Cristina Recchioni; Tai-Ho Wang; Roberto Giacalone |
Abstract: | An empirical analysis, suggested by optimal Merton dynamics, reveals some unexpected features of asset volumes. These features are connected to traders' beliefs and risk aversion. This paper proposes a trading strategy model in the optimal Merton framework that is representative of the collective behavior of heterogeneous rational traders. This model allows for the estimation of the average risk aversion of traders acting on a specific risky asset, while revealing the existence of a price of risk closely related to the market price of risk and the volume rate. The empirical analysis, conducted on real data, confirms the validity of the proposed model.
Date: | 2024–06 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2406.05854&r= |
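A back-of-the-envelope sketch of the Merton logic the abstract builds on: the optimal risky share is pi = (mu - r) / (gamma * sigma^2), so an observed average portfolio fraction can be inverted for an implied average risk aversion. The numbers are illustrative; the paper's estimation from volume data is considerably richer.

```python
# Textbook Merton rule and its inversion; numbers are illustrative, not estimates.

def merton_fraction(mu, r, sigma, gamma):
    """Optimal risky-asset share for CRRA utility in the Merton model."""
    return (mu - r) / (gamma * sigma ** 2)

def implied_risk_aversion(mu, r, sigma, observed_fraction):
    """Invert the Merton rule to back out gamma from an observed risky share."""
    return (mu - r) / (observed_fraction * sigma ** 2)

if __name__ == "__main__":
    mu, r, sigma = 0.07, 0.02, 0.18        # expected return, risk-free rate, volatility
    print(merton_fraction(mu, r, sigma, gamma=3.0))                    # ~0.51
    print(implied_risk_aversion(mu, r, sigma, observed_fraction=0.6))  # ~2.57
```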
By: | Ruslan Shavshin (National Research University Higher School of Economics); Marina Sandomirskaia (National Research University Higher School of Economics) |
Abstract: | This paper proposes a model of a finite two-sided market with a limited, arbitrary number of products per seller, where buyers engage in a directed search for the appropriate purchase. The friction effect discovered for models with a single product per seller remains, though competition intensifies. We derive an analytical formula for the case of an equal number of products for every seller and deduce that the equilibrium price decreases as availability grows and drops to marginal cost when two sellers are able to serve the whole set of buyers. However, the seller's utility is a bell-shaped function of the number of products. This produces an ambiguous impact of market concentration on the various equilibrium characteristics. For the general model with different capacities across sellers, we formulate equilibrium conditions on prices and clarify how the market power of a particular seller depends on its capacity. Numerical analysis is also applied to the related problem of endogenous capacities.
Keywords: | finite market, directed search, market inefficiency, market concentration, friction, quantity competition. |
JEL: | D43 L13 D82 D83 C72 |
Date: | 2024 |
URL: | https://d.repec.org/n?u=RePEc:hig:wpaper:267/ec/2024&r= |
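A Monte Carlo sketch of the coordination friction underlying the abstract: buyers choose sellers independently (they cannot coordinate), and a seller with k products serves at most k of the buyers who show up, so some buyers go unserved even when aggregate capacity would suffice. This captures only the urn-ball friction, not the paper's pricing equilibrium; market sizes and capacities are invented.

```python
# Urn-ball friction sketch only; no prices and no equilibrium, unlike the paper.
import numpy as np

rng = np.random.default_rng(11)
n_buyers, n_sellers, n_sims = 12, 4, 20_000

def service_rate(capacity_per_seller):
    served = 0
    for _ in range(n_sims):
        visits = rng.integers(0, n_sellers, size=n_buyers)     # each buyer picks a seller
        arrivals = np.bincount(visits, minlength=n_sellers)
        served += np.minimum(arrivals, capacity_per_seller).sum()
    return served / (n_sims * n_buyers)

for capacity in (1, 2, 3, 6):   # total capacity 4, 8, 12, 24 against 12 buyers
    print(f"capacity {capacity} per seller: fraction of buyers served "
          f"{service_rate(capacity):.3f}")
```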