New Economics Papers on Microeconomics
By: | Raphael Boleslavsky; Aaron Kolb |
Abstract: | A sender has a privately known preference over the action chosen by a receiver. The sender would like to influence the receiver's decision by providing information, in the form of a statistical experiment or test. The technology for information production is controlled by a monopolist intermediary, who offers a menu of tests and prices to screen the sender's type, possibly including a "threat" test to punish nonparticipation. We characterize the intermediary's optimal screening menu and the associated distortions, which we show may benefit the receiver. We compare the sale of persuasive information with other forms of influence -- overt bribery and controlling access. |
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2408.03689 |
By: | Bianchi, Milo; Yamashita, Takuro |
Abstract: | We analyze the optimal investment in a common infrastructure in a market with network externalities. Taking a dynamic mechanism design perspective, we contrast the level of investment and the associated payments across firms that a budget-constrained welfare-maximizing principal would set to those emerging in an unregulated market. We consider two market scenarios: first, a nascent market in which only one firm operates and an entrant may arrive at a later stage; second, a more mature market in which two firms already operate. In these settings, the principal needs to set access fees so as to provide enough incentives to invest in the infrastructure, while also avoiding wasteful investment. At the same time, the principal needs to coordinate investment and usage of the shared network given the various externalities that each firm exerts. We highlight the relative importance of these two aspects and how regulation can be designed so as to improve social welfare. We also highlight how the optimal timing of investment depends crucially on the regulator’s coordination power. |
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:tse:wpaper:129665 |
By: | Alexandre Arnout (Aix-Marseille University, CNRS, AMSE, Marseille, France)
Abstract: | I consider an electoral competition model where each candidate is associated with an exogenous initial position from which she can deviate to maximize her vote share, a strategy known as flip-flopping. Citizens have an intrinsic preference for consistent candidates, and abstain due to alienation, i.e. when their utility from their preferred candidate falls below a common exogenous threshold (termed the alienation threshold). I show how the alienation threshold shapes candidates’ flip-flopping strategy. When the alienation threshold is high, i.e. when citizens are reluctant to vote, there is no flip-flopping at equilibrium. When the alienation threshold is low, candidates flip-flop toward the center of the policy space. Surprisingly, I find a positive correlation between flip-flopping and voter turnout at equilibrium, despite voters’ preference for consistent candidates. Finally, I explore alternative models in which candidates’ objective function differs from vote share. I show that electoral competition can lead to polarization when candidates maximize their number of votes. |
Keywords: | flip-flopping, turnout, electoral competition, alienation, polarization |
JEL: | D72 C72 |
Date: | 2024–09 |
URL: | https://d.repec.org/n?u=RePEc:aim:wpaimx:2423 |
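A minimal numerical sketch of the abstract's mechanics may help fix ideas: two candidates choose platforms away from their initial positions, citizens dislike inconsistency, and citizens abstain when even their preferred candidate leaves them below an alienation threshold. All functional forms, parameter values, and the Python function below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def vote_shares(pos, init, voters, alpha=0.5, threshold=-0.6):
    """Toy two-candidate spatial model with abstention due to alienation.

    pos       : platforms chosen by the two candidates
    init      : their exogenous initial positions (the deviation is the flip-flop)
    voters    : ideal points of the citizens
    alpha     : weight on the intrinsic distaste for inconsistency (assumed)
    threshold : alienation threshold -- a citizen abstains if even her preferred
                candidate yields utility below it (assumed value)
    """
    pos, init = np.asarray(pos, float), np.asarray(init, float)
    # utility of each voter (rows) from each candidate (columns)
    u = -np.abs(voters[:, None] - pos[None, :]) - alpha * np.abs(pos - init)[None, :]
    turnout = u.max(axis=1) >= threshold      # abstention due to alienation
    choice = u.argmax(axis=1)
    shares = [np.mean(turnout & (choice == k)) for k in (0, 1)]
    return shares, turnout.mean()

voters = np.random.default_rng(0).uniform(-1, 1, 10_000)
# both candidates flip-flop toward the center relative to their initial positions
print(vote_shares(pos=[-0.2, 0.3], init=[-0.5, 0.5], voters=voters))
```

In this toy setup all utilities are non-positive, so raising `threshold` toward zero makes more citizens abstain, which is the force behind the paper's comparative statics in the alienation threshold.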
By: | Bianchi, Milo; Rhodes, Andrew |
Abstract: | We consider a model in which consumers live in isolated villages and need to send money to each other. Each village has (at most) one digital payment provider, which acts as a bridge to other villages. With fully rational consumers interoperability is beneficial: it raises financial inclusion, which in turn increases consumer surplus. With behavioural consumers who have imperfect information or incorrect beliefs about off-net fees, interoperability can reduce consumer welfare. Policies that cap transaction fees have an ambiguous effect on consumers, depending on how the cap is implemented, whether consumers are rational, and on how asymmetric providers are in terms of coverage. |
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:tse:wpaper:129664 |
By: | Rim Lahmandi-Ayed (CUT - Centre for Unframed Thinking - ESC [Rennes] - ESC Rennes School of Business); Didier Laussel (AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique, EHESS - École des hautes études en sciences sociales) |
Abstract: | We study a simple model in which two vertically differentiated firms compete in prices and mass advertising on an initially uninformed market. Consumers differ in their preference for quality. There is an upper bound on prices, since consumers cannot spend more on the good than a fixed amount (say, their income). Depending on this income and on the ratio between the advertising cost and the quality differential (the relative advertising cost), either there is no equilibrium in pure strategies or there exists one of the following three types: (1) an interior equilibrium, where both firms have positive natural markets and charge prices lower than the consumer's income; (2) a constrained interior equilibrium, where both firms have positive natural markets and the high-quality firm charges the consumer's income; or (3) a corner equilibrium, where the low-quality firm has no natural market and sells only to uninformed customers. We show that no corner equilibrium exists in which the high-quality firm has a null natural market. At an equilibrium (whenever one exists), the high-quality firm always advertises more, charges a higher price, and makes a higher profit than the low-quality one. As the relative advertising cost goes to infinity, prices become equal, and both the advertising intensities and the profits converge to zero. The advertising intensities are, at least globally, increasing in the quality differential. Finally, in all cases, as the advertising cost parameter increases unboundedly, both prices converge increasingly towards the consumer's income.
Keywords: | random advertising, advertising cost, vertical differentiation |
Date: | 2024–03–22 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-04678637 |
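The demand side of such a model is easy to simulate, which can be useful for checking candidate equilibria numerically. The sketch below is a Monte Carlo computation of profits for given prices and advertising intensities under assumed primitives (uniform taste for quality, independent advertising reach, linear advertising cost, zero production cost); it is not the paper's parameterization.

```python
import numpy as np

def expected_profits(p, q, phi, income, ad_cost, n=200_000, seed=1):
    """Monte Carlo profits for two vertically differentiated firms that reach
    initially uninformed consumers through random ("mass") advertising.

    p, q, phi : prices, qualities, advertising intensities of firms 0 and 1
    income    : upper bound on what a consumer can spend on the good
    ad_cost   : cost per unit of advertising intensity (assumed linear here)
    """
    rng = np.random.default_rng(seed)
    p, q, phi = (np.asarray(v, float) for v in (p, q, phi))
    theta = rng.uniform(0, 1, n)                        # taste for quality (assumed uniform)
    informed = rng.uniform(size=(n, 2)) < phi[None, :]  # independent advertising reach
    surplus = theta[:, None] * q[None, :] - p[None, :]
    feasible = informed & (p[None, :] <= income) & (surplus >= 0)
    surplus = np.where(feasible, surplus, -np.inf)
    buys = surplus.argmax(axis=1)
    demand = [np.mean((surplus[:, k] > -np.inf) & (buys == k)) for k in (0, 1)]
    return [p[k] * demand[k] - ad_cost * phi[k] for k in (0, 1)]

print(expected_profits(p=[0.4, 0.7], q=[1.0, 2.0], phi=[0.3, 0.6],
                       income=0.9, ad_cost=0.2))
```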
By: | Aram Grigoryan; Markus M\"oller |
Abstract: | We introduce a framework where the announcements of a clearinghouse about the allocation process are opaque in the sense that there can be more than one outcome compatible with a realization of type reports. We ask whether desirable properties can be ensured under opacity in a robust sense. A property can be guaranteed under an opaque announcement if every mechanism compatible with it satisfies the property. We find an impossibility result: strategy-proofness cannot be guaranteed under any level of opacity. In contrast, in some environments, weak Maskin monotonicity and non-bossiness can be guaranteed under opacity. |
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2408.04509 |
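A compact restatement of the guarantee notion in the abstract may help fix ideas; the notation below is ours, a sketch rather than the authors' exact formalism.

```latex
% t ranges over profiles of type reports; A(t) is the set of outcomes that the
% (possibly opaque) announcement declares compatible with t; \varphi is a mechanism.
\[
  \varphi \text{ is compatible with } A
  \;\iff\; \varphi(t) \in A(t) \ \text{for all } t,
\]
\[
  \text{property } P \text{ is guaranteed under } A
  \;\iff\; \text{every mechanism compatible with } A \text{ satisfies } P.
\]
```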
By: | Guillaume Carlier (CEREMADE - CEntre de REcherches en MAthématiques de la DEcision - Université Paris Dauphine-PSL - PSL - Université Paris Sciences et Lettres - CNRS - Centre National de la Recherche Scientifique); Xavier Dupuis (IMB - Institut de Mathématiques de Bourgogne [Dijon] - UB - Université de Bourgogne - UBFC - Université Bourgogne Franche-Comté [COMUE] - CNRS - Centre National de la Recherche Scientifique); Jean-Charles Rochet (TSE-R - Toulouse School of Economics - UT Capitole - Université Toulouse Capitole - UT - Université de Toulouse - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); John Thanassoulis (WBS - Warwick Business School - University of Warwick [Coventry], CEPR - Center for Economic Policy Research) |
Abstract: | We provide an algorithm for solving multidimensional screening problems which are intractable analytically. The algorithm is a primal-dual algorithm which alternates between optimising the primal problem of the surplus extracted by the principal and the dual problem of the optimal assignment to deliver to the agents for a given surplus. We illustrate the algorithm by solving (i) the generic monopolist price discrimination problem and (ii) an optimal tax problem covering income and savings taxes when citizens differ in multiple dimensions.
Keywords: | Multidimensional screening, Algorithm, Numerical methods, Price discrimination, Optimal tax |
Date: | 2024–05–20 |
URL: | https://d.repec.org/n?u=RePEc:hal:journl:hal-04598698 |
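For a concrete sense of the problem class, the sketch below writes a small discretized instance of multidimensional screening (a two-good monopolist with zero production cost and types on a uniform grid) as a linear program and solves it with scipy. This is a brute-force baseline one might benchmark a primal-dual algorithm against, not the authors' algorithm; the grid, type distribution, and preferences are assumptions.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical discretized two-good monopolist screening problem (assumed instance).
m = 5                                   # grid points per dimension (assumed)
grid = np.linspace(0.2, 1.0, m)
types = np.array([(x, y) for x in grid for y in grid])   # type t_i in R^2
n = len(types)
f = np.full(n, 1.0 / n)                 # uniform type distribution (assumed)

# Variables: for each type i, an allocation q_i in [0,1]^2 and a utility u_i >= 0,
# stacked as x = [q_11, q_12, ..., q_n1, q_n2, u_1, ..., u_n].
nq, nu = 2 * n, n
c = np.zeros(nq + nu)
c[:nq] = -np.repeat(f, 2) * types.ravel()   # principal earns f_i * (t_i . q_i) ...
c[nq:] = f                                  # ... minus f_i * u_i (linprog minimizes -profit)

# Incentive compatibility: u_i >= u_j + (t_i - t_j) . q_j for all i != j, as A_ub x <= 0.
rows = []
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        row = np.zeros(nq + nu)
        row[2 * j:2 * j + 2] = types[i] - types[j]   # (t_i - t_j) . q_j
        row[nq + j] += 1.0                           # + u_j
        row[nq + i] -= 1.0                           # - u_i
        rows.append(row)
A_ub = np.array(rows)
b_ub = np.zeros(len(rows))

bounds = [(0.0, 1.0)] * nq + [(0.0, None)] * nu      # q in [0,1]^2, u >= 0 (IR)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("principal's expected profit:", -res.fun)
```

The LP carries one incentive constraint per ordered pair of types, so the constraint count grows quadratically with the grid; that scaling is precisely what dedicated algorithms for multidimensional screening aim to tame.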
By: | Fabian R. Pieroth; Tuomas Sandholm |
Abstract: | In practice, most auction mechanisms are not strategy-proof, so equilibrium analysis is required to predict bidding behavior. In many auctions, though, an exact equilibrium is not known and one would like to understand whether -- manually or computationally generated -- bidding strategies constitute an approximate equilibrium. We develop a framework and methods for estimating the distance of a strategy profile from equilibrium, based on samples from the prior and either bidding strategies or sample bids. We estimate an agent's utility gain from deviating to strategies from a constructed finite subset of the strategy space. We use PAC-learning to give error bounds, both for independent and interdependent prior distributions. The primary challenge is that one may miss large utility gains by considering only a finite subset of the strategy space. Our work differs from prior research in two critical ways. First, we explore the impact of bidding strategies on altering opponents' perceived prior distributions -- instead of assuming the other agents to bid truthfully. Second, we delve into reasoning with interdependent priors, where the type of one agent may imply a distinct distribution for other agents. Our main contribution lies in establishing sufficient conditions for strategy profiles and a closeness criterion for conditional distributions to ensure that utility gains estimated through our finite subset closely approximate the maximum gains. To our knowledge, ours is the first method to verify approximate equilibrium in any auctions beyond single-item ones. Also, ours is the first sample-based method for approximate equilibrium verification. |
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2408.11445 |
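The core estimator is easy to illustrate in a toy symmetric setting. The sketch below estimates, from samples of the prior, a bidder's maximal utility gain from deviating to a finite grid of constant shading factors in a first-price auction with i.i.d. uniform values; the paper's contributions (PAC error bounds, interdependent priors, and conditions under which the finite subset nearly attains the true maximal gain) are not reproduced here, and all numbers are illustrative.

```python
import numpy as np

def estimated_utility_gain(strategy, n_bidders=3, n_samples=100_000,
                           deviations=np.linspace(0.1, 1.0, 19), seed=0):
    """Estimate an agent's maximal gain from deviating to a finite set of
    alternative bidding strategies in a symmetric first-price auction."""
    rng = np.random.default_rng(seed)
    v = rng.uniform(size=(n_samples, n_bidders))   # i.i.d. uniform values (assumed prior)
    opp_bids = strategy(v[:, 1:])                  # opponents follow the candidate strategy
    best_opp = opp_bids.max(axis=1)

    def expected_utility(bids):
        win = bids > best_opp                      # ties ignored (zero-probability event)
        return np.mean(win * (v[:, 0] - bids))

    u_candidate = expected_utility(strategy(v[:, 0]))
    # deviations: constant shading factors applied to own value (finite strategy subset)
    u_deviation = max(expected_utility(a * v[:, 0]) for a in deviations)
    return max(u_deviation - u_candidate, 0.0)

# candidate strategy: bid 2/3 of value -- the Bayes-Nash equilibrium with 3 bidders
print(estimated_utility_gain(lambda v: 2.0 / 3.0 * v))   # gain is close to zero
print(estimated_utility_gain(lambda v: 0.9 * v))         # gain is clearly positive
```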
By: | Moise Blanchard; Patrick Jaillet |
Abstract: | We study the problem in which a central planner sequentially allocates a single resource to multiple strategic agents using their utility reports at each round, but without using any monetary transfers. We consider general agent utility distributions and two standard settings: a finite horizon $T$ and an infinite horizon with discount factor $\gamma$. We provide general tools to characterize the convergence rate between the optimal mechanism for the central planner and the first-best allocation if true agent utilities were available. This rate depends heavily on the utility distributions, yielding rates anywhere between $1/\sqrt T$ and $1/T$ for the finite-horizon setting, and rates faster than $\sqrt{1-\gamma}$, including exponential rates, for the infinite-horizon setting as agents become more patient ($\gamma\to 1$). On the algorithmic side, we design mechanisms based on the promised-utility framework to achieve these rates and leverage structure on the utility distributions. Intuitively, the more flexibility the central planner has to reward or penalize any agent while incurring little social welfare cost, the faster the convergence rate. In particular, discrete utility distributions typically yield the slower rates $1/\sqrt T$ and $\sqrt{1-\gamma}$, while smooth distributions with a density typically yield the faster rates $1/T$ (up to logarithmic factors) and $1-\gamma$.
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2408.10066 |
By: | Tim J. Boonen; Yuyu Chen; Xia Han; Qiuqi Wang |
Abstract: | This paper explores optimal insurance solutions based on the Lambda-Value-at-Risk ($\Lambda\mathrm{VaR}$). If the expected value premium principle is used, our findings confirm that, similar to the VaR model, a truncated stop-loss indemnity is optimal in the $\Lambda\mathrm{VaR}$ model. We further provide a closed-form expression for the deductible parameter under certain conditions. Moreover, we study the use of a $\Lambda'\mathrm{VaR}$ as the premium principle as well, and show that full or no insurance is optimal. Dual stop-loss is shown to be optimal if we use a $\Lambda'\mathrm{VaR}$ only to determine the risk loading in the premium principle. Moreover, we study the impact of model uncertainty, considering situations where the loss distribution is unknown but falls within a defined uncertainty set. Our findings indicate that a truncated stop-loss indemnity is optimal when the uncertainty set is based on a likelihood ratio. However, when uncertainty arises from the first two moments of the loss variable, we provide the closed-form optimal deductible in a stop-loss indemnity.
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:arx:papers:2408.09799 |
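To make the objects concrete, the sketch below computes an empirical Lambda-VaR (under one common convention, $\Lambda\mathrm{VaR}(X) = \inf\{x : F_X(x) > \Lambda(x)\}$, which may differ in detail from the paper's definition) and applies a truncated stop-loss indemnity, i.e. a stop-loss payment $(x-d)_+$ that is withheld when the loss exceeds a truncation point. The loss distribution, $\Lambda$ function, deductible, truncation point, and premium loading are illustrative assumptions, not the paper's optimum.

```python
import numpy as np

def lambda_var(samples, lam, grid):
    """Empirical Lambda-VaR under one common convention:
    inf{ x : F_X(x) > Lambda(x) } (the paper's exact definition may differ)."""
    F = np.searchsorted(np.sort(samples), grid, side="right") / len(samples)
    hit = F > lam(grid)
    return grid[np.argmax(hit)] if hit.any() else np.inf

def truncated_stop_loss(x, d, t):
    """Indemnity (x - d)_+ paid only when the loss does not exceed t."""
    return np.where(x <= t, np.maximum(x - d, 0.0), 0.0)

rng = np.random.default_rng(0)
loss = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)          # assumed loss model
grid = np.linspace(0.0, 30.0, 3001)
lam = lambda x: np.clip(0.95 + 0.001 * x, 0.95, 0.99)            # illustrative Lambda function
print("Lambda-VaR of the raw loss:", lambda_var(loss, lam, grid))

d, t = 1.0, 8.0                                                  # illustrative deductible, truncation
indemnity = truncated_stop_loss(loss, d, t)
premium = 1.1 * indemnity.mean()                                 # expected value premium, 10% loading
print("Lambda-VaR of retained loss plus premium:",
      lambda_var(loss - indemnity + premium, lam, grid))
```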
By: | Décamps, Jean-Paul; Mariotti, Thomas; Gensbittel, Fabien |
Abstract: | We prove the existence of a Markov-perfect equilibrium in randomized stopping times for a model of the war of attrition in which the underlying state variable follows a homogeneous linear diffusion. We first prove that the space of Markovian randomized stopping times can be topologized as a compact absolute retract. This in turn enables us to use a powerful fixed-point theorem by Eilenberg and Montgomery [16] to prove our existence theorem. We illustrate our results with an example of a war of attrition that admits a mixed-strategy Markov-perfect equilibrium but no pure-strategy Markov-perfect equilibrium.
Keywords: | War of Attrition, Markovian Randomized Stopping Time, Markov-Perfect Equilibrium, Fixed-Point Theorem. |
Date: | 2024–08 |
URL: | https://d.repec.org/n?u=RePEc:tse:wpaper:129668 |