on Microeconomics
By: | Claude Fluet; Thomas Lanzi |
Abstract: | Two opposed parties seek to influence an uninformed decision maker. They invest in acquiring information and select what to disclose. The decision maker then adjudicates. We compare this benchmark with a procedure allowing adversarial cross-examination. A cross-examiner tests the opponent in order to persuade the decision maker that the opponent is deceitful. How does the opportunity or threat of cross-examination affect the parties' behavior? How does it affect the quality of decision-making? We show that decision-making deteriorates because parties are less likely to acquire information and because cross-examination too often makes the truth appear as falsehood. Next, we consider a form of controlled cross-examination by permitting the cross-examined to be re-examined by his own advocate, i.e., counter-persuasion. More information then reaches the decision maker. Decision-making may or may not improve compared to the benchmark depending on how examination is able to trade off type 1 and 2 errors. |
Keywords: | Bayesian persuasion, disclosure game, adversarial, redirect examination, procedural rules. |
JEL: | C72 D71 D82 D83 K41 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:lvl:crrecr:2108&r= |
By: | Petra Persson ⓡ; Nikita Roketskiy ⓡ; Samuel Lee |
Abstract: | We analyze the diffusion of rival information in a social network. In our model, rational agents can share information sequentially, unconstrained by an exogenous protocol or timing. We show how to compute the set of eventually informed agents for any network, and show that it is essentially unique under altruistic preferences. The relationship between network structure and information diffusion is complex because the former shapes both the charity and confidentiality of potential senders and receivers. |
JEL: | D83 D85 |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:29324&r= |
By: | Laura Blattner; Scott Nelson; Jann Spiess |
Abstract: | We characterize optimal oversight of algorithms in a world where an agent designs a complex prediction function but a principal is limited in the amount of information she can learn about the prediction function. We show that limiting agents to prediction functions that are simple enough to be fully transparent is inefficient as long as the bias induced by misalignment between the principal's and the agent's preferences is small relative to the uncertainty about the true state of the world. Algorithmic audits can improve welfare, but the gains depend on the design of the audit tools. Tools that minimize overall information loss, the aim of many post-hoc explainer tools, will generally be inefficient since they explain the average behavior of the prediction function rather than the sources of mis-prediction that matter for welfare-relevant outcomes. Targeted tools that focus on the source of incentive misalignment, e.g., excess false positives or racial disparities, can provide first-best solutions. We provide empirical support for our theoretical findings using an application in consumer lending. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.03443&r= |
By: | Sarit Markovich (Kellogg School of Management, Northwestern University, Evanston, IL, USA); Yaron Yehezkel (Coller School of Management, Tel-Aviv University, Ramat-Aviv, Israel) |
Abstract: | We consider a platform that collects data from users. Data has commercial benefit to the platform, personal benefit to the user, and public benefit to other users. We ask whether the platform, or users, should have the right to decide which data the platform commercializes. We find that when users differ in their disutility from the commercialization of their data and the public benefit of data is high (low), it is welfare enhancing to let the platform (users) control the data. In contrast, when heterogeneity is in the disutility from the commercialization of different data items, it is welfare enhancing to let users (the platform) control the data when the public benefit of data is high (low). Furthermore, we find that allowing the platform to compensate users for their data is not always welfare enhancing and competition does not necessarily result in the efficient outcome. |
Keywords: | data regulation, network externalities, platform competition |
JEL: | L1 |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:net:wpaper:2108&r= |
By: | Atabek Atayev |
Abstract: | In markets with search frictions, consumers can acquire information about goods either through costly search or from friends via word-of-mouth (WOM) communication. How does sellers' market power react to a very large increase in the number of consumers' friends with whom they engage in WOM? The answer depends on whether consumers are freely endowed with price information. If acquiring price quotes is costly, equilibrium prices are dispersed and the expected price is higher than the marginal cost of production. This implies that firms retain market power even when technological progress, such as social networking websites, disseminates price information among a very large number of consumers. |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.00032&r= |
By: | Jovanovic, Dragan; Wey, Christian; Zhang, Mengxi |
Abstract: | This paper argues that it cannot be taken for granted that any merger that raises consumer surplus also increases social welfare. Within a Cournot model with homogeneous goods, linear demand, and constant marginal costs, we show that a merger can raise consumer surplus while harming social welfare. In this framework, such an outcome depends on two conditions: the merger is between small firms (i.e., relatively inefficient firms) and it reduces concentration; that is, a constellation that can be characterized as a "runner-up" merger. |
Keywords: | Runner-up mergers, efficiencies, oligopoly, welfare |
JEL: | K21 L13 L41 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:zbw:dicedp:371&r= |
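The Cournot framework the abstract above describes is simple enough to verify numerically. The sketch below is an illustration, not the paper's code, and the parameter names are assumptions; it computes the interior Cournot-Nash equilibrium for linear inverse demand P = a - Q with constant marginal costs, returning consumer surplus and social welfare so that pre- and post-merger cost configurations can be compared.

```python
def cournot(a, costs):
    """Interior Cournot-Nash equilibrium for inverse demand P = a - Q and
    constant marginal costs `costs` (assumes every firm produces a
    positive quantity)."""
    n, total_c = len(costs), sum(costs)
    # From the first-order conditions: q_i = (a + sum_j c_j - (n+1) c_i) / (n+1)
    q = [(a + total_c - (n + 1) * c) / (n + 1) for c in costs]
    Q = sum(q)
    P = a - Q
    profits = [(P - c) * qi for c, qi in zip(costs, q)]
    cs = Q * Q / 2  # consumer surplus under linear demand P = a - Q
    return {"q": q, "P": P, "CS": cs, "W": cs + sum(profits)}
```

A merger can then be modeled as replacing the merging firms' entries in `costs` with a single marginal cost (possibly lower, if there are efficiencies) and recomputing both statistics.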
By: | Mehmet Ekmekci (Department of Economics, Boston College); Alexander White (School of Economics and Management, Tsinghua University); Lingxuan Wu (Department of Economics, Harvard University) |
Abstract: | We study the effects of competition and interoperability in platform markets. To do so, we adopt an approach of competition in net fees, which is well-suited to situations where users pay additional charges, after joining, for on-platform interactions. Compared to other approaches, net fees expand the tractable scope to allow platform asymmetry and variable total demand. Regarding competition, our findings raise concerns, including possible dominance-inducing entry, which symmetric models overlook. Our results are more optimistic about policies that promote interoperability among platforms, but they urge caution when total demand variability is a significant factor. |
Keywords: | Platform Competition, Big Tech, Net Fees, Interoperability |
JEL: | D21 D43 D85 L13 |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:net:wpaper:2113&r= |
By: | Nadine Chlaß (Department of Economics, Friedrich Schiller University Jena, Germany.); Lata Gangadharan (Department of Economics, Monash University,); Kristy Jones (Senior Economist, Australian Council of Trade Unions Behavioural Insights Team, Queensland,) |
Abstract: | Donations are often made through charitable intermediaries that can fund themselves from these same donations. After intermediation, only a fraction of the amount donated may reach the intended beneficiary. The price of charitable output is therefore higher after intermediation than if donors donated directly toward the end cause. At the same time, this price is hidden from donors since they cannot verify how much intermediaries pass on. We show that donors reduce their donation in response to intermediation itself and reduce it further because they expect the price of charitable output to increase; both reactions, however, are fully or partly offset by their ethical preferences for the recipient’s rights. Charitable output, therefore, can be a Giffen good. |
Keywords: | charitable giving; altruism; intermediation; charitable institutions; moral judgment reasoning; experiment |
JEL: | C91 D64 L31 |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:mos:moswps:2021-14&r= |
By: | Ghosh, Meenakshi |
Abstract: | We model a situation where two sellers trade vertically and horizontally differentiated goods on a platform for which they are charged a commission fee. Sellers' costs are asymmetric due to differences in the fees charged by the platform and in their costs of production. Consumers purchase either a base good, or a bundle comprising the base good and an add-on, from one of the sellers on the platform. Consumers differ in their brand preferences, valuations of quality, and levels of sophistication. More specifically, we assume that a fraction of consumers are naive and do not observe or consider add-on prices - possibly because they do not foresee their demand for an add-on - until after they have committed to buying the base good from a seller. We examine how the interplay of these forces shapes consumer behavior, sellers' pricing strategies and cost pass-through, and platform fees and revenues. |
Keywords: | add-on pricing, consumer naivete, cost asymmetry, horizontal differentiation, platform fees, cost pass-through |
JEL: | D43 L11 L14 L15 |
Date: | 2021–10–01 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:109981&r= |
By: | Yuqing Kong |
Abstract: | In the setting where we want to aggregate people's subjective evaluations, a plurality vote may be meaningless when a large number of low-effort people always report "good" regardless of the true quality. The "surprisingly popular" method, which picks the most surprising answer relative to the prior, handles this issue to some extent. However, it is still not fully robust to people's strategies. Here, in the setting where a large number of people are asked to answer a small number of multi-choice questions (multi-task, large group), we propose an information aggregation method that is robust to people's strategies. Interestingly, this method can be seen as a rotated "surprisingly popular". It is based on a new clustering method, Determinant MaxImization (DMI)-clustering, and on the key conceptual idea that information elicitation without ground truth can be seen as a clustering problem. Of independent interest, DMI-clustering is a general clustering method that aims to maximize the volume of the simplex formed by the cluster means, multiplied by the product of the cluster sizes. We show that DMI-clustering is invariant to any non-degenerate affine transformation of the data points. When the data points' dimension is a constant, DMI-clustering can be solved in polynomial time. In general, we present a simple heuristic for DMI-clustering that closely resembles Lloyd's algorithm for k-means. Additionally, we apply the clustering idea in the single-task setting and use the spectral method to propose a new aggregation method that utilizes second-moment information elicited from the crowd. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.00952&r= |
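As a reading aid for the DMI objective described in the abstract above, the sketch below evaluates it for a given partition: the volume of the simplex spanned by the cluster means, multiplied by the product of the cluster sizes. The normalization (dividing the Gram-determinant volume by (k-1)!) is our assumption; the paper's exact definition may differ.

```python
import numpy as np
from math import factorial

def dmi_objective(points, labels):
    """Simplex volume of the cluster means times the product of cluster
    sizes (one reading of the DMI-clustering objective; the paper's
    normalization may differ). `points` is (N, d), `labels` is (N,)."""
    ks = sorted(set(labels.tolist()))
    means = [points[labels == k].mean(axis=0) for k in ks]
    sizes = [int((labels == k).sum()) for k in ks]
    # Simplex edges relative to the first mean; the Gram determinant
    # gives the squared (k-1)-dimensional volume.
    E = np.stack([m - means[0] for m in means[1:]], axis=1)
    vol = np.sqrt(max(np.linalg.det(E.T @ E), 0.0)) / factorial(len(ks) - 1)
    return vol * np.prod(sizes)
```

A Lloyd-style heuristic, as the abstract suggests, would then alternate between reassigning points and recomputing means so as to increase this quantity.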
By: | Nyborg, Karine (Dept. of Economics, University of Oslo) |
Abstract: | I demonstrate a straightforward but apparently widely unrecognized implication of the standard requirements for perfect competition: an economy in which consumers can choose to learn is generally not perfectly competitive. In particular, if endogenous welfare-relevant learning is feasible, the economy cannot be perfectly competitive unless identical learning choices by all consumers are guaranteed. If the new information is not shared with everyone, asymmetric information arises; if information is shared, externalities arise. The standard conditions for the two fundamental welfare theorems thus implicitly preclude heterogeneous welfare-relevant learning decisions. |
Keywords: | Perfect competition; fundamental welfare theorems; learning; symmetric information; externalities |
JEL: | D41 D50 D60 D61 D62 D82 |
Date: | 2021–05–03 |
URL: | http://d.repec.org/n?u=RePEc:hhs:osloec:2021_002&r= |
By: | Elena L. Del Mercato (Centre d'Economie de la Sorbonne, Paris School of Economics); Van Quy Nguyen (Centre d'Economie de la Sorbonne) |
Abstract: | We consider a pure exchange economy with consumption externalities in preferences. Using the notion of competitive equilibrium à la Nash, we point out that a simple condition for restoring the Second Welfare Theorem is that the set of Pareto optimal allocations is included in that of internal Pareto optimal allocations. We provide the Social Redistribution assumption to ensure such inclusion. This assumption is weaker than other relevant assumptions that have been studied in the literature. We then introduce the differential counterpart of Social Redistribution, called Directional Social Redistribution. This assumption entails an interesting consequence that relates social marginal utilities and supporting prices at a Pareto optimal allocation. Finally, we show that, for Bergson-Samuelson utility functions, Directional Social Redistribution is ensured by a specific property of the Jacobian matrix, which has a natural interpretation in terms of externalities. |
Keywords: | Other-regarding preferences; competitive equilibrium à la Nash; second welfare theorem; social redistribution |
JEL: | D11 D50 D62 |
Date: | 2021–07 |
URL: | http://d.repec.org/n?u=RePEc:mse:cesdoc:21029&r= |
By: | Heiko Karle; Heiner Schumacher; Rune Vølund |
Abstract: | We consider the Salop (1979) model of product differentiation and assume that consumers are uncertain about the qualities and prices of firms’ products. They can inspect all products at zero cost. A share of consumers is expectation-based loss averse. For these consumers, a purchase plan, which involves buying products of varying quality and price with positive probability, creates disutility from gain-loss sensations. Even at modest degrees of loss aversion they may refrain from inspecting all products and choose an individual default that is strictly dominated in terms of surplus. Firms’ strategic behavior exacerbates the scope for this effect. The model generates “scale-dependent psychological switching costs” that increase in the value of the transaction. We find empirical evidence for the predicted association between switching behavior and loss aversion in new survey data. |
Keywords: | switching costs, competition, loss aversion |
JEL: | D21 D83 L41 |
Date: | 2021 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_9313&r= |
By: | Papadopoulos, Konstantinos G.; Petrakis, Emmanuel; Skartados, Panagiotis |
Abstract: | In a two-tier industry with an upstream monopolist supplier and downstream competition with differentiated goods, we show that passive partial forward integration (PPFI) has ambiguous effects on competition and welfare. When vertical trading is conducted via linear tariffs, PPFI is pro-competitive and welfare-increasing, while under two-part tariffs it is anti-competitive and welfare-decreasing. These results hold irrespective of the degree of product differentiation, the observability or secrecy of contract terms, the mode of downstream competition, and the distribution of bargaining power between firms. |
Keywords: | Passive Partial Forward Integration; Two-Part Tariffs; Linear Tariffs; Competition; Welfare |
JEL: | D43 L13 |
Date: | 2021–10–01 |
URL: | http://d.repec.org/n?u=RePEc:cte:werepe:33354&r= |
By: | Lucas B\"ottcher; Georgia Kernell |
Abstract: | Condorcet's jury theorem states that the correct outcome is reached in direct majority voting systems with sufficiently large electorates as long as each voter's independent probability of voting for that outcome is greater than 0.5. Yet, in situations where direct voting systems are infeasible, such as due to high implementation and infrastructure costs, hierarchical voting systems provide a reasonable alternative. We study differences in outcome precision between hierarchical and direct voting systems for varying group sizes, abstention rates, and voter competencies. Using asymptotic expansions of the derivative of the reliability function (or Banzhaf number), we first prove that indirect systems differ most from their direct counterparts when group size and number are equal to each other, and therefore to $\sqrt{N_{\rm d}}$, where $N_{\rm d}$ is the total number of voters in the direct system. In multitier systems, we prove that this difference is maximized when group size equals $\sqrt[n]{N_{\rm d}}$, where $n$ is the number of hierarchical levels. Second, we show that while direct majority rule always outperforms hierarchical voting for homogeneous electorates that vote with certainty, as group numbers and size increase, hierarchical majority voting gains in its ability to represent all eligible voters. Furthermore, when voter abstention and competency are correlated within groups, hierarchical systems often outperform direct voting, which we show by using a generating function approach that is able to analytically characterize heterogeneous voting systems. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.02298&r= |
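The direct-versus-hierarchical comparison for a homogeneous electorate voting with certainty can be checked exactly with the binomial distribution. The sketch below is an illustration of the setup, not the paper's code; the group sizes and the competence level p = 0.6 are assumed values.

```python
from math import comb

def maj_prob(n, p):
    """Probability that a simple majority of n independent voters (n odd),
    each correct with probability p, selects the correct outcome."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

def two_tier_prob(groups, size, p):
    """Majority-of-majorities: `groups` groups of `size` voters each."""
    return maj_prob(groups, maj_prob(size, p))
```

With 81 voters split into 9 groups of 9 (group size equal to the square root of N_d, where the abstract locates the maximal difference), `maj_prob(81, 0.6)` exceeds `two_tier_prob(9, 9, 0.6)`, consistent with the abstract's claim that direct majority rule outperforms hierarchical voting for homogeneous electorates that vote with certainty.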