NEP: New Economics Papers
on Computational Economics
Issue of 2007‒01‒13
eleven papers chosen by
By: | Marie Cottrell (SAMOS - Statistique Appliquée et MOdélisation Stochastique - [Université Panthéon-Sorbonne - Paris I], MATISSE - Modélisation Appliquée, Trajectoires Institutionnelles et Stratégies Socio-Économiques - [CNRS : UMR8595] - [Université Panthéon-Sorbonne - Paris I]); Patrice Gaubert (SAMOS - Statistique Appliquée et MOdélisation Stochastique - [Université Panthéon-Sorbonne - Paris I], MATISSE - Modélisation Appliquée, Trajectoires Institutionnelles et Stratégies Socio-Économiques - [CNRS : UMR8595] - [Université Panthéon-Sorbonne - Paris I], LEMMA - LEMMA - [Université du Littoral Côte d'Opale]) |
Abstract: | Pseudo panels constituted from repeated cross-sections are good substitutes for true panel data. But the individuals grouped in a cohort are not the same in successive periods, which introduces measurement error and inconsistent estimators. The remedy is to build cohorts that contain large numbers of individuals while remaining as homogeneous as possible. This paper presents a new way to do this, using a self-organizing map, whose properties are well suited to both objectives. The method is applied to a set of Canadian surveys in order to estimate income elasticities for 18 consumption functions.
Keywords: | Pseudo panels; self-organizing maps
Date: | 2007–01–05 |
URL: | http://d.repec.org/n?u=RePEc:hal:papers:hal-00122817_v1&r=cmp |
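The cohort-construction step lends itself to a short illustration. The sketch below trains a bare-bones self-organizing map with numpy and assigns each individual to the map cell of its best-matching unit, so that cells play the role of pseudo-panel cohorts. The synthetic data, map size, and learning schedule are assumptions made for the sketch, not the authors' specification.

    import numpy as np

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 4))          # hypothetical household features (age, income, ...)
    X = (X - X.mean(0)) / X.std(0)          # standardize before training

    rows, cols, dim = 6, 6, X.shape[1]      # a 6x6 map -> up to 36 cohorts
    W = rng.normal(scale=0.1, size=(rows * cols, dim))
    grid = np.array([(i, j) for i in range(rows) for j in range(cols)], float)

    for t in range(20000):                  # online SOM updates
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(1))       # best-matching unit
        sigma = 3.0 * np.exp(-t / 8000)              # shrinking neighborhood width
        lr = 0.5 * np.exp(-t / 8000)                 # decaying learning rate
        h = np.exp(-((grid - grid[bmu]) ** 2).sum(1) / (2 * sigma ** 2))
        W += lr * h[:, None] * (x - W)               # pull nodes toward the sample

    cohort = np.array([np.argmin(((W - x) ** 2).sum(1)) for x in X])
    print(np.bincount(cohort, minlength=rows * cols))  # cohort sizes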
By: | Paul Torrens (University College London) |
Abstract: | There are indications that the current generation of models used to simulate the geography of housing choice has reached the limits of its usefulness under existing specifications. The relative stasis in residential choice modeling--and urban simulation in general--contrasts with simulation efforts in other disciplines, where techniques, theories, and ideas drawn from computation and complexity studies are revitalizing the ways in which we conceptualize, understand, and model real-world phenomena. Many of these concepts and methodologies are applicable to housing choice simulation. Indeed, in many cases, ideas from computation and complexity studies--often clustered under the collective term of geocomputation, as they apply to geography--are ideally suited to the simulation of residential location dynamics. However, there exist several obstructions to their successful use for these purposes, particularly as regards the capacity of these methodologies to handle top-down dynamics in urban systems. This paper presents a framework for developing a hybrid model for urban geographic simulation generally and discusses some of the imposing barriers to innovation in this field. The framework infuses approaches derived from geocomputation and complexity with standard techniques that have been tried and tested in operational land-use and transport simulation. As a proof-of-concept exercise, a micro-model of residential location has been developed with a view to hybridization. The model mixes cellular automata and multi-agent approaches and is formulated so as to interface with meso-models at a higher scale.
Date: | 2006–06–27 |
URL: | http://d.repec.org/n?u=RePEc:cdl:bphupl:1036&r=cmp |
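To give a rough feel for the cellular-automata/multi-agent hybrid the abstract describes, the toy below lets agents (micro level) pick lattice cells by utility while a local automaton rule updates cell prices from neighborhood occupancy (the bottom-up dynamics). Every rule and parameter here is invented for illustration; the paper's model and its meso-scale interface are far richer.

    import numpy as np

    rng = np.random.default_rng(1)
    N = 20                                        # N x N lattice of housing cells
    price = rng.uniform(0.4, 0.6, (N, N))
    amenity = rng.uniform(0.0, 1.0, (N, N))
    occ = np.zeros((N, N), dtype=bool)
    agents = [{"budget": rng.uniform(0.3, 1.0), "loc": None} for _ in range(150)]

    for step in range(50):
        for a in agents:                          # micro level: utility-driven moves
            cand = np.argwhere(~occ & (price <= a["budget"]))
            if len(cand) == 0:
                continue
            utils = amenity[cand[:, 0], cand[:, 1]] - price[cand[:, 0], cand[:, 1]]
            cur = -np.inf if a["loc"] is None else amenity[a["loc"]] - price[a["loc"]]
            if utils.max() <= cur:
                continue                          # stay put if no better cell exists
            best = tuple(cand[np.argmax(utils)])
            if a["loc"] is not None:
                occ[a["loc"]] = False
            occ[best], a["loc"] = True, best
        pad = np.pad(occ.astype(float), 1)        # cell level: CA-style price update
        demand = sum(pad[1 + di:1 + di + N, 1 + dj:1 + dj + N]
                     for di in (-1, 0, 1) for dj in (-1, 0, 1)) / 9
        price = np.clip(price + 0.05 * (demand - 0.5), 0.1, 1.0)

    print("occupied cells:", occ.sum(), "mean rent:", price.mean().round(3))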
By: | Yann Algan; Olivier Allais; Wouter J. Den Haan |
Abstract: | A new algorithm is developed to solve models with heterogeneous agents and aggregate uncertainty; it avoids some disadvantages of the prevailing algorithm, which relies heavily on simulation techniques, and is easier to implement than existing algorithms. A key aspect of the algorithm is a new procedure that parameterizes the cross-sectional distribution, which makes it possible to avoid Monte Carlo integration. The paper also develops a new simulation procedure that not only avoids cross-sectional sampling variation but is also more than ten times faster than the standard procedure of simulating an economy with a large but finite number of agents. This procedure can help to improve the efficiency of the most popular algorithm, in which simulation procedures play a key role.
Date: | 2006 |
URL: | http://d.repec.org/n?u=RePEc:pse:psecon:2006-46&r=cmp |
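The key idea, replacing a simulated panel of agents with the cross-sectional distribution itself, can be illustrated with a grid-based stand-in: the distribution is pushed forward through a decision rule, so aggregates carry no cross-sectional sampling noise. The paper parameterizes the density rather than gridding it, and the savings rule and shock process below are invented for the sketch.

    import numpy as np

    grid = np.linspace(0.0, 10.0, 501)            # asset grid
    dist = np.full(grid.size, 1.0 / grid.size)    # initial cross-sectional weights

    def policy(a, shock):                         # hypothetical savings rule g(a, z)
        return np.clip(0.9 * a + 0.5 * shock, grid[0], grid[-1])

    for t in range(200):                          # one aggregate-shock path
        z = 1.0 if t % 2 == 0 else 0.8            # illustrative aggregate shock
        a_next = policy(grid, z)
        new = np.zeros_like(dist)
        idx = np.searchsorted(grid, a_next) - 1   # split mass between bracketing nodes
        idx = np.clip(idx, 0, grid.size - 2)
        w = (a_next - grid[idx]) / (grid[idx + 1] - grid[idx])
        np.add.at(new, idx, (1 - w) * dist)
        np.add.at(new, idx + 1, w * dist)
        dist = new

    print("mean assets:", dist @ grid)            # aggregates come from the weights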
By: | António Antunes (Banco de Portugal, Departamento de Estudos Economicos, and Faculdade de Economia, Universidade Nova de Lisboa); Tiago Cavalcanti (Departamento de Economia, Universidade Federal de Pernambuco, INOVA, Faculdade de Economia, Universidade Nova de Lisboa); Anne Villamil (Department of Economics, University of Illinois at Urbana-Champaign)
Abstract: | This paper establishes the existence of a stationary equilibrium and a procedure to compute solutions to a class of dynamic general equilibrium models with two important features. First, occupational choice is determined endogenously as a function of heterogeneous agent type, which is defined by an agent's managerial ability and capital bequest. Heterogeneous ability is exogenous and independent across generations. In contrast, bequests link generations and the distribution of bequests evolves endogenously. Second, there is a financial market for capital loans with a deadweight intermediation cost and a repayment incentive constraint. The incentive constraint induces a non-convexity. The paper proves that the competitive equilibrium can be characterized by the bequest distribution and factor prices, and uses the monotone mixing condition to ensure that the stationary bequest distribution that arises from the agent's optimal behavior across generations exists and is unique. The paper next constructs a direct, non-parametric approach to compute the stationary solution. The method reduces the domain of the policy function, thus reducing the computational complexity of the problem. |
Keywords: | Existence; Computation; Dynamic general equilibrium; Non-convexity |
JEL: | C62 C63 E60 G38 |
URL: | http://d.repec.org/n?u=RePEc:sca:scaewp:0611&r=cmp |
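A hedged sketch of the stationary-distribution computation the abstract describes: discretize bequests, build the cross-generation transition implied by a (here invented) bequest policy and an i.i.d. ability draw, and iterate to the fixed point. The monotone mixing condition the paper invokes is what guarantees the iteration converges to the same distribution from any starting point.

    import numpy as np

    grid = np.linspace(0.0, 5.0, 201)             # bequest grid
    abilities = np.array([0.5, 1.0, 1.5])         # discretized i.i.d. ability
    prob = np.array([0.25, 0.5, 0.25])

    def h(b, x):                                  # hypothetical bequest policy b' = h(b, x)
        return np.clip(0.3 * b + 0.4 * x - 0.2, grid[0], grid[-1])

    P = np.zeros((grid.size, grid.size))          # transition matrix across generations
    for x, p in zip(abilities, prob):
        b_next = h(grid, x)
        idx = np.clip(np.searchsorted(grid, b_next) - 1, 0, grid.size - 2)
        w = (b_next - grid[idx]) / (grid[idx + 1] - grid[idx])
        P[np.arange(grid.size), idx] += p * (1 - w)
        P[np.arange(grid.size), idx + 1] += p * w

    dist = np.full(grid.size, 1.0 / grid.size)    # power iteration to the fixed point
    for _ in range(2000):
        dist = dist @ P
    print("mean stationary bequest:", dist @ grid)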
By: | Erin Mansur (University of California at Berkeley); John Quigley (University of California at Berkeley); Steven Raphael (University of California at Berkeley); Eugene Smolensky (University of California at Berkeley) |
Abstract: | In this paper, we use a general equilibrium simulation model of the housing market to assess the potential of various housing-market policy interventions to reduce the incidence of homelessness. A version of the model developed by Anas and Arnott is extended and adapted to study homelessness and is calibrated to the four largest metropolitan areas in California. Using data from the Census of Population and Housing for 1980 and 1990 and the American Housing Survey for various years, we explore several alternative simulations. First, we calibrate the model for each metropolitan area to observed housing market and income conditions in 1980 and assess how well the model predicts observed changes in rents during the decade of the 1980s. Next, using models calibrated to 1990 conditions, we assess the effects on homelessness of changes in the income distribution similar to those that occurred during the 1980s. Finally, we explore the welfare consequences and the effects on homelessness of three housing market policy interventions: extending housing vouchers to all low-income households, subsidizing all landlords, and subsidizing those landlords who supply low-income housing. Our results suggest that a very large fraction of homelessness can be eliminated through increased reliance upon well-known housing subsidy policies.
Date: | 2006–06–27 |
URL: | http://d.repec.org/n?u=RePEc:cdl:bphupl:1017&r=cmp |
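As a deliberately tiny stand-in for the exercise described, the sketch below solves a one-market rental equilibrium by bisection and re-solves it under a voucher-style subsidy to low-income households, reading off the share priced out of housing. Functional forms and numbers are invented; the extended Anas-Arnott model the authors calibrate is vastly more detailed.

    import numpy as np

    incomes = np.linspace(0.2, 2.0, 1000)         # hypothetical household incomes

    def equilibrium_rent(subsidy):
        lo, hi = 0.01, 2.0
        for _ in range(60):                       # bisect on the market-clearing rent
            r = 0.5 * (lo + hi)
            afford = incomes * 0.4 + np.where(incomes < 0.6, subsidy, 0.0)
            demand = (afford >= r).sum()          # households that can pay rent r
            supply = int(800 * r)                 # upward-sloping housing supply
            lo, hi = (r, hi) if demand > supply else (lo, r)
        return r, (afford < r).mean()             # rent and share priced out

    for s in (0.0, 0.1, 0.2):
        r, homeless = equilibrium_rent(s)
        print(f"subsidy={s:.1f}  rent={r:.3f}  priced out={homeless:.1%}")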
By: | Ennio Bilancini; Leonardo Boncinelli |
Abstract: | The Prisoner's Dilemma is a typical structure of interaction in human societies. In spite of a long tradition dealing with the matter from different perspectives, the emergence of cooperation or defection remains controversial from both empirical and theoretical points of view. In this paper an innovative model is presented and analyzed in an attempt to provide a reasonable framing of the issue. A population of boundedly rational agents repeatedly chooses to cooperate or defect. Each agent's action affects only her interacting mates, according to a network of relationships which is endogenously modifiable, since agents are given the possibility of substituting undesired mates with unknown ones. Full cooperation, full defection, and the coexistence of cooperation and defection in homogeneous clusters are possible outcomes of the model. A computer program is developed with the purpose of understanding the impact of parameter values on the type of outcome. Numerous simulations are run, and the resulting evidence is analyzed and interpreted.
Keywords: | Prisoner's Dilemma; cooperation; segregation; networks; simulation
JEL: | C63 C88 D85 |
Date: | 2006–06 |
URL: | http://d.repec.org/n?u=RePEc:usi:wpaper:482&r=cmp |
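The class of model described is easy to sketch: agents on a random graph play the Prisoner's Dilemma with their neighbors, imitate their best-paid neighbor, and rewire links away from defecting partners toward random new mates. The payoffs, rates, and update rules below are illustrative choices, not the authors'.

    import numpy as np

    rng = np.random.default_rng(2)
    n, k = 100, 4
    adj = {i: set() for i in range(n)}             # symmetric neighbor sets
    while sum(len(s) for s in adj.values()) < n * k:
        i, j = rng.integers(n, size=2)
        if i != j:
            adj[i].add(j); adj[j].add(i)

    coop = rng.random(n) < 0.5                     # True = cooperate
    R, S, T, P = 3.0, 0.0, 5.0, 1.0                # standard PD payoffs

    for step in range(200):
        pay = np.zeros(n)
        for i in range(n):                         # play PD with every neighbor
            for j in adj[i]:
                pay[i] += (R if coop[j] else S) if coop[i] else (T if coop[j] else P)
        new = coop.copy()
        for i in range(n):                         # imitate the best-paid neighbor
            if adj[i]:
                best = max(adj[i], key=lambda j: pay[j])
                if pay[best] > pay[i]:
                    new[i] = coop[best]
        coop = new
        for i in range(n):                         # rewire away from defectors
            bad = [j for j in adj[i] if not coop[j]]
            if bad and rng.random() < 0.3:
                j = bad[rng.integers(len(bad))]
                adj[i].discard(j); adj[j].discard(i)
                m = rng.integers(n)
                if m != i:
                    adj[i].add(m); adj[m].add(i)

    print("cooperator share:", coop.mean())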
By: | Jasmina Arifovic; Olena Kostyshyna |
Abstract: | We study individual evolutionary learning in the setup developed by Deissenberg and Gonzalez (2002). They study a version of the Kydland-Prescott model in which, in each time period, the monetary authority optimizes a weighted payoff function (with a selfishness parameter as the weight on its own and the agent's payoffs) with respect to the inflation announcement, actual inflation, and the selfishness parameter. Each period the agent also makes a probabilistic decision on whether to believe the monetary authority's announcement. The probability of how trustful the agent should be is updated using reinforcement learning. The inflation announcement is always different from the actual inflation, and the private agent chooses to believe the announcement if the monetary authority is selfish at levels tolerable to the agent. As a result, both the agent and the monetary authority are better off in this model of optimal cheating. In our simulations, both the agent and the monetary authority adapt using a model of individual evolutionary learning (Arifovic and Ledyard, 2003): the agent learns about her probabilistic decision, and the monetary authority learns about what level of announcement to use and how selfish to be. We performed simulations with two different ways of computing payoffs - simple (the selfishness-weighted payoff from the Deissenberg/Gonzalez model) and "expected" (the selfishness-weighted payoffs in the believe and not-believe outcomes, weighted by the agent's probability of believing). The results for the first type of simulations include those with a very altruistic monetary authority and an agent that believes the monetary authority when it sets the inflation announcement at low levels (below a critical value). In the simulations with "expected" payoffs, the monetary authority learned to set the announcement at zero, which brought zero actual inflation. This Ramsey outcome gives the highest possible payoff to both the agent and the monetary authority. Both types of simulations can also explain changes in average inflation over longer time horizons. When the monetary authority starts experimenting with its announcement or selfishness, the agent may be better off changing her believe (not believe) action into the opposite one, which entails changes in actual inflation.
JEL: | C63 E5 |
Date: | 2005–11–11 |
URL: | http://d.repec.org/n?u=RePEc:sce:scecf5:422&r=cmp |
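The agent's reinforcement-learning step can be sketched in a few lines: a Roth-Erev-style update raises the propensity to believe when believing paid off and lets propensities decay gently. The payoff function and the authority's fixed behavior below are placeholders, not the Deissenberg-Gonzalez specification or the authors' evolutionary-learning model.

    import numpy as np

    rng = np.random.default_rng(3)
    prop = np.array([1.0, 1.0])                   # propensities: [believe, not believe]

    def agent_payoff(believe, announce, actual):
        # hypothetical loss: the expectation error uses the announcement if
        # believed, otherwise a fixed prior; smaller error -> larger payoff
        expected = announce if believe else 0.05
        return 1.0 - (actual - expected) ** 2

    for t in range(5000):
        announce = 0.02                           # placeholder announcement
        actual = 0.04                             # authority cheats by a fixed margin
        p_believe = prop[0] / prop.sum()
        believe = rng.random() < p_believe
        u = agent_payoff(believe, announce, actual)
        prop[0 if believe else 1] += max(u, 0.0)  # reinforce the action taken
        prop *= 0.999                             # gentle forgetting (recency weight)

    print("long-run probability of believing:", prop[0] / prop.sum())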
By: | Erling Holmøy (Statistics Norway) |
Abstract: | The paper analyses the fiscal effects of productivity shifts in the private sector. Within a stylized model with inelastic labour supply, it shows that productivity shifts in sectors producing non-traded goods (N-sector) are irrelevant for the tax rates necessary to meet the government budget constraint. Productivity shifts in the traded-goods sector (T-sector) also have a neutral fiscal effect, provided that the wage dependency of the tax bases and of government expenditures are equal. If the wage dependency of expenditures exceeds that of revenues, tax rates must be increased in order to restore the government budget constraint. Simulations on a CGE model of the Norwegian economy confirm the theoretical results and demonstrate that productivity growth on balance has an adverse fiscal effect. Moreover, the necessary increase in the tax rates following a productivity improvement in the T-sector is three times as high as the corresponding effect of a comparable productivity shift in the N-sector.
Keywords: | Fiscal sustainability; productivity growth; general equilibrium |
JEL: | H30 J18 |
Date: | 2006–10 |
URL: | http://d.repec.org/n?u=RePEc:ssb:dispap:487&r=cmp |
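The neutrality argument in the abstract can be stated compactly; the notation below is ours, not the paper's. With tax rate \tau, a wage-dependent tax base B(w), and government expenditure G(w), the budget constraint pins down the required rate:

    \tau\, B(w) = G(w)
    \quad\Longrightarrow\quad
    \frac{d\ln \tau}{d\ln w} = \frac{d\ln G}{d\ln w} - \frac{d\ln B}{d\ln w}

The required rate is thus invariant to a productivity shift that works only through w exactly when expenditures and the tax base have equal wage elasticities; when expenditures are more wage-sensitive, \tau must rise, which matches the condition stated in the abstract.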
By: | Gabriel Weintraub; Lanier Benkard (Stanford University); Benjamin Van Roy |
Abstract: | We propose an approximation method for analyzing Ericson and Pakes (1995)-style dynamic models of imperfect competition. We develop a simple algorithm for computing an "oblivious equilibrium," in which each firm is assumed to make decisions based only on its own state and knowledge of the long-run average industry state, but where firms ignore current information about competitors' states. We prove that, as the market becomes large, if the equilibrium distribution of firm states obeys a certain "light-tail" condition, then oblivious equilibria closely approximate Markov perfect equilibria. We develop bounds that can be computed to assess the accuracy of the approximation for any given applied problem. Through computational experiments, we find that the method often generates useful approximations for industries with hundreds of firms and in some cases even tens of firms.
Keywords: | dynamic oligopoly, computational methods, industrial organization |
JEL: | C63 C73 L11 L13 |
Date: | 2006–12–03 |
URL: | http://d.repec.org/n?u=RePEc:red:sed006:6&r=cmp |
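The oblivious-equilibrium computation reduces to a one-agent fixed point, which the stylized sketch below mimics: each firm solves a dynamic program against a conjectured long-run average industry state, and the conjecture is updated until it is consistent with the stationary distribution the policy implies. The one-dimensional state, profit function, and transitions are invented for the sketch.

    import numpy as np

    K, beta, cost = 10, 0.95, 0.2                 # states 0..K, discount, invest cost
    states = np.arange(K + 1)

    def solve_firm(avg):                          # value iteration vs. conjectured avg
        V = np.zeros(K + 1)
        up = np.clip(states + 1, 0, K)            # investment succeeds w.p. 0.8
        down = np.clip(states - 1, 0, K)          # depreciation hits w.p. 0.2
        for _ in range(2000):
            profit = states / (states + avg + 1.0)  # own state vs. average rival state
            v_inv = profit - cost + beta * (0.8 * V[up] + 0.2 * V[down])
            v_no = profit + beta * (0.8 * V[states] + 0.2 * V[down])
            V = np.maximum(v_inv, v_no)
        return v_inv >= v_no                      # policy: invest indicator by state

    def long_run_avg(invest):                     # stationary mean of the firm chain
        dist = np.full(K + 1, 1.0 / (K + 1))
        for _ in range(5000):
            new = np.zeros_like(dist)
            for s in states:
                s_up = min(s + 1, K) if invest[s] else s
                s_dn = max(s - 1, 0)
                new[s_up] += 0.8 * dist[s]
                new[s_dn] += 0.2 * dist[s]
            dist = new
        return dist @ states

    avg = 5.0
    for it in range(50):                          # damped fixed-point iteration
        avg = 0.5 * avg + 0.5 * long_run_avg(solve_firm(avg))
    print("oblivious long-run average state:", avg)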
By: | Kerstin Press |
Abstract: | The present paper investigates the role of decentralisation for the adaptability of production networks in clusters. It develops a simulation model able to test to what extent decentralised, networked clusters with many small firms (Silicon Valley) can be more adjustable than those composed of fewer, large companies (Boston 128). The model finds that for limited degrees of product complexity, decentralisation increases cluster adaptability at the expense of greater instability, which increases the risk of firm failure. Moreover, it is shown that agent numbers matter greatly for the competitiveness of decentralised clusters: only if they host more firms than integrated cluster types is their lead in performance maintained. As a result, an additional condition must be met for the Silicon Valley type to outperform the Boston 128 one: greater firm numbers and strong start-up dynamics.
Keywords: | Clusters, Adjustment, N/K model, Simulation, Decentralisation |
JEL: | L22 C63 R11 |
Date: | 2006–12 |
URL: | http://d.repec.org/n?u=RePEc:egu:wpaper:0610&r=cmp |
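A bare-bones N/K-style rendering of the decentralisation question: an integrated player accepts mutations that raise the payoff of its whole bit string, while a decentralised cluster of firms each controls a block of bits and accepts changes that raise its own block's payoff. N, K, the block split, and the acceptance rule are illustrative assumptions, not the paper's calibration.

    import numpy as np

    rng = np.random.default_rng(4)
    N, K = 12, 3                                   # N decisions, K interdependencies
    neigh = [np.sort(np.r_[i, rng.choice(np.delete(np.arange(N), i), K, False)])
             for i in range(N)]
    table = [dict() for _ in range(N)]             # lazily drawn fitness contributions

    def contrib(i, bits):
        key = tuple(bits[neigh[i]])
        if key not in table[i]:
            table[i][key] = rng.random()
        return table[i][key]

    def fitness(bits, idx=None):
        idx = range(N) if idx is None else idx
        return np.mean([contrib(i, bits) for i in idx])

    def climb(bits, owners):                       # owners: list of firms' bit blocks
        for step in range(400):
            f = rng.integers(len(owners))          # pick a firm, flip one of its bits
            i = owners[f][rng.integers(len(owners[f]))]
            trial = bits.copy(); trial[i] ^= 1
            if fitness(trial, owners[f]) > fitness(bits, owners[f]):
                bits = trial                       # accept if the firm's own payoff rises
        return fitness(bits)                       # report cluster-wide fitness

    start = rng.integers(0, 2, N)
    integrated = [list(range(N))]                  # one firm owns everything
    decentral = [list(range(0, 4)), list(range(4, 8)), list(range(8, 12))]
    print("integrated :", climb(start.copy(), integrated))
    print("decentral  :", climb(start.copy(), decentral))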
By: | Jorge M. S. Valente (LIACC/NIAAD, Faculdade de Economia, Universidade do Porto, Portugal) |
Abstract: | In this paper, we consider the single machine scheduling problem with linear earliness and quadratic tardiness costs, and no machine idle time. We propose several dispatching heuristics, and analyse their performance on a wide range of instances. The heuristics include simple scheduling rules, as well as a procedure that takes advantage of the strengths of these rules. We also consider linear early / quadratic tardy dispatching rules, and a greedy-type procedure. Extensive experiments were performed to determine appropriate values for the parameters required by some of the heuristics. The computational tests show that the best results are given by the linear early / quadratic tardy dispatching rule. This procedure is also quite efficient, and can quickly solve even very large instances. |
Keywords: | heuristics, scheduling, single machine, early penalties, quadratic tardy penalties, no machine idle time, dispatching rules |
Date: | 2006–12 |
URL: | http://d.repec.org/n?u=RePEc:por:fepwps:234&r=cmp |
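A dispatching heuristic of the kind studied is straightforward to harness: at each decision point the rule picks the unscheduled job with the highest priority index, and jobs run back to back (no idle time). The early/tardy index below, whose urgency grows quadratically once a job would finish late, is a plausible stand-in rather than the author's rule; the instance data are random.

    import numpy as np

    rng = np.random.default_rng(5)
    n = 50
    p = rng.integers(1, 10, n)                     # processing times
    d = rng.integers(5, int(p.sum()), n)           # due dates

    def cost(seq):                                 # linear earliness + quadratic tardiness
        t, total = 0, 0.0
        for j in seq:
            t += p[j]                              # no idle time: jobs run back to back
            total += 0.1 * max(d[j] - t, 0) + max(t - d[j], 0) ** 2
        return total

    def dispatch(priority):                        # generic dispatching harness
        t, left, seq = 0, set(range(n)), []
        while left:
            j = max(left, key=lambda j: priority(j, t))
            left.remove(j); seq.append(j); t += p[j]
        return seq

    edd = dispatch(lambda j, t: -d[j])             # earliest-due-date baseline
    et = dispatch(lambda j, t: (t + p[j] - d[j]) ** 2 if t + p[j] > d[j]
                  else -0.1 * (d[j] - t - p[j]))   # stand-in early/tardy index
    print("EDD cost        :", cost(edd))
    print("early/tardy cost:", cost(et))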