New Economics Papers on Computational Economics
Issue of 2021‒10‒11
Sixteen papers chosen by
By: | Amine Assouel; Antoine Jacquier; Alexei Kondratyev |
Abstract: | Generative Adversarial Networks are becoming a fundamental tool in Machine Learning, in particular in the context of improving the stability of deep neural networks. At the same time, recent advances in Quantum Computing have shown that, despite the absence of a fault-tolerant quantum computer so far, quantum techniques can provide an exponential advantage over their classical counterparts. We develop a fully connected Quantum Generative Adversarial Network and show how it can be applied in Mathematical Finance, with a particular focus on volatility modelling. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.02742&r= |
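For orientation, the classical GAN objective that any quantum GAN builds on is the standard minimax game; the textbook form is below, not anything specific to this paper, which replaces classical networks with parameterized quantum circuits.

```latex
% Standard GAN minimax objective (textbook form, not the paper's notation):
% the generator G maps noise z to samples, the discriminator D scores realness.
\min_G \max_D \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big]
+ \mathbb{E}_{z \sim p_z}\big[\log\big(1 - D(G(z))\big)\big]
```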
By: | K. S. Naik |
Abstract: | Since the 1990s, there have been significant advances in the technology space and the e-commerce area, leading to an exponential increase in demand for cashless payment solutions. This has driven demand for credit cards, bringing with it the possibility of higher credit defaults, and hence higher delinquency rates, over time. The purpose of this research paper is to build a contemporary credit scoring model to forecast credit defaults for unsecured lending (credit cards) by employing machine learning techniques. Much of the customer payments data available to lenders for forecasting credit defaults is imbalanced (skewed), on account of a limited subset of default instances, which poses a challenge for predictive modelling. In this research, that challenge is addressed by deploying the Synthetic Minority Oversampling Technique (SMOTE), a proven technique for ironing out such imbalances in a given dataset. On running the research dataset through seven different machine learning models, the results indicate that the Light Gradient Boosting Machine (LGBM) Classifier outperforms the other six classification techniques. Thus, our research indicates that the LGBM classifier is better equipped to deliver higher learning speeds and better efficiency, and to manage larger data volumes. We expect that deployment of this model will enable better and timely prediction of credit defaults for decision-makers in commercial lending institutions and banks. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.02206&r= |
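The SMOTE-plus-LGBM pipeline the abstract describes is compact in code. A minimal sketch follows, assuming a hypothetical data file and column names; the paper's actual features and hyperparameters are not reproduced here.

```python
# Sketch of a SMOTE + LightGBM credit-default pipeline.
# File name, columns and hyperparameters are placeholders, not the paper's.
import pandas as pd
from imblearn.over_sampling import SMOTE
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("credit_card_payments.csv")          # hypothetical dataset
X, y = df.drop(columns=["default"]), df["default"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Oversample the minority (default) class on the training split only,
# so the test set keeps its natural imbalance.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X_tr, y_tr)

clf = LGBMClassifier(n_estimators=500, learning_rate=0.05)
clf.fit(X_bal, y_bal)
print("AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```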
By: | Victor Chernozhukov; Whitney K. Newey; Victor Quintas-Martinez; Vasilis Syrgkanis |
Abstract: | Many causal and policy effects of interest are defined by linear functionals of high-dimensional or non-parametric regression functions. $\sqrt{n}$-consistent and asymptotically normal estimation of the object of interest requires debiasing to reduce the effects of regularization and/or model selection. Debiasing is typically achieved by adding to the plug-in estimator of the functional a correction term derived from a functional-specific theoretical analysis of what is known as the influence function, which leads to properties such as double robustness and Neyman orthogonality. We instead implement an automatic debiasing procedure that learns the Riesz representation of the linear functional using Neural Nets and Random Forests. Our method requires only value query oracle access to the linear functional. We propose a multi-tasking Neural Net debiasing method with stochastic gradient descent minimization of a combined Riesz representer and regression loss, with representation layers shared between the two functions. We also propose a Random Forest method which learns a locally linear representation of the Riesz function. Even though our methodology applies to arbitrary functionals, we experimentally find that it outperforms the prior state-of-the-art neural-net-based estimator of Shi et al. (2019) for the average treatment effect functional. We also evaluate our method on the more challenging problem of estimating average marginal effects with continuous treatments, using semi-synthetic data on the effect of gasoline price changes on gasoline demand. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.03031&r= |
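In generic notation (ours, not the paper's), the automatic-debiasing recipe is: learn the Riesz representer by minimizing a loss that only needs value queries of the functional $m$, then plug it into the doubly robust estimate.

```latex
% Automatic debiasing, generic form (notation ours, not verbatim from the paper).
% Target: \theta_0 = E[m(W; g_0)], a linear functional of the regression g_0;
% e.g. for the ATE, m(W; g) = g(1, X) - g(0, X).
\hat{\alpha} = \arg\min_{\alpha} \frac{1}{n}\sum_{i=1}^{n}
    \Big[ \alpha(X_i)^2 - 2\, m(W_i; \alpha) \Big],
\qquad
\hat{\theta} = \frac{1}{n}\sum_{i=1}^{n}
    \Big[ m(W_i; \hat{g}) + \hat{\alpha}(X_i)\big(Y_i - \hat{g}(X_i)\big) \Big].
```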
By: | Rui (Aruhan) Shi |
Abstract: | As the economies we live in evolve over time, it is imperative that economic agents in models form expectations that can adjust to changes in the environment. This exercise offers a plausible expectation-formation model that connects to computer science, psychology and neuroscience research on learning and decision-making, and applies it to an economy undergoing a policy regime change. Employing the actor-critic model of reinforcement learning, an agent born into a fresh environment learns by first interacting with it: taking exploratory actions and observing the corresponding stimulus signals. This interactive experience is then used to update its subjective belief about the world. I show, through several simulation experiments, that the agent adjusts its subjective belief when facing an increase in the inflation target, and that this belief evolves according to the agent's experience in the world. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.02474&r= |
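A minimal TD(0) actor-critic update of the kind the abstract invokes might look as follows; this is a generic sketch with hypothetical state and action dimensions, not the author's model or environment.

```python
# Generic TD(0) actor-critic update (illustrative dimensions: 4 state
# features, 2 actions; none of this is the paper's specification).
import torch
import torch.nn as nn

actor  = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))  # action logits
critic = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))  # state value
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def update(state, action, reward, next_state, gamma=0.99):
    """One step: the critic fits the value, the actor follows the TD error."""
    v = critic(state).squeeze()
    v_next = critic(next_state).squeeze().detach()
    td_error = reward + gamma * v_next - v
    logp = torch.log_softmax(actor(state), dim=-1)[action]
    loss = td_error ** 2 - logp * td_error.detach()   # critic loss + actor loss
    opt.zero_grad(); loss.backward(); opt.step()

# e.g. update(torch.randn(4), action=0, reward=1.0, next_state=torch.randn(4))
```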
By: | Mahmoud Mahfouz; Tucker Balch; Manuela Veloso; Danilo Mandic |
Abstract: | Continuous double auctions such as the limit order book employed by exchanges are widely used in practice to match buyers and sellers of a variety of financial instruments. In this work, we develop an agent-based model for trading in a limit order book and show (1) how opponent modelling techniques can be applied to classify trading agent archetypes and (2) how behavioural cloning can be used to imitate these agents in a simulated setting. We experimentally compare a number of techniques for both tasks and evaluate their applicability and use in real-world scenarios. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.01325&r= |
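To make the matching mechanism concrete, here is a minimal price-time-priority limit order book of the kind such agent-based models simulate; a sketch under our own simplifications, not the authors' simulator.

```python
# Minimal price-time-priority limit order book (a sketch, not the paper's code).
import heapq
import itertools

class OrderBook:
    def __init__(self):
        self.bids, self.asks = [], []      # max-heap (negated price), min-heap
        self._seq = itertools.count()      # time-priority tie-breaker

    def submit(self, side, price, qty, agent):
        book, key = (self.bids, -price) if side == "buy" else (self.asks, price)
        opp = self.asks if side == "buy" else self.bids
        # Cross against the opposite side while the price is marketable.
        while qty > 0 and opp:
            best_key, _, best = opp[0]
            best_price = best_key if side == "buy" else -best_key
            if (side == "buy" and price < best_price) or \
               (side == "sell" and price > best_price):
                break
            fill = min(qty, best["qty"])
            best["qty"] -= fill; qty -= fill
            print(f"trade: {fill} @ {best_price} ({agent} vs {best['agent']})")
            if best["qty"] == 0:
                heapq.heappop(opp)
        if qty > 0:                        # rest the remainder in the book
            heapq.heappush(book, (key, next(self._seq),
                                  {"qty": qty, "agent": agent}))
```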
By: | Steven DiSilvio; Yu (Anna) Luo; Anthony Ozerov |
Abstract: | Figgie is a card game that approximates open-outcry commodities trading. We design strategies for Figgie and study their performance and the resulting market behavior. To do this, we develop a flexible agent-based discrete-event market simulation in which agents operating under our strategies can play Figgie. Our simulation builds upon previous work by simulating latencies between agents and the market in a novel and efficient way. The fundamentalist strategy we develop takes advantage of Figgie's unique notion of asset value and is, on average, the profit-maximizing strategy in all combinations of agent strategies tested. We develop a strategy, the "bottom-feeder", which estimates value by observing orders sent by other agents, and find that it limits the success of fundamentalists. We also find that the chartist strategies implemented, including one from the literature, fail by entering feedback loops in the small Figgie market. We further develop a bootstrap method for statistically comparing strategies in a zero-sum game. Our results demonstrate the wide-ranging applicability of agent-based discrete-event simulations in studying markets. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.00879&r= |
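The latency idea can be sketched with a plain event queue: orders are stamped with a random network delay and processed in arrival order. The names and the exponential latency model below are our assumptions, not the paper's design.

```python
# Sketch of a discrete-event market simulation with agent-to-market latency.
import heapq
import random

events = []  # priority queue of (arrival_time, tie_breaker, payload)

def send_order(now, agent_id, order, latency_mean=0.001):
    """An order sent at time `now` reaches the market after a random latency."""
    latency = random.expovariate(1.0 / latency_mean)   # mean = latency_mean
    heapq.heappush(events, (now + latency, id(order), (agent_id, order)))

def run(until=10.0):
    """Process events in arrival order up to the simulation horizon."""
    while events and events[0][0] <= until:
        t, _, (agent_id, order) = heapq.heappop(events)
        # ... match `order` in the book at market time t, notify agent_id ...
        print(f"t={t:.6f}: order from agent {agent_id} reaches the market")
```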
By: | Steven Campbell; Yichao Chen; Arvind Shrivats; Sebastian Jaimungal |
Abstract: | Here, we develop a deep learning algorithm for solving Principal-Agent (PA) mean field games with market-clearing conditions -- a class of problems that has thus far not been studied and that poses difficulties for standard numerical methods. We use an actor-critic approach to optimization, where the agents form a Nash equilibrium according to the principal's penalty function and the principal evaluates the resulting equilibrium. The inner problem's Nash equilibrium is obtained using a variant of the deep backward stochastic differential equation (BSDE) method, modified for McKean-Vlasov forward-backward SDEs to include dependence on the distribution of both the forward and backward processes. The outer problem's loss is in turn approximated by a neural net by sampling over the space of penalty functions. We apply our approach to a stylized PA problem arising in Renewable Energy Certificate (REC) markets, where agents may rent clean energy production capacity, trade RECs, and expand their long-term capacity so as to navigate the market at maximum profit. Our numerical results illustrate the efficacy of the algorithm and lead to interesting insights into the nature of optimal PA interactions in the mean-field limit of these markets. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.01127&r= |
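Schematically, the McKean-Vlasov FBSDEs targeted by such deep BSDE variants take the following form, with the law $\mathcal{L}(\cdot)$ entering both the forward and backward equations; this is the generic textbook shape, not the paper's exact system.

```latex
% Generic McKean-Vlasov FBSDE with distributional dependence in both
% the forward and backward processes (schematic only):
dX_t = b\big(t, X_t, \mathcal{L}(X_t, Y_t), Y_t\big)\,dt
     + \sigma\big(t, X_t, \mathcal{L}(X_t, Y_t)\big)\,dW_t,
\qquad X_0 = x_0,
\\
dY_t = -f\big(t, X_t, \mathcal{L}(X_t, Y_t), Y_t, Z_t\big)\,dt + Z_t\,dW_t,
\qquad Y_T = g\big(X_T, \mathcal{L}(X_T)\big).
```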
By: | Mike Tsionas (Montpellier Business School, Université de Montpellier, Montpellier Research in Management, and Lancaster University Management School); Christopher F. Parmeter (Miami Herbert Business School, University of Miami, Miami FL); Valentin Zelenyuk (School of Economics and Centre for Efficiency and Productivity Analysis (CEPA) at The University of Queensland, Australia) |
Abstract: | The literature on firm efficiency has seen its share of research comparing and contrasting Data Envelopment Analysis (DEA) and Stochastic Frontier Analysis (SFA), the two workhorse estimators. These studies rely on both Monte Carlo experiments and actual data sets to examine a range of performance issues which can elucidate the benefits or weaknesses of one method over the other. As can be imagined, neither method is universally better than the other. The present paper proposes an alternative approach that is quite flexible in terms of functional form and distributional assumptions and that amalgamates the benefits of both DEA and SFA. Specifically, we bridge these two popular approaches via Bayesian Artificial Neural Networks. We examine the performance of this new approach using Monte Carlo experiments and find it to be very good: comparable to, and often better than, the current standards in the literature. To illustrate the new technique, we provide an application to a recent data set of large US banks. |
Keywords: | Simulation; OR in Banking; Stochastic Frontier Models; Data Envelopment Analysis; Flexible Functional Forms. |
Date: | 2021–06 |
URL: | http://d.repec.org/n?u=RePEc:qld:uqcepa:162&r= |
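For reference, the two estimators being bridged are, in textbook form, the stochastic frontier regression and the input-oriented CCR linear program; the notation below is standard, not taken from the paper.

```latex
% SFA: parametric frontier with two-sided noise v and one-sided inefficiency u
y_i = f(x_i;\beta) + v_i - u_i, \qquad v_i \sim N(0,\sigma_v^2), \; u_i \ge 0;
% DEA (input-oriented CCR): efficiency score \theta of unit o from an LP
\min_{\theta,\lambda}\ \theta \quad \text{s.t.}\quad
\sum_j \lambda_j x_j \le \theta x_o, \qquad
\sum_j \lambda_j y_j \ge y_o, \qquad \lambda \ge 0.
```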
By: | Stephanie Assad; Emilio Calvano; Giacomo Calzolari; Robert Clark; Vincenzo Denicolo; Daniel Ershov (TSE - Toulouse School of Economics - UT1 - Université Toulouse 1 Capitole - Université Fédérale Toulouse Midi-Pyrénées - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Justin Pappas Johnson; Sergio Pastorello; Andrew Rhodes (TSE - Toulouse School of Economics - UT1 - Université Toulouse 1 Capitole - Université Fédérale Toulouse Midi-Pyrénées - EHESS - École des hautes études en sciences sociales - CNRS - Centre National de la Recherche Scientifique - INRAE - Institut National de Recherche pour l’Agriculture, l’Alimentation et l’Environnement); Lei Xu; Matthijs Wildenbeest |
Abstract: | Markets are being populated with new generations of pricing algorithms, powered by Artificial Intelligence, that have the ability to learn to operate autonomously. This ability can be both a source of efficiency and a cause for concern, given the risk that algorithms autonomously and tacitly learn to collude. In this paper we explore recent developments in the economic literature and discuss implications for policy. |
Keywords: | Platforms, Algorithmic Pricing, Antitrust, Competition Policy, Artificial Intelligence, Collusion |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03360129&r= |
By: | Dimitriadou, Athanasia (University of Derby); Agrapetidou, Anna (Democritus University of Thrace, Department of Economics); Gogas, Periklis (Democritus University of Thrace, Department of Economics); Papadimitriou, Theophilos (Democritus University of Thrace, Department of Economics) |
Abstract: | Credit Rating Agencies (CRAs) have been around for more than 150 years. Their role evolved from mere information collectors and providers to quasi-official arbitrators of credit risk throughout the global financial system. They compiled information that, at the time, was too difficult and costly for their clients to gather on their own. After the big market crash of 1929, they started to play a more formal role. Since then, we have seen a growing reliance of investors on CRAs' ratings. After the global financial crisis of 2007, the CRAs became the focal point of criticism from economists, politicians, the media, market participants and official regulatory agencies. The reason was obvious: the CRAs failed to perform the job they were supposed to do in financial markets, i.e. efficient, effective and prompt measurement and signaling of financial (default) risk. The main criticism focused on the "issuer-pays system", the relatively loose regulatory oversight by the relevant government agencies, the fact that ratings often change ex post, and the limited liability of CRAs. Many changes were made to the operational framework of the CRAs, including public disclosure of CRA information, which is designed to facilitate "unsolicited" ratings of structured securities by rating agencies that are not paid by the issuers. This, combined with the abundance of data and the availability of powerful new methodologies and inexpensive computing power, can bring us to a new era of independent ratings: the not-for-profit Independent Credit Rating Agencies (ICRAs). These can either compete with traditional CRAs or serve as an auxiliary risk-gauging mechanism free from the problems inherent in them. This role can be assumed by public or governmental authorities, national or international specialized entities, universities, research institutions, etc. Several factors today facilitate the transition to ICRAs: the abundance of data; cheaper and faster computer processing; progress in traditional forecasting techniques; and the wide use of new forecasting techniques, i.e. Machine Learning methodologies and Artificial Intelligence systems. |
Keywords: | Credit rating agencies; banking; forecasting; support vector machines; artificial intelligence |
JEL: | C02 C15 C40 C45 C54 E02 E17 E27 E44 E58 E61 G20 G23 G28 |
Date: | 2021–10–04 |
URL: | http://d.repec.org/n?u=RePEc:ris:duthrp:2021_009&r= |
By: | Manuel Arellano (CEMFI, Centro de Estudios Monetarios y Financieros); Stéphane Bonhomme (University of Chicago); Micole De Vera (CEMFI, Centro de Estudios Monetarios y Financieros); Laura Hospido (Banco de España); Siqi Wei (CEMFI, Centro de Estudios Monetarios y Financieros) |
Abstract: | In this paper we use administrative social security data to study income dynamics and income risk inequality in Spain between 2005 and 2018. We construct individual measures of income risk as functions of past employment history, income, and demographics. Focusing on males, we document that income risk is highly unequal in Spain: more than half of individuals face close to perfectly predictable income, while some face considerable uncertainty. Income risk is inversely related to income and age, and income risk inequality increased markedly during the recession. These findings are robust to a variety of specifications, including using neural networks for prediction and allowing for individual unobserved heterogeneity. |
Keywords: | Spain, income dynamics, administrative data, income risk, inequality. |
JEL: | D31 E24 E31 J31 |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:cmf:wpaper:wp2021_2109&r= |
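One generic way to operationalize "income risk as a function of observables" is a two-step conditional-dispersion estimate: predict income from history, then model the dispersion of the prediction error. The sketch below is ours and only illustrative; the paper's actual risk measures and specification differ.

```python
# Two-step conditional-dispersion sketch of income risk (illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor

def income_risk(X_hist, y_next):
    """Predict next-period log income from observables; risk = residual spread."""
    m = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
    m.fit(X_hist, y_next)
    resid = y_next - m.predict(X_hist)
    # Conditional dispersion: regress squared residuals on the same observables.
    v = MLPRegressor(hidden_layer_sizes=(64,), max_iter=2000, random_state=0)
    v.fit(X_hist, resid ** 2)
    return np.sqrt(np.clip(v.predict(X_hist), 0, None))
```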
By: | Felipe Nazare; Alexandre Street |
Abstract: | The solution of multistage stochastic linear problems (MSLP) represents a challenge for many applications. Long-term hydrothermal dispatch planning (LHDP) materializes this challenge in a real-world problem that affects electricity markets, economies, and natural resources worldwide. No closed-form solutions are available for MSLP, so defining non-anticipative policies with high-quality out-of-sample performance is crucial. Linear decision rules (LDR) provide an interesting simulation-based framework for finding high-quality policies for MSLP through two-stage stochastic models. In practical applications, however, the number of parameters to be estimated when using an LDR may be close to or higher than the number of scenarios, generating in-sample overfit and poor performance in out-of-sample simulations. In this paper, we propose a novel regularization scheme for LDR based on the AdaLASSO (adaptive least absolute shrinkage and selection operator). The goal is to use the parsimony principle, as widely studied in high-dimensional linear regression models, to obtain better out-of-sample performance for an LDR applied to MSLP. Computational experiments show that the threat of overfitting is non-negligible when the classical non-regularized LDR is used to solve MSLP. For the LHDP problem, our analysis highlights the following benefits of the proposed framework over the non-regularized benchmark: 1) significant reductions in the number of non-zero coefficients (model parsimony); 2) substantial cost reductions in out-of-sample evaluations; and 3) improved spot-price profiles. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.03146&r= |
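The AdaLASSO step itself is compact via the usual reweighting trick: a pilot fit supplies penalty weights, the design is rescaled, and an ordinary Lasso is run. A minimal sketch in generic regression notation follows; the paper applies the penalty to LDR coefficients, not a generic regression.

```python
# AdaLASSO via feature reweighting: solves
#   min ||y - X b||^2 + alpha * sum_j |b_j| / |b_pilot_j|^gamma
# by rescaling columns and running a plain Lasso (illustrative sketch).
import numpy as np
from sklearn.linear_model import Lasso, Ridge

def adalasso(X, y, gamma=1.0, alpha=0.1):
    pilot = Ridge(alpha=1.0).fit(X, y).coef_      # pilot estimate
    w = np.abs(pilot) ** gamma + 1e-8             # per-coefficient scales
    lasso = Lasso(alpha=alpha).fit(X * w, y)      # Lasso on the rescaled design
    return lasso.coef_ * w                        # map back to original scale
```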
By: | 高橋, 宏承; Takahashi, Hirotsugu |
Abstract: | Organizational culture permeation is not only facilitated by management intervention but also occurs in a self-reinforcing manner when viewed from the perspective of self-organizing and autopoietic systems. In this study, we use simulation to show that culture permeation occurs even in organizations where management interventions that promote socialization are suppressed. Additional analysis of the robustness of self-reinforcing cycles reveals that they are vulnerable to shocks in the early stages of an organization. |
Keywords: | Organizational Culture, Simulation, Self-organization, Self-reinforcing Cycle, Robustness of Organizational Culture |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:hit:hmicwp:246&r= |
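The self-reinforcement claim can be illustrated with a toy adoption process in which the probability of adopting the culture rises with its current share; this is purely schematic and not the paper's model.

```python
# Toy self-reinforcing culture-permeation cycle (schematic, not the paper's).
import random

def simulate(n_agents=100, steps=50, intervention=0.0):
    adopted = [random.random() < 0.1 for _ in range(n_agents)]
    for _ in range(steps):
        share = sum(adopted) / n_agents
        # Adoption probability grows with the current share (self-reinforcement)
        # plus any management intervention.
        p = min(1.0, 0.5 * share + intervention)
        adopted = [a or (random.random() < p) for a in adopted]
    return sum(adopted) / n_agents

print(simulate(intervention=0.0))  # permeation can occur with no intervention
```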
By: | Balie, Jean; Valera, Harold Glenn A.; Narayanan Gopalakrishnan, Badri; Pede, Valerien O. |
Keywords: | Agricultural and Food Policy, International Development, Research Methods/Statistical Methods |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea21:313939&r= |
By: | Alessandro Dalla Benetta (European Commission - JRC); Maciej Sobolewski (European Commission - JRC); Daniel Nepelski (European Commission - JRC) |
Abstract: | This report provides estimates of AI investments in the EU27 for 2018 and 2019. It treats AI as a general-purpose technology and, besides direct investments in the development and adoption of AI technologies, also counts investments in complementary assets and capabilities such as skills, data, product design and organisational capital. According to the estimates, the EU invested between EUR 7.9 billion and EUR 9 billion in AI in 2019, an increase of 39% over 2018. If the EU maintains a similar rate of growth, AI investments will reach EUR 22.4 billion by 2025, surpassing the EUR 20 billion target by over 10%. EU AI investments are concentrated in labour and human capital, covered by the Skills investment target. Expenditures on AI-related data and equipment account for 30%, while R&D and intangible assets account for 10% and 7% of total EU AI investments, respectively. The contribution of the European public sector is considerable, accounting for 41% of total AI investments in 2019. |
Keywords: | General Purpose Technology, GPT, Artificial Intelligence, AI, digital technologies, investments, intangibles, Europe |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:ipt:iptwpa:jrc126477&r= |
By: | Lochner, Benjamin (Institute for Employment Research (IAB), Nuremberg, Germany) |
Abstract: | "The IAB Job Vacancy Survey asks German establishments, among other things, about their most recent hire. Unfortunately, a worker identifier that would allow the direct linking to administrative records is not available. This report describes an algorithm that allows to find reported hires in the administrative employment histories. Based on observable characteristics, the algorithm runs several plausibility checks that make sure that a valid and unique linkage is performed. With its default parameterization the algorithm finds around 70 percent of hires that were mergeable in the first place. The result is the identification of the most recent hire reported in the IAB Job Vacancy Survey in the Integrated Employment Biographies of the IAB." (Author's abstract, IAB-Doku) ((en)) |
Keywords: | Federal Republic of Germany ; data fusion ; Integrierte Erwerbsbiografien (Integrated Employment Biographies) ; algorithm ; hiring ; IAB-Stellenerhebung (IAB Job Vacancy Survey) |
Date: | 2019–12–10 |
URL: | http://d.repec.org/n?u=RePEc:iab:iabfme:201906(en)&r= |
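A plausibility-check linkage of this kind reduces, in outline, to blocking on shared keys, filtering on checks over observables, and keeping only unique matches. The sketch below uses illustrative field names, not the IAB's actual variables or checks.

```python
# Sketch of a blocking + plausibility-check + uniqueness record linkage.
# All column names are illustrative placeholders.
import pandas as pd

def link_hires(survey: pd.DataFrame, admin: pd.DataFrame) -> pd.DataFrame:
    # Block on establishment and hiring month to form candidate pairs.
    cand = survey.merge(admin, on=["establishment_id", "hire_month"], how="inner")
    # Plausibility checks on characteristics reported in both sources.
    ok = ((cand["age_survey"] - cand["age_admin"]).abs() <= 1) \
        & (cand["occupation_survey"] == cand["occupation_admin"])
    cand = cand[ok]
    # Keep only survey records with exactly one surviving candidate,
    # so every accepted linkage is valid and unique.
    counts = cand.groupby("survey_id")["admin_id"].transform("size")
    return cand[counts == 1]
```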