New Economics Papers on Computational Economics
Issue of 2020‒12‒21
eleven papers chosen by
By: | Mike Ludkovski |
Abstract: | We introduce mlOSP, a computational template for Machine Learning for Optimal Stopping Problems. The template is implemented in the R statistical environment and is publicly available via a GitHub repository. mlOSP presents a unified numerical implementation of Regression Monte Carlo (RMC) approaches to optimal stopping, providing a state-of-the-art, open-source, reproducible and transparent platform. Highlighting its modular nature, we present multiple novel variants of RMC algorithms, especially in terms of constructing simulation designs for training the regressors, as well as in terms of machine learning regression modules. At the same time, mlOSP nests most of the existing RMC schemes, allowing for consistent and verifiable benchmarking of extant algorithms. The article contains extensive R code snippets and figures, and serves a dual role: presenting new RMC features and acting as a vignette for the underlying software package. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.00729&r=all |
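The mlOSP package itself is written in R; as a generic illustration of the Regression Monte Carlo idea it implements, here is a minimal Longstaff–Schwartz-style sketch in Python, pricing a Bermudan put under geometric Brownian motion. All parameters and the cubic polynomial regression are illustrative choices, not mlOSP defaults.

```python
# Minimal Regression Monte Carlo (Longstaff-Schwartz) sketch for a
# Bermudan put. Illustrative only; mlOSP offers many more designs.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 40.0, 40.0, 0.06, 0.2, 1.0
n_steps, n_paths = 50, 20_000
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate GBM paths: shape (n_paths, n_steps + 1)
z = rng.standard_normal((n_paths, n_steps))
log_incr = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.cumsum(log_incr, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

payoff = lambda s: np.maximum(K - s, 0.0)

# Backward induction: regress continuation values on in-the-money paths
V = payoff(S[:, -1])                      # value at maturity
for t in range(n_steps - 1, 0, -1):
    V *= disc                             # discount one step back
    itm = payoff(S[:, t]) > 0
    if itm.sum() > 10:
        x = S[itm, t]
        X = np.vander(x, 4)               # cubic polynomial basis
        beta, *_ = np.linalg.lstsq(X, V[itm], rcond=None)
        cont = X @ beta                   # estimated continuation value
        exercise = payoff(x) > cont
        V[itm] = np.where(exercise, payoff(x), V[itm])

print(f"Bermudan put estimate: {disc * V.mean():.4f}")
```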
By: | Burka, Dávid; Puppe, Clemens; Szepesváry, László; Tasnádi, Attila |
Abstract: | Voting rules can be assessed from quite different perspectives: the axiomatic, the pragmatic, in terms of computational or conceptual simplicity, susceptibility to manipulation, and many other aspects. In this paper, we take the machine learning perspective and ask how 'well' a few prominent voting rules can be learned by a neural network. To address this question, we train neural networks to choose Condorcet, Borda, and plurality winners, respectively. Remarkably, our statistical results show that, when trained on a limited (but still reasonably large) sample, the neural network most closely mimics the Borda rule, no matter which rule it was trained on. The main overall conclusion is that the necessary training sample size for a neural network varies significantly with the voting rule, and we rank a number of popular voting rules in terms of the sample size required. |
Keywords: | voting,social choice,neural networks,machine learning,Borda count |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:kitwps:145&r=all |
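For concreteness, a minimal sketch of the three voting rules used as training labels. Profiles are rankings (one row per voter, candidates listed from most to least preferred); the example profile is made up and chosen so that the plurality and Borda winners differ.

```python
# The three label-generating rules: plurality, Borda, Condorcet.
import numpy as np

def plurality_winner(profile):
    top = profile[:, 0]                       # each voter's first choice
    return np.bincount(top, minlength=profile.max() + 1).argmax()

def borda_winner(profile):
    n_voters, m = profile.shape
    scores = np.zeros(m)
    for rank in range(m):                     # rank 0 earns m-1 points, etc.
        np.add.at(scores, profile[:, rank], m - 1 - rank)
    return scores.argmax()

def condorcet_winner(profile):
    n_voters, m = profile.shape
    pos = np.argsort(profile, axis=1)         # pos[i, c] = rank of c for voter i
    for c in range(m):
        if all((pos[:, c] < pos[:, d]).sum() > n_voters / 2
               for d in range(m) if d != c):
            return c
    return None                               # a Condorcet winner may not exist

profile = np.array([[0, 1, 2], [0, 1, 2], [1, 2, 0], [2, 1, 0], [1, 0, 2]])
print(plurality_winner(profile),              # 0
      borda_winner(profile),                  # 1
      condorcet_winner(profile))              # 1
```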
By: | Bojing Feng; Wenfang Xue; Bindang Xue; Zeyu Liu |
Abstract: | Credit rating is an analysis of the credit risks associated with a corporation, which reflects the level of riskiness and reliability of an investment. Many studies have emerged that apply machine learning techniques to corporate credit rating; however, the ability of these models is limited by the enormous amount of data in financial statement reports. In this work, we analyze the performance of traditional machine learning models in predicting corporate credit ratings. To exploit the power of convolutional neural networks on this large body of financial data, we propose a novel end-to-end method, Corporate Credit Ratings via Convolutional Neural Networks (CCR-CNN for brevity). In the proposed model, each corporation is transformed into an image; based on this image, the CNN can capture complex feature interactions in the data that are difficult to reveal with previous machine learning models. Extensive experiments on a Chinese public-listed corporate rating dataset that we built show that CCR-CNN consistently outperforms state-of-the-art methods. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.03744&r=all |
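A minimal sketch of the core idea, assuming (as an illustration only) 64 scaled financial ratios per corporation arranged into an 8×8 single-channel image and a small PyTorch CNN; the authors' actual architecture and feature set are not specified in the abstract.

```python
# Sketch: corporation -> "image" -> CNN -> rating logits. Illustrative
# shapes only, not the CCR-CNN architecture from the paper.
import torch
import torch.nn as nn

N_FEATURES, SIDE, N_RATINGS = 64, 8, 5        # 64 ratios -> 8x8 image

class CCRCNNSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, N_RATINGS)

    def forward(self, x):                     # x: (batch, n_features)
        img = x.view(-1, 1, SIDE, SIDE)       # corporation -> 1-channel image
        return self.head(self.features(img).flatten(1))

model = CCRCNNSketch()
ratios = torch.randn(4, N_FEATURES)           # stand-in for scaled ratios
print(model(ratios).shape)                    # (4, N_RATINGS) rating logits
```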
By: | Gao, Hang; Cheng, Shenyang; Zhang, Michael |
Abstract: | Model-based variable speed limit (VSL) control has been proven effective in previous studies at resolving capacity drop and time delay at a single recurrent bottleneck. This project applies VSL control to multi-segment, multi-bottleneck traffic corridors with the objective of reducing fuel consumption and greenhouse gas emissions. Based on a comprehensive review of existing methods, we develop and compare two fuel-consumption-centered VSL control (FC-VSL) strategies: flow-based control and density-based control. These control strategies are implemented in SUMO, a microscopic traffic simulation package, on a 10-mile-long freeway section. Results show that density-based control reduces fuel consumption and gas emissions significantly at the cost of a slight increase in travel time. Flow-based control, in contrast, reduces congestion and emissions in the downstream segments but transfers the congestion to the segments upstream of the controlled segments, resulting in overall performance that is worse than the density-based FC-VSL and no better than imposing static speed limits. |
Keywords: | Engineering, Variable speed limit, traffic throughput, emissions and fuel consumption, microscopic simulation, probe vehicles |
Date: | 2020–10–01 |
URL: | http://d.repec.org/n?u=RePEc:cdl:itsdav:qt6th037wz&r=all |
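A minimal sketch of what a density-based VSL feedback rule could look like: each control interval compares the measured density on the protected segment against a critical density and adjusts the posted limit proportionally. The gains, bounds, and 5-mph posting increments are assumptions for illustration, not the FC-VSL design from the paper.

```python
# Illustrative density-based VSL feedback step (not the paper's controller).
def density_vsl_step(v_current, density, rho_crit=28.0, gain=0.5,
                     v_min=30.0, v_max=65.0, step=5.0):
    """One control interval: return the new posted speed limit in mph.

    v_current : current posted speed limit (mph)
    density   : measured density on the protected segment (veh/mile/lane)
    """
    error = density - rho_crit
    v_new = v_current - gain * error          # proportional feedback
    v_new = max(v_min, min(v_max, v_new))     # keep limit within legal bounds
    return round(v_new / step) * step         # post in 5-mph increments

v = 65.0
for rho in [20, 26, 33, 38, 31, 24]:          # a congestion wave passing by
    v = density_vsl_step(v, rho)
    print(f"density={rho:>2} veh/mi/ln -> posted limit {v:.0f} mph")
```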
By: | Glyn Wittwer; Kym Anderson |
Abstract: | This paper describes a new empirical model of the world’s markets for alcoholic beverages and, to illustrate its usefulness, reports results from projections of those markets from 2016–18 to 2025 under various scenarios. It not only revises and updates a model of the world’s wine markets (Wittwer, Berger and Anderson, 2003) but also adds beer and spirits so as to capture the substitutability of those beverages among consumers. The model has some of the features of an economywide computable general equilibrium model, with international trade linking the markets of its 44 countries and seven residual regions. It is used to simulate prospects for these markets by 2025 (business-as-usual), which points to Asia’s rise. Then two alternative scenarios to 2025 are explored: one simulates the withdrawal of the United Kingdom from the European Union (EU); the other simulates the effects of the recent imposition of additional 25% tariffs on selected beverages imported by the United States from several EU member countries. Future applications of the model are discussed in the concluding section. |
Keywords: | CGE modeling; wine; beer; spirits; changes in beverage preferences; international trade in beverages; premiumization of alcohol markets |
JEL: | C53 F11 F17 Q13 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:pas:papers:2020-05&r=all |
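A worked illustration of how substitutability among beverages is typically captured in such models: CES demands over wine, beer, and spirits under a fixed beverage budget. The elasticity, taste weights, and prices below are made-up numbers, not values from the model described in the paper.

```python
# CES demand illustration: x_i = alpha_i^sigma * p_i^(-sigma) * m / P^(1-sigma)
# with price index P = (sum_i alpha_i^sigma * p_i^(1-sigma))^(1/(1-sigma)).
import numpy as np

sigma = 1.8                                   # elasticity of substitution (assumed)
alpha = np.array([0.4, 0.4, 0.2])             # taste weights: wine, beer, spirits
p = np.array([10.0, 4.0, 15.0])               # consumer prices (assumed)
budget = 100.0

P = (alpha**sigma @ p**(1 - sigma)) ** (1 / (1 - sigma))
x = alpha**sigma * p**(-sigma) * budget / P ** (1 - sigma)

for name, q, price in zip(["wine", "beer", "spirits"], x, p):
    print(f"{name:8s} quantity {q:6.2f}, spend {q * price:6.2f}")
print(f"check: total spend = {x @ p:.2f}")    # equals the budget
```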
By: | Majid Eskandarpour (LEM - Lille économie management, UMR 9221, Université de Lille / CNRS; IESEG School of Management); Pierre Dejax (LS2N - Laboratoire des Sciences du Numérique de Nantes; IMT Atlantique); Olivier Péton (LS2N - Laboratoire des Sciences du Numérique de Nantes; IMT Atlantique) |
Abstract: | In this paper, we propose a bi-objective MILP formulation to minimize logistics costs as well as CO2 emissions in a supply chain network design problem with multiple layers of facilities, technology levels and transportation mode decisions. The proposed model investigates the trade-off between cost and CO2 emissions across supply chain activities (i.e., raw material supply, manufacturing, warehousing, and transportation). To this end, a multi-directional local search (MDLS) metaheuristic is developed. The proposed method provides a limited set of non-dominated solutions ranging from a purely cost-effective solution to a purely environmentally effective one. Each iteration of the MDLS consists of performing local searches from all non-dominated solutions, using a Large Neighborhood Search (LNS) algorithm. Extensive experiments based on randomly generated instances of various sizes and features are described. Three classic performance measures are used to compare the sets of non-dominated solutions obtained by the MDLS algorithm and by directly solving the MILP model with the epsilon-constraint approach. The paper concludes with managerial insights about the impact of using greener technology on the supply chain. |
Date: | 2019–10 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02407741&r=all |
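A minimal sketch of the MDLS loop on a toy bi-objective problem: from every archived non-dominated solution, run one local search per objective (cost and CO2) and keep the merged non-dominated set. The random-perturbation local search below stands in for the paper's LNS operators on actual supply chain designs.

```python
# Toy multi-directional local search: one local search per objective,
# launched from every archived non-dominated solution.
import random

random.seed(1)

def objectives(x):                            # toy bi-objective: cost vs. emissions
    cost = sum(xi**2 for xi in x)
    co2 = sum((xi - 2.0)**2 for xi in x)
    return cost, co2

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

def non_dominated(solutions):
    return [s for s in solutions
            if not any(dominates(objectives(t), objectives(s))
                       for t in solutions)]

def local_search(x, direction, n_moves=30):
    """Greedy search improving one objective (stand-in for an LNS)."""
    best = list(x)
    for _ in range(n_moves):
        cand = [xi + random.uniform(-0.3, 0.3) for xi in best]
        if objectives(cand)[direction] < objectives(best)[direction]:
            best = cand
    return best

archive = non_dominated([[random.uniform(0, 2) for _ in range(3)]
                         for _ in range(4)])
for _ in range(20):                           # MDLS iterations
    found = [local_search(s, d) for s in archive for d in (0, 1)]
    archive = non_dominated(archive + found)

for s in sorted(archive, key=lambda s: objectives(s)[0])[:5]:
    print("cost %.2f  co2 %.2f" % objectives(s))
```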
By: | Lara Marie Demajo; Vince Vella; Alexiei Dingli |
Abstract: | With the ever-growing achievements in Artificial Intelligence (AI) and the recent boosted enthusiasm in Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. Credit scoring helps financial experts make better decisions regarding whether or not to accept a loan application, such that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data challenges faced by such credit scoring models, recent regulations such as the 'right to explanation' introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability to ensure that algorithmic decisions are understandable and coherent. An interesting concept that has been recently introduced is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides the different explanations (i.e. global, local feature-based and local instance-based) that are required by different people in different situations. Evaluation through functionally-grounded, application-grounded and human-grounded analysis shows that the explanations provided are simple and consistent, and satisfy the six predetermined hypotheses testing for correctness, effectiveness, easy understanding, detail sufficiency and trustworthiness. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.03749&r=all |
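A minimal sketch of the classification-plus-explanation pipeline using XGBoost and SHAP on synthetic imbalanced data; the synthetic features stand in for the HELOC / Lending Club variables, and SHAP here illustrates only the feature-based part of the paper's 360-degree explanation framework.

```python
# XGBoost classifier + SHAP feature-based explanations, on synthetic data.
import numpy as np
import shap
import xgboost
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=10, n_informative=6,
                           weights=[0.8], random_state=0)  # imbalanced classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = xgboost.XGBClassifier(n_estimators=200, max_depth=4,
                              eval_metric="logloss")
model.fit(X_tr, y_tr)
print("test accuracy:", model.score(X_te, y_te))

# Local feature-based explanation: SHAP values for one applicant,
# plus a global view from mean absolute SHAP values.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("top features for applicant 0:",
      np.argsort(-np.abs(shap_values[0]))[:3])
print("global importance ranking:",
      np.argsort(-np.abs(shap_values).mean(axis=0))[:3])
```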
By: | Sergio A. Pernice |
Abstract: | In this document we present the Principal Component Analysis (PCA) technique. It is part of a series of documents on machine learning, and of the content of the course “Métodos de Machine Learning para Economistas” in the Master’s in Economics at UCEMA. |
Keywords: | Principal component analysis, unsupervised learning |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:cem:doctra:770&r=all |
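A minimal PCA sketch matching the teaching-note topic: center the data, take the SVD, and project onto the leading components. The data below are synthetic.

```python
# PCA via the SVD of the centered data matrix.
import numpy as np

rng = np.random.default_rng(0)
# Correlated 2-factor data embedded in 5 dimensions
latent = rng.standard_normal((200, 2))
mixing = rng.standard_normal((2, 5))
X = latent @ mixing + 0.1 * rng.standard_normal((200, 5))

Xc = X - X.mean(axis=0)                       # center each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = s**2 / np.sum(s**2)               # variance share per component
print("explained variance ratios:", np.round(explained, 3))

k = 2
scores = Xc @ Vt[:k].T                        # projection onto top-k PCs
print("reduced data shape:", scores.shape)
```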
By: | Lucian-Ionut Gavrila; Alexandru Popa |
Abstract: | The concept of clearing or netting, as defined in the glossaries of the European Central Bank, has a great impact on a country's economy, influencing the exchanges and interactions between companies. In short, netting is an alternative to the usual way in which companies make payments to each other: an agreement in which each party sets off amounts it owes against amounts owed to it. Based on the amounts two or more parties owe each other, the payments are replaced by a direct settlement. In this paper we introduce a set of graph algorithms that provide optimal netting solutions at the scale of a country's economy. The algorithms compute results efficiently and are tested on invoice data provided by the Romanian Ministry of Economy. Our results show that classical graph algorithms are still capable of solving very important modern problems. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.05564&r=all |
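A minimal sketch of one classical netting idea on an invoice graph: collapse all invoices to each company's net position and settle creditors from debtors, which replaces many gross payments with a few direct settlements. This is a generic illustration, not the specific algorithms or data of the paper.

```python
# Net-position settlement on a toy invoice graph.
from collections import defaultdict

invoices = [                                  # (debtor, creditor, amount)
    ("A", "B", 100), ("B", "C", 80), ("C", "A", 70), ("A", "C", 30),
]

net = defaultdict(float)                      # positive = is owed money overall
for debtor, creditor, amount in invoices:
    net[debtor] -= amount
    net[creditor] += amount

debtors = sorted((k, -v) for k, v in net.items() if v < 0)
creditors = sorted((k, v) for k, v in net.items() if v > 0)

settlements, i, j = [], 0, 0
while i < len(debtors) and j < len(creditors):
    d, owes = debtors[i]
    c, due = creditors[j]
    pay = min(owes, due)                      # match debtor against creditor
    settlements.append((d, c, pay))
    debtors[i] = (d, owes - pay)
    creditors[j] = (c, due - pay)
    if debtors[i][1] == 0: i += 1
    if creditors[j][1] == 0: j += 1

print("gross flow:", sum(a for _, _, a in invoices))
print("netted flow:", sum(a for _, _, a in settlements), settlements)
```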
By: | Kleinebrahm, Max; Torriti, Jacopo; McKenna, Russell; Ardone, Armin; Fichtner, Wolf |
Abstract: | Models simulating household energy demand based on different occupant and household types and their behavioral patterns have received increasing attention in recent years due to the need to better understand the fundamental characteristics that shape the demand side. Most of the models described in the literature are based on Time Use Survey data and Markov chains. Due to the nature of the underlying data and the Markov property, long-term dependencies in occupant behavior spanning several days cannot be adequately captured. An accurate mapping of long-term dependencies in behavior is of increasing importance, e.g. for determining the flexibility potential of individual households, which is urgently needed to compensate for supply-side fluctuations in renewable-based energy systems. The aim of this study is to bridge the gap between social practice theory, energy-related activity modelling and novel machine learning approaches. The weaknesses of existing approaches are addressed by combining time use survey data with mobility data, which provide information about individual mobility behavior over periods of one week. In social practice theory, emphasis is placed on the sequencing and repetition of practices over time, which suggests that practices have a memory. Transformer models based on the attention mechanism and long short-term memory (LSTM) based neural networks define the state of the art in natural language processing (NLP) and are introduced in this paper, for the first time, for the generation of weekly activity profiles. In a first step, an autoregressive model is presented which generates synthetic weekly mobility schedules of individual occupants, thereby capturing long-term dependencies in mobility behavior. In a second step, an imputation model enriches the weekly mobility schedules with detailed information about energy-relevant at-home activities. The weekly activity profiles form the basis for multiple use cases, one of which is modelling consistent electricity, heat and mobility demand profiles of households. The approach developed provides the basis for making high-quality weekly activity data available to the general public without complex application procedures. |
Keywords: | activity modelling,mobility behavior,neural networks,synthetic data |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:kitiip:49&r=all |
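A minimal sketch of the autoregressive first step described above: an LSTM sampling a week-long sequence of activity states one step at a time. The activity set, hourly resolution, and untrained weights are illustrative; the paper trains on time-use and mobility data and also considers transformer models.

```python
# Autoregressive activity-sequence sampling with an (untrained) LSTM.
import torch
import torch.nn as nn

ACTIVITIES = ["home", "work", "travel", "other"]
STEPS_PER_WEEK = 7 * 24                       # hourly resolution, one week

class ActivityLSTM(nn.Module):
    def __init__(self, n_states=len(ACTIVITIES), hidden=64):
        super().__init__()
        self.embed = nn.Embedding(n_states, 16)
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_states)

    @torch.no_grad()
    def sample_week(self, start_state=0):
        state = torch.tensor([[start_state]])
        hidden, seq = None, [start_state]
        for _ in range(STEPS_PER_WEEK - 1):
            h, hidden = self.lstm(self.embed(state), hidden)
            probs = self.out(h[:, -1]).softmax(dim=-1)
            state = torch.multinomial(probs, 1)  # sample next activity state
            seq.append(state.item())
        return seq

week = ActivityLSTM().sample_week()           # untrained: random-ish schedule
print([ACTIVITIES[s] for s in week[:12]])     # first 12 hours of the week
```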
By: | Michal Balcerak; Thomas Schmelzer |
Abstract: | Rather than directly predicting future prices or returns, we follow a more recent trend in asset management and classify the state of a market based on labels. We use numerous standard labels and also construct our own. The labels rely on future data to be calculated and can be used as targets for training a market state classifier on an appropriate set of market features, e.g. moving averages. The construction of those features relies on their label separation power: only a set of reasonably distinct features can approximate the labels. For each label we use a specific neural network to classify the state using the market features from our feature space. Each classifier gives a probability to buy or to sell, and combining all their recommendations (here only done in a linear way) results in what we call a trading strategy. There are many such strategies, and some of them are somewhat dubious and misleading. We therefore construct our own metric based on past returns, penalizing a low number of transactions and small capital involvement. Only the top-scoring trading strategies end up in the final ensembles. Using the Bitcoin market, we show that the strategy ensembles outperform in both returns and risk-adjusted returns in the out-of-sample period. Moreover, we demonstrate a clear correlation between past success (as measured by our custom metric) and future performance. |
Date: | 2020–12 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2012.03078&r=all |
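A minimal sketch of the two ingredients the abstract describes: a label computed from future data (here simply the sign of the forward return over a horizon) and a strategy score that rewards past returns while penalizing few transactions and low capital involvement. The thresholds and penalty weights are illustrative, not the authors' metric.

```python
# Forward-looking labels plus a penalized strategy score, on synthetic prices.
import numpy as np

rng = np.random.default_rng(42)
prices = 100 * np.exp(np.cumsum(0.001 * rng.standard_normal(1000)))

def forward_return_label(prices, horizon=24, thresh=0.002):
    """+1 buy / -1 sell / 0 flat, using *future* prices (training only)."""
    fwd = prices[horizon:] / prices[:-horizon] - 1.0
    return np.sign(fwd) * (np.abs(fwd) > thresh)

def strategy_score(returns, positions, lam_trades=0.05, lam_cap=0.5):
    """Past performance, penalized for inactivity and low involvement."""
    pnl = returns * positions                 # toy: ignores execution lag
    n_trades = np.count_nonzero(np.diff(positions))
    involvement = np.mean(np.abs(positions))
    sharpe = pnl.mean() / (pnl.std() + 1e-9)
    return sharpe - lam_trades / max(n_trades, 1) - lam_cap * (1 - involvement)

labels = forward_return_label(prices)         # len(prices) - horizon labels
positions = labels                            # pretend a classifier's output
returns = np.diff(prices)[:len(positions)] / prices[:len(positions)]
print("score:", round(strategy_score(returns, positions), 4))
```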