on Computational Economics
Issue of 2021‒10‒18
fifteen papers chosen by
By: | Mahdieh Yazdani |
Abstract: | In recent years, complaints about racial discrimination in home value appraisals have been accumulating. For several decades, appraisers have estimated the sale prices of residential properties by walking through the properties, observing them, collecting data, and applying hedonic pricing models. This method, however, is costly and by nature subjective and biased. To minimize human involvement and bias in real estate appraisals and to boost the accuracy of market price prediction models, in this research we design data-efficient learning machines capable of learning and extracting the relations or patterns between the inputs (the features of a house) and the output (its value). We compare the performance of several machine learning and deep learning algorithms, specifically artificial neural networks, random forests, and k-nearest-neighbor approaches, with that of the hedonic method for house price prediction in the city of Boulder, Colorado. Even though this study uses houses in the city of Boulder, it can be generalized to the housing market in any city. The results indicate a non-linear association between dwelling features and dwelling prices. In light of these findings, this study demonstrates that random forest and artificial neural network algorithms can be better alternatives to hedonic regression analysis for predicting house prices in the city of Boulder, Colorado. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.07151&r= |
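The comparison this abstract describes maps onto a short scikit-learn experiment. The sketch below is illustrative only: the file and feature names are hypothetical, and the hedonic model is proxied by a plain linear regression on dwelling features.

```python
# Hedged sketch of the model comparison described above; data file and
# feature names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_percentage_error

df = pd.read_csv("boulder_sales.csv")                  # hypothetical dataset
X = df[["sqft", "beds", "baths", "age", "lot_size"]]   # hypothetical features
y = df["sale_price"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "hedonic (linear)": make_pipeline(StandardScaler(), LinearRegression()),
    "random forest": RandomForestRegressor(n_estimators=500, random_state=0),
    "kNN": make_pipeline(StandardScaler(), KNeighborsRegressor(n_neighbors=10)),
    "ANN": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 32),
                                      max_iter=2000, random_state=0)),
}
for name, m in models.items():
    m.fit(X_tr, y_tr)
    print(name, mean_absolute_percentage_error(y_te, m.predict(X_te)))
```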
By: | Yufei Wu; Mahmoud Mahfouz; Daniele Magazzeni; Manuela Veloso |
Abstract: | The success of machine learning models is highly reliant on the quality and robustness of representations. A lack of attention to the robustness of representations may heighten risks when data-driven machine learning models are used for trading in the financial markets. In this paper, we focus on representations of limit order book (LOB) data and discuss the opportunities and challenges of representing such data in an effective and robust manner. We analyse the issues associated with the commonly used LOB representation for machine learning models from both theoretical and experimental perspectives. Based on this, we propose new LOB representation schemes to improve the performance and robustness of machine learning models and present a guideline for future research in this area. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.05479&r= |
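For readers unfamiliar with the representation being analysed, the following sketch builds the commonly used LOB feature vector (top-k bid/ask prices and sizes) plus one possible normalisation. The snapshot format and the normalisation choice are assumptions for illustration, not the paper's proposal.

```python
# Sketch of a standard LOB feature vector; input format is assumed.
import numpy as np

def lob_features(bids, asks, k=10):
    """bids/asks: lists of (price, size) tuples, best level first."""
    feats = []
    for lvl in range(k):
        bid_p, bid_s = bids[lvl]
        ask_p, ask_s = asks[lvl]
        feats.extend([ask_p, ask_s, bid_p, bid_s])
    return np.asarray(feats)

def normalised_lob_features(bids, asks, k=10):
    """One possible scale-insensitive variant: prices relative to the mid,
    sizes as shares of total visible depth (an illustrative choice)."""
    raw = lob_features(bids, asks, k)
    mid = 0.5 * (bids[0][0] + asks[0][0])
    depth = raw[1::4].sum() + raw[3::4].sum()
    out = raw.astype(float).copy()
    out[0::4] = raw[0::4] / mid - 1.0   # ask prices relative to mid
    out[2::4] = raw[2::4] / mid - 1.0   # bid prices relative to mid
    out[1::4] = raw[1::4] / depth       # ask sizes as depth shares
    out[3::4] = raw[3::4] / depth       # bid sizes as depth shares
    return out
```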
By: | Meerza, Syed Imran Ali; Meerza, Syed Irfan Ali; Ahamed, Afsana |
Keywords: | Food Consumption/Nutrition/Food Safety, Agribusiness, Marketing |
Date: | 2021–08 |
URL: | http://d.repec.org/n?u=RePEc:ags:aaea21:314072&r= |
By: | Bernard Koch; Tim Sainburg; Pablo Geraldo; Song Jiang; Yizhou Sun; Jacob Gates Foster |
Abstract: | This review systematizes the emerging literature on causal inference using deep neural networks under the potential outcomes framework. It provides an intuitive introduction to how deep learning can be used to estimate/predict heterogeneous treatment effects and to extend causal inference to settings where confounding is non-linear, time-varying, or encoded in text, networks, and images. To maximize accessibility, we also introduce prerequisite concepts from causal inference and deep learning. The survey differs from other treatments of deep learning and causal inference in its sharp focus on observational causal estimation, its extended exposition of key algorithms, and its detailed tutorials for implementing, training, and selecting among deep estimators in TensorFlow 2, available at github.com/kochbj/Deep-Learning-for-Causal-Inference. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.04442&r= |
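As a concrete example of the kind of estimator such a review covers, here is a minimal TARNet-style architecture in TensorFlow 2 (the framework the tutorials use): a shared representation feeding separate outcome heads for treated and control units. Layer sizes and names are illustrative assumptions, not the review's reference implementation.

```python
# Minimal TARNet-style sketch for heterogeneous treatment effects.
import tensorflow as tf

def make_tarnet(n_features, hidden=64):
    x = tf.keras.Input(shape=(n_features,))
    # Shared representation phi(x):
    phi = tf.keras.layers.Dense(hidden, activation="relu")(x)
    phi = tf.keras.layers.Dense(hidden, activation="relu")(phi)
    # Separate outcome heads for control (t=0) and treated (t=1):
    h0 = tf.keras.layers.Dense(hidden, activation="relu")(phi)
    y0 = tf.keras.layers.Dense(1, name="y0")(h0)
    h1 = tf.keras.layers.Dense(hidden, activation="relu")(phi)
    y1 = tf.keras.layers.Dense(1, name="y1")(h1)
    return tf.keras.Model(x, [y0, y1])

# During training, each head's loss is masked to its own treatment group;
# a unit's estimated CATE is then y1(x) - y0(x).
```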
By: | Jiafeng Chen; Xiaohong Chen; Elie Tamer |
Abstract: | We investigate the computational performance of Artificial Neural Networks (ANNs) in semi-nonparametric instrumental variables (NPIV) models with high-dimensional covariates that are relevant to empirical work in economics. We focus on efficient estimation of and inference on expectation functionals (such as weighted average derivatives) and use optimal criterion-based procedures (sieve minimum distance, or SMD) and novel efficient score-based procedures (ES). Both classes of procedures use ANNs to approximate the unknown functions. We then provide a detailed practitioner's recipe for implementing these two classes of estimators. This involves choosing tuning parameters both for the unknown functions (which include conditional expectations) and for the estimation of the optimal weights in SMD and the Riesz representers used with the ES estimators. Next, we conduct a large set of Monte Carlo experiments that compare finite-sample performance across complicated designs involving a large set of regressors (up to 13 continuous) and various underlying nonlinearities and covariate correlations. Some takeaways from our results: 1) tuning and optimization are delicate, especially as the problem is nonconvex; 2) the various ANN architectures do not seem to matter for the designs we consider, and given proper tuning, ANN methods perform well; 3) stable inference is more difficult to achieve with ANN estimators; 4) optimal SMD-based estimators perform adequately; 5) there seems to be a gap between implementation and approximation theory. Finally, we apply ANN NPIV to estimate average price elasticities and average derivatives in two demand examples. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.06763&r= |
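To fix ideas, a stylised version of the criterion behind an SMD procedure can be written in a few lines: approximate the unknown function h with an ANN and drive to zero the projection of the residual Y - h(X) onto a sieve basis of the instruments. This is a sketch under simplifying assumptions (identity weighting, a fixed basis), not the authors' implementation.

```python
# Stylised identity-weighted SMD criterion for NPIV; illustrative only.
import tensorflow as tf

def smd_loss(h, x, y, basis_z):
    """h: ANN mapping x -> h(x); basis_z: n x k sieve basis of instruments."""
    r = y - tf.squeeze(h(x), axis=-1)            # residual Y - h(X), shape (n,)
    # Least-squares projection of the residual onto the instrument basis,
    # i.e. a sieve estimate of E[Y - h(X) | Z] at the sample points:
    coef = tf.linalg.lstsq(basis_z, r[:, None])  # (k, 1)
    proj = tf.squeeze(basis_z @ coef, axis=-1)   # (n,)
    # Drive the projected residual to zero:
    return tf.reduce_mean(proj ** 2)
```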
By: | Muhammad Apriandito Arya Saputra; Andry Alamsyah; Fajar Ibnu Fatihan |
Abstract: | Top-line hotels are now shifting to digital means of understanding their customers in order to maintain and ensure satisfaction. Rather than relying on conventional methods such as written reviews or interviews, hotels are now investing heavily in Artificial Intelligence, particularly Machine Learning solutions. Analysis of online customer reviews changes the way companies make decisions more effectively than conventional analysis does. The purpose of this research is to measure hotel service quality. The proposed approach analyses reviews of the top-5 luxury hotels in Indonesia that appear on the online travel site TripAdvisor, drawn from its Best of 2018 section, along service quality dimensions. We use a model based on a simple Bayesian classifier to classify each customer review into one of the service quality dimensions. Our model separates the classes properly, as shown by accuracy, kappa, recall, precision, and F-measure measurements. To uncover latent topics in customers' opinions, we use topic modeling. We find that the most common issue concerns responsiveness, which received the lowest percentage compared with the other dimensions. Our research gives end customers a faster overview of hotel rankings based on service quality, built from a summary of previous online reviews. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.06133&r= |
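The classification-plus-topic-modelling pipeline described is straightforward to prototype; the sketch below uses scikit-learn with invented example reviews and service-quality dimension labels, not the paper's data.

```python
# Naive Bayes review classifier plus LDA topic modelling; examples invented.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.decomposition import LatentDirichletAllocation

reviews = ["staff responded slowly to our request",
           "the room was spotless and the lobby looked modern",
           "reception handled our complaint immediately",
           "old furniture and a worn carpet in the hallway"]
labels = ["responsiveness", "tangibles", "responsiveness", "tangibles"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(reviews, labels)
print(clf.predict(["no one came when we called the front desk"]))

# Topic modelling to surface latent issues across all reviews:
vec = CountVectorizer(stop_words="english")
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(vec.fit_transform(reviews))
```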
By: | Harold D Chiang; Yukun Ma; Joel Rodrigue; Yuya Sasaki |
Abstract: | This paper presents novel methods and theories for estimation and inference about parameters in econometric models that use machine learning of nuisance parameters when data are dyadic. We propose a dyadic cross-fitting method to remove over-fitting biases under arbitrary dyadic dependence. Together with the use of Neyman orthogonal scores, this novel cross-fitting method enables root-$n$ consistent estimation and inference that are robust to dyadic dependence. We illustrate an application of our general framework to high-dimensional network link formation models. Applying this method to empirical data on international economic networks, we reexamine determinants of free trade agreements (FTA), viewed as links formed in dyads composed of world economies. We document that standard methods may lead to misleading conclusions for numerous classic determinants of FTA formation due to biased point estimates or standard errors that are too small. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.04365&r= |
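The core idea of dyadic cross-fitting can be illustrated with the fold construction alone: partition the nodes into groups, evaluate on dyads whose endpoints fall in a given pair of groups, and train nuisances only on dyads whose endpoints both lie outside that pair. This is a schematic reading of the procedure, not the authors' code.

```python
# Schematic dyadic cross-fitting fold construction; illustrative only.
import itertools
import numpy as np

def dyadic_folds(n_nodes, n_parts=4, seed=0):
    """Yield (train_dyads, eval_dyads) pairs for dyadic cross-fitting."""
    rng = np.random.default_rng(seed)
    part = rng.integers(0, n_parts, size=n_nodes)          # node -> group
    dyads = [(i, j) for i in range(n_nodes)
             for j in range(n_nodes) if i != j]
    for k, l in itertools.combinations_with_replacement(range(n_parts), 2):
        # Evaluate on dyads whose endpoint groups are exactly {k, l}:
        eval_set = [d for d in dyads
                    if (part[d[0]], part[d[1]]) in {(k, l), (l, k)}]
        # Train nuisances on dyads with both endpoints outside {k, l}:
        train_set = [d for d in dyads
                     if part[d[0]] not in (k, l) and part[d[1]] not in (k, l)]
        yield train_set, eval_set
```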
By: | Stefan Nabernegg (University of Graz, Austria) |
Abstract: | One major barrier to the feasibility of national climate policies is limited public acceptance arising from distributional concerns. In the literature, different approaches are used to investigate the incidence of climate policies across income groups. We apply three approaches to incidence analysis, which vary in data and computational intensity, to the case of Austria: (i) household fuel expenditure analysis, (ii) household carbon footprints, and (iii) macroeconomic general equilibrium modelling with heterogeneous households. As concerns about heterogeneity within low-income groups (horizontal equity) have recently been articulated in the literature as a main objection to effective redistributive revenue recycling, we compare a pricing instrument (a fuel tax) with two non-pricing instruments. We find that expenditure analysis, which does not consider embodied emissions in consumption, overestimates both the regressivity and the within-group variation of carbon pricing instruments. An economy-wide fuel tax without redistributive revenue recycling shows a slightly regressive distributional effect in the general equilibrium analysis, driven by households' use of income. This is well approximated by the carbon footprint analysis, as income-source effects play a minor role for this policy. For the two examples of non-pricing policies, we show that income-source effects, which can only be evaluated in a closed macroeconomic model, strongly co-determine the mostly progressive distributional effect. We therefore derive three general aspects that determine the incidence of climate policies: (i) the consumption patterns of households and the corresponding emission intensities of consumption, (ii) the existing distribution and composition of income, and (iii) the specific policy and policy design considered. For the feasibility of climate policy, we conclude that the evaluation, as well as the clear communication, of distributional effects is essential, as policy acceptance depends on the perceived individual outcome. |
Keywords: | policy incidence; carbon footprint; carbon pricing; climate change; Computable General Equilibrium; distribution; fuel tax; heterogeneous households; Multi-Regional Input-Output simulation |
JEL: | C43 E01 E31 R31 |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:grz:wpaper:2021-12&r= |
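As a sense of scale for the first, least data-intensive approach, expenditure-based incidence reduces to a simple calculation: the tax paid on fuel as a share of household income, compared across income groups. The numbers below are invented for illustration.

```python
# Toy expenditure-based incidence of a fuel tax; all numbers invented.
import pandas as pd

hh = pd.DataFrame({
    "income":     [14_000, 26_000, 38_000, 55_000, 90_000],  # by quintile
    "fuel_spend": [1_100,  1_500,  1_800,  2_100,  2_600],
})
tax_rate = 0.20  # hypothetical ad valorem fuel tax
hh["burden_share"] = tax_rate * hh["fuel_spend"] / hh["income"]
print(hh)  # a falling burden_share across rows indicates regressivity
```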
By: | Manuel Arellano (CEMFI); Stéphane Bonhomme (University of Chicago); Micole De Vera (CEMFI); Laura Hospido (Banco de España and IZA); Siqi Wei (CEMFI) |
Abstract: | In this paper we use administrative data from social security records to study income dynamics and income-risk inequality in Spain between 2005 and 2018. We construct individual measures of income risk as functions of past employment history, income, and demographics. Focusing on males, we document that income risk is highly unequal in Spain: more than half of the population has close to perfect predictability of income, while some face considerable uncertainty. Income risk is inversely related to income and age, and income-risk inequality increases markedly in the recession. These findings are robust to a variety of specifications, including using neural networks for prediction and allowing for individual unobserved heterogeneity. |
Keywords: | Spain, income dynamics, administrative data, income risk, inequality |
JEL: | D31 E24 E31 J31 |
Date: | 2021–09 |
URL: | http://d.repec.org/n?u=RePEc:bde:wpaper:2136&r= |
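A simple way to operationalise an individual income-risk measure of this kind is to predict conditional income quantiles from past history and demographics and take the interdecile spread. The sketch below uses gradient boosting rather than the paper's specifications, and the file and feature names are hypothetical.

```python
# Income risk as a conditional interdecile spread; data and features assumed.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

df = pd.read_csv("panel.csv")   # hypothetical worker-year panel
X = df[["income_lag1", "income_lag2", "days_employed_lag1", "age"]]
y = df["log_income"]

q10 = GradientBoostingRegressor(loss="quantile", alpha=0.10).fit(X, y)
q90 = GradientBoostingRegressor(loss="quantile", alpha=0.90).fit(X, y)
risk = q90.predict(X) - q10.predict(X)   # wide spread = high income risk
```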
By: | Gabriel Borrageiro; Nick Firoozye; Paolo Barucca |
Abstract: | We conduct a detailed experiment on major cash FX pairs, accurately accounting for transaction and funding costs. These sources of profit and loss, including the price trends that occur in the currency markets, are made available to our recurrent reinforcement learner via a quadratic utility, and it learns to target a position directly. We improve upon earlier work by casting the problem of learning to target a risk position in an online learning context. This online learning occurs sequentially in time, but also in the form of transfer learning: we transfer the output of radial basis function hidden processing units, whose means, covariances, and overall size are determined by Gaussian mixture models, to the recurrent reinforcement learner and a baseline momentum trader. Thus the intrinsic nature of the feature space is learnt and made available to the upstream models. The recurrent reinforcement learning trader achieves an annualised portfolio information ratio of 0.52 with a compound return of 9.3%, net of execution and funding cost, over a 7-year test set. This is despite forcing the model to trade at the close of the trading day (5 pm EST), when trading costs are statistically at their most expensive. These results are comparable with those of the baseline momentum trader, reflecting the low-interest-differential environment since the 2008 financial crisis and the very obvious currency trends since then. The recurrent reinforcement learner nevertheless maintains an important advantage: the model's weights can be adapted to reflect the different sources of profit-and-loss variation. This is demonstrated visually by a USDRUB trading agent, which learns to target different positions that reflect trading in the absence or presence of cost. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.04745&r= |
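The recurrent position-targeting logic is compact enough to sketch. Below is a Moody-Saffell-style recurrent reinforcement learning utility: the position is a recurrent function of features (e.g., the RBF outputs mentioned above) and the previous position, and the trader maximises a quadratic utility of returns net of transaction costs. This is a stylised reconstruction, not the paper's model.

```python
# Stylised recurrent reinforcement learning (RRL) trading utility.
import numpy as np

def episode_utility(w, feats, rets, cost=1e-4, risk_aversion=0.1):
    """feats: T x d feature matrix (e.g. RBF outputs); rets: T price returns.
    w: d feature weights plus one recurrent weight on the previous position."""
    f_prev, total = 0.0, 0.0
    for t in range(len(rets)):
        f = np.tanh(feats[t] @ w[:-1] + w[-1] * f_prev)   # target position
        pnl = f_prev * rets[t] - cost * abs(f - f_prev)   # return net of cost
        total += pnl - 0.5 * risk_aversion * pnl ** 2     # quadratic utility
        f_prev = f
    return total

# Weights can be updated online by ascending this utility, e.g. with a
# finite-difference gradient, stepping through the data sequentially.
```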
By: | Romano, Stefania; Martinez-Heras, Jose; Raponi, Francesco Natalini; Guidi, Gregorio; Gottron, Thomas |
Abstract: | In carrying out its banking supervision tasks as part of the Single Supervisory Mechanism (SSM), the European Central Bank (ECB) collects and disseminates data on significant and less significant institutions. To ensure harmonised supervisory reporting standards, the data are represented through the European Banking Authority’s data point model, which defines all the relevant business concepts and the validation rules. For the purpose of data quality assurance and assessment, ECB experts may implement additional plausibility checks on the data. The ECB is constantly seeking ways to improve these plausibility checks in order to detect suspicious or erroneous values and to provide high-quality data for the SSM. |
JEL: | C18 C63 C81 E58 G28 |
Keywords: | machine learning, plausibility checks, quality assurance, supervisory data, validation rules |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbsps:202141&r= |
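One plausible way to prototype machine-learning plausibility checks of this kind is unsupervised outlier detection over reported indicators. The sketch below uses an isolation forest as an illustrative choice, with hypothetical column names; it is not the ECB's actual method.

```python
# Outlier-based plausibility check sketch; data and columns hypothetical.
import pandas as pd
from sklearn.ensemble import IsolationForest

df = pd.read_csv("supervisory_returns.csv")   # hypothetical reporting data
features = df[["cet1_ratio", "leverage_ratio", "npl_ratio", "total_assets"]]

model = IsolationForest(contamination=0.01, random_state=0).fit(features)
df["suspicious"] = model.predict(features) == -1   # True = flag for review
print(df.loc[df["suspicious"]])
```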
By: | Vîntu, Denis |
Abstract: | The article describes a dynamic general equilibrium model for the Republic of Moldova in the context of declining oil prices and COVID-19. We introduce an intergenerational model with a stochastic component, in which each self-employed agent is described, and adapt the model to simulate a tax reform: a transition from the current progressive system to a flat tax. We assume four population cohorts, selected by level of education (secondary, high school, university, and lifelong learning), that pay taxes in a system based on social solidarity. A first conclusion is that the tax system with four rates (12, 15, 19, and 23%) best approaches the Pareto-type optimum, as opposed to the flat tax, which respects the dynamic equilibrium. Public budget revenues are simulated in an IS-LM-Laffer framework, and the forecast of budget accumulation is made using four distinct prediction models: a naïve random walk, ARIMA, a univariate autoregression (AR), and a vector error correction model (VECM). The empirical testing suggests that, unlike complicated models that have difficulty beating the naïve random walk, including monetary and fiscal indicators in the linear regressions and adding structural forms makes some model parameters quite significant. Of these models, the univariate AR appears closest to the economic reality of the country and is the most relevant for predicting the output gap as well as the stochastic component: the base interest rate of the NBM's monetary policy. |
Keywords: | fiscal reform, monetary policy, cross-country convergence, prediction and forecasting methods, dynamic general equilibrium model, Pareto optimal balance, ARIMA modeling, time series analysis, Box–Jenkins method |
JEL: | C10 C15 E23 E31 E52 E62 |
Date: | 2021–04–30 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:110113&r= |
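The four-model forecast comparison maps directly onto statsmodels; the sketch below is illustrative, with hypothetical series names and arbitrary lag orders rather than the article's specifications.

```python
# Four-model forecast comparison sketch; data and orders assumed.
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.ar_model import AutoReg
from statsmodels.tsa.vector_ar.vecm import VECM

df = pd.read_csv("moldova_macro.csv", index_col=0)   # hypothetical data
rev = df["budget_revenue"]

naive = rev.iloc[-1]                                    # random walk forecast
ar = AutoReg(rev, lags=4).fit().forecast(4)             # univariate AR
arima = ARIMA(rev, order=(1, 1, 1)).fit().forecast(4)   # ARIMA(1,1,1)
vecm = VECM(df[["budget_revenue", "rate", "cpi"]],      # VECM, 2 lag diffs,
            k_ar_diff=2, coint_rank=1).fit().predict(4) # cointegration rank 1
```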
By: | Leo Ardon; Nelson Vadori; Thomas Spooner; Mengda Xu; Jared Vann; Sumitra Ganesh |
Abstract: | We present a new financial framework in which two families of RL-based agents, representing the Liquidity Providers and Liquidity Takers, learn simultaneously to satisfy their objectives. Thanks to a parametrized reward formulation and the use of Deep RL, each group learns a shared policy able to generalize and interpolate over a wide range of behaviors. This is a step towards a fully RL-based market simulator that replicates complex market conditions and is particularly suited to studying the dynamics of the financial market under various scenarios. |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2110.06829&r= |
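The "parametrized reward formulation" can be made concrete with a small example: one family of reward functions indexed by preference coefficients, so that a policy conditioned on those coefficients interpolates over a range of behaviours. The functional form below is an assumption for illustration, not the paper's reward.

```python
# Hypothetical parametrized liquidity-provider reward.
def lp_reward(pnl, inventory, spread_captured, risk_aversion, inv_penalty):
    """Reward indexed by (risk_aversion, inv_penalty) preference parameters."""
    return (spread_captured + pnl
            - risk_aversion * pnl ** 2        # penalise PnL variance
            - inv_penalty * inventory ** 2)   # penalise inventory risk

# Sampling different (risk_aversion, inv_penalty) values per episode, and
# feeding them to the policy as inputs, yields one shared policy that
# generalises across a range of agent behaviours.
```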
By: | Danielsson, Jon; Macrae, Robert; Uthemann, Andreas |
Abstract: | Artificial intelligence (AI) is rapidly changing how the financial system is operated, taking over core functions for reasons of both cost savings and operational efficiency. AI will assist both risk managers and the financial authorities. However, it can also destabilize the financial system, creating new tail risks and amplifying existing ones due to procyclicality, unknown unknowns, the need for trust, and optimization against the system. |
Keywords: | ES/K002309/1; EP/P031730/1; UKRI fund |
JEL: | F3 G3 |
Date: | 2021–08–28 |
URL: | http://d.repec.org/n?u=RePEc:ehl:lserod:111601&r= |
By: | Zacharias Bragoudakis (Bank of Greece and National and Kapodistrian University of Athens); Dimitrios Panas (Tilburg University and Systemic RM) |
Abstract: | An essential dilemma in economics that has yielded ambiguous answers is whether governments should spend more in recessions. This paper provides an extension of the work of Ramey & Zubairy (2018) for the US economy, according to which government spending multipliers are below unity, especially when the economy experiences severe slack. Their work, however, suffered from some limitations with respect to invertibility and a weak-instrument problem. The contribution of this paper is twofold. First, it provides evidence that a triple lasso approach to lag selection is a useful tool for removing the invertibility issues and the weak-instrument problem. Second, the main results using the triple lasso approach suggest multipliers below unity in most cases, with no evidence of differences between different states of the economy. Nevertheless, when the code in Ramey & Zubairy (2018) is re-run with WWII excluded, multipliers are above unity in both the military-news and Blanchard-Perotti specifications, contradicting their baseline findings and providing evidence that government spending is more effective in recessions. |
Keywords: | government spending; fiscal multipliers; debiased machine learning; triple lasso |
JEL: | C52 E62 H50 N42 |
Date: | 2021–10 |
URL: | http://d.repec.org/n?u=RePEc:bog:wpaper:292&r= |
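A stylised reading of lasso-based lag selection for a local projection looks as follows: lasso the outcome, the policy variable, and the instrument on the candidate lags (the "triple" selection), keep the union of selected lags, and fit the projection with that union (post-lasso). This is a schematic sketch under assumed variable names, not the authors' code.

```python
# Schematic triple-lasso lag selection for a local projection; data assumed.
import pandas as pd
from sklearn.linear_model import LassoCV, LinearRegression

def lag_matrix(s, max_lag):
    """Columns of lags 1..max_lag of a series."""
    return pd.concat({f"{s.name}_l{k}": s.shift(k)
                      for k in range(1, max_lag + 1)}, axis=1)

df = pd.read_csv("us_fiscal.csv")   # hypothetical quarterly data
lags = pd.concat([lag_matrix(df["gdp"], 4),
                  lag_matrix(df["gov_spend"], 4),
                  lag_matrix(df["news_shock"], 4)], axis=1).dropna()

selected = set()
for target in ["gdp", "gov_spend", "news_shock"]:   # the "triple" selection
    y = df.loc[lags.index, target]
    fit = LassoCV(cv=5).fit(lags, y)
    selected |= {c for c, b in zip(lags.columns, fit.coef_) if abs(b) > 1e-8}

X = lags[sorted(selected)]                           # union of selected lags
lp = LinearRegression().fit(X, df.loc[lags.index, "gdp"])  # post-lasso step
```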