on Computational Economics
Issue of 2017‒10‒08
ten papers chosen by
By: | Theodoros Chatzivasileiadis |
Abstract: | The uncertainty and robustness of Computable General Equilibrium (CGE) models can be assessed by conducting a Systematic Sensitivity Analysis (SSA). Different methods, such as Gaussian Quadrature and Monte Carlo (MC) methods, have been used in the literature for SSA of CGE models. This paper explores the use of quasi-random Monte Carlo methods based on the Halton and Sobol' sequences as a means of improving efficiency over regular MC SSA, thus reducing the computational requirements of the SSA. The findings suggest that by using low-discrepancy sequences, the number of simulations required by regular MC SSA methods can be notably reduced, hence lowering the computational time required for SSA of CGE models.
Date: | 2017–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1709.09755&r=cmp |
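As a rough illustration of the low-discrepancy idea (not the paper's CGE application), the sketch below compares plain Monte Carlo draws with Sobol' and Halton sequences when propagating parameter uncertainty through a stand-in model. The toy_model function, the parameter bounds, and the sample size are illustrative assumptions, and the sketch assumes scipy's scipy.stats.qmc module is available.

```python
# Minimal sketch: plain MC vs. Sobol'/Halton low-discrepancy sampling for
# propagating parameter uncertainty through a toy model (not a CGE model).
import numpy as np
from scipy.stats import qmc

def toy_model(params):
    """Stand-in for one model solve: maps a parameter draw to an outcome."""
    x, y, z = params.T
    return np.exp(-x) * np.sin(np.pi * y) + 0.5 * z

rng = np.random.default_rng(0)
n, d = 1024, 3  # number of model evaluations, number of uncertain parameters

samples = {
    "plain MC": rng.random((n, d)),
    "Sobol'": qmc.Sobol(d, scramble=True, seed=0).random(n),
    "Halton": qmc.Halton(d, scramble=True, seed=0).random(n),
}

for name, u in samples.items():
    draws = qmc.scale(u, l_bounds=[0.5, 0.0, -1.0], u_bounds=[1.5, 1.0, 1.0])
    out = toy_model(draws)
    print(f"{name:9s} mean = {out.mean():.4f}  sd = {out.std(ddof=1):.4f}")
```

With the same number of model evaluations, the quasi-random designs cover the parameter space more evenly, which is the source of the efficiency gain the abstract refers to.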
By: | Victor Chernozhukov; Denis Chetverikov; Mert Demirer; Esther Duflo; Christian Hansen; Whitney Newey; James Robins |
Abstract: | Most modern supervised statistical/machine learning (ML) methods are explicitly designed to solve prediction problems very well. Achieving this goal does not imply that these methods automatically deliver good estimators of causal parameters. Examples of such parameters include individual regression coefficients, average treatment effects, average lifts, and demand or supply elasticities. In fact, estimates of such causal parameters obtained by naively plugging ML estimators into estimating equations for such parameters can behave very poorly due to regularization bias. Fortunately, this regularization bias can be removed by solving auxiliary prediction problems via ML tools. Specifically, we can form an orthogonal score for the target low-dimensional parameter by combining auxiliary and main ML predictions. The score is then used to build a de-biased estimator of the target parameter which will typically converge at the fastest possible 1/√n rate and be approximately unbiased and normal, and from which valid confidence intervals for these parameters of interest may be constructed. The resulting method could thus be called a "double ML" method because it relies on estimating primary and auxiliary predictive models. In order to avoid overfitting, our construction also makes use of K-fold sample splitting, which we call cross-fitting. This allows us to use a very broad set of ML predictive methods in solving the auxiliary and main prediction problems, such as random forests, lasso, ridge, deep neural nets, and boosted trees, as well as various hybrids and aggregators of these methods.
Date: | 2016–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1608.00060&r=cmp |
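The cross-fitting construction can be illustrated with a short sketch of the partialling-out estimator in a partially linear model. The simulated data-generating process and the choice of random forests as the auxiliary learners are assumptions made for the example, not the paper's empirical settings.

```python
# Minimal sketch of cross-fitted "double ML" for Y = theta*D + g(X) + e,
# D = m(X) + v, with random forests as the auxiliary learners.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
n, p, theta = 2000, 10, 0.5
X = rng.normal(size=(n, p))
D = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(size=n)        # "treatment"
Y = theta * D + np.cos(X[:, 0]) + X[:, 1] ** 2 + rng.normal(size=n)

d_res, y_res = np.empty(n), np.empty(n)
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    # Auxiliary predictions are fitted on the complementary fold (cross-fitting).
    d_hat = RandomForestRegressor(random_state=0).fit(X[train], D[train]).predict(X[test])
    y_hat = RandomForestRegressor(random_state=0).fit(X[train], Y[train]).predict(X[test])
    d_res[test] = D[test] - d_hat
    y_res[test] = Y[test] - y_hat

# Orthogonal (partialling-out) score: regress Y residuals on D residuals.
theta_hat = d_res @ y_res / (d_res @ d_res)
print(f"true theta = {theta}, double-ML estimate = {theta_hat:.3f}")
```

Residualizing both Y and D against the ML predictions is what makes the final score insensitive to small errors in the auxiliary estimates.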
By: | Bouchaud, Jean-Philippe; Gualdi, Stanislao; Tarzia, Marco; Zamponi, Francesco |
Abstract: | Which level of inflation should Central Banks be targeting? The authors investigate this issue in the context of a simplified Agent Based Model of the economy. Depending on the value of the parameters that describe the micro-behaviour of agents (in particular inflation anticipations), they find a surprisingly rich variety of behaviour at the macro-level. Without any monetary policy, their ABM economy can be in a high inflation/high output state or in a low inflation/low output state. Hyper-inflation, stagflation, deflation and business cycles are also possible. The authors then introduce a Central Bank with a Taylor-rule-based inflation target and study the resulting aggregate variables. The main result is that inflation targets that are too low are in general detrimental to a CB-controlled economy. One symptom is a persistent under-realisation of inflation, perhaps similar to the current macroeconomic situation. This predicament is alleviated by higher inflation targets, which are found to reduce both unemployment and the incidence of negative interest rate episodes, up to the point where the erosion of savings becomes unacceptable. The results are contrasted with the predictions of the standard DSGE model.
Keywords: | Agent-based models, monetary policy, inflation target, Taylor rule
JEL: | E31 E32 E52 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:zbw:ifwedp:201764&r=cmp |
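For readers unfamiliar with the policy rule mentioned above, a minimal Taylor-rule sketch follows. The coefficients, the natural-rate value, and the zero-lower-bound treatment are illustrative assumptions rather than the calibration used in the paper's ABM.

```python
# Minimal sketch of a Taylor-rule policy update (illustrative parameters).
def taylor_rate(inflation, output_gap, target=0.02, natural_rate=0.02,
                phi_pi=1.5, phi_y=0.5):
    """Nominal policy rate implied by a standard Taylor rule."""
    rate = natural_rate + inflation + phi_pi * (inflation - target) + phi_y * output_gap
    return max(rate, 0.0)  # crude zero lower bound

print(taylor_rate(inflation=0.01, output_gap=-0.02))
```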
By: | Zachariah Peterson |
Abstract: | Kelly's Criterion is well known among gamblers and investors as a method for maximizing the returns one would expect to observe over long periods of betting or investing. These ideas are conspicuously absent from portfolio optimization problems in the financial and automation literature. This paper will show how Kelly's Criterion can be incorporated into standard portfolio optimization models. The model developed here combines risk and return into a single objective function by incorporating a risk parameter. This model is then solved for a portfolio of 10 stocks from a major stock exchange using a differential evolution algorithm. Monte Carlo calculations are used to verify the accuracy of the results obtained from differential evolution. The results show that evolutionary algorithms can be successfully applied to solve a portfolio optimization problem where returns are calculated by applying Kelly's Criterion to each of the assets in the portfolio. |
Date: | 2017–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1710.00431&r=cmp |
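A minimal sketch of the combination described above, a Kelly-style log-growth objective maximized with scipy's differential evolution solver, using simulated returns. The five-asset data, the weight normalization, and the omission of the paper's risk parameter are simplifying assumptions.

```python
# Minimal sketch: maximize expected log growth (Kelly objective) over
# portfolio weights with differential evolution; returns are simulated.
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n_assets, n_periods = 5, 500
returns = rng.normal(0.0005, 0.01, size=(n_periods, n_assets))  # simple returns

def neg_log_growth(w):
    """Negative mean log growth rate of the portfolio with weights w."""
    w = np.clip(w, 0, None)
    w = w / w.sum() if w.sum() > 0 else np.full_like(w, 1 / len(w))
    growth = np.log1p(returns @ w)          # log of portfolio gross return
    return -growth.mean()

result = differential_evolution(neg_log_growth, bounds=[(0, 1)] * n_assets, seed=0)
weights = np.clip(result.x, 0, None)
weights /= weights.sum()
print("Kelly-optimal weights:", np.round(weights, 3))
```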
By: | Sang Il Lee; Seong Joon Yoo |
Abstract: | The purpose of this paper is to present a new approach to constructing optimal portfolios and investment strategies based on stock return predictions. Recurrent Neural Networks (RNNs) are applied to stock return predictions. We build portfolios by adjusting the potential-return threshold levels used to classify assets, and obtain risk-return profiles by examining their statistical properties. The results show conclusively that the thresholds control the level of the risk-return trade-off, which can be usefully implemented in allocating assets across asset classes based on individual risk capacity and risk tolerance levels. Next, we obtain frontier lines representing the risk-reward relationship for portfolios built using three strategies (long, short, and long-short), and then determine the efficient frontier that maximizes returns at specific risk levels. We thus find optimal investment strategies using portfolios tailored to an investor's risk preference.
Date: | 2017–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:1709.09822&r=cmp |
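The thresholding step can be sketched independently of the RNN forecasts; below, randomly drawn numbers stand in for predicted returns, and the three strategies are formed as the abstract lists them. The thresholds, asset universe, and equal weighting are illustrative assumptions.

```python
# Minimal sketch: classify assets by a return-prediction threshold and form
# long, short, and long-short portfolios (random numbers replace RNN forecasts).
import numpy as np

rng = np.random.default_rng(0)
predicted = rng.normal(0.0, 0.02, size=50)             # stand-in forecasts
realized = predicted + rng.normal(0.0, 0.03, size=50)  # stand-in outcomes

def portfolio_return(threshold, strategy="long-short"):
    longs = realized[predicted > threshold]
    shorts = realized[predicted < -threshold]
    if strategy == "long":
        return longs.mean() if longs.size else 0.0
    if strategy == "short":
        return -shorts.mean() if shorts.size else 0.0
    leg_l = longs.mean() if longs.size else 0.0
    leg_s = -shorts.mean() if shorts.size else 0.0
    return 0.5 * (leg_l + leg_s)

for t in (0.00, 0.01, 0.02):
    print(f"threshold={t:.2f}  long-short return={portfolio_return(t):+.4f}")
```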
By: | David Schenck (StataCorp) |
Abstract: | Dynamic stochastic general equilibrium (DSGE) models are used in macroeconomics for policy analysis and forecasting. A DSGE model consists of a system of equations derived from economic theory. Some of these equations may be forward-looking, in that expectations of future values of variables matter for the values of variables today. Expectations are handled in an internally consistent way, known as rational expectations. I describe the new dsge command, which estimates the parameters of linear DSGE models. I outline a typical DSGE model, estimate its parameters, discuss how to interpret dsge output, and describe the command's postestimation features.
Date: | 2017–09–20 |
URL: | http://d.repec.org/n?u=RePEc:boc:csug17:05&r=cmp |
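The sketch below is not Stata's dsge command; it only illustrates, under simple assumptions, the forward-looking feature the abstract describes: a single equation in which today's value depends on the rational expectation of tomorrow's, solved forward and checked against its closed form.

```python
# Minimal sketch of a forward-looking equation x_t = a*E[x_{t+1}] + z_t with
# z_t = rho*z_{t-1} + e_t. Under rational expectations x_t = z_t / (1 - a*rho);
# the truncated forward solution below recovers that coefficient numerically.
a, rho = 0.5, 0.8  # illustrative parameters

coef = sum((a * rho) ** k for k in range(200))  # sum_k a^k * rho^k
print(f"forward-solved coefficient on z_t: {coef:.4f}")
print(f"closed form 1/(1 - a*rho):         {1 / (1 - a * rho):.4f}")
```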
By: | Chen, Siyan; Desiderio, Saul |
Abstract: | Understanding what moves the Phillips curve is important to monetary policy. Because the Phillips curve has experienced movements over time similar to those characterizing the Beveridge curve, the authors analyze the two phenomena jointly. They do so through an agent-based macro model with adaptive micro-foundations, which works fairly well in replicating a number of stylized facts, including the Beveridge curve, the Phillips curve and the Okun curve. Through Monte Carlo experiments, the authors explore the mechanisms behind the movements of the Beveridge curve and the Phillips curve. They find that shifts of the Beveridge curve are best explained by the intensity of worker reallocation. Reallocation also shifts the Phillips curve in the same direction, suggesting that it may be the reason behind the similarity of the patterns historically recorded for these two curves. This finding may shed new light on what moves the Phillips curve and might have direct implications for the conduct of monetary policy.
Keywords: | Beveridge curve, Phillips curve, labor market dynamics, agent-based simulations, sensitivity analysis
JEL: | C63 D51 E31 J30 J63 J64 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:zbw:ifwedp:201765&r=cmp |
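A back-of-the-envelope sketch of the headline mechanism, using a textbook flow model rather than the paper's agent-based setup: with a Cobb-Douglas matching function, a higher separation (worker-reallocation) rate shifts the steady-state Beveridge curve outward. The matching efficiency and rates below are illustrative assumptions.

```python
# Minimal sketch: steady-state Beveridge curve from flow balance
# s*(1 - u) = m*sqrt(u*v); higher separation rate s shifts the locus outward.
import numpy as np

def beveridge_vacancies(u, separation_rate, match_efficiency=0.6):
    """Vacancy rate consistent with flow balance at unemployment rate u."""
    return (separation_rate * (1 - u) / match_efficiency) ** 2 / u

u_grid = np.linspace(0.04, 0.12, 5)
for s in (0.02, 0.04):
    v = beveridge_vacancies(u_grid, separation_rate=s)
    print(f"s={s:.2f}  v(u) =", np.round(v, 4))
```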
By: | Xiaohong Chen (Cowles Foundation, Yale University); Timothy Christensen (New York University); Elie Tamer (Harvard University) |
Abstract: | In complicated/nonlinear parametric models, it is generally hard to know whether the model parameters are point identified. We provide computationally attractive procedures to construct confidence sets (CSs) for identified sets of full parameters and of subvectors in models defined through a likelihood or a vector of moment equalities or inequalities. These CSs are based on level sets of optimal sample criterion functions (such as likelihood or optimally-weighted or continuously-updated GMM criterion functions). The level sets are constructed using cutoffs that are computed via Monte Carlo (MC) simulations directly from the quasi-posterior distributions of the criterion functions. We establish new Bernstein-von Mises (or Bayesian Wilks) type theorems for the quasi-posterior distributions of the quasi-likelihood ratio (QLR) and profile QLR in partially-identified regular models and some non-regular models. These results imply that our MC CSs have exact asymptotic frequentist coverage for identified sets of full parameters and of subvectors in partially-identified regular models, and have valid but potentially conservative coverage in models with reduced-form parameters on the boundary. Our MC CSs for identified sets of subvectors are shown to have exact asymptotic coverage in models with singularities. We also provide results on uniform validity of our CSs over classes of DGPs that include point- and partially-identified models. We demonstrate good finite-sample coverage properties of our procedures in two simulation experiments. Finally, our procedures are applied to two non-trivial empirical examples: an airline entry game and a model of trade flows.
Date: | 2016–05 |
URL: | http://d.repec.org/n?u=RePEc:cwl:cwldpp:2037r2&r=cmp |
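The logic of the Monte Carlo criterion-based confidence sets can be sketched in a toy, point-identified example: compute the quasi-posterior of the criterion, draw from it, use a quantile of the quasi-likelihood-ratio draws as the cutoff, and report the level set. The normal location model and the grid below are illustrative assumptions, not one of the paper's applications.

```python
# Minimal sketch: a 95% criterion-based confidence set for a normal location
# parameter, with the QLR cutoff taken from quasi-posterior Monte Carlo draws.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(0.3, 1.0, size=200)
grid = np.linspace(-1, 1, 2001)

# Log-likelihood on the grid and the QLR statistic relative to its maximum.
loglik = np.array([-0.5 * np.sum((data - t) ** 2) for t in grid])
qlr = 2 * (loglik.max() - loglik)

# Draw parameters from the quasi-posterior and record their QLR values.
post = np.exp(loglik - loglik.max())
post /= post.sum()
draws = rng.choice(grid, size=5000, p=post)
qlr_draws = np.interp(draws, grid, qlr)

cutoff = np.quantile(qlr_draws, 0.95)      # MC critical value
cs = grid[qlr <= cutoff]
print(f"95% CS for the location parameter: [{cs.min():.3f}, {cs.max():.3f}]")
```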
By: | Serafín Frache (Banco Central del Uruguay); Jorge Ponce; Javier Garcia Cicco |
Abstract: | We develop a DSGE model for a small, open economy with a banking sector and endogenous default in order to perform a realistic assessment of two macroprudential tools: the countercyclical capital buffer (CCB) and dynamic provisions (DP). The model is estimated with data for Uruguay, where dynamic provisioning has been in place since the early 2000s. We find that (i) the source of the shock affecting the financial system matters for selecting the appropriate indicator variable under the CCB rule and for calibrating the size of the DP. Given a positive external shock, the CCB (ii) generates buffers without major real effects; (iii) with GDP as the indicator variable, it has quicker and stronger effects on bank capital; and (iv) the ratio of credit to GDP decreases, which discourages its use as an indicator variable. The DP (v) generates buffers with real effects, and (vi) seems to outperform the CCB in terms of smoothing the cycle.
Keywords: | Banking regulation, minimum capital requirement, countercyclical capital buffer, reserve requirement, dynamic loan loss provision, endogenous default, Basel III, DSGE, Uruguay |
JEL: | G21 G28 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:bku:doctra:2017001&r=cmp |
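As context for the CCB discussed above, a minimal sketch of a countercyclical-capital-buffer rule keyed to a credit-to-GDP gap is given below; the Basel III-style mapping of a 2 to 10 percentage-point gap into a buffer of up to 2.5% is an illustrative assumption, not the paper's calibration for Uruguay.

```python
# Minimal sketch of a CCB rule: buffer add-on rises linearly with the
# credit-to-GDP gap between a lower and an upper threshold.
def ccb_addon(credit_to_gdp_gap, lower=2.0, upper=10.0, max_buffer=2.5):
    """Capital buffer add-on (percent of risk-weighted assets)."""
    if credit_to_gdp_gap <= lower:
        return 0.0
    if credit_to_gdp_gap >= upper:
        return max_buffer
    return max_buffer * (credit_to_gdp_gap - lower) / (upper - lower)

for gap in (-1.0, 4.0, 12.0):
    print(f"gap={gap:+.1f} pp  ->  buffer={ccb_addon(gap):.2f}%")
```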
By: | Mareckova, Jana; Pohlmeier, Winfried |
Abstract: | We study the importance of noncognitive skills in explaining differences in the labor market performance of individuals by means of machine learning techniques. Unlike previous empirical approaches centering on the within-sample explanatory power of noncognitive skills, our approach focuses on the out-of-sample forecasting and classification qualities of noncognitive skills. Moreover, we show that machine learning techniques can cope with the challenge of selecting the most relevant covariates from big data with a very large number of covariates on personality traits. This enables us to construct new personality indices with larger predictive power. In our empirical application we study the role of noncognitive skills for individual earnings and unemployment based on the British Cohort Study (BCS). The longitudinal character of the BCS enables us to analyze the predictive power of early childhood environment and early cognitive and noncognitive skills for adult labor market outcomes. The results of the analysis show that there is potential for a long-run influence of early childhood variables on earnings and unemployment.
Keywords: | personality traits, machine learning
JEL: | J24 J64 C38 |
Date: | 2017 |
URL: | http://d.repec.org/n?u=RePEc:zbw:vfsc17:168195&r=cmp |
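The covariate-selection step can be sketched with an L1-penalized regression chosen by cross-validation; the simulated personality items, the outcome equation, and the use of scikit-learn's LassoCV are illustrative assumptions standing in for the BCS application.

```python
# Minimal sketch: lasso with cross-validation selects a sparse set of
# predictors from many simulated "personality items" and is evaluated
# out of sample.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, p = 1000, 200                          # many candidate personality items
X = rng.normal(size=(n, p))
earnings = 0.8 * X[:, 0] - 0.5 * X[:, 3] + rng.normal(size=n)  # only 2 items matter

X_tr, X_te, y_tr, y_te = train_test_split(X, earnings, random_state=0)
model = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)

selected = np.flatnonzero(model.coef_)
print(f"selected {selected.size} of {p} items: {selected}")
print(f"out-of-sample R^2: {model.score(X_te, y_te):.3f}")
```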