on Risk Management
Issue of 2020‒11‒23
23 papers chosen by
By: | Ricardo GOTTSCHALK; Lavinia B. CASTRO; Jiajun XU |
Abstract: | History shows that financial crises have been a significant driver of the evolution of banking regulation since the 1930s. Although Basel III made much progress in building a safer and less leveraged system, the SDGs, the climate crisis and the COVID crisis require bold action. Financial regulation of development banks should be discussed with a view not only to a secure payment system but also to a system that meets the sustainable development goals; as the paper argues, these are not contradictory objectives. Development banks have unique characteristics for managing risk and can contribute to a more sustained growth path, which itself helps reduce overall financial instability. This paper is policy oriented and intends to promote a dialogue among governments, development banks and regulators. It discusses the potential trade-offs of the Basel III capital framework for national development banks (NDBs) regarding their ability to fulfil their developmental mandate. Do these banks deserve special treatment? What can regulators do to adapt the Basel rules in order to reduce possible impacts? In particular, the paper discusses Basel III's higher capital requirements and capital buffers, as well as the changes made to the treatment of market risk, concentration, liquidity risk and operational risk. It starts with a brief history of banking regulation and a summary of the main theoretical approaches that justify it. It then provides an analytical discussion of the Basel III standards in light of NDBs' characteristics and examines how specific standards affect the ability of NDBs to fulfil their missions. The paper presents and compares the experience of three large non-retail-deposit-taking NDBs with Basel II implementation: BNDES of Brazil, CDB of China and KfW of Germany, drawing on both secondary information and interviews. It concludes that some Basel III rules do not affect NDBs' roles, but some specific rules can constrain them in straightforward ways. The biggest constraint seems to come less from the comprehensiveness and complexity of the framework than from the tightening of capital requirements and the demand for better capital quality: although capital has not been a binding constraint for the three banks in normal times, it can become so in times of crisis. A second area of contention for these big banks is the disincentive to the use of internal models, which, again, may imply higher capital requirements and less risk adequacy. A third area is the new large-exposure rule, which is problematic for all three banks given their focus on large infrastructure projects. A fourth area is the high risk weights required for exposures to project finance and equity, financing modalities that NDBs use extensively to support large and complex projects and innovation financing. A final area concerns changes to the method used for calculating operational risk. This Research Paper is published in the framework of the International Research Initiative on Public Development Banks working groups and released on the occasion of the 14th AFD International Research Conference on Development. It is part of the pilot research program “Realizing the Potential of Public Development Banks for Achieving Sustainable Development Goals”. |
This program was launched, along with the International Research Initiative on Public Development Banks (PDBs), by the Institute of New Structural Economics (INSE) at Peking University, and is sponsored by the Agence française de développement (AFD), the Ford Foundation and the International Development Finance Club (IDFC). |
JEL: | Q |
Date: | 2020–10–29 |
URL: | http://d.repec.org/n?u=RePEc:avg:wpaper:en11687&r=all |
By: | Qing Yang; Zhenning Hong; Ruyan Tian; Tingting Ye; Liangliang Zhang |
Abstract: | In this paper, we document a novel machine-learning-based bottom-up approach to static and dynamic portfolio optimization over, potentially, a large number of assets. The methodology overcomes many major difficulties of current optimization schemes: for example, we no longer need to compute the covariance matrix and its inverse for mean-variance optimization, so the method is immune to estimation error in this quantity, and no explicit calls to optimization routines are needed. Applications to bottom-up mean-variance-skewness-kurtosis or CRRA (Constant Relative Risk Aversion) optimization with short-sale portfolio constraints, in both simulated and real market (China A-shares and U.S. equity markets) environments, are studied and shown to perform very well. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.00572&r=all |
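To make the "bottom-up" idea concrete, here is a minimal sketch (my own construction, not the authors' algorithm) of CRRA portfolio optimization performed directly on return scenarios: no covariance matrix is estimated, and the short-sale constraint is enforced by projecting onto the simplex. The scenario generator and all parameter values are illustrative assumptions.

```python
# A minimal sketch (not the paper's method): bottom-up CRRA portfolio choice.
# We maximize E[U(1 + w.r)] with U(x) = x^(1-gamma)/(1-gamma) by projected
# gradient ascent on the weights -- no covariance matrix, no external optimizer.
import numpy as np

def project_to_simplex(w):
    # Euclidean projection onto {w >= 0, sum(w) = 1}: the short-sale constraint.
    u = np.sort(w)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / (np.arange(len(w)) + 1.0) > 0)[0][-1]
    return np.maximum(w - css[rho] / (rho + 1.0), 0.0)

def optimize(returns, gamma=5.0, lr=0.1, steps=2000):
    # returns: (n_scenarios, n_assets) array of simple returns.
    n = returns.shape[1]
    w = np.full(n, 1.0 / n)
    for _ in range(steps):
        wealth = 1.0 + returns @ w                            # per-scenario wealth
        grad = (wealth ** (-gamma)) @ returns / len(returns)  # gradient of E[U]
        w = project_to_simplex(w + lr * grad)
    return w

rng = np.random.default_rng(0)
scenarios = rng.normal(0.001, 0.02, size=(10_000, 8))  # toy return scenarios
print(optimize(scenarios).round(3))
```

The same loop extends to mean-variance-skewness-kurtosis objectives by swapping in the gradient of the corresponding objective.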
By: | John Armstrong; Damiano Brigo; Alex S. L. Tse |
Abstract: | Previous literature shows that prevalent risk measures such as Value at Risk or Expected Shortfall are ineffective in curbing excessive risk-taking by a tail-risk-seeking trader with an S-shaped utility function in the context of portfolio optimisation. However, these conclusions hold only when the constraints are static, in the sense that the risk measure is applied only to the terminal portfolio value. In this paper, we consider a portfolio optimisation problem featuring S-shaped utility and a dynamic risk constraint imposed throughout the entire trading horizon. Provided that the risk control policy is sufficiently strict relative to the asset performance, the trader's portfolio strategies and the resulting maximal expected utility can be effectively constrained by a dynamic risk measure. Finally, we argue that dynamic risk constraints might still be ineffective if the trader has access to a derivatives market. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.03314&r=all |
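For readers unfamiliar with the objects involved, the display below sketches an S-shaped (Kahneman–Tversky-type) utility and the static versus dynamic VaR constraints; the notation is assumed for illustration and is not taken from the paper.

```latex
\[
U(x) =
\begin{cases}
x^{\alpha}, & x \ge 0,\\
-k\,(-x)^{\beta}, & x < 0,
\end{cases}
\qquad 0 < \alpha, \beta < 1,\; k > 0,
\]
\[
\text{static: } \operatorname{VaR}_{\varepsilon}(X_T) \le R,
\qquad
\text{dynamic: } \operatorname{VaR}_{\varepsilon}\!\big(X_{t+\Delta} - X_t \mid \mathcal{F}_t\big) \le R
\ \text{ for all } t \in [0, T-\Delta].
\]
```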
By: | Miryana Grigorova (School of Mathematics - University of Leeds - University of Leeds); Marie-Claire Quenez (LPSM (UMR_8001) - Laboratoire de Probabilités, Statistiques et Modélisations - UPD7 - Université Paris Diderot - Paris 7 - SU - Sorbonne Université - CNRS - Centre National de la Recherche Scientifique); Agnès Sulem (MATHRISK - Mathematical Risk Handling - UPEM - Université Paris-Est Marne-la-Vallée - ENPC - École des Ponts ParisTech - Inria de Paris - Inria - Institut National de Recherche en Informatique et en Automatique) |
Abstract: | This paper studies the superhedging prices and the associated superhedging strategies for European options in a non-linear incomplete market model with default. We present the seller's and the buyer's point of view. The underlying market model consists of a risk-free asset and a risky asset driven by a Brownian motion and a compensated default martingale. The portfolio processes follow non-linear dynamics with a non-linear driver f. By using a dynamic programming approach, we first provide a dual formulation of the seller's (superhedging) price for the European option as the supremum, over a suitable set of equivalent probability measures Q ∈ Q, of the f-evaluation/expectation under Q of the payoff. We also provide a characterization of the seller's (superhedging) price process as the minimal supersolution of a constrained BSDE with default and a characterization in terms of the minimal weak supersolution of a BSDE with default. By a form of symmetry, we derive corresponding results for the buyer. Our results rely on first establishing a non-linear optional and a non-linear predictable decomposition for processes which are $\mathcal{E}^f$-strong supermartingales under Q, for all Q ∈ Q. |
Keywords: | f-expectation,BSDEs with constraints,Non-linear pricing,Superhedging,Incomplete market,European options,Control problems with non-linear expectation,Non-linear optional decomposition,Pricing-hedging duality |
Date: | 2020–09–02 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02025833&r=all |
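Schematically, the seller's dual formula described in the abstract takes the following form (the symbols ξ for the payoff and v₀ for the superhedging price are assumed notation):

```latex
\[
v_0 \;=\; \sup_{Q \in \mathcal{Q}} \, \mathcal{E}^{f}_{Q}\big[\xi\big],
\]
% \xi: the European option payoff at maturity; \mathcal{E}^{f}_{Q}: the
% f-evaluation (non-linear expectation) under the equivalent measure Q.
```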
By: | Vincenzo Candila; Giampiero M. Gallo; Lea Petrella |
Abstract: | Quantile regression is an efficient tool when it comes to estimating popular measures of tail risk such as the conditional quantile Value at Risk. In this paper we exploit the availability of data at mixed frequencies to build a volatility model for daily returns with low-frequency (macro-variable) and high-frequency components (the latter may include an “-X” term related to realized volatility measures). The quality of the suggested quantile regression model, labeled MF-Q-ARCH-X, is assessed in a number of directions: we derive weak stationarity properties, we investigate its finite-sample properties by means of a Monte Carlo exercise, and we apply it to real financial data. VaR forecast performances are evaluated by backtesting and by Model Confidence Set inclusion among competitors, showing that the MF-Q-ARCH-X has a consistently accurate forecasting capability. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.00552&r=all |
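As a toy illustration of quantile-based VaR estimation with an "-X" regressor, here is a plain linear quantile regression (much simpler than the MF-Q-ARCH-X model; all data below are fabricated for the example, and `statsmodels` is assumed to be installed):

```python
# A minimal sketch: one-day-ahead 5% VaR from a linear quantile regression of
# returns on lagged realized volatility.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
T = 1000
rv = np.abs(rng.normal(1.0, 0.3, T))     # toy realized-volatility series
ret = rng.normal(0.0, 0.01 * rv)         # returns whose scale tracks rv

y = ret[1:]                              # next-day return
X = sm.add_constant(rv[:-1])             # lagged RV as the "-X" regressor
res = sm.QuantReg(y, X).fit(q=0.05)      # 5% conditional quantile

x_new = np.column_stack([np.ones(1), rv[-1:]])
var_forecast = -res.predict(x_new)       # VaR = minus the 5% quantile
print(res.params, float(var_forecast[0]))
```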
By: | Romain Gauchon (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon); Stéphane Loisel (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon); Jean-Louis Rullière (SAF - Laboratoire de Sciences Actuarielle et Financière - UCBL - Université Claude Bernard Lyon 1 - Université de Lyon); Julien Trufin (Département de mathématiques - ULB - Université libre de Bruxelles) |
Abstract: | In this paper, we propose and study a risk model with two types of claims in which the insurer may invest in a prevention plan that decreases the intensity of large claims without affecting the small claims. We identify a necessary and sufficient condition for insurers to use prevention when there is no surplus. If, in addition, the severity of large claims dominates that of small claims in the harmonic mean residual life (HMRL) order, insurers invest more in prevention in the presence of a surplus. Finally, we characterize the asymptotically optimal prevention strategy when the initial surplus tends to infinity, in the two main cases where both claim types are light-tailed and where one of them is light-tailed and the other heavy-tailed. |
Keywords: | Ruin theory,Prevention,Optimal prevention strategy,Insurance |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-02314914&r=all |
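A schematic surplus process consistent with the abstract (the notation is assumed for illustration, not taken from the paper):

```latex
\[
U(t) \;=\; u + c(p)\,t \;-\; \sum_{i=1}^{N_S(t)} X_i \;-\; \sum_{j=1}^{N_L(t)} Y_j ,
\]
% u: initial surplus; c(p): premium income net of prevention spending p;
% N_S: Poisson arrivals of small claims X_i; N_L: Poisson arrivals of large
% claims Y_j, with intensity \lambda_L(p) decreasing in the prevention effort p.
```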
By: | Dietmar Pfeifer; Olena Ragulina |
Abstract: | We present a constructive approach to Bernstein copulas with an admissible discrete skeleton in arbitrary dimensions when the underlying marginal grid sizes are smaller than the number of observations. This prevents overfitting of the estimated dependence model and considerably reduces the simulation effort for Bernstein copulas. In a case study, we compare different Bernstein and Gaussian copula approaches with respect to the estimation of risk measures in risk management. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.00909&r=all |
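For intuition, here is a minimal bivariate sampler built on the standard mixture-of-Betas representation of Bernstein copulas (a generic construction, not the authors' specific estimator; the independence skeleton below is a toy choice):

```python
# A minimal sketch: simulate from a bivariate Bernstein copula given an m x m
# discrete skeleton p with uniform margins (rows and columns each sum to 1/m).
import numpy as np

def sample_bernstein_copula(p, n, rng):
    m = p.shape[0]
    cells = rng.choice(m * m, size=n, p=p.ravel())   # pick a skeleton cell
    i, j = np.divmod(cells, m)
    # Within cell (i, j): independent Beta(i+1, m-i) and Beta(j+1, m-j) draws,
    # which mix back to exactly uniform margins.
    u = rng.beta(i + 1, m - i)
    v = rng.beta(j + 1, m - j)
    return np.column_stack([u, v])

rng = np.random.default_rng(2)
m = 4
skeleton = np.full((m, m), 1.0 / m**2)               # toy: independence copula
print(sample_bernstein_copula(skeleton, 5, rng))
```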
By: | Fricke, Daniel; Wilke, Hannes |
Abstract: | Investment funds are highly connected with each other, but also with the broader financial system. In this paper, we quantify potential vulnerabilities arising from funds' connectedness. While previous work focused exclusively on indirect connections (overlapping asset portfolios) between investment funds, we develop a macroprudential stress test that also includes direct connections (cross-holdings of fund shares). In our application to German investment funds, we find that these direct connections are very important from a financial stability perspective. Our main result is that the German fund sector's aggregate vulnerability can be substantial and tends to increase over time, suggesting that the fund sector can amplify adverse developments in global security markets. We also highlight spillover risks to the broader financial system, since fund sector losses would be largely borne by fund investors from the financial sector. Overall, we take an important step towards a more system-wide view of fund sector vulnerabilities. |
Keywords: | asset management,investment funds,systemic risk,fire sales,liquidity risk,cross-holdings,spillover effects |
JEL: | G10 G11 G23 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:vfsc20:224511&r=all |
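The overlapping-portfolio channel can be illustrated with a stylized one-round fire-sale exercise (my toy construction; the cross-holdings channel that the paper adds is omitted, and all numbers and the linear price-impact rule are illustrative assumptions):

```python
# A stylized one-round fire-sale stress test over overlapping portfolios.
import numpy as np

W = np.array([[0.6, 0.4],                 # fund-by-asset portfolio weights
              [0.2, 0.8]])
nav = np.array([100.0, 50.0])             # fund net asset values
shock = np.array([-0.10, 0.0])            # initial return shock per asset
impact = np.array([1e-4, 2e-4])           # price impact per unit of sales
flow_sensitivity = 1.0                    # redemptions per unit of fund loss

direct_loss = -(W @ shock) * nav                     # first-round losses per fund
sales = flow_sensitivity * direct_loss[:, None] * W  # pro-rata liquidations
price_move = -impact * sales.sum(axis=0)             # second-round price pressure
spillover_loss = -(W @ price_move) * nav             # indirect losses per fund
print(direct_loss, spillover_loss)
```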
By: | Dimitriadis, Timo; Liu, Xiaochun; Schnaitmann, Julie |
Abstract: | We propose forecast encompassing tests for the Expected Shortfall (ES) jointly with the Value at Risk (VaR) based on flexible link (or combination) functions. Our setup allows testing encompassing for convex forecast combinations and for link functions which preclude crossings of the combined VaR and ES forecasts. As the tests based on these link functions involve parameters which are on the boundary of the parameter space under the null hypothesis, we derive and base our tests on nonstandard asymptotic theory on the boundary. Our simulation study shows that the encompassing tests based on our new link functions outperform tests based on unrestricted linear link functions for one-step and multi-step forecasts. We further illustrate the potential of the proposed tests in a real data analysis for forecasting VaR and ES of the S&P 500 index. |
Keywords: | asymptotic theory on the boundary,joint elicitability,multi-step ahead and aggregate forecasts,forecast evaluation and combinations |
JEL: | C12 C52 C58 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:hohdps:112020&r=all |
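As background, the joint elicitability of (VaR, ES) means the pair can be evaluated with a Fissler–Ziegel loss; the display below shows the FZ0 member of that family (as popularized by Patton, Ziegel and Chen, 2019) together with a convex combination link whose encompassing null sits on the boundary of the parameter space. The link notation is assumed for illustration.

```latex
\[
L_{FZ0}(y, v, e; \alpha) \;=\; -\frac{1}{\alpha e}\,\mathbf{1}\{y \le v\}\,(v - y)
\;+\; \frac{v}{e} \;+\; \log(-e) \;-\; 1, \qquad e < 0,
\]
\[
v_c = (1-\lambda)\,v_1 + \lambda\,v_2, \qquad
e_c = (1-\mu)\,e_1 + \mu\,e_2, \qquad \lambda, \mu \in [0,1].
\]
% Forecast 1 encompasses forecast 2 when \lambda = \mu = 0, a point on the
% boundary of the parameter space -- hence the nonstandard asymptotics.
```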
By: | Martin F. Hellwig (Max Planck Institute for Research on Collective Goods) |
Abstract: | The paper contains comments made on the Financial Stability Board's (FSB) Consultation Report concerning the success of regulatory reforms since the global financial crisis of 2007-2009. According to these comments, the FSB's assessment of the role of equity is too narrow, being phrased in terms of bankruptcy avoidance and risk-taking incentives, without attention to debt overhang creating distortions in funding choices, to the systemic benefits of ample equity in reducing deleveraging needs after losses, or to equity's contribution to smoothing lending and asset purchases over time. The FSB's treatment of systemic risk pays too little attention to the mutual interdependence of different parts of the system, which is not well captured by linear causal relationships. Finally, the comments point out that resolution of systemically important institutions is still not viable: for lack of political acceptance of single-point-of-entry procedures, for lack of funding of banks in resolution (in the EU), for lack of fiscal backstops (in the EU), and for lack of political acceptance of bank resolution with bail-in. |
Keywords: | Financial Stability Board, too-big-to-fail, systemic risk, banking regulation, bank resolution |
JEL: | G01 G18 G21 G28 K23 |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:mpg:wpaper:2020_24&r=all |
By: | Mariano Zeron; Ignacio Ruiz |
Abstract: | This paper shows how to use Chebyshev tensors to compute dynamic sensitivities of financial instruments within a Monte Carlo simulation. Dynamic sensitivities are then used to compute dynamic initial margin as defined by ISDA's Standard Initial Margin Model (SIMM). The technique is benchmarked against dynamic sensitivities obtained with full pricing functions such as those found in risk engines. We obtain high accuracy and computational gains for FX swaps and spread options. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.04544&r=all |
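The core trick can be shown in one dimension (the paper works with multi-dimensional tensors): fit a Chebyshev interpolant to an expensive pricer once, then evaluate the cheap proxy and its derivative (the sensitivity) inside the simulation. The toy pricer and domain below are illustrative assumptions.

```python
# A minimal 1-D sketch of a Chebyshev proxy for pricing and sensitivities.
import numpy as np
from numpy.polynomial import chebyshev as C

def slow_pricer(spot):
    # Stand-in for an expensive risk-engine call (smooth, call-like payoff).
    return 10.0 * np.log1p(np.exp((spot - 100.0) / 10.0))

a, b = 60.0, 140.0                                    # spot domain
k = np.arange(33)
nodes = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * (k + 0.5) / 33)
coeffs = C.chebfit((2 * nodes - a - b) / (b - a), slow_pricer(nodes), deg=32)

spots = np.array([80.0, 100.0, 120.0])                # Monte Carlo scenarios
x = (2 * spots - a - b) / (b - a)                     # map to [-1, 1]
proxy_price = C.chebval(x, coeffs)
proxy_delta = C.chebval(x, C.chebder(coeffs)) * 2.0 / (b - a)  # chain rule
print(proxy_price, proxy_delta, slow_pricer(spots))   # proxy vs. exact
```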
By: | Thomas Bernhardt; Catherine Donnelly |
Abstract: | The number of people who receive a stable income for life from a closed pooled annuity fund is studied. Income stability is defined as keeping the income within a specified tolerance of the initial income in a fixed proportion of future scenarios. The focus is on quantifying the effect of the number of members, which drives the level of idiosyncratic longevity risk in the fund, on income stability. To do this, investment returns are held constant and systematic longevity risk is omitted. An analytical expression that closely approximates the number of fund members who receive a stable income is derived and is seen to be independent of the mortality model. One application of the result is to calculate the length of time for which the pooled annuity fund can provide the desired level of income stability. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2010.16009&r=all |
By: | Adriana Grasso (Bank of Italy); Juan Passadore (Einaudi Institute for Economics and Finance (EIEF)); Facundo Piguillem (Einaudi Institute for Economics and Finance (EIEF) and CEPR) |
Abstract: | The recent debate about the falling labor share has brought attention to trends in income shares, but less attention has been devoted to their variability. In this paper, we analyze how fluctuations in income shares can be insured between workers and capitalists, and the corresponding implications for financial markets. We study a neoclassical growth model with aggregate shocks that affect income shares and financial frictions that prevent firms from fully insuring idiosyncratic risk. We examine theoretically how aggregate risk sharing is distorted by the combination of idiosyncratic risk and moving shares. Accumulation of safe assets by firms and risky assets by households emerges naturally as a tool to insure income-share risk. We calibrate the model to the U.S. economy and show that low interest rates, rising capital shares, and accumulation of safe assets by firms and risky assets by households can be rationalized by persistent shocks to the labor share. |
Keywords: | income shares fluctuation, risk sharing, asset prices, corporate savings glut |
JEL: | E20 E32 E44 G11 |
Date: | 2020–06 |
URL: | http://d.repec.org/n?u=RePEc:bdi:wptemi:td_1283_20&r=all |
By: | Bernd Schwaab (European Central Bank); Xin Zhang (Sveriges Riksbank); Andre Lucas (Vrije Universiteit Amsterdam) |
Abstract: | A dynamic semi-parametric framework is proposed to study time variation in the tail fatness of sovereign bond yield changes during the 2010-2012 euro area sovereign debt crisis, measured at a high (15-minute) frequency. The framework builds on the Generalized Pareto Distribution (GPD) for modeling peaks over thresholds, as in Extreme Value Theory, but casts the model in a conditional framework to allow for time variation in the tail-shape parameters. The score-driven updates used improve the expected Kullback-Leibler divergence between the model and the true data-generating process at every step, even if the GPD only fits approximately and the model is mis-specified, as will be the case in any finite sample. This is confirmed in simulations. Using the model, we find that the ECB's Securities Markets Programme (SMP) had a beneficial impact on extreme upper tail quantiles, leaning against the risk of extremely adverse market outcomes while active. |
Keywords: | dynamic tail risk, observation-driven models, extreme value theory, European Central Bank (ECB), Securities Markets Programme (SMP) |
JEL: | C22 G11 |
Date: | 2020–11–10 |
URL: | http://d.repec.org/n?u=RePEc:tin:wpaper:20200076&r=all |
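Schematically, the model class combines a GPD tail with a score-driven updating equation; the display below uses assumed notation to fix ideas.

```latex
\[
P\big(x_t > x \,\big|\, x_t > \tau_t,\ \mathcal{F}_{t-1}\big)
= \Big(1 + \xi_t\,\frac{x - \tau_t}{\delta_t}\Big)^{-1/\xi_t},
\qquad
f(\xi_{t+1}) = \omega + \beta\, f(\xi_t) + \alpha\, s_t,
\]
% \tau_t: threshold; \delta_t: scale; \xi_t: tail-shape parameter; s_t: the
% scaled score of the conditional GPD log-likelihood with respect to f(\xi_t).
```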
By: | Constandina Koki; Stefanos Leonardos; Georgios Piliouras |
Abstract: | In this paper, we consider a variety of multi-state hidden Markov models for predicting and explaining Bitcoin, Ether and Ripple returns in the presence of state (regime) dynamics. In addition, we examine the effects of several financial, economic and cryptocurrency-specific predictors on the cryptocurrency return series. Our results indicate that the 4-state non-homogeneous hidden Markov model has the best one-step-ahead forecasting performance among all competing models for all three series. The superiority of its predictive densities over the single-regime random walk model relies on the fact that the states capture alternating periods with distinct return characteristics. In particular, we identify bull, bear and calm regimes for the Bitcoin series, and periods with different profit and risk magnitudes for the Ether and Ripple series. Finally, we observe that, conditionally on the hidden states, the predictors have different linear and non-linear effects. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.03741&r=all |
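A much-reduced version of the regime idea fits in a few lines with a homogeneous Gaussian HMM (the paper's model is non-homogeneous and includes predictors; the data below are synthetic and `hmmlearn` is a third-party package assumed to be installed):

```python
# A minimal sketch: decode volatility regimes in a return series with a 4-state
# Gaussian hidden Markov model.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(3)
# Toy series: a calm stretch followed by a turbulent one.
returns = np.concatenate([rng.normal(0.001, 0.01, 300),
                          rng.normal(-0.002, 0.05, 300)]).reshape(-1, 1)

model = GaussianHMM(n_components=4, covariance_type="diag",
                    n_iter=200, random_state=0)
model.fit(returns)
states = model.predict(returns)             # most likely regime path
print(model.means_.ravel())                 # state-specific mean returns
print(np.bincount(states))                  # time spent in each regime
```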
By: | Jeffrey Cohen; Clark Alexander |
Abstract: | We analyze 3,171 US common stocks to create an efficient portfolio based on the Chicago Quantum Net Score (CQNS) and portfolio optimization. We begin with classical solvers and incorporate quantum annealing. We add a simulated bifurcator as a new classical solver and the new D-Wave Advantage(TM) quantum annealing computer as our new quantum solver. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.01308&r=all |
By: | Toms, Darcy B. |
Keywords: | Public Economics |
Date: | 2020–10–22 |
URL: | http://d.repec.org/n?u=RePEc:ags:ctrf29:306039&r=all |
By: | Tae-Hwy Lee; Ekaterina Seregina |
Abstract: | Graphical models are a powerful tool for estimating a high-dimensional inverse covariance (precision) matrix, and they have been applied to the portfolio allocation problem. The assumption made by these models is sparsity of the precision matrix. However, when stock returns are driven by common factors, this assumption does not hold. Our paper develops a framework for estimating a high-dimensional precision matrix that combines the benefits of exploiting the factor structure of stock returns with the sparsity of the precision matrix of the factor-adjusted returns. The proposed algorithm is called Factor Graphical Lasso (FGL). We study a high-dimensional portfolio allocation problem in which the asset returns admit an approximate factor model. In high dimensions, when the number of assets is large relative to the sample size, the sample covariance matrix of the excess returns is subject to large estimation uncertainty, which leads to unstable solutions for the portfolio weights. To resolve this issue, we consider a decomposition into low-rank and sparse components. This strategy allows us to consistently estimate the optimal portfolio in high dimensions, even when the covariance matrix is ill-behaved. We establish consistency of the portfolio weights in a high-dimensional setting without assuming sparsity of the covariance or precision matrix of stock returns. Our theoretical results and simulations demonstrate that FGL is robust to heavy-tailed distributions, which makes the method suitable for financial applications. The empirical application uses daily and monthly data on the constituents of the S&P 500 to demonstrate the superior performance of FGL compared to the equal-weighted portfolio, the index, and several prominent precision- and covariance-based estimators. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.00435&r=all |
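A rough two-step sketch of the factor-plus-sparsity idea (my assumed reading of the abstract, not the authors' exact FGL algorithm: factors are proxied by principal components, and scikit-learn's graphical lasso stands in for the sparse estimation step):

```python
# A minimal sketch: strip common factors from returns, then fit a sparse
# precision matrix to the factor-adjusted residuals with the graphical lasso.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(4)
T, N, K = 500, 30, 3
factors = rng.normal(size=(T, K))
loadings = rng.normal(size=(K, N))
returns = factors @ loadings + rng.normal(scale=0.5, size=(T, N))

pca = PCA(n_components=K).fit(returns)                  # factor proxy
residuals = returns - pca.inverse_transform(pca.transform(returns))
residuals /= residuals.std(axis=0)                      # standardize

gl = GraphicalLasso(alpha=0.05).fit(residuals)
sparsity = np.mean(np.isclose(gl.precision_, 0.0))
print(f"share of zero entries in the residual precision matrix: {sparsity:.2f}")
```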
By: | Toma\v{z} Fleischman; Paolo Dini |
Abstract: | The increasingly complex economic and financial environment in which we live makes the management of liquidity in payment systems, and in the economy in general, a persistent challenge. New technologies make it possible to address this challenge through alternative solutions that complement and strengthen existing payment systems. For example, the interbank balancing method can also be applied to private payment systems, complementary currencies, and trade credit clearing systems to provide better liquidity and risk management. In this paper we introduce the concept of a balanced payment system and demonstrate the effects of balancing on a small example. We show how to construct a balanced payment subsystem that can be settled in full and can therefore be removed from the payment system, achieving liquidity savings and resolving payment gridlock. We also briefly introduce a generalization of a payment system, and of the method for balancing it, in the form of a specific application (Tetris Core Technologies), whose wider adoption could contribute to the financial stability of the whole economy and to better management of liquidity and risk. |
Date: | 2020–11 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.03517&r=all |
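The simplest balanced subsystem is a cycle of obligations: settling it in full changes no participant's net position, so it can be removed to save liquidity. The toy example below is my own illustration, not the Tetris Core Technologies algorithm.

```python
# A toy illustration: removing a cycle of obligations leaves net positions
# unchanged, so the cycle is a balanced subsystem that can be settled in full.
import numpy as np

obligations = np.array([[0.0, 8.0, 0.0],   # obligations[i, j]: i owes j
                        [0.0, 0.0, 5.0],
                        [3.0, 0.0, 0.0]])

cycle = [0, 1, 2]                                       # 0 -> 1 -> 2 -> 0
edges = list(zip(cycle, cycle[1:] + cycle[:1]))
flow = min(obligations[i, j] for i, j in edges)         # settleable amount

net_before = obligations.sum(axis=0) - obligations.sum(axis=1)
for i, j in edges:
    obligations[i, j] -= flow                           # settle the cycle
net_after = obligations.sum(axis=0) - obligations.sum(axis=1)

print(flow, np.allclose(net_before, net_after))         # 3.0 True
```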
By: | Hartwig, Benny |
Abstract: | This paper investigates how the ordering of variables affects properties of the time-varying covariance matrix in the Cholesky multivariate stochastic volatility model. It establishes that systematically different dynamic restrictions are imposed when the ratio of volatilities is time-varying. Simulations demonstrate that estimated covariance matrices diverge more when volatility clusters idiosyncratically, and it is illustrated that this property matters for empirical applications. Specifically, alternative estimates of the evolution of U.S. systematic monetary policy and of inflation-gap persistence indicate that conclusions may critically hinge on the selected ordering of variables. The dynamic correlation Cholesky multivariate stochastic volatility model is proposed as a robust alternative. |
Keywords: | Model uncertainty,Multivariate stochastic volatility,Dynamic correlations,Monetary policy,Structural VAR |
JEL: | C11 C32 E32 E52 |
Date: | 2020 |
URL: | http://d.repec.org/n?u=RePEc:zbw:vfsc20:224528&r=all |
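For orientation, the Cholesky multivariate stochastic volatility parameterization can be written schematically as follows (assumed notation):

```latex
\[
\Sigma_t \;=\; A^{-1} D_t\, A^{-\top},
\qquad
D_t = \operatorname{diag}\!\big(e^{h_{1,t}}, \dots, e^{h_{n,t}}\big),
\]
% A: unit lower-triangular; h_{i,t}: log-volatilities. Because A is triangular,
% the implied covariance dynamics depend on which variable occupies which row,
% so reordering the variables changes the dynamic restrictions whenever the
% volatility ratios e^{h_{i,t}} / e^{h_{j,t}} move over time.
```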
By: | Viktor Stojkoski; Trifce Sandev; Lasko Basnarkov; Ljupco Kocarev; Ralf Metzler |
Abstract: | Classical option pricing schemes assume that the value of a financial asset follows a geometric Brownian motion (GBM). However, a growing body of studies suggests that a simple GBM trajectory is not an adequate representation of asset dynamics, owing to irregularities found when comparing its properties with empirical distributions. As a solution, we develop a generalisation of GBM in which the introduction of a memory kernel critically determines the behavior of the stochastic process. We find general expressions for the moments, log-moments, and the expectation of the periodic log-returns, and obtain the corresponding probability density functions using the subordination approach. In particular, we consider subdiffusive GBM (sGBM), tempered sGBM, a mix of GBM and sGBM, and a mix of sGBMs. We use the resulting generalised GBM (gGBM) to examine the empirical performance of a selected group of kernels in the pricing of European call options. Our results indicate that the performance of a kernel ultimately depends on the maturity of the option and its moneyness. |
Date: | 2020–10 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2011.00312&r=all |
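The subordination construction mentioned in the abstract can be summarized schematically (assumed notation): the generalized process is an ordinary GBM run on a random operational clock whose law is set by the memory kernel.

```latex
\[
X(t) \;=\; Y\big(\mathcal{T}(t)\big),
\qquad
dY(\tau) = \mu\, Y(\tau)\, d\tau + \sigma\, Y(\tau)\, dB(\tau),
\]
% \mathcal{T}(t): an inverse subordinator (the operational time). Different
% choices of its memory kernel yield sGBM, tempered sGBM, or mixtures thereof.
```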
By: | Jérôme Lelong (DAO - Données, Apprentissage et Optimisation - LJK - Laboratoire Jean Kuntzmann - Inria - CNRS - Université Grenoble Alpes - Grenoble INP); Zineb El Filali Ech-Chafiq (Natixis Asset Management; DAO - Données, Apprentissage et Optimisation - LJK - Laboratoire Jean Kuntzmann - Inria - CNRS - Université Grenoble Alpes - Grenoble INP); Adil Reghai (Natixis Asset Management) |
Abstract: | Many pricing problems boil down to the computation of a high-dimensional integral, which is usually estimated using Monte Carlo. The accuracy of a Monte Carlo estimator with M simulations is of order σ/√M, meaning that its rate of convergence is immune to the dimension of the problem. However, convergence can be relatively slow depending on the standard deviation σ of the function to be integrated. To address this, one performs variance reduction techniques such as importance sampling, stratification, or control variates. In this paper, we study two approaches for improving the convergence of Monte Carlo using neural networks. The first approach relies on the fact that many high-dimensional financial problems are of low effective dimension [15]. We present a method to reduce the dimension of such problems so as to keep only the necessary variables; the integration can then be done using fast numerical integration techniques such as Gaussian quadrature. The second approach consists in building an automatic control variate using neural networks: we learn the function to be integrated (which incorporates the diffusion model plus the payoff function) in order to build a network that is highly correlated with it. As the network we use can be integrated exactly, we can use it as a control variate. |
Date: | 2020–11–05 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02891798&r=all |
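The control-variate mechanism itself fits in a few lines; the sketch below uses a closed-form control (the underlying asset, whose expectation is known exactly) instead of the paper's neural network, with a toy one-period lognormal model as an assumption.

```python
# A minimal control-variate sketch: estimate E[(S - K)+] under a lognormal
# model, using the asset itself as a control with known mean E[S] = s0.
import numpy as np

rng = np.random.default_rng(5)
M, s0, sigma, strike = 100_000, 100.0, 0.2, 100.0

G = rng.normal(size=M)
S = s0 * np.exp(-0.5 * sigma**2 + sigma * G)     # one-period lognormal, r = 0
payoff = np.maximum(S - strike, 0.0)

beta = np.cov(payoff, S)[0, 1] / np.var(S)       # variance-minimizing coefficient
plain = payoff.mean()
with_cv = payoff.mean() - beta * (S.mean() - s0) # recentre with the known mean
print(plain, with_cv)
```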
By: | Loann Desboulets (AMSE - Aix-Marseille Sciences Economiques - EHESS - École des hautes études en sciences sociales - AMU - Aix Marseille Université - ECM - École Centrale de Marseille - CNRS - Centre National de la Recherche Scientifique) |
Abstract: | This paper is devoted to the practical use of the Manifold Selection method presented in Desboulets (2020). In the first part, I present an application to financial data. The data are continuous futures contracts on commodities: multivariate time series covering the period 1985-2020. Representing correlations in financial data as graphs is a common task, useful in finance for risk assessment. However, these graphs are often too complex and involve many small connections, so they can be simplified using variable selection to remove the small correlations. Here, I use Manifold Selection to build sparse graphical models. Non-linear manifolds can represent interconnected markets where the major drivers of prices are unobserved. The results indicate that the market is more strongly interconnected under non-linear manifold selection than under linear graphical models. I also propose a new method for filling missing values in time series data; a simulation shows that the method performs well in the case of several consecutive missing values. |
Keywords: | Non-parametric,Non-linear Manifolds,Variable Selection,Neural Networks |
Date: | 2020–11–03 |
URL: | http://d.repec.org/n?u=RePEc:hal:wpaper:hal-02986982&r=all |