on Forecasting |
By: | Vladimir Pyrlik; Pavel Elizarov; Aleksandra Leonova |
Abstract: | We assess the performance of selected machine learning algorithms (lasso, random forest, gradient boosting, and long short-term memory) in forecasting the daily realized volatility of returns of selected top stocks in the Russian stock market, in comparison with a heterogeneous autoregressive realized volatility benchmark, in 2018-2020. We seek to improve the predictive power of the models by including various economic indicators that carry information about future volatility. We find that lasso delivers a good combination of easy implementation and forecast precision. The other algorithms require fine-tuning and frequent re-training, without which they often fail to outperform the benchmark. Only the basic lagged log-RV values are significant explanatory variables in terms of the benchmark's in-sample quality. Many mixed-frequency economic indicators nevertheless improve the predictive power of lasso, including calendar and overnight effects, financial spillovers from local and global markets, and various macroeconomic indicators. |
Keywords: | heterogeneous autoregressive model; machine learning; lasso; gradient boosting; random forest; long short-term memory; realized volatility; Russian stock market; mixed-frequency data; |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:cer:papers:wp713&r= |
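The heterogeneous autoregressive (HAR-RV) benchmark the paper uses regresses realized volatility on its daily, weekly, and monthly lagged averages. A minimal numpy sketch of that benchmark on simulated log-RV data (the series, lag choices, and OLS fit here are illustrative stand-ins, not the authors' implementation; their lasso variant would add the mixed-frequency predictors to the regressor matrix and shrink coefficients):

```python
import numpy as np

def har_features(rv, max_lag=22):
    """Build HAR-RV regressors: intercept, daily lag, weekly (5-day)
    and monthly (22-day) averages of lagged realized volatility."""
    rows, y = [], []
    for t in range(max_lag, len(rv)):
        rows.append([1.0, rv[t - 1], rv[t - 5:t].mean(), rv[t - 22:t].mean()])
        y.append(rv[t])
    return np.array(rows), np.array(y)

# simulate a persistent series as a stand-in for real daily log-RV
rng = np.random.default_rng(0)
log_rv = np.zeros(500)
for t in range(1, 500):
    log_rv[t] = 0.9 * log_rv[t - 1] + 0.1 * rng.standard_normal()

X, y = har_features(log_rv)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)  # OLS fit of the HAR benchmark

# one-step-ahead forecast from the most recent lag window
x_last = np.array([1.0, log_rv[-1], log_rv[-5:].mean(), log_rv[-22:].mean()])
forecast = x_last @ beta
```

Replacing the `lstsq` fit with a penalized regression over a wider regressor set is the natural route to the lasso specification the abstract describes.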
By: | Filip Stanek |
Abstract: | It is common practice to split a time-series into in-sample and pseudo out-of-sample segments and to estimate the out-of-sample loss of a given statistical model by evaluating forecasting performance over the pseudo out-of-sample segment. We propose an alternative estimator of the out-of-sample loss which, contrary to conventional wisdom, utilizes both the measured in-sample and out-of-sample performance via a carefully constructed system of affine weights. We prove that, provided the time-series is stationary, the proposed estimator is the best linear unbiased estimator of the out-of-sample loss and outperforms the conventional estimator in terms of sampling variance. Applying the optimal estimator to Diebold-Mariano type tests of predictive ability leads to a substantial power gain without worsening finite-sample level distortions. An extensive evaluation on real-world time-series from the M4 forecasting competition confirms the superiority of the proposed estimator and also demonstrates substantial robustness to violation of the underlying stationarity assumption. |
Keywords: | loss estimation; forecast evaluation; cross-validation; model selection; |
JEL: | C22 C52 C53 |
Date: | 2021–11 |
URL: | http://d.repec.org/n?u=RePEc:cer:papers:wp712&r= |
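The idea of combining in-sample and pseudo out-of-sample losses through affine weights can be sketched as follows. The paper derives the variance-minimizing weight system; the scalar weight `w` below is a hypothetical placeholder for illustration only, with `w = 0` recovering the conventional estimator that uses the out-of-sample segment alone:

```python
import numpy as np

def affine_loss_estimate(losses, split, w=0.0):
    """Affine combination of the mean in-sample loss and the mean
    pseudo-out-of-sample loss. w = 0.0 reproduces the conventional
    estimator; the paper's optimal weights are not reproduced here."""
    l_in = np.mean(losses[:split])
    l_out = np.mean(losses[split:])
    return w * l_in + (1.0 - w) * l_out

# stand-in per-period forecast losses (e.g. squared errors)
rng = np.random.default_rng(1)
losses = rng.exponential(1.0, size=200)

conventional = affine_loss_estimate(losses, split=150, w=0.0)
combined = affine_loss_estimate(losses, split=150, w=0.2)
```

Under stationarity the in-sample losses carry information about the same population loss, which is why a weighted combination can reduce sampling variance relative to using the out-of-sample mean alone.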
By: | Mestiri, Sami |
Abstract: | Bitcoin has received a lot of attention from both investors and analysts, as it has the highest market capitalization in the cryptocurrency market. The use of parametric GARCH models to characterise the volatility of Bitcoin returns is widely observed in the empirical literature. In this paper, we consider an alternative approach involving non-parametric methods to model and forecast Bitcoin return volatility. We show that the out-of-sample volatility forecast of the non-parametric GARCH model yields superior performance relative to an extensive class of parametric GARCH models. The improvement in forecasting accuracy of Bitcoin return volatility based on the non-parametric GARCH model suggests that this method offers an attractive and viable alternative to the commonly used parametric GARCH models. |
Keywords: | Bitcoin; volatility; GARCH; Nonparametric; Forecasting. |
JEL: | C14 C53 C58 |
Date: | 2021–12–13 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:111116&r= |
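A nonparametric volatility model replaces the GARCH functional form with a smoother estimated from the data. As a simplified stand-in for the paper's nonparametric GARCH (which the abstract does not specify in detail), the sketch below estimates the one-lag volatility function E[r_t² | r_{t-1}] by Nadaraya-Watson kernel regression on simulated returns; the bandwidth and single-lag conditioning are illustrative choices:

```python
import numpy as np

def nw_volatility(returns, bandwidth=0.02):
    """Nadaraya-Watson kernel regression of squared returns on the
    lagged return: a one-lag nonparametric conditional-variance function."""
    x = returns[:-1]
    y = returns[1:] ** 2

    def m(x0):
        w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)  # Gaussian kernel weights
        return np.sum(w * y) / np.sum(w)

    return m

# stand-in for daily Bitcoin returns
rng = np.random.default_rng(2)
r = rng.standard_normal(1000) * 0.02

m = nw_volatility(r)
sigma2_hat = m(r[-1])  # one-step-ahead conditional variance forecast
```

A full nonparametric GARCH would also condition on the lagged conditional variance and estimate the surface iteratively, but the kernel-smoothing step shown here is the core ingredient.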
By: | Justin Dang (UCR); Aman Ullah (Department of Economics, University of California Riverside) |
Abstract: | This paper proposes a new combined semiparametric estimator of the conditional variance that takes the product of a parametric estimator and a nonparametric estimator based on machine learning. A popular kernel-based machine learning algorithm, known as the kernel regularized least squares estimator, is used to estimate the nonparametric component. We discuss how to estimate the semiparametric estimator using real data and how to use this estimator to make forecasts for the conditional variance. Simulations are conducted to show the dominance of the proposed estimator in terms of mean squared error. An empirical application using S&P 500 daily returns is analyzed, and the semiparametric estimator effectively forecasts future volatility. |
Keywords: | Conditional variance; Nonparametric estimator; Semiparametric models; Forecasting; Machine Learning |
JEL: | C01 C14 C51 |
Date: | 2021–01 |
URL: | http://d.repec.org/n?u=RePEc:ucr:wpaper:202204&r= |
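The product structure described above — a parametric variance estimate multiplied by a nonparametric correction — can be sketched with a small kernel regularized least squares (KRLS) fit. Everything concrete here is an assumption for illustration: the parametric first stage is a simple rolling variance rather than the authors' parametric model, the correction's covariate choice is ours, and the kernel and penalty settings are arbitrary:

```python
import numpy as np

def krls_fit(X, y, lam=1e-2, gamma=1.0):
    """Kernel regularized least squares with a Gaussian kernel:
    solve (K + lam*I) alpha = y, predict via kernel evaluations."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * d2)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

    def predict(X0):
        d2 = ((X0[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2) @ alpha

    return predict

rng = np.random.default_rng(3)
r = rng.standard_normal(300) * 0.01  # stand-in daily returns

# parametric first stage: rolling variance as a stand-in parametric estimator
window = 20
sigma2_p = np.array([r[t - window:t].var() for t in range(window, len(r))])
r2 = r[window:] ** 2

# nonparametric second stage: KRLS regression of standardized squared
# returns on the parametric variance (covariate choice is illustrative)
u = r2 / sigma2_p
X = sigma2_p[:, None]
g = krls_fit(X, u, lam=1e-1, gamma=50.0)

# combined semiparametric conditional variance: parametric * correction
sigma2_semi = sigma2_p * g(X)
```

The multiplicative form lets the nonparametric factor absorb whatever the parametric stage misspecifies, while the parametric stage keeps the estimator well-behaved where data are sparse.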