By: | Hugo Inzirillo; Ludovic De Villelongue |
Abstract: | Deep learning is playing an increasingly important role in time series analysis. We focus on time series forecasting with an attention-free mechanism, a more efficient framework, and propose a new architecture for predicting series whose time dependence linear models seem unable to capture. The proposed architecture is built from attention-free LSTM layers and outperforms linear models for conditional variance prediction. Our findings confirm the validity of the model, which also improves the predictive capacity of an LSTM while making the learning task more efficient. (An illustrative sketch follows this entry.)
Date: | 2022–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2209.09548&r= |
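The following PyTorch sketch shows, under stated assumptions, one way to combine an attention-free mixing layer with an LSTM for one-step forecasting. The causal AFT-simple-style mixing rule, the layer sizes, and the class names are hypothetical choices for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class AFTSimple(nn.Module):
    """Causal AFT-simple-style mixing: no quadratic query-key attention matrix."""
    def __init__(self, d_model):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)

    def forward(self, x):                                  # x: (batch, seq, d_model)
        q, k, v = self.q(x), self.k(x), self.v(x)
        k = k - k.amax(dim=1, keepdim=True)                # numerical stability; cancels in the ratio below
        num = (torch.exp(k) * v).cumsum(dim=1)             # running weighted sum of values
        den = torch.exp(k).cumsum(dim=1)                   # running normaliser
        return torch.sigmoid(q) * num / den                # gated, position-wise output

class AttentionFreeLSTM(nn.Module):
    """Hypothetical stack: attention-free mixing layer feeding an LSTM and a linear head."""
    def __init__(self, n_features=1, d_model=32, horizon=1):
        super().__init__()
        self.proj = nn.Linear(n_features, d_model)
        self.aft = AFTSimple(d_model)
        self.lstm = nn.LSTM(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, horizon)

    def forward(self, x):                                  # x: (batch, seq, n_features)
        h = self.aft(self.proj(x))
        out, _ = self.lstm(h)
        return self.head(out[:, -1])                       # forecast from the last step

model = AttentionFreeLSTM()
print(model(torch.randn(16, 30, 1)).shape)                 # torch.Size([16, 1])
```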
By: | Andrei Dubovik (CPB Netherlands Bureau for Economic Policy Analysis); Adam Elbourne (CPB Netherlands Bureau for Economic Policy Analysis); Bram Hendriks (CPB Netherlands Bureau for Economic Policy Analysis); Mark Kattenberg (CPB Netherlands Bureau for Economic Policy Analysis) |
Abstract: | We compare machine learning techniques to a large Bayesian VAR for nowcasting and forecasting world merchandise trade. We focus on how the predictive performance of the machine learning models changes when they have access to a big dataset with 11,017 data series on key economic indicators. The machine learning techniques used include lasso, random forest, and linear ensembles. We additionally compare the accuracy of the forecasts during and outside the Great Financial Crisis. We find no statistically significant differences in forecasting accuracy with respect to the technique, the size of the dataset (small or big), or the time period. (An illustrative sketch follows this entry.)
JEL: | F17 C53 C55 |
Date: | 2022–10 |
URL: | http://d.repec.org/n?u=RePEc:cpb:discus:441&r= |
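A minimal scikit-learn sketch of the kind of horse race the abstract describes, run on synthetic data: lasso and random forest forecasts combined into an equal-weight linear ensemble. The data dimensions, split point, and ensembling rule are assumptions, and the Bayesian VAR benchmark is omitted.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# synthetic stand-in for a wide monthly dataset: 240 periods x 500 indicators
X = rng.normal(size=(240, 500))
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=240)   # pseudo trade growth

split = 180                                                   # pseudo out-of-sample cut
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

models = {
    "lasso": LassoCV(cv=5).fit(X_tr, y_tr),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr),
}
preds = {name: m.predict(X_te) for name, m in models.items()}
# simple linear (equal-weight) ensemble of the individual forecasts
preds["linear_ensemble"] = np.column_stack(list(preds.values())).mean(axis=1)

for name, p in preds.items():
    rmse = float(np.sqrt(np.mean((p - y_te) ** 2)))
    print(f"{name:15s} RMSE = {rmse:.3f}")
```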
By: | Nghia Chu; Binh Dao; Nga Pham; Huy Nguyen; Hien Tran |
Abstract: | Predicting fund performance is beneficial to both investors and fund managers, and yet it is a challenging task. In this paper, we test whether deep learning models can predict fund performance more accurately than traditional statistical techniques. Fund performance is typically evaluated by the Sharpe ratio, which represents risk-adjusted performance and ensures meaningful comparability across funds. We calculate annualised Sharpe ratios from monthly return time series for more than 600 open-end mutual funds investing in listed large-cap equities in the United States. We find that long short-term memory (LSTM) and gated recurrent unit (GRU) deep learning methods, both trained with modern Bayesian optimization, forecast funds' Sharpe ratios more accurately than traditional statistical methods. An ensemble method that combines the LSTM and GRU forecasts achieves the best performance of all models. There is evidence that deep learning and ensembling offer promising solutions to the challenge of fund performance forecasting. (An illustrative sketch follows this entry.)
Date: | 2022–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2209.09649&r= |
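A short sketch of two ingredients mentioned in the abstract: the annualised Sharpe ratio computed from monthly returns, and an equal-weight combination of LSTM and GRU forecasters. Network sizes, the combination weights, and the omitted Bayesian optimization step are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np
import torch
import torch.nn as nn

def annualised_sharpe(monthly_returns, rf=0.0):
    """Annualised Sharpe ratio from a series of monthly returns."""
    excess = np.asarray(monthly_returns, dtype=float) - rf
    return np.sqrt(12) * excess.mean() / excess.std(ddof=1)

class SharpeRNN(nn.Module):
    """One recurrent trunk (LSTM or GRU) with a scalar regression head."""
    def __init__(self, cell="lstm", hidden=16):
        super().__init__()
        rnn_cls = nn.LSTM if cell == "lstm" else nn.GRU
        self.rnn = rnn_cls(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (funds, months, 1) of past returns
        out, _ = self.rnn(x)
        return self.head(out[:, -1]).squeeze(-1)

print(f"toy Sharpe: {annualised_sharpe([0.01, -0.02, 0.015, 0.03]):.2f}")

# equal-weight combination of the two recurrent forecasters
lstm, gru = SharpeRNN("lstm"), SharpeRNN("gru")
x = torch.randn(8, 36, 1)                    # 8 funds, 36 months of returns each
print((0.5 * (lstm(x) + gru(x))).shape)      # torch.Size([8])
```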
By: | Roberto Baviera; Pietro Manzoni |
Abstract: | A Recurrent Neural Network that operates on several time lags, called an RNN(p), is the natural generalization of an Autoregressive ARX(p) model. It is a powerful forecasting tool when different time scales influence a given phenomenon, as happens in the energy sector, where hourly, daily, weekly and yearly interactions coexist. The cost-effective BPTT (Backpropagation Through Time) is the industry-standard learning algorithm for RNNs. We prove that, when training RNN(p) models, other learning algorithms turn out to be much more efficient in terms of both time and space complexity. We also introduce a new learning algorithm, Tree Recombined Recurrent Learning, which leverages a tree representation of the unrolled network and appears to be even more effective. We present an application of RNN(p) models to power consumption forecasting on the hourly scale: experimental results demonstrate the efficiency of the proposed algorithm and the excellent predictive accuracy achieved by the selected model in both point and probabilistic forecasting of energy consumption. (An illustrative sketch of the RNN(p) structure follows this entry.)
Date: | 2022–09 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2209.01378&r= |
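To make the RNN(p) structure concrete, the sketch below implements a hidden state that depends on several lagged hidden states, in analogy with ARX(p). The lag set (1, 24, 168 hours), sizes, and single-output head are illustrative assumptions; the learning algorithms the paper compares (BPTT and Tree Recombined Recurrent Learning) are not implemented here.

```python
import torch
import torch.nn as nn

class RNNp(nn.Module):
    """Minimal RNN(p): the hidden state is fed by several past hidden states,
    one per lag, mirroring an ARX(p) structure (illustrative lags: 1, 24, 168)."""
    def __init__(self, n_features, hidden=16, lags=(1, 24, 168)):
        super().__init__()
        self.lags = lags
        self.w_in = nn.Linear(n_features, hidden)
        self.w_rec = nn.ModuleList([nn.Linear(hidden, hidden, bias=False) for _ in lags])
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, seq, n_features)
        batch, seq, _ = x.shape
        hidden = self.w_in.out_features
        h = [torch.zeros(batch, hidden) for _ in range(seq)]
        for t in range(seq):
            pre = self.w_in(x[:, t])
            for lag, w in zip(self.lags, self.w_rec):
                if t - lag >= 0:                 # add the contribution of each lagged state
                    pre = pre + w(h[t - lag])
            h[t] = torch.tanh(pre)
        return self.head(h[-1])                  # one-step-ahead point forecast

model = RNNp(n_features=3)
print(model(torch.randn(4, 200, 3)).shape)       # torch.Size([4, 1])
```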
By: | Mr. Yunhui Zhao; Yang Liu; Di Yang |
Abstract: | Inflation has been rising during the pandemic against the backdrop of supply chain disruptions and a multi-year boom in global owner-occupied house prices. We present some stylized facts pointing to house prices as a leading indicator of headline inflation in the U.S. and eight other major economies with fast-rising house prices. We then apply machine learning methods to forecast inflation in two housing components (rent and owner-occupied housing cost) of headline inflation and draw tentative inferences about the inflationary impact. Our results suggest that, for most of these countries, the housing components could make a relatively large and sustained contribution to headline inflation, as inflation is only just starting to reflect the higher house prices. Methodologically, for the vast majority of countries we analyze, machine-learning models outperform the VAR model, suggesting some potential value in incorporating such models into inflation forecasting. (An illustrative sketch follows this entry.)
Keywords: | Housing Price Inflation; Rent; Owner-Occupied Housing; Machine Learning; Forecast; machine-learning model; machine learning method; housing boom; forecasting result; Inflation; Housing prices; Housing; Consumer price indexes; Global; Europe; Australia and New Zealand; North America; Caribbean; VAR model
Date: | 2022–07–28 |
URL: | http://d.repec.org/n?u=RePEc:imf:imfwpa:2022/151&r= |
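As a rough illustration of a direct multi-step machine-learning forecast of a housing component, the sketch below fits a random forest on lagged synthetic house-price and rent series. The data-generating process, lag structure, horizon, and choice of model are assumptions, and the VAR benchmark comparison is not shown.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
# synthetic monthly panel: house-price growth leading rent inflation by ~12 months
n = 180
house = rng.normal(scale=0.5, size=n).cumsum() * 0.1
rent = 0.3 * np.concatenate([np.zeros(12), house[:-12]]) + rng.normal(scale=0.2, size=n)
df = pd.DataFrame({"house_price_growth": house, "rent_inflation": rent})

h, lags = 12, [1, 3, 6, 12]                       # direct 12-month-ahead forecast
feats = pd.DataFrame({f"{c}_lag{l}": df[c].shift(l) for c in df.columns for l in lags})
data = feats.assign(y=df["rent_inflation"].shift(-h)).dropna()

split = int(len(data) * 0.8)                      # chronological train/test split
train, test = data.iloc[:split], data.iloc[split:]
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(train.drop(columns="y"), train["y"])
pred = rf.predict(test.drop(columns="y"))
rmse = float(np.sqrt(np.mean((pred - test["y"].to_numpy()) ** 2)))
print(f"Random forest, direct {h}-step-ahead RMSE: {rmse:.3f}")
```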