on Forecasting |
By: | Ryan Zischke; Gael M. Martin; David T. Frazier; Donald S. Poskitt |
Abstract: | We investigate the performance and sampling variability of estimated forecast combinations, with particular attention given to the combination of forecast distributions. Unknown parameters in the forecast combination are optimized according to criterion functions based on proper scoring rules, which are chosen to reward the form of forecast accuracy that matters for the problem at hand, and forecast performance is measured using the out-of-sample expectation of said scoring rule. Our results provide novel insights into the behavior of estimated forecast combinations. Firstly, we show that, asymptotically, the sampling variability in the performance of standard forecast combinations is determined solely by estimation of the constituent models, with estimation of the combination weights contributing no sampling variability whatsoever, at first order. Secondly, we show that, if computationally feasible, forecast combinations produced in a single step -- in which the constituent model and combination function parameters are estimated jointly -- have superior predictive accuracy and lower sampling variability than standard forecast combinations -- where constituent model and combination function parameters are estimated in two steps. These theoretical insights are demonstrated numerically, both in simulation settings and in an extensive empirical illustration using a time series of S&P500 returns.
Keywords: | forecast combination, forecast combination puzzle, probabilistic forecasting, scoring rules, S&P500 forecasting, two-stage estimation |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:msh:ebswps:2022-6&r= |
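The two-step scheme this abstract contrasts with joint estimation can be sketched in miniature: fix the constituent forecast densities first, then choose the combination weight by optimising a proper scoring rule (here the log score). This is a hypothetical toy illustration, not the paper's implementation; all names, densities, and numbers are invented for exposition.

```python
import math
import random

random.seed(0)

def normal_pdf(y, mu, sigma):
    """Gaussian density, standing in for a constituent forecast distribution."""
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# Toy realisations; in the two-step scheme the constituent models would
# themselves be estimated in a first step -- here they are simply fixed.
ys = [random.gauss(0.0, 1.0) for _ in range(500)]

def avg_log_score(w):
    """Average log score of the linear pool w*N(0,1) + (1-w)*N(0,4)."""
    return sum(
        math.log(w * normal_pdf(y, 0.0, 1.0) + (1.0 - w) * normal_pdf(y, 0.0, 2.0))
        for y in ys
    ) / len(ys)

# Second step: choose the combination weight by maximising the scoring rule.
grid = [i / 100.0 for i in range(101)]
w_hat = max(grid, key=avg_log_score)
```

Because the toy data are drawn from the first constituent density, the estimated weight leans heavily toward it; the paper's point is about the (first-order negligible) sampling variability this second step contributes.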
By: | Bjoern Schulte-Tillman; Mawuli Segnon; Bernd Wilfling |
Abstract: | We propose four multiplicative-component volatility MIDAS models to disentangle short- and long-term volatility sources. Three of our models specify short-term volatility as Markov-switching processes. We establish statistical properties, covariance-stationarity conditions, and an estimation framework using regime-switching filter techniques. A simulation study shows the robustness of the estimates against several mis-specifications. An out-of-sample forecasting analysis with daily S&P500 returns and quarterly-sampled (macro)economic variables yields two major results. (i) Specific long-term variables in the MIDAS models significantly improve forecast accuracy (over the non-MIDAS benchmarks). (ii) We robustly find superior performance of one Markov-switching MIDAS specification (among a set of competitor models) when using the 'Term structure' as the long-term variable. |
Keywords: | MIDAS volatility modeling, Hierarchical hidden Markov models, Markov-switching, Forecasting, Model confidence sets
JEL: | C51 C53 C58 E44 |
Date: | 2022–06 |
URL: | http://d.repec.org/n?u=RePEc:cqe:wpaper:9922&r= |
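The long-term component of a multiplicative MIDAS volatility model aggregates a low-frequency (e.g. quarterly) variable into the daily volatility equation through a parsimonious lag polynomial. A minimal sketch of the commonly used beta-polynomial lag weights, with illustrative parameter values that are not taken from the paper:

```python
def beta_lag_weights(K, theta1=1.0, theta2=5.0):
    """Beta-polynomial lag weights often used in MIDAS aggregation.

    With theta1 = 1 and theta2 > 1, weights decline smoothly in the lag,
    placing more mass on the most recent low-frequency observations.
    """
    xs = [k / (K + 1.0) for k in range(1, K + 1)]
    raw = [x ** (theta1 - 1.0) * (1.0 - x) ** (theta2 - 1.0) for x in xs]
    total = sum(raw)
    return [r / total for r in raw]

def midas_component(macro_lags, weights):
    """Long-run component: weighted sum of K low-frequency observations."""
    return sum(w * x for w, x in zip(weights, macro_lags))
```

The weights sum to one by construction, so the long-run component is a convex combination of the lagged macro variable.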
By: | Syed Abul, Basher; Perry, Sadorsky |
Abstract: | Bitcoin has grown in popularity and has now attracted the attention of individual and institutional investors. Accurate Bitcoin price direction forecasts are important for determining the trend in Bitcoin prices and asset allocation. This paper addresses several unanswered questions. How important are business cycle variables like interest rates, inflation, and market volatility for forecasting Bitcoin prices? Does the importance of these variables change across time? Are the most important macroeconomic variables for forecasting Bitcoin prices the same as those for gold prices? To answer these questions, we utilize tree-based machine learning classifiers, along with traditional logit econometric models. The analysis reveals several important findings. First, random forests predict Bitcoin and gold price directions with a higher degree of accuracy than logit models. Prediction accuracy for bagging and random forests is between 75% and 80% for a five-day prediction. For 10-day to 20-day forecasts bagging and random forests record accuracies greater than 85%. Second, technical indicators are the most important features for predicting Bitcoin and gold price direction, suggesting some degree of market inefficiency. Third, oil price volatility is important for predicting Bitcoin and gold prices indicating that Bitcoin is a substitute for gold in diversifying this type of volatility. By comparison, gold prices are more influenced by inflation than Bitcoin prices, indicating that gold can be used as a hedge or diversification asset against inflation. |
Keywords: | forecasting; machine learning; random forests; Bitcoin; gold; inflation |
JEL: | C58 E44 G17 |
Date: | 2022–06–06 |
URL: | http://d.repec.org/n?u=RePEc:pra:mprapa:113293&r= |
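The classification task in this abstract, predicting the h-day-ahead direction of prices from technical indicators, can be illustrated with a deliberately simple stand-in: a single decision stump on a 20-day momentum feature, in place of the bagging and random-forest ensembles the paper uses. The price series, horizon, and feature below are all hypothetical.

```python
import random

random.seed(1)

# Toy price series with mild upward drift (hypothetical, not Bitcoin data).
prices = [100.0]
for _ in range(300):
    prices.append(prices[-1] * (1.0 + random.gauss(0.001, 0.02)))

horizon = 5  # five-day-ahead direction, the paper's shortest horizon
features, labels = [], []
for t in range(20, len(prices) - horizon):
    momentum = prices[t] / prices[t - 20] - 1.0  # a simple technical indicator
    features.append(momentum)
    labels.append(1 if prices[t + horizon] > prices[t] else 0)

# Train/test split; a single stump stands in for the tree ensembles.
split = len(features) // 2
train_f, train_l = features[:split], labels[:split]
test_f, test_l = features[split:], labels[split:]

best_thr, best_acc = None, -1.0
for thr in sorted(set(train_f)):
    acc = sum((1 if f > thr else 0) == l for f, l in zip(train_f, train_l)) / len(train_l)
    if acc > best_acc:
        best_thr, best_acc = thr, acc

test_acc = sum((1 if f > best_thr else 0) == l for f, l in zip(test_f, test_l)) / len(test_l)
```

A random forest is, roughly, an average over many such trees grown on bootstrap samples with random feature subsets, which is what drives the 75-85% accuracies reported in the abstract.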
By: | Alessandro Giovannelli; Luca Mattia Rolla |
Abstract: | We compare the pseudo-real-time forecasting performance of two factor models for a large set of macroeconomic and financial time series: (i) the standard principal-component model used by Stock and Watson (2002); (ii) the factor model with martingale difference errors (FMMDE) introduced by Lee and Shao (2018). The FMMDE makes it possible to retrieve a transformation of the original series so that the resulting variables can be partitioned into distinct groups according to whether or not they are conditionally mean independent of past information. In terms of prediction, this feature of the model allows us to achieve optimal results (in the mean squared error sense) as dimension reduction is performed. In a pseudo-real-time forecasting exercise based on a large dataset of monthly macroeconomic and financial time series for the U.S. economy, the FMMDE performs comparably to the Stock-Watson model at short forecasting horizons, whereas at long horizons it displays superior performance. These results are particularly evident for the Output & Income, Labor Market, and Consumption sectors.
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.10256&r= |
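Both models in this comparison build on extracting common factors as leading principal components of a large panel. A minimal sketch of the core computation, the leading eigenvector of a covariance matrix via power iteration, on a toy 2x2 example (the matrix is invented for illustration):

```python
def leading_factor(cov, iters=500):
    """Leading principal component of a covariance matrix via power iteration."""
    n = len(cov)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(cov[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Example: two series where the first dominates the common variation.
cov = [[2.0, 0.5],
       [0.5, 1.0]]
pc1 = leading_factor(cov)
```

In a real factor-model application the covariance matrix would be the sample covariance of the (standardised) panel, and several leading eigenvectors would be retained as factor loadings.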
By: | German Rodikov; Nino Antulov-Fantulin |
Abstract: | Volatility models of price fluctuations are well studied in the econometrics literature, with more than 50 years of theoretical and empirical findings. The recent advancements in neural networks (NN) in the deep learning field have naturally offered novel econometric modeling tools. However, there is still a lack of explainability and stylized knowledge about volatility modeling with neural networks; the use of stylized facts could help improve the performance of the NN for the volatility prediction task. In this paper, we investigate how the knowledge about the "physics" of the volatility process can be used as an inductive bias to design or constrain a cell state of long short-term memory (LSTM) for volatility forecasting. We introduce a new type of $\sigma$-LSTM cell with a stochastic processing layer, design its learning mechanism and show good out-of-sample forecasting performance. |
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.07022&r= |
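One piece of the volatility "physics" the abstract refers to, persistence and exponential smoothing of squared returns, has a classical closed form that an inductive bias in an LSTM cell can mimic. The sketch below shows the standard EWMA variance recursion (the RiskMetrics lambda = 0.94 convention), not the paper's sigma-LSTM itself:

```python
def ewma_volatility(returns, lam=0.94):
    """Exponentially weighted variance recursion over a return series.

    Captures volatility clustering and persistence: today's variance is a
    smoothed blend of yesterday's variance and today's squared return.
    """
    var = returns[0] ** 2  # initialise with the first squared return
    vols = []
    for r in returns:
        var = lam * var + (1.0 - lam) * r * r
        vols.append(var ** 0.5)
    return vols
```

An exponentially smoothed recurrent cell generalises this recursion by letting the smoothing behaviour be learned rather than fixed at a single lambda.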
By: | Surbhi Bhatia (Independent Researcher); Manish K. Singh (Department of Humanities and Social Sciences Indian Institute of Technology Roorkee and XKDR Forum) |
Abstract: | Using bankruptcy filings under the new Insolvency and Bankruptcy Code (2016), we investigate the effect of firm characteristics and balance sheet variables on the forecast of one-year-ahead default for Indian manufacturing firms. We compare traditional discriminant analysis and logistic regression models with a state-of-the-art variable selection technique, the least absolute shrinkage and selection operator (LASSO), and with unsupervised variable selection techniques, to identify key predictive variables. Our findings suggest that the ratios considered important by Altman (1968) still hold relevance for the prediction of default, regardless of the technique applied for variable selection. We find cash to current liabilities (a liquidity measure) to be an additional robust and significant predictor of default. In terms of predictive accuracy, the reduced-form multivariate discriminant analysis model used in Altman (1968) performs on par with the more advanced econometric specifications for both in-sample and full-sample default prediction.
JEL: | C53 G17 G32 G33 |
Date: | 2022–06 |
URL: | http://d.repec.org/n?u=RePEc:anf:wpaper:12&r= |
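The Altman (1968) benchmark the abstract refers to is a linear discriminant over five accounting ratios with published coefficients and cut-offs; a direct transcription follows. The example inputs are invented, but the coefficients and the 1.81/2.99 zone boundaries are the classical published values.

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Altman (1968) Z-score for manufacturing firms.

    Inputs are the five ratios: working capital/total assets, retained
    earnings/total assets, EBIT/total assets, market value of equity/total
    liabilities, and sales/total assets.
    """
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 0.999 * sales_ta)

def zone(z):
    """Classical cut-offs: distress below 1.81, safe above 2.99."""
    if z < 1.81:
        return "distress"
    if z > 2.99:
        return "safe"
    return "grey"
```

A LASSO-style selection, as in the paper, would instead estimate which ratios (including cash to current liabilities) survive an L1 penalty on the coefficients.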
By: | Hajdini, Ina (Federal Reserve Bank of Cleveland); Kurmann, Andre (Drexel University) |
Abstract: | This paper shows that in the presence of Markov regime shifts, Full Information Rational Expectations (FIRE) models lead to predictable, regime-dependent forecast errors. More generally, regime shifts imply that ex-post forecast error regressions display waves of over- and under-reaction to current information over rolling sample windows. Using survey-based forecast data on macroeconomic aggregates, we confirm the existence of such waves. We then estimate a medium-scale DSGE model with regime shifts in the aggressiveness of monetary policy on U.S. data to assess the quantitative importance of the proposed mechanism. Despite the assumption of FIRE, simulated data conditional on the estimated sequence of regime realizations generate ex-post forecast error predictability consistent with reduced-form regressions from the existing literature and large waves of over- and under-reaction across subsamples. Hence, predictability of ex-post forecast errors is neither a sufficient condition to reject FIRE nor by itself a good metric to test alternative theories of expectations formation.
Keywords: | Full-information Rational Expectations; Markov Regime Shifts; Forecasting Errors; Waves of Over- and Under-Reaction; Survey of Professional Forecasters |
JEL: | C53 E37 |
Date: | 2022–05–27 |
URL: | http://d.repec.org/n?u=RePEc:ris:drxlwp:2022_005&r= |
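The core mechanism, regime shifts producing systematically signed forecast errors even from a sensible forecast, can be seen in a stripped-down simulation. This is an illustrative toy (a two-state mean-shift process with a forecaster using the unconditional mean), not the paper's DSGE model:

```python
import random

random.seed(2)

# Two-state Markov chain for the mean of a series.
P = {0: 0.95, 1: 0.95}        # probability of remaining in the current regime
means = {0: -1.0, 1: 1.0}

state, states, ys = 0, [], []
for _ in range(5000):
    if random.random() > P[state]:
        state = 1 - state       # switch regime
    states.append(state)
    ys.append(means[state] + random.gauss(0.0, 0.5))

# A forecaster using the unconditional mean makes errors that are
# predictable conditional on the (unobserved) regime.
uncond_mean = sum(ys) / len(ys)
errors = [y - uncond_mean for y in ys]

err0 = sum(e for e, s in zip(errors, states) if s == 0) / states.count(0)
err1 = sum(e for e, s in zip(errors, states) if s == 1) / states.count(1)
```

Within regime 0 the average error is negative and within regime 1 it is positive, so rolling-window error regressions would show alternating waves of apparent over- and under-reaction, exactly the pattern the paper argues is compatible with FIRE.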
By: | George Kapetanios; Fotis Papailias |
Abstract: | This technical report presents a generalised framework for assessing the predictive content of ONS real-time indicators along two dimensions: (i) individual predictors (i.e. variable by variable), and (ii) variable selection models built using machine learning techniques. The evaluation is done on a nowcasting basis (h = 0). Simple correlation and predictive power scores are included, as well as best subset selection, penalised regressions, random forests and principal components.
Keywords: | factor models, nowcasting, penalised regression, variable selection |
JEL: | C53 E37 |
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:nsr:escoet:escoe-tr-15&r= |
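Among the methods the report lists, penalised regression has the simplest closed form in the single-predictor case: the ridge slope shrinks the least-squares slope toward zero by inflating the denominator. A minimal sketch with invented numbers:

```python
def ridge_coef(xs, ys, lam=1.0):
    """Ridge slope for a single centred predictor:
    beta = sum(x*y) / (sum(x^2) + lam). lam = 0 recovers OLS."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

# Hypothetical centred indicator and target values.
xs = [-2.0, -1.0, 0.0, 1.0, 2.0]
ys = [-1.9, -1.1, 0.1, 0.9, 2.0]
beta_ols = ridge_coef(xs, ys, lam=0.0)
beta_ridge = ridge_coef(xs, ys, lam=5.0)
```

With many candidate indicators the same penalty acts on all coefficients jointly, which is what makes penalised regression usable as a variable-screening device in nowcasting.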
By: | Nicole Koenigstein |
Abstract: | The growth of machine-readable data in finance, such as alternative data, requires new modeling techniques that can handle non-stationary and non-parametric data. Due to the underlying causal dependence and the size and complexity of the data, we propose a new modeling approach for financial time series data, the $\alpha_{t}$-RIM (recurrent independent mechanism). This architecture makes use of key-value attention to integrate top-down and bottom-up information in a context-dependent and dynamic way. To model the data in such a dynamic manner, the $\alpha_{t}$-RIM utilizes an exponentially smoothed recurrent neural network, which can model non-stationary time series data, combined with a modular and independent recurrent structure. We apply our approach to the closing prices of three selected stocks of the S\&P 500 universe as well as their news sentiment scores. The results suggest that the $\alpha_{t}$-RIM is capable of reflecting the causal structure between stock prices and news sentiment, as well as the seasonality and trends. Consequently, this modeling approach markedly improves the generalization performance, that is, the prediction of unseen data, and outperforms state-of-the-art networks such as long short-term memory models.
Date: | 2022–03 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.01639&r= |
By: | George Kapetanios; Fotis Papailias |
Abstract: | National statistics offices and similar institutions often produce country indices that are based on the aggregation of a large number of disaggregate series. In some cases these disaggregate series are also published and are therefore available for further research. In other cases the disaggregate series are available only for in-house purposes, and whether more indices could be extracted from them is still under investigation. This report is concerned with the very specific task of comparing gains in nowcasting from using a single aggregate variable/index versus the full use of all the available disaggregate indices. This approach should be viewed as part of an overall dataset assessment framework in which our aim is to assist the applied statistician in judging whether a novel dataset of time series could be useful to economics researchers.
Keywords: | factor models, neural networks, nowcasting, penalised regression, support vector regression |
JEL: | C53 E37 |
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:nsr:escoet:escoe-tr-17&r= |
By: | Philipp Ratz |
Abstract: | Artificial Neural Networks (ANN) have been employed for a range of modelling and prediction tasks using financial data. However, evidence on their predictive performance, especially for time-series data, has been mixed. Whereas some applications find that ANNs provide better forecasts than more traditional estimation techniques, others find that they barely outperform basic benchmarks. The present article aims to provide guidance as to when the use of ANNs might yield better results in a general setting. We propose a flexible nonparametric model and extend existing theoretical results on the rate of convergence to include the popular Rectified Linear Unit (ReLU) activation function, comparing the rate to that of other nonparametric estimators. Finite-sample properties are then studied with the help of Monte Carlo simulations. An application to estimating the Value-at-Risk of portfolios of varying sizes is also considered to show the practical implications.
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.07101&r= |
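The ReLU networks studied in this paper are piecewise-linear function approximators; a one-hidden-layer network is just a weighted sum of hinge functions. A tiny self-contained sketch, including the textbook fact that two ReLU units represent |x| exactly (parameters chosen by hand, nothing here is estimated):

```python
def relu(x):
    """Rectified Linear Unit activation."""
    return x if x > 0.0 else 0.0

def relu_net(x, params):
    """One-hidden-layer ReLU network: sum_j c_j * relu(a_j * x + b_j)."""
    return sum(c * relu(a * x + b) for a, b, c in params)

# Exact representation of |x| with two units: |x| = relu(x) + relu(-x).
abs_params = [(1.0, 0.0, 1.0), (-1.0, 0.0, 1.0)]
```

Convergence-rate results of the kind the paper extends quantify how many such units are needed to approximate smooth regression functions to a given accuracy.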
By: | Dimitar Haralampiev Popov (University of National and World Economy, Department of Management, Sofia, Bulgaria) |
Abstract: | Due to their important place in an economy, the viability of small and medium enterprises (SMEs) is the focus of numerous scientific studies and of European and national programs. One of the most widely used viability prediction models is Altman's Z-score. Altman's classical models are not suitable for all situations, though. The sheer number of SMEs in an economy presents another challenge to researchers. One possible solution to this issue is to use data mining tools that can lead to new knowledge discovery. Data mining is the result of a natural evolution of information technology. The cross-industry standard process for data mining (CRISP-DM) is a methodological framework for researching large amounts of data. This paper aims to outline the characteristics of Altman's Z-score and CRISP-DM, and to propose combining them into a methodology for predicting SMEs' viability.
Keywords: | Altman Z-score, Data mining, CRISP-DM, SMEs, Bulgaria |
JEL: | M10 P12 C38 C55 |
Date: | 2022–06 |
URL: | http://d.repec.org/n?u=RePEc:sko:wpaper:bep-2022-04&r= |
By: | Nick Arnosti |
Abstract: | This paper introduces a unified framework for stable matching, which nests the traditional definition of stable matching in finite markets and the continuum definition of stable matching from Azevedo and Leshno (2016) as special cases. Within this framework, I identify a novel continuum model, which makes individual-level probabilistic predictions. This new model always has a unique stable outcome, which can be found using an analog of the Deferred Acceptance algorithm. The crucial difference between this model and that of Azevedo and Leshno (2016) is that they assume that the amount of student interest at each school is deterministic, whereas my proposed alternative assumes that it follows a Poisson distribution. As a result, this new model accurately predicts the simulated distribution of cutoffs, even for markets with only ten schools and twenty students. This model generates new insights about the number and quality of matches. When schools are homogeneous, it provides upper and lower bounds on students' average rank, which match results from Ashlagi, Kanoria and Leshno (2017) but apply to more general settings. This model also provides clean analytical expressions for the number of matches in a platform pricing setting considered by Marx and Schummer (2021). |
Date: | 2022–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2205.12881&r= |
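The Deferred Acceptance algorithm, whose continuum analog the paper develops, can be stated compactly for the finite case. The sketch below is the standard student-proposing version on an invented three-student, two-school example, not the paper's continuum variant:

```python
def deferred_acceptance(student_prefs, school_prefs, capacity):
    """Student-proposing Deferred Acceptance.

    Preference lists rank ids from most to least preferred; returns a dict
    mapping each school to the set of students it finally holds. Students
    who exhaust their lists remain unmatched.
    """
    rank = {s: {st: i for i, st in enumerate(prefs)}
            for s, prefs in school_prefs.items()}
    next_choice = {st: 0 for st in student_prefs}
    held = {s: set() for s in school_prefs}
    free = list(student_prefs)
    while free:
        st = free.pop()
        if next_choice[st] >= len(student_prefs[st]):
            continue  # student has exhausted their preference list
        s = student_prefs[st][next_choice[st]]
        next_choice[st] += 1
        held[s].add(st)
        if len(held[s]) > capacity[s]:
            # Reject the lowest-ranked held student; they propose again later.
            worst = max(held[s], key=lambda x: rank[s][x])
            held[s].remove(worst)
            free.append(worst)
    return held

students = {"i": ["A", "B"], "j": ["A", "B"], "k": ["A", "B"]}
schools = {"A": ["i", "j", "k"], "B": ["i", "j", "k"]}
match = deferred_acceptance(students, schools, {"A": 1, "B": 1})
```

In the paper's Poisson continuum model, the deterministic "amount of interest" each school receives in the Azevedo-Leshno framework is replaced by Poisson-distributed demand, and an analog of this same proposal-rejection process pins down the unique stable cutoffs.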
By: | Joop Age Harm Adema; Cevat Giray Aksoy; Panu Poutvaara |
Abstract: | In this paper, we present theory and global evidence on how mobile internet access affects desire and plans to emigrate. Our theory predicts that mobile internet access increases desire and plans to emigrate. Our empirical analysis combines survey data on 617,402 individuals from 2,120 subnational districts in 112 countries with data on worldwide 3G mobile internet rollout from 2008 to 2018. We show that an increase in mobile internet access increases the desire and plans to emigrate. Instrumenting 3G rollout with pre-existing 2G infrastructure suggests that the effects are causal. The effect on the desire to emigrate is particularly strong in high-income countries and for above-median-income individuals in lower-middle-income countries. In line with our theory, an important mechanism appears to be that access to the mobile internet lowers the cost of acquiring information on potential destinations. In addition to this, increased internet access reduces perceived material well-being and trust in government. Using municipal-level data from Spain, we also document that 3G rollout increased actual emigration flows. |
Keywords: | migration aspirations, migration intentions, internet access |
JEL: | F20 L86 D83 |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:ces:ceswps:_9758&r= |
By: | Thomas Epper (IÉSEG School Of Management [Puteaux], LEM - Laboratoire d'Economie et de Management - UNS - Université Nice Sophia Antipolis (... - 2019) - COMUE UCA - COMUE Université Côte d'Azur (2015-2019) - CNRS - Centre National de la Recherche Scientifique - UCA - Université Côte d'Azur, KU - University of Copenhagen = Københavns Universitet); Ernst Fehr (KU - University of Copenhagen = Københavns Universitet); Kristoffer Balle Hvidberg (KU - University of Copenhagen = Københavns Universitet); Claus Thustrup Kreiner (KU - University of Copenhagen = Københavns Universitet); Soren Leth-Petersen (KU - University of Copenhagen = Københavns Universitet); Gregers Nytoft Rasmussen (KU - University of Copenhagen = Københavns Universitet) |
Abstract: | Understanding who commits crime and why is a key topic in social science and important for the design of crime prevention policy. In theory, people who commit crime face different social and economic incentives for criminal activity than other people, or they evaluate the costs and benefits of crime differently because they have different preferences. Empirical evidence on the role of preferences is scarce. Theoretically, risk-tolerant, impatient, and self-interested people are more prone to commit crime than risk-averse, patient, and altruistic people. We test these predictions with a unique combination of data where we use incentivized experiments to elicit the preferences of young men and link these experimental data to their criminal records. In addition, our data allow us to control extensively for other characteristics such as cognitive skills, socioeconomic background, and self-control problems. We find that preferences are strongly associated with actual criminal behavior. Impatience and, in particular, risk tolerance are still strong predictors when we include the full battery of controls. Crime propensities are 8 to 10 percentage points higher for the most risk-tolerant individuals compared to the most risk-averse. This effect is half the size of the effect of cognitive skills, which is known to be a very strong predictor of criminal behavior. Looking into different types of crime, we find that preferences significantly predict property offenses, while self-control problems significantly predict violent, drug, and sexual offenses.
Keywords: | crime,risk preference,time preference,self-control,altruism |
Date: | 2022 |
URL: | http://d.repec.org/n?u=RePEc:hal:journl:hal-03550163&r= |
By: | Johannes S. Kunz (Monash Business School); Carol Propper (Imperial College London) |
Abstract: | In the large literature on the spatial-level correlates of COVID-19, the association between quality of hospital care and outcomes has received little attention to date. To examine whether county-level mortality is correlated with measures of hospital performance, we assess daily cumulative deaths and pre-crisis measures of hospital quality, accounting for state fixed-effects and potential confounders. As a measure of quality, we use the pre-pandemic adjusted five-year penalty rates for excess 30-day readmissions following pneumonia admissions for the hospitals accessible to county residents based on ambulance travel patterns. Our adjustment corrects for socio-economic status and down-weights observations based on small samples. We find that a one-standard-deviation increase in the quality of local hospitals is associated with a 2% lower death rate (relative to the mean of 20 deaths per 10,000 people) one and a half years after the first recorded death.
Keywords: | COVID-19, County-level Deaths, Hospital Quality, Health Care Systems |
JEL: | H51 I11 I18 |
Date: | 2022–06 |
URL: | http://d.repec.org/n?u=RePEc:mhe:chemon:2022-01&r= |