on Computational Economics
Issue of 2023‒08‒28
twenty-two papers chosen by
By: | Tessa Bauman; Bruno Gašperov; Stjepan Begušić; Zvonko Kostanjčar |
Abstract: | Goal-based investing is an approach to wealth management that prioritizes achieving specific financial goals. It is naturally formulated as a sequential decision-making problem as it requires choosing the appropriate investment until a goal is achieved. Consequently, reinforcement learning, a machine learning technique appropriate for sequential decision-making, offers a promising path for optimizing these investment strategies. In this paper, a novel approach for robust goal-based wealth management based on deep reinforcement learning is proposed. The experimental results indicate its superiority over several goal-based wealth management benchmarks on both simulated and historical market data. |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.13501&r=cmp |
By: | Vélez Jiménez; Román Alberto; Lecuanda Ontiveros; José Manuel; Edgar Possani |
Abstract: | This paper presents a novel approach for optimizing betting strategies in sports gambling by integrating Von Neumann-Morgenstern Expected Utility Theory, deep learning techniques, and advanced formulations of the Kelly Criterion. By combining neural network models with portfolio optimization, our method achieved remarkable profits of 135.8% relative to the initial wealth during the latter half of the 20/21 season of the English Premier League. We explore complete and restricted strategies, evaluating their performance, risk management, and diversification. A deep neural network model is developed to forecast match outcomes, addressing challenges such as limited variables. Our research provides valuable insights and practical applications in the field of sports betting and predictive modeling. |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.13807&r=cmp |
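The betting approach above builds on the Kelly Criterion. As a minimal illustration (not the paper's code, and with purely illustrative probabilities and odds), the classic single-bet Kelly fraction can be computed as follows:

# Minimal sketch: Kelly fraction for a single binary bet.
# The win probability and decimal odds below are illustrative, not from the paper.

def kelly_fraction(p_win: float, decimal_odds: float) -> float:
    """Fraction of wealth to stake on a bet with win probability p_win
    and decimal odds (total payout per unit staked, including the stake)."""
    b = decimal_odds - 1.0          # net odds received on a win
    q = 1.0 - p_win
    f = (b * p_win - q) / b         # classic Kelly formula f* = (bp - q) / b
    return max(f, 0.0)              # never bet when the edge is negative

if __name__ == "__main__":
    # Suppose a model forecasts a 55% win probability at decimal odds of 2.10.
    f_star = kelly_fraction(0.55, 2.10)
    print(f"Full-Kelly stake: {f_star:.1%} of current wealth")
    print(f"Half-Kelly stake: {0.5 * f_star:.1%} of current wealth")

In practice, fractional Kelly (e.g., half-Kelly) is often used to reduce variance; the paper's "advanced formulations" extend this single-bet case to simultaneous bets and utility-based objectives.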
By: | Park, Youngjun; Han, Sumin |
Abstract: | Rapid advancements in deep learning technology have shown great promise in helping us better understand the spatio-temporal characteristics of human mobility in urban areas. There exist two main approaches to spatial deep learning models for urban space: a convolutional neural network (CNN), which originated from visual data such as satellite images, and a graph convolutional network (GCN), which is based on urban topologies such as road networks and regional boundaries. However, compared to language-based models that have recently achieved notable success, deep learning models for urban space still need further development. In this study, we propose a novel approach that treats the trajectories of a trip as sentences of a language and adapts techniques like word embedding from natural language processing to gain insights into human mobility patterns in urban areas. Our approach involves processing sequences of spatial units generated by a human agent's trajectory, treating them as akin to word sequences in a language. Specifically, we represent individual trajectories as sequences of spatial vector units, using 50×50 meter grid cells to divide the urban area. This representation captures the spatio-temporal changes of the trip and enables us to employ natural language processing techniques, such as word embeddings and attention mechanisms, to analyze urban trajectory sequences. Additionally, we leverage word embedding models from language processing to acquire compressed representations of the trajectory. These compressed representations contain richer information about the features while minimizing the computational load. |
Date: | 2023–06–17 |
URL: | http://d.repec.org/n?u=RePEc:osf:osfxxx:guf3z&r=cmp |
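To make the trajectory-as-sentence idea concrete, the sketch below (an assumption-laden illustration, not the authors' code) tokenizes coordinates into 50 m grid-cell IDs and feeds whole trips to a standard word-embedding model:

# Grid-cell IDs act as tokens and whole trips act as "sentences" for Word2Vec.
# The toy trajectories below are illustrative, not data from the study.
from gensim.models import Word2Vec

def cell_id(x_m: float, y_m: float, size_m: float = 50.0) -> str:
    """Map a coordinate (in metres) to a 50 m x 50 m grid-cell token."""
    return f"c_{int(x_m // size_m)}_{int(y_m // size_m)}"

# Each trip is a sequence of visited grid cells.
trips = [
    [cell_id(x, 120) for x in range(0, 500, 50)],   # an east-west trip
    [cell_id(250, y) for y in range(0, 500, 50)],   # a north-south trip
    [cell_id(x, x) for x in range(0, 500, 50)],     # a diagonal trip
]

# Skip-gram embeddings of grid cells, analogous to word embeddings.
model = Word2Vec(sentences=trips, vector_size=16, window=3, min_count=1, sg=1)
print(model.wv.most_similar(cell_id(100, 120), topn=3))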
By: | Chinn, Menzie D.; Meunier, Baptiste; Stumpner, Sebastian |
Abstract: | We nowcast world trade using machine learning, distinguishing between tree-based methods (random forest, gradient boosting) and their regression-based counterparts (macroeconomic random forest, linear gradient boosting). While much less used in the literature, the latter are found to outperform not only the tree-based techniques but also more “traditional” linear and non-linear techniques (OLS, Markov-switching, quantile regression). They do so significantly and consistently across different horizons and real-time datasets. To further improve performance when forecasting with machine learning, we propose a flexible three-step approach composed of (step 1) pre-selection, (step 2) factor extraction and (step 3) machine learning regression. We find that both pre-selection and factor extraction significantly improve the accuracy of machine-learning-based predictions. This three-step approach also outperforms workhorse benchmarks, such as a PCA-OLS model, an elastic net, or a dynamic factor model. Finally, on top of high accuracy, the approach is flexible and can be extended seamlessly beyond world trade. |
Keywords: | big data, factor model, forecasting, large dataset, pre-selection |
JEL: | C53 C55 E37 |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:ecb:ecbwps:20232836&r=cmp |
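The three-step approach (pre-selection, factor extraction, machine learning regression) maps naturally onto a standard pipeline. The sketch below is a minimal illustration on synthetic data, assuming scikit-learn components as stand-ins for the paper's specific choices:

# Step 1: pre-selection of predictors; step 2: factor extraction; step 3: ML regression.
# Synthetic data and hyperparameters are illustrative only.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 500))                              # 500 candidate indicators, 200 periods
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.5, size=200)   # target series (e.g., trade growth)

pipeline = Pipeline([
    ("preselect", SelectKBest(score_func=f_regression, k=50)),   # step 1
    ("factors", PCA(n_components=5)),                            # step 2
    ("ml", GradientBoostingRegressor(n_estimators=200)),         # step 3
])
pipeline.fit(X[:150], y[:150])
print("Out-of-sample R^2:", pipeline.score(X[150:], y[150:]))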
By: | Bryan T. Kelly; Dacheng Xiu |
Abstract: | We survey the nascent literature on machine learning in the study of financial markets. We highlight the best examples of what this line of research has to offer and recommend promising directions for future research. This survey is designed both for financial economists interested in grasping machine learning tools and for statisticians and machine learners seeking interesting financial contexts where advanced methods may be deployed. |
JEL: | C33 C4 C45 C55 C58 G1 G10 G11 G12 G17 |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:nbr:nberwo:31502&r=cmp |
By: | Josef Teichmann; Hanna Wutte |
Abstract: | Introduced in the late 90s, the passport option gives its holder the right to trade in a market and receive any positive gain in the resulting traded account at maturity. Pricing the option amounts to solving a stochastic control problem that for $d>1$ risky assets remains an open problem. Even in a correlated Black-Scholes (BS) market with $d=2$ risky assets, no optimal trading strategy has been derived in closed form. In this paper, we derive a discrete-time solution for multi-dimensional BS markets with uncorrelated assets. Moreover, inspired by the success of deep reinforcement learning in, e.g., board games, we propose two machine learning-powered approaches to pricing general options on a portfolio value in general markets. These approaches prove to be successful for pricing the passport option in one-dimensional and multi-dimensional uncorrelated BS markets. |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.14887&r=cmp |
By: | Paolo Andreini (Independent Researcher); Cosimo Izzo (Independent Researcher); Giovanni Ricco (CREST, Ecole Polytechnique, University of Warwick, OFCE-SciencesPo, CEPR) |
Abstract: | A novel deep neural network framework, which we refer to as the Deep Dynamic Factor Model (D2FM), is able to encode the information available from hundreds of macroeconomic and financial time series into a handful of unobserved latent states. While similar in spirit to traditional dynamic factor models (DFMs), this new class of models differs in that it allows for nonlinearities between factors and observables due to the autoencoder neural network structure. However, by design, the latent states of the model can still be interpreted as in a standard factor model. Both in a fully real-time out-of-sample nowcasting and forecasting exercise with US data and in a Monte Carlo experiment, the D2FM improves over the performance of a state-of-the-art DFM. |
Keywords: | Machine Learning, Deep Learning, Autoencoders, Real-Time data, Time-Series, Forecasting, Nowcasting, Latent Component Models, Factor Models |
JEL: | C22 C52 C53 C55 |
Date: | 2023–05–20 |
URL: | http://d.repec.org/n?u=RePEc:crs:wpaper:2023-08&r=cmp |
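The core architectural idea, compressing many observed series into a few interpretable latent states via an autoencoder, can be sketched as follows. This is a minimal PyTorch illustration with made-up dimensions, not the D2FM implementation:

# Encoder: nonlinear map from observables to a handful of latent factors.
# Decoder: linear map back to observables, analogous to factor loadings.
import torch
import torch.nn as nn

class FactorAutoencoder(nn.Module):
    def __init__(self, n_series: int = 100, n_factors: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_series, 32), nn.ReLU(), nn.Linear(32, n_factors)
        )
        self.decoder = nn.Linear(n_factors, n_series)

    def forward(self, x):
        factors = self.encoder(x)
        return self.decoder(factors), factors

model = FactorAutoencoder()
x = torch.randn(64, 100)                 # 64 time periods, 100 series (simulated)
x_hat, factors = model(x)
loss = nn.functional.mse_loss(x_hat, x)  # reconstruction objective
loss.backward()
print("Latent factor matrix shape:", factors.shape)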
By: | Ivan Letteri |
Abstract: | Volatility-based trading strategies have attracted a lot of attention in financial markets due to their ability to capture opportunities for profit from market dynamics. In this article, we propose a new volatility-based trading strategy that combines statistical analysis with machine learning techniques to forecast stock market trends. The method consists of several steps, including data exploration, correlation and autocorrelation analysis, the use of technical indicators, the application of hypothesis tests and statistical models, and the use of variable selection algorithms. In particular, we use the k-means++ clustering algorithm to group the mean volatility of the nine largest stocks in the NYSE and NasdaqGS markets. The resulting clusters are the basis for identifying relationships between stocks based on their volatility behaviour. Next, we use the Granger causality test on the clustered dataset with mid-volatility to determine the predictive power of one stock over another. By identifying stocks with strong predictive relationships, we establish a trading strategy in which the stock acting as a reliable predictor becomes a trend indicator used to determine the buy, sell, and hold decisions for trades in the target stock. Through extensive backtesting and performance evaluation, we confirm the reliability and robustness of our volatility-based trading strategy. The results suggest that our approach effectively captures profitable trading opportunities by leveraging the predictive power of volatility clusters and Granger causality relationships between stocks. The proposed strategy offers valuable insights and practical implications to investors and market participants who seek to improve their trading decisions and capitalize on market trends. |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.13422&r=cmp |
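Two of the steps described above, k-means++ clustering of mean volatilities and pairwise Granger causality testing, can be sketched with standard libraries. The simulated series and thresholds below are illustrative and not the paper's data or settings:

import numpy as np
from sklearn.cluster import KMeans
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)

# Step 1: cluster stocks by their mean volatility (one feature per stock).
mean_vols = rng.uniform(0.01, 0.05, size=(9, 1))   # nine stocks
clusters = KMeans(n_clusters=3, init="k-means++", n_init=10).fit_predict(mean_vols)
print("Volatility cluster of each stock:", clusters)

# Step 2: does stock A's return series help predict stock B's?
a = rng.normal(size=300)
b = 0.6 * np.roll(a, 1) + rng.normal(scale=0.5, size=300)   # b lags a by construction
result = grangercausalitytests(np.column_stack([b, a]), maxlag=2)
p_value = result[1][0]["ssr_ftest"][1]
print(f"p-value that A Granger-causes B at lag 1: {p_value:.4f}")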
By: | Zhiyu Cao; Zihan Chen; Prerna Mishra; Hamed Amini; Zachary Feinstein |
Abstract: | Financial contagion has been widely recognized as a fundamental risk to the financial system. Particularly potent is price-mediated contagion, wherein forced liquidations by firms depress asset prices and propagate financial stress, enabling crises to proliferate across a broad spectrum of seemingly unrelated entities. Price impacts are currently modeled via exogenous inverse demand functions. However, in real-world scenarios, only the initial shocks and the final equilibrium asset prices are typically observable, leaving actual asset liquidations largely obscured. This missing data presents significant limitations to calibrating the existing models. To address these challenges, we introduce a novel dual neural network structure that operates in two sequential stages: the first neural network maps initial shocks to predicted asset liquidations, and the second network utilizes these liquidations to derive resultant equilibrium prices. This data-driven approach can capture both linear and non-linear forms without pre-specifying an analytical structure; furthermore, it functions effectively even in the absence of observable liquidation data. Experiments with simulated datasets demonstrate that our model can accurately predict equilibrium asset prices based solely on initial shocks, while revealing a strong alignment between predicted and true liquidations. Our explainable framework contributes to the understanding and modeling of price-mediated contagion and provides valuable insights for financial authorities to construct effective stress tests and regulatory policies. |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.14322&r=cmp |
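A minimal sketch of the two-stage structure described above: one network maps initial shocks to (unobserved) liquidations, a second maps those liquidations to equilibrium prices, and training uses only observable shocks and prices. Sizes and data are illustrative; this is not the authors' implementation:

import torch
import torch.nn as nn

n_firms, n_assets = 10, 5

liquidation_net = nn.Sequential(            # stage 1: shocks -> predicted liquidations
    nn.Linear(n_firms, 32), nn.ReLU(), nn.Linear(32, n_firms * n_assets), nn.Sigmoid()
)
price_net = nn.Sequential(                  # stage 2: liquidations -> equilibrium prices
    nn.Linear(n_firms * n_assets, 32), nn.ReLU(), nn.Linear(32, n_assets)
)

shocks = torch.rand(64, n_firms)            # batch of initial shock scenarios (simulated)
observed_prices = torch.rand(64, n_assets)  # only shocks and final prices are observed

liquidations = liquidation_net(shocks)      # latent quantity, not required as training data
predicted_prices = price_net(liquidations)

loss = nn.functional.mse_loss(predicted_prices, observed_prices)  # uses observables only
loss.backward()
print("Predicted equilibrium prices shape:", tuple(predicted_prices.shape))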
By: | Masanori Hirano; Kentaro Minami; Kentaro Imajo |
Abstract: | Deep hedging is a deep-learning-based framework for derivative hedging in incomplete markets. The advantage of deep hedging lies in its ability to handle various realistic market conditions, such as market frictions, which are challenging to address within the traditional mathematical finance framework. Since deep hedging relies on market simulation, the underlying asset price process model is crucial. However, existing literature on deep hedging often relies on traditional mathematical finance models, e.g., Brownian motion and stochastic volatility models, and discovering effective underlying asset models for deep hedging learning has been a challenge. In this study, we propose a new framework called adversarial deep hedging, inspired by adversarial learning. In this framework, a hedger and a generator, which respectively model the hedging strategy and the underlying asset process, are trained in an adversarial manner. The proposed method enables learning a robust hedger without explicitly modeling the underlying asset process. Through numerical experiments, we demonstrate that our proposed method achieves performance competitive with models that assume explicit underlying asset processes across various real market data. |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.13217&r=cmp |
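The adversarial setup can be sketched as a two-player training loop: a generator proposes price paths, a hedger trades against them, and the hedger minimizes its hedging error while the generator tries to maximize it. The architecture, payoff, and hyperparameters below are illustrative assumptions, not the paper's:

import torch
import torch.nn as nn

n_steps = 10
generator = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, n_steps))  # noise -> log-returns
hedger = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))           # (time, price) -> position

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
h_opt = torch.optim.Adam(hedger.parameters(), lr=1e-3)

def hedging_loss(batch_size: int = 64) -> torch.Tensor:
    """Squared terminal hedging error for a short call struck at 1.0."""
    noise = torch.randn(batch_size, 8)
    prices = torch.exp(0.01 * generator(noise)).cumprod(dim=1)   # generated price paths
    pnl = torch.zeros(batch_size)
    for t in range(n_steps - 1):
        state = torch.stack([torch.full((batch_size,), t / n_steps), prices[:, t]], dim=1)
        position = hedger(state).squeeze(1)
        pnl = pnl + position * (prices[:, t + 1] - prices[:, t])
    payoff = torch.clamp(prices[:, -1] - 1.0, min=0.0)
    return ((pnl - payoff) ** 2).mean()

for step in range(200):
    h_opt.zero_grad()
    hedging_loss().backward()
    h_opt.step()                      # hedger minimizes the hedging error
    g_opt.zero_grad()
    (-hedging_loss()).backward()
    g_opt.step()                      # generator maximizes it (adversarial step)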
By: | Chuheng Zhang; Yitong Duan; Xiaoyu Chen; Jianyu Chen; Jian Li; Li Zhao |
Abstract: | Optimized trade execution aims to sell (or buy) a given amount of assets within a given time at the lowest possible trading cost. Recently, reinforcement learning (RL) has been applied to optimized trade execution to learn smarter policies from market data. However, we find that many existing RL methods exhibit considerable overfitting, which prevents them from real deployment. In this paper, we provide an extensive study on the overfitting problem in optimized trade execution. First, we model optimized trade execution as offline RL with dynamic context (ORDC), where the context represents market variables that cannot be influenced by the trading policy and are collected in an offline manner. Under this framework, we derive the generalization bound and find that the overfitting issue is caused by the large context space and limited context samples in the offline setting. Accordingly, we propose to learn compact representations for the context to address the overfitting problem, either by leveraging prior knowledge or in an end-to-end manner. To evaluate our algorithms, we also implement a carefully designed simulator based on historical limit order book (LOB) data to provide a high-fidelity benchmark for different algorithms. Our experiments on the high-fidelity simulator demonstrate that our algorithms can effectively alleviate overfitting and achieve better performance. |
Date: | 2023–05 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.11685&r=cmp |
By: | Soohan Kim; Jimyeong Kim; Hong Kee Sul; Youngjoon Hong |
Abstract: | The purpose of this research is to devise a tactic that can closely track the daily cumulative volume-weighted average price (VWAP) using reinforcement learning. Previous studies often choose a relatively short trading horizon to implement their models, making it difficult to accurately track the daily cumulative VWAP since the variations of financial data are often insignificant within the short trading horizon. In this paper, we aim to develop a strategy that can accurately track the daily cumulative VWAP while minimizing the deviation from it. We propose a method that leverages the U-shaped pattern of intraday stock trade volumes and uses Proximal Policy Optimization (PPO) as the learning algorithm. Our method follows a dual-level approach: a Transformer model that captures the overall (global) distribution of daily volumes in a U-shape, and an LSTM model that handles the distribution of orders within smaller (local) time intervals. The results from our experiments suggest that this dual-level architecture improves the accuracy of approximating the cumulative VWAP when compared to previous reinforcement learning-based models. |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.10649&r=cmp |
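As a small illustration of the U-shaped intraday volume pattern the strategy leverages, the sketch below splits a parent order across intraday bins in proportion to a simple symmetric U-shaped curve. The curve and parameters are illustrative, not estimated from data and not the paper's model:

import numpy as np

def u_shaped_allocation(total_shares: int, n_bins: int = 13, curvature: float = 4.0) -> np.ndarray:
    """Split an order across intraday bins, with more volume near the open and close."""
    t = np.linspace(0.0, 1.0, n_bins)
    weights = 1.0 + curvature * (t - 0.5) ** 2   # symmetric U-shape over the trading day
    weights = weights / weights.sum()
    return np.round(weights * total_shares).astype(int)

schedule = u_shaped_allocation(total_shares=10_000)
print("Shares per bin:", schedule.tolist())
print("Total scheduled:", int(schedule.sum()))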
By: | Ambrois, Matteo; Butticè, Vincenzo; Caviggioli, Federico; Cerulli, Giovanni; Croce, Annalisa; De Marco, Antonio; Giordano, Andrea; Resce, Giuliano; Toschi, Laura; Ughetto, Elisa; Zinilli, Antonio |
Abstract: | This working paper uses machine learning to identify Cleantech companies in the Orbis database, based on self-declared business descriptions. Identifying Cleantech companies is challenging, as there is no universally accepted definition of what constitutes Cleantech. This novel approach makes it possible to scale up the identification process by training an algorithm to mimic (human) expert assessment in order to identify Cleantech companies in a large dataset containing information on millions of European companies. The resulting dataset is used to construct a mapping of Cleantech companies in Europe and thereby provide a new perspective on the functioning of the EU Cleantech sector. The paper serves as an introductory chapter to a series of analyses that will result from the CLEU project, a collaboration between Politecnico di Torino, Politecnico di Milano and Università degli Studi di Bologna. Notably, the project aims to deepen our understanding of the financing needs of the EU Cleantech sector. It was funded by the EIB's University Research Sponsorship (EIBURS) programme and supervised by the EIF's Research and Market Analysis Division. |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:zbw:eifwps:202391&r=cmp |
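The identification step described above amounts to supervised text classification trained on expert labels. A minimal sketch, with toy business descriptions and hypothetical labels rather than the CLEU project's data or model:

from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

descriptions = [
    "manufacturer of photovoltaic modules and solar inverters",
    "retailer of fashion apparel and accessories",
    "developer of battery recycling and energy storage solutions",
    "operator of full-service restaurants",
]
expert_labels = [1, 0, 1, 0]   # 1 = Cleantech, per hypothetical expert review

# TF-IDF features plus a linear classifier mimic the expert assessment at scale.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(descriptions, expert_labels)

print(clf.predict(["provider of wind turbine maintenance services"]))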
By: | NAKAMURA Ryohei; NAGAMUNE Takeshi; HAYASHI Syuusei |
Abstract: | As a preliminary step to conducting a self-organization simulation of the concentration and dispersion of the information and communications industry, we quantify the spatial concentration of the information and communications industry in large cities in Japan. A spatial analysis of the information and communications industry was conducted, using small-area ("cho-cho") data from the Economic Census, for the regional core cities of Sapporo, Sendai, Hiroshima, and Fukuoka, in addition to the 23 wards of Tokyo. As a result of detecting spatial autocorrelation in small-area units in each city, hotspots indicating a concentration of information and communications business establishments were detected in the city center of each city. At the same time, we were able to confirm the influence of agglomeration economies, which are the premise of the self-organization model, and also recognized that the information and communications industry is a suitable candidate for simulation of the self-organization phenomenon. Krugman (1996) first formulated the self-organization phenomenon in cities and clarified the principle by which peripheral cities emerge, but his analysis was limited to numerical simulation. Later, Kumar et al. (2007) used actual data to show the possibility of applying Krugman's self-organization model to predict the concentration and dispersion of firms. In this paper, we examine whether the self-organization model is effective for reproducing and predicting the concentration and dispersion of the information and communications industry in Japanese cities by using an agent-based model. |
Date: | 2023–08 |
URL: | http://d.repec.org/n?u=RePEc:eti:rdpsjp:23027&r=cmp |
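The hotspot detection step can be illustrated with a local Getis-Ord statistic on a toy grid. The libraries (libpysal, esda) and simulated establishment counts are assumptions for illustration, not the authors' tooling or the Economic Census data:

import numpy as np
from libpysal.weights import lat2W
from esda.getisord import G_Local

rng = np.random.default_rng(0)
counts = rng.poisson(5, size=100).astype(float)            # establishments in a 10x10 grid
counts[[44, 45, 54, 55]] = 60.0                            # plant an obvious hotspot

w = lat2W(10, 10)                                          # contiguity weights on the grid
g = G_Local(counts, w, star=True, seed=0)                  # Gi* statistic per cell

hotspots = np.where(g.Zs > 1.96)[0]                        # significantly high local clustering
print("Hotspot cell indices:", hotspots.tolist())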
By: | Francesco Mandelli; Marco Pinciroli; Michele Trapletti; Edoardo Vittori |
Abstract: | In this paper, we focus on finding the optimal hedging strategy for a credit index option using reinforcement learning. We take a practical approach, with a focus on realism, i.e., discrete time and transaction costs, and we even test our policy on real market data. We apply a state-of-the-art algorithm, Trust Region Volatility Optimization (TRVO), and show that the derived hedging strategy outperforms the practitioner's Black & Scholes delta hedge. |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.09844&r=cmp |
By: | Damian Ślusarczyk (University of Warsaw, Faculty of Economic Sciences); Robert Ślepaczuk (University of Warsaw, Quantitative Finance Research Group, Department of Quantitative Finance, Faculty of Economic Sciences) |
Abstract: | We aim to answer the question of whether using forecasted stock returns, based on machine learning and time series models, in a mean-variance portfolio framework yields better results than relying on historical returns. Although the problem of efficient stock selection has been studied for more than 50 years, the issue of adequately constructing a mean-variance portfolio framework and incorporating return forecasts into it has not yet been solved. Portfolios were created using ’raw’ historical returns and returns forecasted with ARIMA-GARCH and XGBoost models. Two optimization problems were considered: global maximum information ratio and global minimum variance. The strategies were then compared with two benchmarks: an equally weighted portfolio and a buy-and-hold on the DJIA index. Strategies were tested on Dow Jones Industrial Average stocks over the period from 2007-01-01 to 2022-12-31 using daily data. The main portfolio performance metrics were information ratio* and information ratio**. The results showed that using forecasted returns can enhance portfolio selection based on the Markowitz framework, but it is not a universal solution, and all parameters and hyperparameters of the selected models have to be controlled. |
Keywords: | Algorithmic Investment Strategies, Markowitz framework, portfolio optimization, forecasting, ARIMA, GARCH, XGBoost, minimum variance |
JEL: | C4 C14 C45 C53 C58 G13 |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:war:wpaper:2023-17&r=cmp |
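To make the optimization step concrete: once return forecasts are available (from ARIMA-GARCH or XGBoost), they enter a Markowitz-style problem. The sketch below shows unconstrained closed-form weights on illustrative data; it is a simplification, not the paper's constrained setup or its information-ratio objective:

import numpy as np

def min_variance_weights(cov: np.ndarray) -> np.ndarray:
    """Global minimum-variance weights: w = inv(Cov) 1 / (1' inv(Cov) 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

def forecast_based_weights(mu: np.ndarray, cov: np.ndarray) -> np.ndarray:
    """Tangency-style weights driven by forecasted returns mu: w proportional to inv(Cov) mu."""
    w = np.linalg.solve(cov, mu)
    return w / w.sum()

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(500, 5))   # 5 stocks, 500 days (simulated)
cov = np.cov(returns, rowvar=False)
mu_forecast = np.array([0.0010, 0.0004, 0.0007, 0.0002, 0.0009])  # e.g., model forecasts

print("Minimum-variance weights:", np.round(min_variance_weights(cov), 3))
print("Forecast-based weights:  ", np.round(forecast_based_weights(mu_forecast, cov), 3))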
By: | Yuanhao Gong |
Abstract: | Training and deploying large language models requires a large amount of computational resources because the language models contain billions of parameters and the text has thousands of tokens. Another problem is that large language models are static: they are fixed after the training process. To tackle these issues, in this paper we propose to train and deploy the dynamic large language model on blockchains, which have high computational performance and are distributed across a network of computers. A blockchain is a secure, decentralized, and transparent system that allows for the creation of a tamper-proof ledger of transactions without the need for intermediaries. The dynamic large language models can continuously learn from user input after the training process. Our method provides a new way to develop large language models and also sheds light on next-generation artificial intelligence systems. |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.10549&r=cmp |
By: | Xiao-Yang Liu; Guoxuan Wang; Daochen Zha |
Abstract: | Large language models (LLMs) have demonstrated remarkable proficiency in understanding and generating human-like texts, which may potentially revolutionize the finance industry. However, existing LLMs often fall short in the financial field, which is mainly attributed to the disparities between general text data and financial text data. Unfortunately, only a limited number of financial text datasets are available (and they are quite small in size), and BloombergGPT, the first financial LLM (FinLLM), is closed-source (only the training logs were released). In light of this, we aim to democratize Internet-scale financial data for LLMs, which is an open challenge due to diverse data sources, low signal-to-noise ratio, and high time-validity. To address the challenges, we introduce an open-sourced and data-centric framework, Financial Generative Pre-trained Transformer (FinGPT), that automates the collection and curation of real-time financial data from more than 34 diverse sources on the Internet, providing researchers and practitioners with accessible and transparent resources to develop their FinLLMs. Additionally, we propose a simple yet effective strategy for fine-tuning FinLLMs using the inherent feedback from the market, dubbed Reinforcement Learning with Stock Prices (RLSP). We also adopt the Low-rank Adaptation (LoRA, QLoRA) method that enables users to customize their own FinLLMs from open-source general-purpose LLMs at a low cost. Finally, we showcase several FinGPT applications, including robo-advisor, sentiment analysis for algorithmic trading, and low-code development. FinGPT aims to democratize FinLLMs, stimulate innovation, and unlock new opportunities in open finance. The codes are available at https://github.com/AI4Finance-Foundation/FinGPT and https://github.com/AI4Finance-Foundation/FinNLP |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.10485&r=cmp |
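The LoRA customization step mentioned above can be sketched with the Hugging Face peft library. The base model, target modules, and hyperparameters below are illustrative assumptions and not FinGPT's actual configuration:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model_name = "gpt2"                      # stand-in for an open-source general-purpose LLM
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    r=8,                                      # rank of the low-rank update matrices
    lora_alpha=16,
    target_modules=["c_attn"],                # attention projection in GPT-2; model-specific
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()            # only a small fraction of weights is trainable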
By: | Koefer, Franziska; Lemken, Ivo; Pauls, Jan |
Abstract: | Fairness is a crucial concept in the context of artificial intelligence (AI) ethics and policy. It is an integral component in existing ethical principle frameworks, especially for algorithm-enabled decision systems. Yet, unwanted biases in algorithms persist due to the failure of practitioners to consider the social context in which algorithms operate. Recent initiatives have led to the development of ethical principles, guidelines and codes to guide organisations through the development, implementation and use of fair AI. However, practitioners still struggle with the various interpretations of abstract fairness principles, making it necessary to ask context-specific questions to create organisational awareness of fairness-related risks and how AI affects them. This paper argues that there is a gap between the potential and actual realised value of AI. We propose a framework that analyses the challenges throughout a typical AI product life cycle while focusing on the critical question of how rather broadly defined fairness principles may be translated into day-to-day practical solutions at the organisational level. We report on an exploratory case study of a social impact microfinance organisation that is using AI-enabled credit scoring to support the screening process of particularly financially marginalised entrepreneurs. This paper highlights the importance of considering the strategic role of the organisation when developing and evaluating fair algorithm-enabled decision systems. The paper concludes that the framework introduced in this paper provides a set of questions that can guide thinking processes inside organisations when aiming to implement fair AI systems. |
Date: | 2023 |
URL: | http://d.repec.org/n?u=RePEc:zbw:eifwps:202388&r=cmp |
By: | Valerio Capraro; Roberto Di Paolo; Veronica Pizziol |
Abstract: | Generative artificial intelligence holds enormous potential to revolutionize decision-making processes, from everyday to high-stake scenarios. However, as many decisions carry social implications, for AI to be a reliable assistant for decision-making it is crucial that it is able to capture the balance between self-interest and the interest of others. We investigate the ability of three of the most advanced chatbots to predict dictator game decisions across 78 experiments with human participants from 12 countries. We find that only GPT-4 (not Bard nor Bing) correctly captures qualitative behavioral patterns, identifying three major classes of behavior: self-interested, inequity-averse, and fully altruistic. Nonetheless, GPT-4 consistently overestimates other-regarding behavior, inflating the proportion of inequity-averse and fully altruistic participants. This bias has significant implications for AI developers and users. |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.12776&r=cmp |
By: | Steve Phelps; Rebecca Ranson |
Abstract: | AI Alignment is often presented as an interaction between a single designer and an artificial agent in which the designer attempts to ensure the agent's behavior is consistent with its purpose, and risks arise solely because of conflicts caused by inadvertent misalignment between the utility function intended by the designer and the resulting internal utility function of the agent. With the advent of agents instantiated with large language models (LLMs), which are typically pre-trained, we argue this does not capture the essential aspects of AI safety because in the real world there is not a one-to-one correspondence between designer and agent, and the many agents, both artificial and human, have heterogeneous values. Therefore, there is an economic aspect to AI safety, and the principal-agent problem is likely to arise. In a principal-agent problem, conflict arises because of information asymmetry together with inherent misalignment between the utility of the agent and that of its principal, and this inherent misalignment cannot be overcome by coercing the agent into adopting a desired utility function through training. We argue that the assumptions underlying principal-agent problems are crucial to capturing the essence of safety problems involving pre-trained AI models in real-world situations. Taking an empirical approach to AI safety, we investigate how GPT models respond in principal-agent conflicts. We find that agents based on both GPT-3.5 and GPT-4 override their principal's objectives in a simple online shopping task, showing clear evidence of principal-agent conflict. Surprisingly, the earlier GPT-3.5 model exhibits more nuanced behaviour in response to changes in information asymmetry, whereas the later GPT-4 model is more rigid in adhering to its prior alignment. Our results highlight the importance of incorporating principles from economics into the alignment process. |
Date: | 2023–07 |
URL: | http://d.repec.org/n?u=RePEc:arx:papers:2307.11137&r=cmp |
By: | Rodier, Caroline PhD; Horn, Abigail PhD; Zhang, Yunwan MSc; Kaddoura, Ihab PhD; Müller, Sebastian MSc |
Abstract: | This study used a simulation to examine nonpharmaceutical interventions (NPIs) that could have been implemented early in a COVID-19 surge to avoid a large wave of infections, deaths, and an overwhelmed hospital system. The authors integrated a dynamic agent-based travel model with an infection dynamics model. Both models were developed with and calibrated to local data from Los Angeles County (LAC), resulting in a synthetic population of 10 million agents with detailed socio-economic and activity-based characteristics representative of the County’s population. The study focused on the time of the second wave of COVID-19 in LAC (November 1, 2020, to February 10, 2021), before vaccines were introduced. The model accounted for mandated and self-imposed interventions at the time by incorporating mobile device data providing observed reductions in activity patterns from the pre-pandemic norm, and it represented multiple employment categories with literature-informed contact distributions. The combination of NPIs (such as masks, antigen testing, and reduced contact intensity) was the most effective, among the least restrictive, means to reduce infections. The findings may be relevant to public health policy interventions in the community and at the workplace. The study demonstrates that investments in activity-based travel models, including detailed individual-level socio-demographic characteristics and activity behaviors, can facilitate the evaluation of NPIs to reduce infectious disease epidemics, including COVID-19. The framework developed is generalizable across SARS-CoV-2 variants, or even other viral infections, with minimal modifications to the modeling infrastructure. |
Keywords: | Engineering, COVID-19, communicable diseases, virus transmission, public health, simulation, intelligent agents |
Date: | 2023–07–01 |
URL: | http://d.repec.org/n?u=RePEc:cdl:itsdav:qt5f78h654&r=cmp |