A review of attacker-defender games: Current state and paths forward
In this article, we review the literature that proposes attacker-defender games to protect against strategic adversarial threats. More specifically, we follow the systematic literature review methodology to collect and review 127 journal articles published over the past 15 years. We start by briefly discussing the common application areas addressed in the literature, although our focus in this review lies heavily on the approaches that have been adopted to model and solve attacker-defender games. In studying these approaches, we begin by analyzing the following features of the proposed game formulations: the sequence of moves, the number of players, the nature of the decision variables and objective functions, and the time horizons. We then analyze the common assumptions of perfect rationality, risk neutrality, and complete information that are enforced within the majority of the articles, and report on state-of-the-art research that has begun relaxing these assumptions. We find that relaxing these assumptions presents further challenges, such as the need for new assumptions about how uncertainties are modeled, and issues with intractability when models are reformulated to account for considerations such as risk preferences. Finally, we examine the methods that have been adopted to solve attacker-defender games. We find that the majority of the articles obtain closed-form solutions to their models, while many others develop novel solution algorithms and heuristics. Upon synthesizing and analyzing the literature, we expose open questions in the field and present promising future research directions that can advance current knowledge.
Risk-based allocation of COVID-19 personal protective equipment under supply shortages
The COVID-19 outbreak put healthcare systems across the globe under immense pressure to meet the unprecedented demand for critical supplies and personal protective equipment (PPE). The traditional cost-effective supply chain paradigm failed to respond to the increased demand, putting healthcare workers (HCWs) at a much higher infection risk relative to the general population. Recognizing PPE shortages and the high infection risk for HCWs, the World Health Organization (WHO) recommends allocations based on ethical principles. In this paper, we model the infection risk for HCWs as a function of PPE usage and use it as the basis for distribution planning that balances government procurement decisions, hospitals' PPE usage policies, and WHO ethical allocation guidelines. We propose an infection risk model that integrates PPE allocation decisions with disease progression estimates to quantify infection risk among HCWs. The proposed risk function is used to derive closed-form allocation decisions under WHO ethical guidelines in both deterministic and stochastic settings. The modelling is then extended to dynamic distribution planning. Although nonlinear, the resulting model is reformulated so that it can be solved using off-the-shelf software. The risk function successfully accounts for virus prevalence in space and time and leads to allocations that are sensitive to the differences between regions. Comparative analysis shows that the allocation policies lead to significantly different levels of infection risk, especially under high virus prevalence. The best-outcome allocation policy, which aims to minimize the total number of infected cases, outperforms the other policies both under this objective and under that of minimizing the maximum number of infections per period.
Fair-split distribution of multi-dose vaccines with prioritized age groups and dynamic demand: The case study of COVID-19
The emergence of the SARS-CoV-2 virus and of new viral variants with higher transmission and mortality rates have highlighted the urgency of accelerating vaccination to mitigate the morbidity and mortality of the COVID-19 pandemic. To this end, this paper formulates a new multi-vaccine, multi-depot location-inventory-routing problem for vaccine distribution. The proposed model addresses a wide variety of vaccination concerns: prioritizing age groups, fair distribution, multi-dose injection, dynamic demand, etc. To solve large-size instances of the model, we employ a Benders decomposition algorithm with a number of acceleration techniques. To monitor the dynamic demand for vaccines, we propose a new adjusted susceptible-infectious-recovered (SIR) epidemiological model in which infected individuals are tested and quarantined. The solution to the associated optimal control problem dynamically allocates the vaccine demand so as to reach the endemic equilibrium point. Finally, to illustrate the applicability and performance of the proposed model and solution approach, the paper reports extensive numerical experiments on a real case study of the vaccination campaign in France. The computational results show that the proposed Benders decomposition algorithm is 12 times faster than the Gurobi solver under a limited CPU time, and its solutions are, on average, 16% better in quality. In terms of vaccination strategies, our results suggest that extending the recommended time interval between doses by a factor of 1.5 reduces the unmet demand by up to 50%. Furthermore, we observe that mortality is a convex function of fairness, so an appropriate level of fairness should be adopted throughout the vaccination campaign.
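For concreteness, below is a minimal numerical sketch of an SIR model extended with a quarantined compartment and a vaccination outflow, in the spirit of the adjusted SIR model mentioned above; all rates, the population size, and the vaccination schedule are illustrative assumptions, not the paper's calibrated values.

```python
# A minimal sketch of an SIR model with testing/quarantine and vaccination.
# Compartments: S (susceptible), I (free infectious), Q (detected/quarantined), R (removed).
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, tau = 0.30, 0.10, 0.08   # transmission, recovery, detection/quarantine rates (per day)
N = 67e6                              # population size (roughly France, for illustration)

def nu(t):
    """Assumed vaccination rate: fraction of remaining susceptibles vaccinated per day."""
    return 0.004 if t > 30 else 0.0

def rhs(t, y):
    S, I, Q, R = y
    new_inf = beta * S * I / N            # only free (non-quarantined) infectious transmit
    return [-new_inf - nu(t) * S,         # susceptibles: infection + vaccination
            new_inf - (gamma + tau) * I,  # infectious: recover or get detected
            tau * I - gamma * Q,          # quarantined: detected, removed from transmission
            gamma * (I + Q) + nu(t) * S]  # removed: recovered or vaccinated

sol = solve_ivp(rhs, (0, 180), [N - 1e4, 1e4, 0.0, 0.0], t_eval=np.arange(0, 181))
print("peak free infectious:", int(sol.y[1].max()))
```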
The hammer and the jab: Are COVID-19 lockdowns and vaccinations complements or substitutes?
The COVID-19 pandemic has devastated lives and economies around the world. Initially, a primary response was to lock down parts of the economy to reduce social interactions and, hence, the virus's spread. Once vaccines have been developed and produced in sufficient quantity, they can largely replace broad lockdowns. This paper explores how lockdown policies should be varied during the roughly year-long gap between when a vaccine is approved and when all who wish to be vaccinated have been. Are vaccines and lockdowns substitutes during that crucial time, in the sense that lockdowns should be eased as vaccination rates rise? Or might they be complements, with the prospect of imminent vaccination increasing the value of stricter lockdowns, since hospitalizations and deaths averted then may be permanently prevented, not just delayed? We investigate this question with a simple dynamic optimization model that captures both epidemiological and economic considerations. In this model, increasing the rate of vaccine deployment may increase or reduce the optimal total lockdown intensity and duration, depending on the values of other model parameters. That vaccines and lockdowns can act as either substitutes or complements even in a relatively simple model casts doubt on whether, in more complicated models or the real world, one should expect them to always be just one or the other. Within our model, for parameter values reflecting conditions in developed countries, the typical finding is to ease lockdown intensity gradually after substantial shares of the population have been vaccinated, but other strategies can be optimal for other parameter values. Reserving vaccines for those who have not yet been infected barely outperforms simpler strategies that ignore prior infection status. For certain parameter combinations, two quite different policies can perform equally well, and sometimes very small increases in vaccine capacity can tip the optimal solution to one that involves much longer and more intense lockdowns.
Behavioral Analytics for Myopic Agents
Many multi-agent systems have a single coordinator providing incentives to a large number of agents. Two challenges faced by the coordinator are a finite budget from which to allocate incentives and an initial lack of knowledge about the agents' utility functions. Here, we present a behavioral analytics approach for solving the coordinator's problem when the agents make decisions by maximizing utility functions that depend on prior system states, inputs, and other parameters that are initially unknown. Our behavioral analytics framework involves three steps: first, we develop a model that describes the decision-making process of an agent; second, we use data to estimate the model parameters for each agent and predict their future decisions; and third, we use these predictions to optimize a set of incentives that will be provided to each agent. The framework and approaches we propose can then adapt incentives as new information is collected. Furthermore, we prove that the incentives computed by this approach are asymptotically optimal with respect to a loss function that describes the coordinator's objective. We optimize incentives with a decomposition scheme, in which each sub-problem solves the coordinator's problem for a single agent and the master problem is a pure integer program. We conclude with a simulation study that evaluates the effectiveness of our approach for designing a personalized weight loss program. The results show that our approach maintains the efficacy of the program while reducing its costs by up to 60%, whereas adaptive heuristics provide substantially smaller savings.
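The following is a deliberately simplified illustration of the estimate-then-optimize loop described above, with a linear per-agent response model fitted by least squares and a greedy budget allocation standing in for the paper's utility model and its decomposition with an integer master problem; all data are synthetic.

```python
# Simplified three-step loop: (1) model agent response, (2) estimate per-agent
# parameters from observed (incentive, behavior) pairs, (3) allocate a finite budget.
import numpy as np

rng = np.random.default_rng(0)
n_agents, budget, max_incentive = 20, 50.0, 10.0

# Steps 1-2: per-agent linear response b = a0 + a1 * incentive, fit by least squares.
params = []
for _ in range(n_agents):
    x = rng.uniform(0, max_incentive, size=12)                          # past incentives
    y = 1.0 + rng.uniform(0.1, 0.8) * x + rng.normal(0, 0.3, size=12)   # observed behavior
    A = np.column_stack([np.ones_like(x), x])
    a0, a1 = np.linalg.lstsq(A, y, rcond=None)[0]
    params.append((a0, a1))

# Step 3: greedy allocation -- spend the budget on agents with the largest
# estimated marginal response a1, up to the per-agent cap.
order = np.argsort([-a1 for _, a1 in params])
incentives, remaining = np.zeros(n_agents), budget
for i in order:
    give = min(max_incentive, remaining)
    incentives[i], remaining = give, remaining - give
    if remaining <= 0:
        break

predicted = sum(a0 + a1 * incentives[i] for i, (a0, a1) in enumerate(params))
print("total predicted behavior under this allocation:", round(predicted, 2))
```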
A multiple criteria approach for building a pandemic impact assessment composite indicator: The case of COVID-19 in Portugal
The COVID-19 pandemic has caused major damage and disruption to social, economic, and health systems, among others. In addition, it has posed unprecedented challenges to public health and policy/decision-makers, who have been responsible for designing and implementing measures to mitigate its strong negative impact. The Portuguese health authorities have used decision analysis techniques to assess the impact of the pandemic and have implemented measures for counties, regions, or the entire country. These decision tools have been subject to some criticism, and many stakeholders requested novel approaches, in particular ones that consider the dynamic changes in the pandemic's behaviour due to new virus variants and vaccines. A multidisciplinary team formed by researchers from the COVID-19 Committee of Instituto Superior Técnico at the University of Lisbon (CCIST analyst team) and physicians from the Crisis Office of the Portuguese Medical Association (GCOM expert team) collaborated to create a new tool to help politicians and decision-makers fight the pandemic. This paper presents the main steps that led to the building of a pandemic impact assessment composite indicator, applied to the specific case of COVID-19 in Portugal. A multiple criteria approach based on an additive multi-attribute value theory aggregation model was used to build the pandemic assessment composite indicator. The parameters of the additive model were devised through an interactive socio-technical and co-constructive process between the CCIST and GCOM team members. The deck of cards method was adopted as the technical tool to assist in the assessment of the value functions as well as of the criteria weights. The final tool was presented at a press conference and had a powerful impact on the Portuguese public and on the main health decision-making stakeholders in the country. A complete mathematical and graphical description of the tool is presented in this paper.
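For reference, the generic form of the additive multi-attribute value aggregation underlying such a composite indicator is sketched below; the specific criteria, value functions and weights of the Portuguese indicator were elicited with the deck of cards method and are not reproduced here.

```latex
V(a) \;=\; \sum_{i=1}^{n} w_i \, v_i(a_i),
\qquad \sum_{i=1}^{n} w_i = 1, \quad w_i \ge 0,
```

where $a_i$ is the performance of the assessed unit (e.g., a county or region) on criterion $i$, $v_i$ is the corresponding value function normalized to a common scale, and $w_i$ is the criterion weight.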
On the impact of resource relocation in facing health emergencies
The outbreak of SARS-CoV-2 and the corresponding surge in patients with severe symptoms of COVID-19 put a strain on health systems, requiring specialized material and human resources that often exceed the locally available ones. Motivated by a real emergency response system employed in Northern Italy, we propose a mathematical programming approach for rebalancing health resources among a network of hospitals in a large geographical area. It is meant for tactical planning in facing foreseen peaks of patients requiring specialized treatment. Our model has a clean combinatorial structure. At the same time, it considers the handling of patients by a dedicated home healthcare service and the efficient exploitation of resource sharing. We introduce a mathematical programming heuristic, based on decomposition methods and column generation, to drive a very large-scale neighborhood search, and we evaluate its embedding in a multi-objective optimization framework. We experiment on real-world data from the COVID-19 outbreak in Northern Italy during 2020, whose aggregation and post-processing are made openly available to the community. Our approach proves effective in tackling realistic instances, making it a reliable basis for actual decision support tools.
Building viable stockpiles of personal protective equipment
Much of the stockpiled personal protective equipment (PPE) was of no use during COVID-19 because it had expired. The need to rethink past approaches of building PPE stockpiles without planning for their timely rotation has become clear. We develop a game-theoretic pandemic preparedness model, for single and multiple PPE products, for a budget-constrained governmental organization (GO) supplied by a manufacturer. The GO maximizes preparedness, measured by the service rate of PPE, whereas the manufacturer maximizes profit. The manufacturer supplies the PPE stockpile in the first year. Thereafter, the manufacturer buys back a quantity of older PPE from the GO annually and sells the GO the same quantity of new PPE. The manufacturer sells the older PPE in the marketplace. We find that this approach induces the manufacturer to rotate inventory in the stockpile. Joint determination of the stockpile size and its rotation results in no waste from expired PPE and is better than separately determining the stockpile size and then deciding how to rotate it. Using insights from the single-PPE model, we examine the optimal budget allocation among multiple PPE products. We also consider the effect of spot market prices of PPE during a pandemic on the optimal stockpile sizes and find that they can have a significant effect. We examine the performance of the proposed approach in a manufacturer-distributor-GO supply chain and with an option for the GO to invest in the manufacturer's volume flexibility, and we show its effectiveness.
Optimal timing of non-pharmaceutical interventions during an epidemic
In response to the recent outbreak of the SARS-CoV-2 virus, governments have aimed to reduce the virus's spread through, for example, non-pharmaceutical interventions. We address the question of when such measures should be implemented and, once implemented, when to remove them. These issues are viewed through a real-options lens, and we develop an SIRD-like continuous-time Markov chain model to analyze a sequence of options: the option to intervene and introduce measures and, after intervention has started, the option to remove them. Measures can be imposed multiple times. We implement our model using estimates from empirical studies and, under fairly general assumptions, our main conclusions are that: (1) measures should be put in place not long after the first infections occur; (2) if the epidemic is discovered when there are already many infected individuals, then it is optimal never to introduce measures; (3) once the decision to introduce measures has been taken, these should stay in place until the number of susceptible or infected members of the population is close to zero; (4) it is never optimal to introduce a tier system to phase in measures, but it is optimal to use a tier system to phase them out; (5) a more infectious variant may reduce the duration for which measures are in place; (6) the risk of infections being brought in by travelers should be curbed even when no other measures are in place. These results are robust to several variations of our base-case model.
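Below is a minimal Gillespie-style simulation of an SIRD continuous-time Markov chain of the kind used as the epidemic backbone above; the event rates and the assumed effect of measures are illustrative, not the paper's empirical estimates.

```python
# Minimal Gillespie simulation of an SIRD continuous-time Markov chain.
# Events: infection (rate beta*S*I/N), recovery (gamma*I), death (mu*I).
import numpy as np

rng = np.random.default_rng(1)
N = 10_000
beta, gamma, mu = 0.25, 0.09, 0.01   # illustrative transmission, recovery and death rates
lockdown_factor = 0.5                # assumed transmission reduction while measures are on

def simulate(measures_on=False, t_max=300.0):
    S, I, R, D, t = N - 10, 10, 0, 0, 0.0
    b = beta * (lockdown_factor if measures_on else 1.0)
    while t < t_max and I > 0:
        rates = np.array([b * S * I / N, gamma * I, mu * I])   # infect, recover, die
        total = rates.sum()
        t += rng.exponential(1.0 / total)                      # time to next event
        event = rng.choice(3, p=rates / total)
        if event == 0:   S, I = S - 1, I + 1
        elif event == 1: I, R = I - 1, R + 1
        else:            I, D = I - 1, D + 1
    return D

print("deaths without measures:", simulate(False))
print("deaths with measures   :", simulate(True))
```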
Seriation Using Tree-penalized Path Length
Given a sample of data points and an n-by-n dissimilarity matrix, data seriation methods produce a linear ordering of the objects, putting similar objects near each other in the ordering. One may visualize the reordered dissimilarity matrix with a heat map and thus understand the structure of the data, while still displaying the full matrix of dissimilarities. Good orderings produce heat maps that are easy to read and allow for clear interpretation. We consider two popular seriation methods: minimizing path length by solving the Traveling Salesman Problem (TSP), and Optimal Leaf Ordering (OLO), which minimizes path length among all orderings consistent with a given tree structure. Learning from the strengths and weaknesses of the two methods, we introduce a new hybrid seriation method, tree-penalized path length (tpPL). Its objective is a linear combination of path length and the extent of violations of the tree structure, with a parameter that transitions the optimal paths smoothly from TSP to OLO. We present a detailed study over 44 synthetic datasets designed to bring out the strengths and weaknesses of the three methods, finding that the hybrid nature of tpPL enables it to overcome the weaknesses of TSP and OLO.
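To make the hybrid criterion concrete, the sketch below evaluates a tpPL-style objective: path length plus a penalty for how far the ordering deviates from a given tree. The violation measure used in the paper may differ; here each cluster of the tree is penalized by the span of its leaves in the ordering minus the cluster size, which is zero for any OLO-feasible ordering.

```python
# Sketch of a tpPL-style objective: path length + alpha * tree-structure violation.
import numpy as np

def path_length(order, D):
    """Sum of dissimilarities between consecutive objects in the ordering."""
    return sum(D[order[i], order[i + 1]] for i in range(len(order) - 1))

def tree_violation(order, clusters):
    """Assumed penalty: for each cluster of the tree, (span in the ordering) - (size)."""
    pos = {obj: i for i, obj in enumerate(order)}
    total = 0
    for c in clusters:
        idx = [pos[obj] for obj in c]
        total += (max(idx) - min(idx) + 1) - len(c)
    return total

def tpPL_objective(order, D, clusters, alpha):
    """alpha = 0 recovers the TSP criterion; large alpha pushes towards OLO-feasible orderings."""
    return path_length(order, D) + alpha * tree_violation(order, clusters)

# Toy example: 4 objects, tree with clusters {0,1} and {2,3}.
D = np.array([[0, 1, 4, 5], [1, 0, 3, 6], [4, 3, 0, 2], [5, 6, 2, 0]], float)
clusters = [{0, 1}, {2, 3}]
print(tpPL_objective([0, 1, 2, 3], D, clusters, alpha=2.0))  # tree-consistent ordering
print(tpPL_objective([0, 2, 1, 3], D, clusters, alpha=2.0))  # violates both clusters
```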
Dynamic planning of a two-dose vaccination campaign with uncertain supplies
The ongoing COVID-19 pandemic has led public health authorities to face the unprecedented challenge of planning a global vaccination campaign, which for most protocols entails the administration of two doses, separated by a bounded but flexible time interval. The partial immunity already offered by the first dose and the high levels of uncertainty in the vaccine supplies have been characteristic of most of the vaccination campaigns implemented worldwide and made the planning of such interventions extremely complex. Motivated by this compelling challenge, we propose a stochastic optimization framework for optimally scheduling a two-dose vaccination campaign in the presence of uncertain supplies, taking into account constraints on the interval between the two doses and on the capacity of the healthcare system. The proposed framework seeks to maximize the vaccination coverage, considering the different levels of immunization obtained with partial (one dose only) and complete vaccination (two doses). We cast the optimization problem as a convex second-order cone program, which can be efficiently solved through numerical techniques. We demonstrate the potential of our framework on a case study calibrated on the COVID-19 vaccination campaign in Italy. The proposed method shows good performance when unrolled in a sliding-horizon fashion, thereby offering a powerful tool to help public health authorities calibrate the vaccination campaign, pursuing a trade-off between efficacy and the risk associated with shortages in supply.
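As a toy illustration of this kind of formulation, the sketch below schedules first and second doses under capacity, dose-interval and cumulative-supply constraints, with supply uncertainty handled through a Gaussian chance constraint on cumulative supply; in this simplified form the chance constraints reduce to linear cuts, whereas the paper's full model is a second-order cone program. All parameters, weights and distributions are invented.

```python
# Toy two-dose scheduling model with a Gaussian chance constraint on cumulative supply.
import cvxpy as cp
import numpy as np

T, dmin = 12, 3                      # weeks in the horizon, minimum dose interval (weeks)
cap = 1.0                            # weekly administration capacity (millions of doses)
mu = np.full(T, 0.9)                 # mean weekly supply (millions of doses)
sigma = np.full(T, 0.15)             # supply standard deviation, assumed independent across weeks
z = 1.645                            # ~95% service level on cumulative supply

x1 = cp.Variable(T, nonneg=True)     # first doses per week
x2 = cp.Variable(T, nonneg=True)     # second doses per week

cons = [x1 + x2 <= cap]
for t in range(T):
    # doses used so far cannot exceed the (1 - eps)-quantile of cumulative supply
    cons.append(cp.sum(x1[:t + 1] + x2[:t + 1])
                <= mu[:t + 1].sum() - z * np.sqrt((sigma[:t + 1] ** 2).sum()))
    # a second dose can only follow a first dose given at least dmin weeks earlier
    avail = cp.sum(x1[:t + 1 - dmin]) if t + 1 - dmin > 0 else 0.0
    cons.append(cp.sum(x2[:t + 1]) <= avail)

# assumed immunization values: 0.6 per first dose, an extra 0.35 per second dose
coverage = 0.6 * cp.sum(x1) + 0.35 * cp.sum(x2)
prob = cp.Problem(cp.Maximize(coverage), cons)
prob.solve()
print("first doses :", np.round(x1.value, 2))
print("second doses:", np.round(x2.value, 2))
```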
Pandemic portfolio choice
COVID-19 has taught us that a pandemic can significantly increase biometric risk and at the same time trigger crashes of the stock market. Taking these potential co-movements of financial and non-financial risks into account, we study the portfolio problem of an agent who is aware that a future pandemic can affect her health and personal finances. The corresponding stochastic dynamic optimization problem is complex: It is characterized by a system of Hamilton-Jacobi-Bellman equations which are coupled with optimality conditions that are only given implicitly. We prove that the agent's value function and optimal policies are determined by the unique global solution to a system of non-linear ordinary differential equations. We show that the optimal portfolio strategy is significantly affected by the mere threat of a potential pandemic.
A stochastic inventory model of COVID-19 and robust, real-time identification of carriers at large and infection rate via asymptotic laws
A critical operations management problem in the ongoing COVID-19 pandemic is cognizance of (a) the number of all carriers at large (CaL) carrying SARS-CoV-2, including asymptomatic ones, and (b) the infection rate (IR). Both are random and unobservable, affecting the spread of the disease, patient arrivals to health care units (HCUs) and the number of deaths. A novel inventory perspective of COVID-19 is proposed, with random inflow, random losses, retrials (recurrent cases) and delayed/distributed exit, with randomly varying fractions of the exit distribution. Although a minimal construal, it enables representation of COVID-19 evolution with a close fit to national incidence profiles, including single and multiple pattern outbreaks and oscillatory, periodic or non-periodic evolution, followed by retraction, leveling off, or strong resurgence. Furthermore, based on asymptotic laws, the minimum number of variables that must be monitored for identifying CaL and IR is determined and a real-time identification method is presented. The method is data-driven, utilizing the entry rate to HCUs and scaled, or dimensionless, variables, including the mean residence time of symptomatic carriers in CaL and the mean residence time in CaL of patients entering HCUs. As manifested by several robust case studies of national COVID-19 incidence profiles, it provides efficient identification in real time under unbiased monitoring error, without relying on any model. The propagation factor, a stochastic process, is reconstructed from the identified trajectories of CaL and IR, enabling evaluation of control measures. The results are useful for designing policies that restrict COVID-19 and the burden on HCUs while mitigating economic contraction.
Modularity maximization to design contiguous policy zones for pandemic response
The health and economic devastation caused by the COVID-19 pandemic has created a significant global humanitarian disaster. Pandemic response policies guided by geospatial approaches are appropriate additions to traditional epidemiological responses when addressing this disaster. However, little is known about finding the optimal set of locations or jurisdictions to create policy coordination zones. In this study, we propose optimization models and algorithms to identify coordination communities based on the natural movement of people. To do so, we develop a mixed-integer quadratic-programming model to maximize the modularity of the detected communities while ensuring that the jurisdictions within each community are contiguous. To solve the problem, we present a heuristic and a column-generation algorithm. Our computational experiments highlight the effectiveness of the models and algorithms across various instances. We also apply the proposed optimization-based solutions to identify coordination zones within North Carolina and South Carolina, two highly interconnected states in the U.S. The results of our case study show that the proposed model detects communities that are significantly better for coordinating pandemic-related policies than the existing geopolitical boundaries.
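As a small illustration of the modularity criterion being maximized, the sketch below scores two candidate zone partitions of a toy county-level mobility graph; the counties, flows and partitions are made up, and neither the contiguity requirement nor the paper's exact MIQP is reproduced here.

```python
# Score candidate policy-zone partitions of a weighted mobility graph by modularity.
import networkx as nx
from networkx.algorithms.community import modularity

G = nx.Graph()
flows = [("A", "B", 90), ("B", "C", 80), ("A", "C", 60),   # dense cluster A-B-C
         ("D", "E", 85), ("E", "F", 75), ("D", "F", 55),   # dense cluster D-E-F
         ("C", "D", 10)]                                    # weak link between the clusters
G.add_weighted_edges_from(flows)

movement_based = [{"A", "B", "C"}, {"D", "E", "F"}]         # zones that follow mobility patterns
border_based   = [{"A", "B", "D"}, {"C", "E", "F"}]         # e.g. an existing geopolitical split

print("modularity, movement-based zones:", round(modularity(G, movement_based, weight="weight"), 3))
print("modularity, border-based zones  :", round(modularity(G, border_based, weight="weight"), 3))
```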
Resource planning strategies for healthcare systems during a pandemic
We study resource planning strategies, including the integrated allocation and sharing of healthcare resources as well as patient transfers, to improve the response of health systems to massive increases in demand during epidemics and pandemics. Our study considers various types of patients and resources to provide access to patient care with minimum capacity extension, since adding new resources takes time that most patients do not have during pandemics. The number of patients requiring scarce healthcare resources is uncertain and depends on the speed of the pandemic's transmission through a region. We develop a multi-stage stochastic program to optimize various strategies for planning limited and necessary healthcare resources. We simulate the uncertain parameters by deploying an agent-based continuous-time stochastic model, and then capture the uncertainty by a forward scenario tree construction approach. Finally, we propose a data-driven rolling horizon procedure to facilitate decision-making in real time, which mitigates some critical limitations of stochastic programming approaches and makes the resulting strategies implementable in practice. We use two case studies related to COVID-19 to examine our optimization and simulation tools through extensive computational results. The results highlight that these strategies can significantly improve patient access to care during pandemics, although their significance varies across situations. Our methodology is not limited to the presented setting and can be employed in other service industries where urgent access matters.
Introduction to the special issue on the role of operational research in future epidemics/pandemics
In this special issue, 23 research papers are published focusing on COVID-19 and operational research solution techniques. First, we detail the process from advertising the call for papers to the point where the best papers were accepted. Then, we provide a summary of each paper, focusing on applications, solution techniques and insights for practitioners and policy makers. To provide a holistic view for readers, we have clustered the papers into different groups: transmission, propagation and forecasting; non-pharmaceutical interventions; healthcare network configuration; healthcare resource allocation; hospital operations; vaccines and testing kits; and production and manufacturing. Finally, we introduce other possible subjects that can be considered for future research.
Estimating causal effects with optimization-based methods: A review and empirical comparison
In the absence of randomized controlled experiments and natural experiments, it is necessary to balance the distributions of (observable) covariates of the treated and control groups in order to obtain an unbiased estimate of a causal effect of interest; otherwise, a different effect size may be estimated, and incorrect recommendations may be given. A wide variety of methods exist to achieve this balance. In particular, several methods based on optimization models have recently been proposed in the causal inference literature. While these optimization-based methods have empirically shown an improvement over a limited number of other causal inference methods in their relative ability to balance the distributions of covariates and to estimate causal effects, they have not been thoroughly compared to each other or to other noteworthy causal inference methods. In addition, we believe there exist several unaddressed opportunities to which operational researchers could contribute their advanced knowledge of optimization, for the benefit of the applied researchers who use causal inference tools. In this review paper, we present an overview of the causal inference literature, describe the optimization-based causal inference methods in more detail, provide a comparative analysis of the prevailing optimization-based methods, and discuss opportunities for new methods.
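As one concrete example of the optimization-based methods discussed above (in the spirit of stable balancing weights, and not a reproduction of any specific method compared in the paper), the sketch below chooses minimum-dispersion weights for control units so that the weighted control covariate means approximately match the treated means; the data, tolerance and objective are illustrative.

```python
# Optimization-based balancing: minimum-dispersion control weights matching treated means.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n_c, n_t, p = 200, 50, 3
X_c = rng.normal(0.0, 1.0, size=(n_c, p))          # control covariates
X_t = rng.normal(0.5, 1.0, size=(n_t, p))          # treated covariates (shifted distribution)
target = X_t.mean(axis=0)

w = cp.Variable(n_c, nonneg=True)
tol = 0.1                                          # allowed imbalance per covariate
constraints = [cp.sum(w) == 1,
               cp.abs(X_c.T @ w - target) <= tol]
prob = cp.Problem(cp.Minimize(cp.sum_squares(w)), constraints)   # keep weights as even as possible
prob.solve()

print("imbalance before:", np.round(X_c.mean(axis=0) - target, 3))
print("imbalance after :", np.round(X_c.T @ w.value - target, 3))
```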
Designing an optimal sequence of non-pharmaceutical interventions for controlling COVID-19
The COVID-19 pandemic has had an unprecedented impact on global health and the economy since its inception in December 2019 in Wuhan, China. Non-pharmaceutical interventions (NPIs) such as lockdowns and curfews have been deployed by affected countries to control the spread of infections. In this paper, we develop a Mixed Integer Non-Linear Programming (MINLP) epidemic model for computing the optimal sequence of NPIs over a planning horizon, considering shortages of doctors and hospital beds, under three different lockdown scenarios. We analyse two strategies, centralised (homogeneous decisions at the national level) and decentralised (decisions differentiated across regions), for two objectives separately, minimization of infections and of deaths, using actual pandemic data from France. We linearize the quadratic constraints and objective functions in the MINLP model and convert it to a Mixed Integer Linear Programming (MILP) model. A major result that we show analytically is that, under the epidemic model used, the optimal sequence of NPIs always follows a decreasing severity pattern. Using this property, we further simplify the MILP model into an Integer Linear Programming (ILP) model, reducing computational time by up to 99%. Our numerical results show that a decentralised strategy is more effective in controlling infections for a given severity budget, yielding up to 20% fewer infections, 15% fewer deaths and 60% smaller shortages in healthcare resources. These results hold without considering logistics aspects and for a given level of compliance of the population.
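To illustrate why the decreasing-severity property is computationally useful, the toy model below describes a plan purely by the number of weeks spent at each severity level, which suffices once the optimal sequence is known to be non-increasing; the linear "infection proxy", costs and budget are invented, whereas the paper's model couples NPI decisions with epidemic dynamics.

```python
# Stylized ILP: once severity is known to be non-increasing over time, a plan is just
# a duration per severity level, so per-period binaries collapse into a few integers.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpStatus, value

levels = [3, 2, 1, 0]                  # severity levels, most to least severe
weeks = 26                             # planning horizon
econ_cost = {3: 10, 2: 6, 1: 3, 0: 0}  # assumed weekly economic cost of each level
infections = {3: 1, 2: 3, 1: 7, 0: 12} # assumed weekly new infections (thousands) at each level
budget = 120                           # total economic (severity) budget

d = {l: LpVariable(f"weeks_at_level_{l}", lowBound=0, cat="Integer") for l in levels}

model = LpProblem("npi_sequence", LpMinimize)
model += lpSum(infections[l] * d[l] for l in levels)           # minimize infection proxy
model += lpSum(d[l] for l in levels) == weeks                  # cover the whole horizon
model += lpSum(econ_cost[l] * d[l] for l in levels) <= budget  # respect the budget
model.solve()

print(LpStatus[model.status], {l: int(value(d[l])) for l in levels})
```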
The importance of eliciting stakeholders' system boundary perceptions for problem structuring and decision-making
Differences in system boundaries and problem framings are unavoidable in multi-organisational decision-making. Unstructured problems, such as the grand challenges, are characterised by the existence of multiple actors with different perspectives and conflicting interests, and they require a coordinated effort from multiple organisations. Within this context, this paper aims to understand stakeholders' perceptions of system boundaries and problem framings, and their potential effects on decision-making, by systematically comparing different stakeholder groups' causal maps around the same shared concern. Bridging notions from Operational Research, System Dynamics and Organisational Studies, the comparison is based on a novel type of thematic analysis of Causal Loop Diagrams (CLDs) built with each stakeholder group on their perceptions of a given system. The proposed integrated approach combines qualitative with quantitative analysis, such as the centrality of the variables and the structure of the CLDs. Such a comparison of CLDs provides an intuitive way to visualise differences and similarities in the thematic clusters of variables, underlining factors influencing the shared concern. This can be considered a starting point for more shared understanding as well as more integrated, holistic perceptions of the system and, consequently, more systemic decision-making. Furthermore, for the sake of replicability, this paper also presents a qualitative participatory System Dynamics modelling process aimed at defining the key aspects of a problem for each group of stakeholders, in order to support a collaborative multi-organisational decision-making process. The research is based on the activities carried out for an urban regeneration case study in Thamesmead, London, United Kingdom.
Parallel Subgradient Algorithm with Block Dual Decomposition for Large-scale Optimization
This paper studies computational approaches for solving large-scale optimization problems using a Lagrangian dual reformulation solved by parallel subgradient methods. Since there are many possible reformulations for a given problem, an important question is: which reformulation leads to the fastest solution time? One approach is to detect a block-diagonal structure in the constraint matrix and reformulate the problem by dualizing the constraints outside the blocks; this approach is defined herein as block dual decomposition. The main advantage of such a reformulation is that the Lagrangian relaxation has a block-diagonal constraint matrix and is thus decomposable into smaller sub-problems that can be solved in parallel. We show that the block decomposition can critically affect the convergence rate of the subgradient method. We propose various decomposition methods that use domain knowledge, or apply algorithms exploiting the structure of the constraint matrix or the dependence among the decision variables, to reduce the computational effort of solving large-scale optimization problems. In particular, we introduce a block decomposition approach that reduces the number of dualized constraints by utilizing a community detection algorithm. We present empirical experiments on an extensive set of problem instances, including a real application. We illustrate that as the number of dualized constraints in the decomposition increases, the computational effort within each iteration of the subgradient method decreases while the number of iterations required for convergence increases. The key message is that it is crucial to employ prior knowledge about the structure of the problem when solving large-scale optimization problems using dual decomposition.
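As a reference point for the construction described above, the standard block dual decomposition and projected subgradient update are sketched below in generic notation (the paper's specific reformulations and step-size rules are not reproduced). Suppose the problem is $\min \{ \sum_b c_b^\top x_b : \sum_b A_b x_b \le d,\ x_b \in X_b \}$, where the sets $X_b$ collect the block-diagonal constraints. Dualizing the coupling constraints with multipliers $\lambda \ge 0$ gives the dual function and update

```latex
g(\lambda) \;=\; -\lambda^\top d \;+\; \sum_{b=1}^{B} \min_{x_b \in X_b} \big(c_b + A_b^\top \lambda\big)^\top x_b,
\qquad
\lambda^{k+1} \;=\; \Big[\lambda^{k} + \alpha_k \Big(\textstyle\sum_{b=1}^{B} A_b x_b^{k} - d\Big)\Big]_+ ,
```

where the $B$ inner minimizations are independent and can be solved in parallel, $x_b^k$ is a minimizer at $\lambda^k$, $\alpha_k$ is the step size, and $[\cdot]_+$ projects onto the nonnegative orthant. The dimension of $\lambda$ equals the number of dualized (coupling) constraints, which is the quantity the proposed community-detection-based decomposition seeks to reduce.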
On sparse ensemble methods: An application to short-term predictions of the evolution of COVID-19
Since the seminal paper by Bates and Granger in 1969, a vast number of ensemble methods that combine different base regressors into a single one have been proposed in the literature. The resulting regressor may be more accurate than its components, but at the same time it may overfit, it may be distorted by base regressors with low accuracy, and it may be too complex to understand and explain. This paper proposes and studies a novel mathematical optimization model to build a sparse ensemble, which trades off the accuracy of the ensemble against the number of base regressors used. The latter is controlled by means of a regularization term that penalizes regressors with poor individual performance. Our approach is flexible enough to incorporate desirable properties one may impose on the ensemble, such as controlling its performance on critical groups of records or the costs associated with the base regressors involved. We illustrate our approach with real data sets arising in the COVID-19 context.
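The sketch below illustrates one way such a trade-off can be encoded (not the paper's exact model): ensemble weights minimize validation error plus a weighted L1 penalty in which each regressor's penalty grows with its individual error, so weak regressors are driven out of the ensemble. The data and penalty scaling are synthetic.

```python
# Sparse ensemble sketch: least-squares combination with performance-weighted L1 penalty.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, m = 300, 8                                    # records, base regressors
y = rng.normal(size=n)                           # target on a validation set
noise = rng.uniform(0.1, 2.0, size=m)            # base regressors of varying quality
P = np.column_stack([y + rng.normal(scale=s, size=n) for s in noise])  # their predictions

indiv_err = ((P - y[:, None]) ** 2).mean(axis=0)  # individual validation errors
lam = 0.05 * indiv_err                            # heavier penalty for weaker regressors

w = cp.Variable(m, nonneg=True)
ensemble_err = cp.sum_squares(P @ w - y) / n
prob = cp.Problem(cp.Minimize(ensemble_err + lam @ w))   # weighted L1 since w >= 0
prob.solve()

weights = np.round(w.value, 3)
print("ensemble weights:", weights, "| regressors kept:", int((weights > 1e-3).sum()))
```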
