Inferring global-scale temporal latent topics from news reports to predict public health interventions for COVID-19

Author(s):  
Zhi Wen ◽  
Guido Powell ◽  
Imane Chafi ◽  
David L Buckeridge ◽  
Yue Li

Abstract The COVID-19 global pandemic has highlighted the importance of non-pharmacological interventions (NPI) for controlling epidemics of emerging infectious diseases. Despite the importance of NPI, their implementation has been monitored in an ad hoc and uncoordinated manner, mainly through the manual efforts of volunteers. Given the absence of systematic NPI tracking, authorities and researchers are limited in their ability to quantify the effectiveness of NPI and guide decisions regarding their use during the progression of a global pandemic. To address this issue, we propose a 3-stage machine learning framework called EpiTopics to facilitate the surveillance of NPI by mining the vast amount of unlabelled news reports about these interventions. Building on topic modelling, our method characterizes online government reports and media articles related to COVID-19 as mixtures of latent topics. Our key contributions are the use of transfer learning to address the limited number of NPI-labelled documents and the use of topic modelling to support interpretation of the results. At stage 1, we trained a modified version of the unsupervised dynamic embedded topic model (DETM) on 1.2 million international news reports related to COVID-19. At stage 2, we used the trained DETM to infer topic mixtures for a small set of 2000 NPI-labelled WHO documents, which serve as input features for predicting the NPI labels on each document. At stage 3, we supplied the inferred country-level temporal topics from the DETM to the pretrained document-level NPI classifier to predict country-level NPIs. We identified 25 interpretable topics spanning 4 distinct and coherent COVID-related themes. These topics contributed to significant improvements in predicting the NPIs labelled in the WHO documents and in predicting country-level NPIs. Together, our work lays the machine learning methodological foundation for future research in global-scale surveillance of public health interventions. The EpiTopics code is available on GitHub: https://github.com/li-lab-mcgill/covid-npi.
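
As a rough illustration of stage 2, the sketch below treats pre-computed per-document topic mixtures as features for a multi-label NPI classifier. It is a minimal sketch only: it assumes the DETM topic proportions are already available as a matrix, uses scikit-learn's logistic regression as a stand-in classifier, and the array names, synthetic data, and label set are hypothetical rather than taken from the EpiTopics code.

```python
# Minimal sketch of stage 2: predict NPI labels from DETM topic mixtures.
# Assumes topic proportions were already inferred by a trained DETM;
# `theta`, `npi_labels`, and the sizes below are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

n_docs, n_topics, n_npis = 2000, 25, 6                   # sizes echo the abstract (2000 docs, 25 topics)
theta = rng.dirichlet(np.ones(n_topics), size=n_docs)    # stand-in topic mixtures (rows sum to 1)
npi_labels = rng.integers(0, 2, size=(n_docs, n_npis))   # stand-in multi-hot NPI annotations

X_train, X_test, y_train, y_test = train_test_split(
    theta, npi_labels, test_size=0.2, random_state=0
)

# One binary logistic classifier per NPI category, fed only the topic mixture.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

scores = clf.predict_proba(X_test)
print("macro AUROC:", roc_auc_score(y_test, scores, average="macro"))
```

In the same spirit, the abstract's stage 3 reuses a document-level classifier of this kind on country-level temporal topic trajectories rather than on individual documents.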


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Benjamin Hanckel ◽  
Mark Petticrew ◽  
James Thomas ◽  
Judith Green

Abstract Background Qualitative Comparative Analysis (QCA) is a method for identifying the configurations of conditions that lead to specific outcomes. Given its potential for providing evidence of causality in complex systems, QCA is increasingly used in evaluative research to examine the uptake or impacts of public health interventions. We map this emerging field, assessing the strengths and weaknesses of QCA approaches identified in published studies, and identify implications for future research and reporting. Methods PubMed, Scopus and Web of Science were systematically searched for peer-reviewed studies published in English up to December 2019 that had used QCA methods to identify the conditions associated with the uptake and/or effectiveness of interventions for public health. Data relating to the interventions studied (settings/level of intervention/populations), methods (type of QCA, case level, source of data, other methods used) and reported strengths and weaknesses of QCA were extracted and synthesised narratively. Results The search identified 1384 papers, of which 27 (describing 26 studies) met the inclusion criteria. Interventions evaluated ranged across: nutrition/obesity (n = 8); physical activity (n = 4); health inequalities (n = 3); mental health (n = 2); community engagement (n = 3); chronic condition management (n = 3); vaccine adoption or implementation (n = 2); programme implementation (n = 3); breastfeeding (n = 2), and general population health (n = 1). The majority of studies (n = 24) were of interventions solely or predominantly in high income countries. Key strengths reported were that QCA provides a method for addressing causal complexity; and that it provides a systematic approach for understanding the mechanisms at work in implementation across contexts. Weaknesses reported related to data availability limitations, especially on ineffective interventions. The majority of papers demonstrated good knowledge of cases, and justification of case selection, but other criteria of methodological quality were less comprehensively met. Conclusion QCA is a promising approach for addressing the role of context in complex interventions, and for identifying causal configurations of conditions that predict implementation and/or outcomes when there is sufficiently detailed understanding of a series of comparable cases. As the use of QCA in evaluative health research increases, there may be a need to develop advice for public health researchers and journals on minimum criteria for quality and reporting.
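
For readers unfamiliar with the mechanics the review refers to, the sketch below builds a toy crisp-set QCA truth table: cases sharing the same configuration of binary conditions are grouped, and each configuration's consistency with the outcome is computed. The condition names, cases, and threshold are invented for illustration and do not come from any of the reviewed studies.

```python
# Toy crisp-set QCA truth table: group cases by their configuration of binary
# conditions and compute each configuration's consistency with the outcome.
# Condition names and case data are hypothetical.
import pandas as pd

cases = pd.DataFrame(
    {
        "community_engagement": [1, 1, 0, 1, 0, 1, 0, 0],
        "dedicated_funding":    [1, 0, 1, 1, 0, 1, 1, 0],
        "strong_leadership":    [1, 1, 1, 0, 0, 1, 0, 1],
        "uptake":               [1, 1, 1, 0, 0, 1, 0, 0],  # outcome of interest
    },
    index=[f"case_{i}" for i in range(1, 9)],
)

conditions = ["community_engagement", "dedicated_funding", "strong_leadership"]

truth_table = (
    cases.groupby(conditions)["uptake"]
    .agg(n_cases="count", consistency="mean")   # consistency = share of cases showing the outcome
    .reset_index()
    .sort_values("consistency", ascending=False)
)
print(truth_table)

# Configurations above a chosen consistency threshold would typically be carried
# forward to Boolean minimisation (e.g. with a dedicated QCA package).
print(truth_table[truth_table["consistency"] >= 0.8])
```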


Author(s):  
Anil Babu Payedimarri ◽  
Diego Concina ◽  
Luigi Portinale ◽  
Massimo Canonico ◽  
Deborah Seys ◽  
...  

Artificial Intelligence (AI) and Machine Learning (ML) have seen expanding use across many fields of medicine. During the SARS-CoV-2 outbreak, AI and ML were also applied to the evaluation and/or implementation of public health interventions aimed at flattening the epidemiological curve. This systematic review aims to evaluate the effectiveness of AI and ML when applied to public health interventions to contain the spread of SARS-CoV-2. Our findings showed that quarantine appears to be the most effective strategy for containing COVID-19. Nationwide lockdown also showed a positive impact, whereas social distancing appeared effective only in combination with other interventions, including the closure of schools and commercial activities and the limitation of public transportation. Our findings also showed that all interventions should be initiated early in the pandemic and continued for a sustained period. Despite the limitations of the included studies, we concluded that AI and ML could help policy makers define strategies for containing the COVID-19 pandemic.


2020 ◽  
Author(s):  
Robin Qiu

This is a short article focused on promoting further study of SEIR modeling that leverages rich data and machine learning. We believe this is critical, as many regions at the country or state/provincial level have been struggling with their public health intervention policies for fighting the COVID-19 pandemic. Some recently published papers on mitigation measures show promising SEIR modeling results, which could shed light for other policymakers at different community levels. We present our perspective on this research direction. Hopefully, we can stimulate more studies and help the world win this “war” against the invisible enemy “coronavirus” sooner rather than later.
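
As context for the kind of modeling the article advocates, below is a minimal SEIR sketch integrated with SciPy. The population size and rate parameters are illustrative only and are not taken from the article or from any fitted model.

```python
# Minimal SEIR compartmental model; parameters are illustrative, not fitted.
import numpy as np
from scipy.integrate import odeint

def seir(y, t, beta, sigma, gamma, n):
    s, e, i, r = y
    ds = -beta * s * i / n               # susceptibles becoming exposed
    de = beta * s * i / n - sigma * e    # exposed progressing to infectious
    di = sigma * e - gamma * i           # infectious recovering
    dr = gamma * i
    return ds, de, di, dr

n = 1_000_000                            # hypothetical population size
beta, sigma, gamma = 0.6, 1 / 5.2, 1 / 10   # transmission, incubation, recovery rates
y0 = (n - 1, 0, 1, 0)                    # one initial infectious case
t = np.linspace(0, 180, 181)             # days

s, e, i, r = odeint(seir, y0, t, args=(beta, sigma, gamma, n)).T
print(f"peak infectious: {i.max():,.0f} on day {t[i.argmax()]:.0f}")
```

Fitting such a model to regional data and coupling it with learned intervention effects is the direction the article encourages.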


2018 ◽  
Vol 5 ◽  
Author(s):  
Anushree Dave ◽  
Julie Cumin ◽  
Ryoa Chung ◽  
Matthew Hunt

On November 7th, 2014, the Humanitarian Health Ethics Workshop was held at McGill University in Montreal. Co-hosted by the Montreal Health Equity Research Consortium and the Humanitarian Health Ethics Network, the event included six presentations and extensive discussion amongst participants, including researchers from Canada, Haiti, India, Switzerland and the US. Participants had training in disciplines including anthropology, bioethics, medicine, occupational therapy, philosophy, physical therapy, political science, public administration and public health. The objective of the workshop was to create a forum for discussion amongst scholars and practitioners interested in the ethics of healthcare delivery, research and public health interventions during humanitarian crises. This review summarizes the presentations given, the key themes that emerged during the day’s discussions, and the avenues for future research that were identified.


PLoS ONE ◽  
2020 ◽  
Vol 15 (12) ◽  
pp. e0243622
Author(s):  
David S. Campo ◽  
Joseph W. Gussler ◽  
Amanda Sue ◽  
Pavel Skums ◽  
Yury Khudyakov

Persons who inject drugs (PWID) are at increased risk for overdose death (ODD), infections with HIV, hepatitis B virus (HBV) and hepatitis C virus (HCV), and noninfectious health conditions. Spatiotemporal identification of PWID communities is essential for developing efficient and cost-effective public health interventions for reducing morbidity and mortality associated with injection-drug use (IDU). Reported ODDs are a strong indicator of the extent of IDU in different geographic regions. However, ODD quantification can take time, with delays in ODD reporting occurring due to a range of factors, including death investigation and drug testing. This delayed ODD reporting may hinder efficient early interventions for infectious diseases. We present a novel model, the Dynamic Overdose Vulnerability Estimator (DOVE), for assessment and spatiotemporal mapping of ODDs in different U.S. jurisdictions. Using Google® Web-search volumes (i.e., the fraction of all searches that include certain words), we identified a strong association between reported ODD rates and drug-related search terms for 2004–2017. A machine learning model (Extremely Random Forest) was developed to produce yearly ODD estimates at the state and county levels, as well as monthly estimates at the state level. Regarding the total number of ODDs per year, DOVE's error was only 3.52% (median absolute error, MAE) in the United States for 2005–2017. DOVE estimated 66,463 ODDs out of the reported 70,237 (94.48%) during 2017. For that year, the MAE of the individual ODD rates was 4.43%, 7.34%, and 12.75% among yearly estimates for states, yearly estimates for counties, and monthly estimates for states, respectively. These results indicate the suitability of the DOVE ODD estimates for dynamic IDU assessment in most states, which may serve as an alert for possible increased morbidity and mortality associated with IDU. ODD estimates produced by DOVE offer an opportunity for spatiotemporal ODD mapping. Timely identification of potential mortality trends among PWID might assist in developing efficient ODD prevention and HBV, HCV, and HIV infection elimination programs by targeting public health interventions to the most vulnerable PWID communities.
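
The core machinery described in the abstract, an extremely randomized trees regressor mapping drug-related search-term volumes to ODD rates, can be sketched roughly as below. The feature names, synthetic data, and hyperparameters are placeholders; the real DOVE features, tuning, and data are not reproduced here.

```python
# Rough sketch of the DOVE idea: regress overdose-death rates on the fraction of
# web searches containing drug-related terms. Feature names, data, and
# hyperparameters are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)

search_terms = ["fentanyl", "naloxone", "heroin", "overdose", "rehab"]
n_state_years = 650                                    # e.g. 50 states x 13 years
X = rng.random((n_state_years, len(search_terms)))     # stand-in search-volume fractions
true_weights = np.array([0.8, 0.5, 0.6, 0.9, 0.2])
y = X @ true_weights + rng.normal(0, 0.05, n_state_years)  # stand-in ODD rates

model = ExtraTreesRegressor(n_estimators=500, random_state=0)
scores = cross_val_score(model, X, y, cv=5, scoring="neg_median_absolute_error")
print("median absolute error per fold:", -scores)

model.fit(X, y)
for term, importance in zip(search_terms, model.feature_importances_):
    print(f"{term}: {importance:.3f}")
```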


Author(s):  
Wenbao Wang ◽  
Yiqin Chen ◽  
Qi Wang ◽  
Ping Cai ◽  
Ye He ◽  
...  

Abstract COVID-19 has become a global pandemic. However, the impact of the public health interventions in China needs to be evaluated. We established a SEIRD model to simulate the transmission trend in China. In addition, the reduction of the reproductive number was estimated under the current forty public health intervention policies. Furthermore, the infection curve, daily transmission replication curve, and the trend of cumulative confirmed cases were used to evaluate the effects of the public health interventions. Our results showed that the SEIRD model we established had a good fit and that the basic reproductive number was 3.38 (95% CI, 3.25–3.48). The SEIRD curve shows a small difference between the simulated and actual numbers of cases (correlation index H2 = 0.934), and the reproductive number (R) was reduced from 3.38 to 0.5 under China's current forty public health intervention policies. The actual growth curve of new cases, the virus infection curve, and the daily transmission replication curve all declined significantly under the current public health interventions. Our results suggest that the current public health interventions in China are effective and should be maintained until COVID-19 is no longer considered a global threat.
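
To make the reported drop in the reproductive number concrete, the sketch below integrates a simple SEIRD system twice, once with R = 3.38 and once with R = 0.5 (the two values quoted in the abstract), deriving the transmission rate from R and the removal rates. All other parameters and the population size are illustrative and are not those of the paper's fitted model.

```python
# Illustrative SEIRD comparison: R = 3.38 (without interventions) vs R = 0.5
# (with interventions), the two values reported in the abstract. Other
# parameters (incubation/recovery rates, fatality rate, population) are placeholders.
import numpy as np
from scipy.integrate import odeint

def seird(y, t, beta, sigma, gamma, mu, n):
    s, e, i, r, d = y
    ds = -beta * s * i / n
    de = beta * s * i / n - sigma * e
    di = sigma * e - (gamma + mu) * i
    dr = gamma * i
    dd = mu * i
    return ds, de, di, dr, dd

n = 10_000_000
sigma, gamma, mu = 1 / 5.2, 1 / 10, 0.002    # illustrative incubation, recovery, death rates
y0 = (n - 10, 0, 10, 0, 0)
t = np.linspace(0, 365, 366)

for r0 in (3.38, 0.5):
    beta = r0 * (gamma + mu)                 # transmission rate implied by the reproductive number
    s, e, i, r, d = odeint(seird, y0, t, args=(beta, sigma, gamma, mu, n)).T
    print(f"R = {r0}: peak infectious {i.max():,.0f}, cumulative deaths {d[-1]:,.0f}")
```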



2020 ◽  
Author(s):  
Kwak Gloria Hyunjung ◽  
Lowell Ling ◽  
Pan Hui

Abstract Rationale: Unprecedented public health measures have been used during the coronavirus 2019 (COVID-19) pandemic, but at the cost of economic and social disruption. Implementing timely and appropriate public health interventions is a challenge. Objectives: This study evaluates the timing and intensity of public health policies in each country and territory during the COVID-19 pandemic, and whether machine learning can help find better global health strategies. Methods: Population and COVID-19 epidemiological data from 183 countries and 78 territories between 21st January 2020 and 7th April 2020 were included, together with the implemented public health interventions. We used deep reinforcement learning, training the model to find optimal public health strategies by maximizing the total reward for controlling the spread of COVID-19. The strategies proposed by the model were analyzed against the actual timing and intensity of lockdown and travel restrictions. Measurements and Main Results: Early implementation of the actual lockdown and travel restriction policies, measured relative to the local index case date in each country or territory rather than to 31st December 2019, was associated with groups of progressively less severe crisis severity. However, our model suggested initiating at least a minimal intensity of lockdown or travel restriction even before the index case in each country and territory. In addition, the model mostly recommended a combination of lockdown and travel restrictions and higher-intensity policies than those implemented by governments, but it did not always encourage a rapid full lockdown or full border closures. Conclusion: Compared to actual government implementation, our model mostly recommended earlier and higher-intensity lockdown and travel restrictions. Machine learning may be used as a decision-support tool for implementing public health interventions during COVID-19 and future pandemics.
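
The study's deep reinforcement learning setup is not reproduced here, but the sketch below shows the general shape of the problem as a toy tabular Q-learning loop: states summarize epidemic prevalence, actions pair a lockdown level with a travel-restriction level, and the reward penalizes both new infections and intervention intensity. Everything in it (the environment dynamics, reward weights, and discretization) is an assumption made for illustration, not the authors' model.

```python
# Toy Q-learning sketch of the policy-timing problem: actions pair a lockdown
# level with a travel-restriction level; the reward penalizes new infections and
# intervention intensity. Dynamics, weights, and discretization are invented.
import numpy as np

rng = np.random.default_rng(0)

lockdown_levels = [0, 1, 2]          # none / partial / full
travel_levels = [0, 1, 2]            # none / partial / full border closure
actions = [(l, b) for l in lockdown_levels for b in travel_levels]
n_states = 10                        # discretized infection-prevalence buckets
q = np.zeros((n_states, len(actions)))

def step(state, action):
    """Toy dynamics: stricter measures shrink prevalence but carry a cost."""
    lockdown, travel = action
    growth = 1.3 - 0.25 * lockdown - 0.15 * travel + rng.normal(0, 0.05)
    next_state = int(np.clip(round(state * growth), 0, n_states - 1))
    reward = -next_state - 0.5 * (lockdown + travel)   # infections + intervention cost
    return next_state, reward

alpha, discount, epsilon = 0.1, 0.95, 0.1
for _ in range(5000):                # episodes
    state = 1                        # start near the index case
    for _ in range(30):              # decision steps per episode
        if rng.random() < epsilon:
            a = int(rng.integers(len(actions)))
        else:
            a = int(q[state].argmax())
        next_state, reward = step(state, actions[a])
        q[state, a] += alpha * (reward + discount * q[next_state].max() - q[state, a])
        state = next_state

for s in range(n_states):
    print(f"prevalence bucket {s}: recommended (lockdown, travel) = {actions[int(q[s].argmax())]}")
```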

