Crowdsourcing for machine learning in public health surveillance: lessons learned from Amazon Mechanical Turk (Preprint)

2021 ◽  
Author(s):  
Zahra Shakeri Hossein Abad ◽  
Joon Lee ◽  
Gregory P. Butler ◽  
Wendy Thompson

BACKGROUND Crowdsourcing services such as Amazon Mechanical Turk (AMT) allow researchers to use the collective intelligence of a wide range of online users for labour-intensive tasks. Since manual verification of the quality of the collected results is difficult due to the large volume of data and the quick turnaround time of the process, many questions remain to be explored regarding the reliability of these resources for developing digital public health systems. OBJECTIVE The main objective of this study is to explore and evaluate the application of crowdsourcing in general, and AMT in particular, for developing digital public health surveillance systems. METHODS We collected 296,166 crowd-generated labels for 98,722 tweets, labelled by 610 AMT workers, to develop machine learning (ML) models for detecting behaviours related to physical activity, sedentary behaviour, and sleep quality (PASS) among Twitter users. To infer the ground truth labels and explore their quality, we studied four statistical consensus methods that are agnostic of task features and focus only on worker labelling behaviour. Moreover, to model the meta-information associated with each labelling task and leverage the potential of context-sensitive data in the truth inference process, we developed seven ML models, including traditional classifiers (offline and active), a deep-learning-based classification model, and a hybrid convolutional neural network (CNN) model. RESULTS While crowdsourcing-based studies in public health have often equated majority vote with quality, the results of our study, using a truth set of 9,000 manually labelled tweets, show that consensus-based inference models mask underlying uncertainty in the data and overlook the importance of task meta-information. 
Our evaluations across three PASS datasets show that truth inference is a context-sensitive process, and none of the methods studied in this paper was consistently superior to the others in predicting the true label. We also found that the performance of ML models trained on crowd-labelled data is sensitive to the quality of these labels, and that poor-quality labels lead to incorrect assessment of these models. Finally, we provide a set of practical recommendations to improve the quality and reliability of crowdsourced data. CONCLUSIONS Findings indicate the importance of the quality of crowd-generated labels in developing ML models designed for decision-making purposes, such as public health surveillance decisions. A combination of the inference models outlined and analyzed in this work could be used to quantitatively measure and improve the quality of crowd-generated labels for training ML models. CLINICALTRIAL Not Applicable
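The simplest of the consensus methods the abstract alludes to is majority vote, which can be sketched in a few lines; reporting the agreement rate alongside the inferred label makes visible the uncertainty that a bare majority vote masks. The function, category names, and data below are illustrative, not taken from the study:

```python
from collections import Counter

def infer_labels(labels_by_item):
    """Majority-vote truth inference with a per-item agreement score.

    labels_by_item: dict mapping item id -> list of worker labels.
    Returns dict mapping item id -> (inferred label, agreement rate).
    """
    inferred = {}
    for item, labels in labels_by_item.items():
        counts = Counter(labels)
        label, votes = counts.most_common(1)[0]
        inferred[item] = (label, votes / len(labels))
    return inferred

# Three workers per tweet, mirroring the study's labelling setup.
votes = {
    "t1": ["PA", "PA", "PA"],  # unanimous
    "t2": ["PA", "SB", "PA"],  # majority with one dissent
}
result = infer_labels(votes)
```

Two tweets with the same majority label can carry very different agreement rates (1.0 vs 0.67 here), which is one way poor-quality labels can hide behind a consensus.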


2021 ◽  
Vol 79 (1) ◽  
Author(s):  
Romana Haneef ◽  
Sofiane Kab ◽  
Rok Hrzic ◽  
Sonsoles Fuentes ◽  
Sandrine Fosse-Edorh ◽  
...  

Abstract Background The use of machine learning (ML) techniques is increasing in healthcare, allowing health outcomes to be estimated and predicted from large administrative data sets more efficiently. The main objective of this study was to develop a generic ML algorithm to estimate the incidence of diabetes based on the number of reimbursements over the last 2 years. Methods We selected a final data set from a population-based epidemiological cohort (CONSTANCES) linked with the French National Health Database (SNDS). To develop this algorithm, we adopted a supervised ML approach with the following steps: i. selection of the final data set; ii. target definition; iii. coding variables for a given window of time; iv. splitting the final data into training and test data sets; v. variable selection; vi. model training; vii. validation of the model with the test data set; and viii. model selection. We used the area under the receiver operating characteristic curve (AUC) to select the best algorithm. Results The final data set used to develop the algorithm included 44,659 participants from CONSTANCES. Of the 3468 coded variables from the SNDS linked to the CONSTANCES cohort, 23 were selected to train the different algorithms. The final algorithm to estimate the incidence of diabetes was a linear discriminant analysis model based on the number of reimbursements of selected variables related to biological tests, drugs, medical acts, and hospitalization without a procedure over the last 2 years. This algorithm has a sensitivity of 62%, a specificity of 67%, and an accuracy of 67% [95% CI: 0.66–0.68]. Conclusions Supervised ML is an innovative tool for developing new methods to exploit large health administrative databases. In the context of the InfAct project, we developed and applied, for the first time, a generic ML algorithm to estimate the incidence of diabetes for public health surveillance. The algorithm we developed has moderate performance. 
The next step is to apply this algorithm to the SNDS to estimate the incidence of type 2 diabetes. More research is needed to apply various ML techniques to estimate the incidence of other health conditions.
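The AUC criterion used above to select the best algorithm reduces to the Mann-Whitney rank identity: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A minimal, dependency-free sketch (the scores and labels are illustrative, not the study's data):

```python
def auc(scores, labels):
    """AUC via the rank-sum (Mann-Whitney) identity.

    scores: predicted risk scores; labels: 1 for cases, 0 for non-cases.
    Counts how often a positive outranks a negative, with ties worth 0.5.
    """
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A classifier that ranks every case above every non-case scores 1.0.
perfect = auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```

In practice one would use a library implementation (e.g., scikit-learn's `roc_auc_score`); the point of the sketch is what the selection metric actually measures.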


2020 ◽  
Vol 110 (S3) ◽  
pp. S326-S330
Author(s):  
Erika Bonnevie ◽  
Jaclyn Goldbarg ◽  
Allison K. Gallegos-Jeffrey ◽  
Sarah D. Rosenberg ◽  
Ellen Wartella ◽  
...  

Objectives. To report on vaccine opposition and misinformation promoted on Twitter, highlighting the Twitter accounts that drive the conversation. Methods. We used supervised machine learning to code all Twitter posts. We first identified codes and themes manually by using a grounded theoretical approach and then applied them to the full data set algorithmically. We identified the top 50 authors month over month to determine influential sources of information related to vaccine opposition. Results. The data collection period was June 1 to December 1, 2019, resulting in 356 594 mentions of vaccine opposition. A total of 129 Twitter authors met the qualification of a top author in at least 1 month. Top authors were responsible for 59.5% of vaccine-opposition messages. We identified 10 conversation themes. Themes were similarly distributed across top authors and all other authors mentioning vaccine opposition. Top authors appeared to be highly coordinated in their promotion of misinformation within themes. Conclusions. Public health has struggled to respond to vaccine misinformation. Results indicate that sources of vaccine misinformation are not as heterogeneous or distributed as they may first appear given the volume of messages. There are identifiable upstream sources of misinformation, which may aid in countermessaging and public health surveillance.
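The month-over-month top-author analysis described above (rank authors within each month, pool the monthly leaders, then measure their share of all messages) can be sketched with the standard library; the data, function name, and `top_n` value are illustrative:

```python
from collections import Counter, defaultdict

def top_author_share(posts, top_n=50):
    """posts: list of (author, month) pairs.

    Identifies the top_n most prolific authors in each month, pools them,
    and returns (set of top authors, fraction of all posts they produced).
    """
    by_month = defaultdict(Counter)
    for author, month in posts:
        by_month[month][author] += 1
    top_authors = set()
    for month_counts in by_month.values():
        top_authors.update(a for a, _ in month_counts.most_common(top_n))
    share = sum(1 for a, _ in posts if a in top_authors) / len(posts)
    return top_authors, share

# Toy corpus: author "a" dominates both months.
posts = [("a", "jun")] * 3 + [("b", "jun")] * 2 + [("c", "jun")] \
      + [("a", "jul")] * 2 + [("d", "jul")]
authors, share = top_author_share(posts, top_n=1)
```

A small pooled top-author set accounting for a large share of posts is exactly the concentration pattern the study reports (129 authors, 59.5% of messages).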


2020 ◽  
Author(s):  
Patrick James Ward ◽  
April M Young

BACKGROUND Public health surveillance is critical to detecting emerging population health threats and improvements. Surveillance data have increased in size and complexity, posing challenges to data management and analysis. Natural language processing (NLP) and machine learning (ML) are valuable tools for the analysis of unstructured data involving free text and have been used in innovative ways to examine a variety of health outcomes. OBJECTIVE Given the cross-disciplinary applications of NLP and ML, research on their applications in surveillance has been disseminated in a variety of outlets. As such, the aim of this narrative review was to describe the current state of NLP and ML use in surveillance science and to identify directions for future research. METHODS Information was abstracted from articles describing the use of natural language processing and machine learning in public health surveillance identified through a PubMed search. RESULTS Twenty-two articles met the review criteria, 12 involving traditional surveillance data sources and 10 involving online media sources for surveillance. The data sources analyzed with NLP and ML consisted primarily of death certificates (n=6), hospital data (n=5), and online media sources (e.g., Twitter) (n=8). CONCLUSIONS The reviewed articles demonstrate the potential of NLP and ML to enhance surveillance data by improving the timeliness of surveillance, identifying cases in the absence of standardized case definitions, and enabling mining of social media for public health surveillance.


2021 ◽  
Author(s):  
Zahra Shakeri Hossein Abad ◽  
Gregory P. Butler ◽  
Wendy Thompson ◽  
Joon Lee

BACKGROUND Advances in automated data processing and machine learning (ML) models, together with the unprecedented growth in the number of social media users who publicly share and discuss health-related information, have made public health surveillance (PHS) one of the most enduring applications of social media. However, existing PHS systems that feed on social media data have not been widely deployed in national surveillance systems, which appears to stem from a lack of trust in social media data among practitioners and the public. Building more robust and reliable datasets on which supervised machine learning models can be trained and tested reliably is a significant step toward overcoming this hurdle. OBJECTIVE The health implications of daily behaviours (physical activity, sedentary behaviour, and sleep; PASS), an evergreen topic in PHS, are widely studied through traditional data sources such as surveillance surveys and administrative databases, which are often several months out of date by the time they are utilized, costly to collect, and thus limited in quantity and coverage. In this paper, we present LPHEADA, a multicountry and fully Labelled digital Public HEAlth DAtaset of tweets originating in Australia, Canada, the United Kingdom (UK), or the United States (US). METHODS We collected the data for this study from Twitter using the Twitter livestream application programming interface (API) between 28 November 2018 and 19 June 2020. To obtain PASS-related tweets for manual annotation, we iteratively used regular expressions, unsupervised natural language processing, domain-specific ontologies, and linguistic analysis. We used Amazon Mechanical Turk (AMT) to label the collected data into self-reported PASS categories and implemented a quality control pipeline to monitor and manage the validity of crowd-generated labels. Moreover, we used ML, latent semantic analysis, linguistic analysis, and label inference analysis to validate different components of the dataset. 
RESULTS LPHEADA contains 366,405 crowd-generated labels (three labels per tweet) for 122,135 PASS-related tweets, labelled by 708 unique annotators on AMT. In addition to crowd-generated labels, LPHEADA provides details about the three critical components of any PHS system: place, time, and demographics (gender, age range) associated with each tweet. CONCLUSIONS Publicly available datasets for digital PASS surveillance are usually isolated and only provide labels for small subsets of the data. We believe that the novelty and comprehensiveness of the dataset provided in this study will help develop, evaluate, and deploy digital PASS surveillance systems. LPHEADA will be an invaluable resource for both public health researchers and practitioners.
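The first stage of the tweet-selection pipeline described in the methods, filtering the livestream with regular expressions, can be sketched as follows. The keyword patterns here are invented placeholders; the study's actual expressions and domain-specific ontologies are far richer:

```python
import re

# Illustrative keyword patterns for each PASS category (hypothetical,
# not the patterns used in the study).
PASS_PATTERNS = {
    "physical_activity": re.compile(r"\b(ran|running|jogged|gym|workout)\b", re.I),
    "sedentary": re.compile(r"\b(binge[- ]?watch(ed|ing)?|sat all day|netflix)\b", re.I),
    "sleep": re.compile(r"\b(slept|insomnia|can'?t sleep|sleep(ing)?)\b", re.I),
}

def pass_categories(tweet):
    """Return the PASS categories whose pattern matches the tweet text."""
    return [name for name, pat in PASS_PATTERNS.items() if pat.search(tweet)]
```

Tweets matching at least one category would then move on to the later unsupervised NLP and linguistic-analysis stages before reaching AMT annotators; keyword matching alone over-selects, which is why the paper describes an iterative, multi-stage process.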


2006 ◽  
Vol 11 (11) ◽  
pp. 7-8 ◽  
Author(s):  
G Krause ◽  
J Benzler ◽  
G Reiprich ◽  
R Görgen

Surveillance systems for infectious diseases form the basis for effective public health measures in the prevention and control of infectious diseases. Assessing and improving the quality of such national surveillance systems is a challenge, as many different administrations and professions contribute to a complex system in which sensitive information must be exchanged in a reliable and timely fashion. We conducted a multidisciplinary quality circle on the national public health surveillance system in Germany which included clinicians, laboratory physicians, and staff from local and state health departments as well as from the Robert Koch-Institut. The recommendations resulting from the quality circle included proposals to change the federal law for the control of infectious diseases as well as practical activities such as changing notification forms and faxing information letters to clinicians. A number of recommendations have since been implemented, and some have resulted in measurable improvements. This demonstrates that the quality circle is a useful method for improving the quality of national public health surveillance systems.

