“Thought I’d Share First”: An Analysis of COVID-19 Conspiracy Theories and Misinformation Spread on Twitter (Preprint)

2020 ◽  
Author(s):  
Dax Gerts ◽  
Courtney D. Shelley ◽  
Nidhi Parikh ◽  
Travis Pitts ◽  
Chrysm Watson Ross ◽  
...  

BACKGROUND Misinformation spread through social media is a growing problem, and the emergence of COVID-19 has caused an explosion in new activity and renewed focus on the resulting threat to public health. Given this increased visibility, in-depth analysis of COVID-19 misinformation spread is critical to understanding the evolution of ideas with potential negative public health impact.

OBJECTIVE We use Twitter data to explore methods for characterizing and classifying major COVID-19 myths and conspiracy theories, and to provide context for the theories’ evolution through the pandemic’s early months.

METHODS Using a curated data set of COVID-19 tweets (approximately 120 million tweets) spanning late January to early May 2020, we applied methods including regular expression filtering, supervised machine learning, sentiment analysis, geospatial analysis, and dynamic topic modeling to trace the spread of misinformation and to characterize novel features of COVID-19 conspiracy theories.

RESULTS Random forest models for four major misinformation topics produced mixed results: narrowly defined conspiracy theories achieved F1 scores of 0.804 and 0.857, while more broadly defined theories performed measurably worse, with scores of 0.654 and 0.347. Despite this, analysis using model-labeled data was beneficial for increasing the proportion of data matching misinformation indicators. We identified distinct increases in negative sentiment, theory-specific trends in geospatial spread, and the evolution of conspiracy theory topics and subtopics over time.

CONCLUSIONS COVID-19-related conspiracy theories show that history frequently repeats itself, with the same conspiracy theories being recycled for new situations. We use a combination of supervised learning, unsupervised learning, and natural language processing techniques to examine how these theories evolved and intertwined over the first four months of the COVID-19 outbreak, and to hypothesize about more effective public health messaging to combat misinformation in online spaces.
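The classification step described in the Methods lends itself to a compact scikit-learn pipeline. The sketch below (Python) illustrates one plausible shape of such a pipeline, assuming regular-expression keyword filtering followed by a TF-IDF random forest classifier; the keyword pattern, file name, and column names are illustrative assumptions, not the authors’ actual implementation.

```python
# Hypothetical sketch of regex filtering + random forest classification,
# loosely following the abstract's description; not the authors' code.
import re
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# 1) Regular-expression filtering: keep tweets mentioning theory keywords
#    (pattern is an illustrative assumption).
THEORY_PATTERN = re.compile(r"\b(5g|bioweapon|bill gates|lab leak)\b", re.IGNORECASE)

tweets = pd.read_csv("covid_tweets.csv")  # assumed columns: text, label (0/1)
candidates = tweets[tweets["text"].str.contains(THEORY_PATTERN, na=False)]

# 2) Supervised classification on a hand-labeled subset of the filtered tweets.
X_train, X_test, y_train, y_test = train_test_split(
    candidates["text"], candidates["label"], test_size=0.2, random_state=42
)
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), min_df=5),
    RandomForestClassifier(n_estimators=200, random_state=42),
)
model.fit(X_train, y_train)
print("F1:", f1_score(y_test, model.predict(X_test)))
```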


10.2196/26527 ◽  
2021 ◽  
Vol 7 (4) ◽  
pp. e26527
Author(s):  
Dax Gerts ◽  
Courtney D Shelley ◽  
Nidhi Parikh ◽  
Travis Pitts ◽  
Chrysm Watson Ross ◽  
...  

Background The COVID-19 outbreak has left many people isolated within their homes; these people are turning to social media for news and social connection, which leaves them vulnerable to believing and sharing misinformation. Health-related misinformation threatens adherence to public health messaging, and monitoring its spread on social media is critical to understanding the evolution of ideas that have potentially negative public health impacts.

Objective The aim of this study is to use Twitter data to explore methods to characterize and classify four COVID-19 conspiracy theories and to provide context for each of these conspiracy theories through the first 5 months of the pandemic.

Methods We began with a corpus of COVID-19 tweets (approximately 120 million) spanning late January to early May 2020. We first filtered tweets using regular expressions (n=1.8 million) and used random forest classification models to identify tweets related to four conspiracy theories. Our classified data sets were then used in downstream sentiment analysis and dynamic topic modeling to characterize the linguistic features of COVID-19 conspiracy theories as they evolve over time.

Results Analysis using model-labeled data was beneficial for increasing the proportion of data matching misinformation indicators. Random forest classifier metrics varied across the four conspiracy theories considered (F1 scores between 0.347 and 0.857); this performance increased as the given conspiracy theory was more narrowly defined. We showed that misinformation tweets demonstrate more negative sentiment when compared to nonmisinformation tweets and that theories evolve over time, incorporating details from unrelated conspiracy theories as well as real-world events.

Conclusions Although we focus here on health-related misinformation, this combination of approaches is not specific to public health and is valuable for characterizing misinformation in general, which is an important first step in creating targeted messaging to counteract its spread. Initial messaging should aim to preempt generalized misinformation before it becomes widespread, while later messaging will need to target evolving conspiracy theories and the new facets of each as they become incorporated.
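The downstream sentiment comparison reported in the Results can be sketched in a few lines of Python. The example below assumes VADER as the sentiment scorer and a model-assigned misinformation flag; both the column names and the choice of VADER are assumptions for illustration, not the authors’ stated tooling.

```python
# Hypothetical sketch of the sentiment comparison between misinformation and
# non-misinformation tweets; not the authors' code.
import nltk
import pandas as pd
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

# Assumed columns: "text" and a model-assigned boolean "is_misinformation".
tweets = pd.read_csv("classified_tweets.csv")
tweets["compound"] = tweets["text"].apply(lambda t: sia.polarity_scores(t)["compound"])

# Compare the sentiment distributions of the two groups.
print(tweets.groupby("is_misinformation")["compound"].describe())
```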



10.2196/22374 ◽  
2020 ◽  
Vol 22 (10) ◽  
pp. e22374 ◽  
Author(s):  
Wasim Ahmed ◽  
Francesc López Seguí ◽  
Josep Vidal-Alaball ◽  
Matthew S Katz

Background During the COVID-19 pandemic, a number of conspiracy theories have emerged. A popular theory posits that the pandemic is a hoax and suggests that certain hospitals are “empty.” Research has shown that accepting conspiracy theories increases the likelihood that an individual may ignore government advice about social distancing and other public health interventions. Due to the possibility of a second wave and future pandemics, it is important to gain an understanding of the drivers of misinformation and strategies to mitigate it.

Objective This study set out to evaluate the #FilmYourHospital conspiracy theory on Twitter, attempting to understand the drivers behind it. More specifically, the objectives were to determine which online sources of information were used as evidence to support the theory, the ratio of automated to organic accounts in the network, and what lessons can be learned to mitigate the spread of such a conspiracy theory in the future.

Methods Twitter data related to the #FilmYourHospital hashtag were retrieved and analyzed using social network analysis across a 7-day period from April 13-20, 2020. The data set consisted of 22,785 tweets and 11,333 Twitter users. The Botometer tool was used to identify accounts with a higher probability of being bots.

Results The most important drivers of the conspiracy theory are ordinary citizens; one of the most influential accounts is a Brexit supporter. We found that YouTube was the information source most linked to by users. The most retweeted post belonged to a verified Twitter user, indicating that the user may have had more influence on the platform. There was a small number of automated accounts (bots) and deleted accounts within the network.

Conclusions Hashtags using and sharing conspiracy theories can be targeted in an effort to delegitimize content containing misinformation. Social media organizations need to bolster their efforts to label or remove content that contains misinformation. Public health authorities could enlist the assistance of influencers in spreading antinarrative content.
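The social network analysis described in the Methods typically starts from a retweet graph. The sketch below (Python, networkx) shows one way such a graph might be built and queried to surface influential accounts; the file name and column names are assumptions, and this is not the authors’ implementation.

```python
# Hypothetical sketch of retweet-network construction and influence ranking
# for a hashtag data set; not the authors' code.
import networkx as nx
import pandas as pd

# Assumed columns: "user" (who tweeted) and "retweeted_user" (empty if original).
tweets = pd.read_csv("filmyourhospital_tweets.csv")

G = nx.DiGraph()
for row in tweets.itertuples():
    if pd.notna(row.retweeted_user):
        # Edge points from the retweeter to the account being amplified.
        G.add_edge(row.user, row.retweeted_user)

# Accounts whose content is most widely retweeted within the network.
top_accounts = sorted(G.in_degree(), key=lambda kv: kv[1], reverse=True)[:10]
print(top_accounts)
```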



2020 ◽  
Vol 110 (S3) ◽  
pp. S326-S330
Author(s):  
Erika Bonnevie ◽  
Jaclyn Goldbarg ◽  
Allison K. Gallegos-Jeffrey ◽  
Sarah D. Rosenberg ◽  
Ellen Wartella ◽  
...  

Objectives. To report on vaccine opposition and misinformation promoted on Twitter, highlighting the Twitter accounts that drive the conversation.

Methods. We used supervised machine learning to code all Twitter posts. We first identified codes and themes manually using a grounded theoretical approach and then applied them to the full data set algorithmically. We identified the top 50 authors month over month to determine influential sources of information related to vaccine opposition.

Results. The data collection period was June 1 to December 1, 2019, yielding 356,594 mentions of vaccine opposition. A total of 129 Twitter authors met the qualification of a top author in at least 1 month. Top authors were responsible for 59.5% of vaccine-opposition messages. We identified 10 conversation themes, which were similarly distributed across top authors and all other authors mentioning vaccine opposition. Top authors appeared to be highly coordinated in their promotion of misinformation within themes.

Conclusions. Public health has struggled to respond to vaccine misinformation. Results indicate that sources of vaccine misinformation are not as heterogeneous or distributed as they may first appear given the volume of messages. There are identifiable upstream sources of misinformation, which may aid in countermessaging and public health surveillance.
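The top-author analysis described in the Methods reduces to a simple monthly aggregation. The sketch below (Python, pandas) illustrates the idea of identifying the 50 most prolific authors per month and measuring their share of all messages; the file name and column names are assumptions, not the authors’ code.

```python
# Hypothetical sketch of month-over-month top-author identification and the
# share of messages they produce; not the authors' code.
import pandas as pd

# Assumed columns: "author", "created_at" (timestamp), one row per post.
posts = pd.read_csv("vaccine_opposition_posts.csv", parse_dates=["created_at"])
posts["month"] = posts["created_at"].dt.to_period("M")

top_authors = set()
for _, month_posts in posts.groupby("month"):
    # Top 50 authors by post volume within this month.
    top_authors |= set(month_posts["author"].value_counts().head(50).index)

share = posts["author"].isin(top_authors).mean()
print(f"{len(top_authors)} top authors produced {share:.1%} of all posts")
```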




Author(s):  
Emilda Emilda

Limited waste management capacity at the Cipayung landfill (TPA) has caused garbage to pile up to heights of more than 30 meters. This condition has health impacts on the people of Cipayung Village. This study aims to analyze the impact of waste management at the Cipayung landfill on public health in Cipayung Village, Depok City. The research is a descriptive qualitative study. Respondents were selected by purposive sampling, and data were collected through interviews, observation, and documentation. Based on interviews with 30 respondents, the most common illnesses were diarrhea, followed by other stomach ailments, skin itching, and coughing. This is presumably due to unhealthy air and water conditions and because clean and healthy living behaviors (PHBS) have not yet become habitual among residents. No respondent practiced all of the PHBS criteria; in general, respondents practiced three criteria, namely maintaining hair hygiene, skin cleanliness, and hand hygiene, while keeping water storage clean was the most frequently neglected behavior. To minimize these health impacts, improvements in waste management at the Cipayung landfill are needed, along with continuous outreach and education to build PHBS habits and reinforce the importance of maintaining a clean environment.



2020 ◽  
pp. 3-17
Author(s):  
Peter Nabende

Natural Language Processing for under-resourced languages is now a mainstream research area. However, there are limited studies on Natural Language Processing applications for many indigenous East African languages. To help close this gap, this paper evaluates the application of well-established machine translation methods to Lumasaaba, a heavily under-resourced indigenous East African language. Specifically, we review the most common machine translation methods in the context of Lumasaaba, including both rule-based and data-driven methods. We then apply a state-of-the-art data-driven machine translation method to learn models for automating translation between Lumasaaba and English using a very limited data set of parallel sentences. Automatic evaluation results show that a transformer-based Neural Machine Translation model architecture leads to consistently better BLEU scores than the recurrent neural network-based models. Moreover, the automatically generated translations can be comprehended to a reasonable extent and usually correspond to the source language input.
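The automatic evaluation step mentioned above typically amounts to corpus-level BLEU scoring of model outputs against reference translations. The sketch below (Python, sacrebleu) shows what that scoring looks like; the example sentences are invented placeholders, and this is not the paper’s evaluation code.

```python
# Hypothetical sketch of BLEU evaluation of machine translation output
# against reference translations; not the paper's code.
import sacrebleu

# Hypotheses are the model's English outputs; references are gold translations
# aligned one-to-one with the hypotheses (placeholder sentences shown here).
hypotheses = ["the child is going to school", "rain fell heavily last night"]
references = [["the child is going to school", "it rained heavily last night"]]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"BLEU = {bleu.score:.2f}")
```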


