Health-related fake news on social media platforms: A systematic literature review

2021
pp. 146144482110387
Author(s):
Cristiane Melchior
Mírian Oliveira

This review aims to (a) investigate the characteristics of both the research community and the published research on health-related fake news on social media platforms, and (b) identify the challenges and provide recommendations for future research on the subject. We reviewed 69 journal articles found in the main academic databases up to April 2021. The studies extracted data mainly from Twitter, YouTube, and Facebook. Most articles aimed to investigate the public’s reaction to fake health information, concluding that health agencies and professionals should increase their online presence. The articles also suggest that future work should aim to improve the quality of health information on social media platforms, develop new tools and strategies to combat fake news sharing, and study the credibility of health information. Nonetheless, those in control of the platforms are the only ones who can take effective measures to ensure that their users receive reliable information.

2020
Vol 15 (4)
pp. 95-97
Author(s):
Jeevan Bhatta
Sharmistha Sharma
Shashi Kandel
Roshan Nepal

Social media is a common platform that enables its users to share opinions, personal experiences, and perspectives with one another instantaneously and globally. It has played a paramount role during pandemics such as COVID-19 and has proven to be a crucial means of communication between information sources and individuals. However, it has also become a place where misinformation and fake news spread rapidly. The infodemic, a plethora of information, some authentic and some not, makes it even harder for the general public to find factual and trustworthy information when it is needed; it has grown into a major risk to public health, and social media is emerging as a popular platform for its spread. This commentary aims to explore how social media has affected the current situation. We also aim to share our insights on controlling this misinformation. This commentary contributes to the evolving knowledge on countering fake news and false health-related information shared over various social media platforms.


Author(s):  
Alberto Ardèvol-Abreu
Patricia Delponti
Carmen Rodríguez-Wangüemert

The main social media platforms have been implementing strategies to minimize the dissemination of fake news. These include identifying, labeling, and penalizing (via news feed ranking algorithms) fake publications. Part of the rationale behind this approach is that the negative effects of fake content arise only when social media users are deceived. Once debunked, fake posts and news stories should therefore become harmless. Unfortunately, the literature shows that the effects of misinformation are more complex and tend to persist, and even backfire, after correction. Furthermore, we still do not know much about how social media users evaluate content that has been fact-checked and flagged as false. More worryingly, previous findings suggest that some people may intentionally share made-up news on social media, although their motivations are not fully explained. To better understand users’ interaction with social media content identified or recognized as false, we analyze qualitative and quantitative data from five focus groups and a sub-national online survey (N = 350). Findings suggest that the ‘false news’ label plays a role, although not necessarily a central one, in social media users’ evaluation of content and in their decision (not) to share it. Some participants showed distrust in fact-checkers and a lack of knowledge about the fact-checking process. We also found that fake news sharing is a two-dimensional phenomenon that includes both intentional and unintentional behaviors. We discuss some of the reasons why some social media users may choose to distribute fake news content intentionally.


2022
Vol ahead-of-print (ahead-of-print)
Author(s):
Brinda Sampat
Sahil Raj

Purpose
The sharing of “fake news” or misinformation on social media sites, in public discourse and in politics, has increased dramatically over the last few years, especially during the current COVID-19 pandemic, causing concern. However, this phenomenon is inadequately researched. This study examines fake news sharing through the lens of stimulus-organism-response (SOR) theory, uses and gratifications theory (UGT), and the big five personality traits (BFPT) to understand the motivations for sharing fake news and the personality traits of those who do so. The stimuli in the model comprise gratifications (pass time, entertainment, socialization, information sharing, and information seeking) and personality traits (agreeableness, conscientiousness, extraversion, openness, and neuroticism). The inclination to authenticate or instantly share news is the organism, leading to the sharing of fake news, which forms the response in the study.
Design/methodology/approach
The conceptual model was tested with data collected from a sample of 221 social media users in India. The data were analyzed with partial least squares structural equation modeling (PLS-SEM) to determine the effects of the gratifications and personality traits on fake news sharing. The moderating role of the platform (WhatsApp or Facebook) was also studied.
Findings
The results suggest that the pass time, information sharing, and socialization gratifications lead to instant sharing of news on social media platforms. Individuals who exhibit extraversion, neuroticism, and openness share news instantly, whereas the agreeableness and conscientiousness traits lead to the authentication of news before sharing.
Originality/value
This study contributes to the social media literature by identifying the user gratifications and personality traits that lead to the sharing of fake news on social media platforms. It also sheds light on the moderating influence of the choice of social media platform on fake news sharing.
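For readers who want a concrete sense of how such a moderation test can be set up, the sketch below is illustrative only: the study itself used PLS-SEM, whereas this stand-in uses a plain OLS interaction model via statsmodels, and every column name (pass_time, extraversion, platform, instant_sharing, and so on) is hypothetical rather than taken from the authors' instrument.

# Illustrative only: the study used PLS-SEM; this sketch probes the same idea
# (gratifications/traits -> instant sharing, moderated by platform) with a
# simpler OLS interaction model. All file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey export: one row per respondent, Likert-scale composites
# for each gratification/trait plus the platform they mainly use.
df = pd.read_csv("survey_responses.csv")  # assumed file

model = smf.ols(
    "instant_sharing ~ (pass_time + socialization + info_sharing"
    " + extraversion + neuroticism + openness) * C(platform)",
    data=df,
).fit()
print(model.summary())  # interaction terms approximate the platform moderation test

In a model of this shape, significant gratification-by-platform interaction terms would be the rough analogue of the platform moderation effect the abstract reports.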


Author(s):  
Cristina Pulido
Laura Ruiz-Eugenio
Gisela Redondo-Sama
Beatriz Villarejo-Carballido

One of today’s challenges is confronting fake news (false information) about health, given its potential impact on people’s lives. This article contributes a new application of the social impact in social media (SISM) methodology. The study focuses on the social impact of research to identify which health information is false and which information constitutes evidence of social impact shared in social media. The analysis covers Reddit, Facebook, and Twitter and helps identify how interactions on these platforms depend on the type of information shared. The results indicate that messages focused on fake health information are mostly aggressive, that messages based on evidence of social impact are respectful and transformative, and that deliberation contexts promoted in social media help overcome false information about health. These results contribute to advancing knowledge on overcoming fake health-related news shared on social media.
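As a rough illustration of the kind of coded-message tabulation the SISM analysis implies, the sketch below cross-tabulates message tone against information type and platform; the input file, its columns, and the coding labels are assumptions for illustration, not the authors' materials.

# A minimal sketch of tabulating hand-coded messages by information type and
# tone across platforms. Data layout and column names are assumptions.
import pandas as pd

messages = pd.read_csv("coded_messages.csv")  # assumed columns: platform, info_type, tone

# Share of aggressive vs. respectful/transformative messages for each information type
tone_by_type = pd.crosstab(messages["info_type"], messages["tone"], normalize="index")
print(tone_by_type.round(2))

# Same breakdown split by platform (Reddit, Facebook, Twitter)
print(pd.crosstab([messages["platform"], messages["info_type"]],
                  messages["tone"], normalize="index").round(2))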


2021
Vol 2 (1)
pp. 100-114
Author(s):  
Md. Sayeed Al-Zaman

COVID-19-related online fake news poses a threat to Indian public health. In response, this study seeks to understand five important features of COVID-19-related social media fake news by analyzing 125 Indian fake news stories. The analysis produces five major findings based on five research questions. First, the seven themes of fake news are health, religiopolitical, political, crime, entertainment, religious, and miscellaneous. Health-related fake news (67.2%) tops the list and includes medicine, medical and healthcare facilities, viral infection, and doctor-patient issues. Second, the seven types of fake news content are text, photo, audio, video, text and photo, text and video, and text and photo and video. Most fake news takes the form of text and video (47.2%). Third, online media produces more fake news (94.4%) than mainstream media (5.6%); interestingly, four social media platforms (Twitter, Facebook, WhatsApp, and YouTube) produce most of it. Fourth, relatively more fake news has international connections (54.4%), as the COVID-19 pandemic is a global phenomenon. Fifth, most COVID-19-related fake news is negative (63.2%), which could be a real threat to public health. These results may contribute to the academic understanding of social media fake news during present and future health crises. The paper concludes by stating some limitations regarding the data source and results and by providing a few suggestions for further research.
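The percentages reported above are simple descriptive tallies; a minimal sketch of how such tallies could be computed from a coding sheet is given below, with the file name and column names assumed for illustration rather than taken from the study.

# Sketch of the descriptive tallies behind the reported percentages
# (theme, content type, source, connection, valence). The coding sheet
# and its column names are assumptions, not the author's files.
import pandas as pd

items = pd.read_csv("fake_news_coding_sheet.csv")  # assumed: 125 rows, one per fake news item

for column in ["theme", "content_type", "source", "connection", "valence"]:
    share = items[column].value_counts(normalize=True).mul(100).round(1)
    print(f"\n{column}:\n{share}")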


2021
Author(s):
Emily Chen
Julie Jiang
Ho-Chun Herbert Chang
Goran Muric
Emilio Ferrara

BACKGROUND
The novel coronavirus disease, COVID-19, caused by SARS-CoV-2, has come to define much of our lives since the beginning of 2020. During this time, countries around the world imposed lockdowns and social distancing measures; our physical movements ground to a halt, while our online interactions increased as we turned to engaging with each other virtually. As our means of communication shifted online, so too did information consumption. While governing authorities and health agencies have made an intentional shift toward using social media and online platforms to spread factual and timely information, this has also opened the gate for misinformation, contributing to the phenomenon of misinfodemics.
OBJECTIVE
In this paper, we carry out a more than year-long analysis of Twitter discourse, covering over a billion tweets related to COVID-19, to identify and investigate prevalent misinformation narratives and trends. We also aim to describe the Twitter audience that is more susceptible to health-related misinformation and the network mechanisms driving misinfodemics.
METHODS
We leverage a dataset that we collected and made public, containing over one billion tweets related to COVID-19 spanning January 2020 to April 2021. We create a subset of this larger dataset by isolating tweets that include URLs with domains identified by Media Bias/Fact Check as prone to questionable content and misinformation. By leveraging clustering and topic modeling techniques, we identify the major narratives, including health misinformation and conspiracies, present within this subset of tweets.
RESULTS
Our focus is on a subset of 12,689,165 tweets that we determined to be representative of COVID-19 misinformation narratives in our full dataset. When analyzing tweets that share content from domains known to be questionable or to promote misinformation, we find that a few key misinformation narratives emerge, concerning hydroxychloroquine and alternative medicines, the directives of United States officials and governing agencies, and COVID-19 prevention measures. We further analyze the misinformation retweet network and find that users who share both questionable and conspiracy-related content are clustered more closely in the network than others, supporting the hypothesis that echo chambers can contribute to the spread of health misinfodemics.
CONCLUSIONS
Our paper presents a summary and analysis of the major misinformation discourse surrounding COVID-19 and of those who promoted and engaged with it. While misinformation is not limited to social media platforms, we hope that our insights will shed light on how best to combat misinformation, particularly pertaining to health-related emergencies, and pave the way for computational infodemiology to inform health surveillance and interventions.
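A minimal sketch of the two steps the methods describe, filtering tweets whose links resolve to flagged domains and then topic-modeling the resulting subset, is shown below; the flagged-domain list, the input file, and its fields are placeholders, not the authors' data or code.

# Sketch of the described pipeline: keep tweets linking to flagged domains,
# then topic-model that subset with LDA. All inputs are placeholders.
import json
from urllib.parse import urlparse
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

flagged_domains = {"questionable-example.com", "conspiracy-example.net"}  # placeholder for the MBFC-derived list

tweets = json.load(open("covid_tweets_sample.json"))  # assumed format: [{"text": ..., "urls": [...]}, ...]

def links_to_flagged_domain(tweet):
    # A tweet counts as questionable if any of its URLs points to a flagged domain
    domains = (urlparse(u).netloc.lower().removeprefix("www.") for u in tweet.get("urls", []))
    return any(d in flagged_domains for d in domains)

misinfo_texts = [t["text"] for t in tweets if links_to_flagged_domain(t)]

vectorizer = CountVectorizer(stop_words="english", max_df=0.95, min_df=2)
doc_term = vectorizer.fit_transform(misinfo_texts)

lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(doc_term)
terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-10:][::-1]]
    print(f"Topic {k}: {', '.join(top)}")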


10.2196/14731
2019
Vol 21 (10)
pp. e14731
Author(s):
Yahya Albalawi
Nikola S Nikolov
Jim Buckley

Background
Social media platforms play a vital role in the dissemination of health information. However, evidence suggests that a high proportion of Twitter posts (ie, tweets) are not necessarily accurate, and many studies suggest that tweets do not need to be accurate, or at least evidence based, to gain traction. This is a dangerous combination in the sphere of health information.
Objective
The first objective of this study is to examine health-related tweets originating from Saudi Arabia in terms of their accuracy. The second objective is to find factors that relate to the accuracy and dissemination of these tweets, thereby enabling the identification of ways to enhance the dissemination of accurate tweets. The initial findings from this study and methodological improvements will then be employed in a larger-scale study that will address these issues in more detail.
Methods
A health lexicon was used to extract health-related tweets using the Twitter application programming interface, and the results were further filtered manually. A total of 300 tweets were each labeled by two medical doctors; the doctors agreed on the labels of 109 tweets as either accurate or inaccurate. Other measures were taken from these tweets’ metadata to see whether there was any relationship between those measures and either the accuracy or the dissemination of the tweets. The entire range of this metadata was analyzed using Python, version 3.6.5 (Python Software Foundation), to answer the research questions posed.
Results
A total of 34 out of 109 tweets (31.2%) in the dataset were classified as untrustworthy health information. These came mainly from users with a non-health care background and from social media accounts with no corresponding physical (ie, organizational) manifestation. Unsurprisingly, we found that traditionally trusted health sources were more likely to tweet accurate health information than other users. Likewise, these provisional results suggest that tweets posted in the morning are more trustworthy than tweets posted at night, possibly corresponding to official and casual posts, respectively. Our results also suggest that the crowd was quite good at identifying trustworthy information sources, as evidenced by the number of times a tweet’s author was tagged as favorited by the community.
Conclusions
The results indicate some initially surprising factors that might correlate with the accuracy of tweets and their dissemination. For example, the time a tweet was posted correlated with its accuracy, which may reflect a difference between professional (ie, morning) and hobbyist (ie, evening) tweets. More surprisingly, tweets containing a kashida, a decorative element in Arabic writing used to justify the text within lines, were more likely to be disseminated through retweets. These findings will be further assessed using data analysis techniques on a much larger dataset in future work.
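The sketch below illustrates the two methodological steps described, lexicon-based extraction of health-related tweets and checking agreement between the two annotators; the lexicon terms, file names, and the use of Cohen's kappa as an agreement summary are assumptions for illustration rather than the paper's actual procedure.

# Sketch of lexicon-based filtering of tweets plus a check of how far the two
# annotators agree. Lexicon terms, files, and column names are assumptions.
import re
import pandas as pd
from sklearn.metrics import cohen_kappa_score

health_terms = {"صحة", "دواء", "علاج", "vaccine", "diabetes"}  # tiny placeholder lexicon
pattern = re.compile("|".join(map(re.escape, health_terms)), re.IGNORECASE)

tweets = pd.read_csv("saudi_tweets.csv")  # assumed column: text
health_tweets = tweets[tweets["text"].str.contains(pattern, na=False)]
print(f"{len(health_tweets)} health-related tweets matched the lexicon")

labels = pd.read_csv("doctor_labels.csv")  # assumed columns: doctor1, doctor2 ("accurate"/"inaccurate")
agreed = labels[labels["doctor1"] == labels["doctor2"]]
print(f"{len(agreed)} of {len(labels)} tweets labeled identically by both doctors")
print("Cohen's kappa:", cohen_kappa_score(labels["doctor1"], labels["doctor2"]))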


2021
Vol 2 (2)
pp. 1-31
Author(s):
Esteban A. Ríssola
David E. Losada
Fabio Crestani

Mental state assessment by analysing user-generated content is a field that has recently attracted considerable attention. Today, many people are increasingly utilising online social media platforms to share their feelings and moods. This provides a unique opportunity for researchers and health practitioners to proactively identify linguistic markers or patterns that correlate with mental disorders such as depression, schizophrenia or suicidal behaviour. This survey describes and reviews the approaches that have been proposed for mental state assessment and identification of disorders using online digital records. The presented studies are organised according to the assessment technology and the feature extraction process conducted. We also present a series of studies which explore different aspects of the language and behaviour of individuals suffering from mental disorders, and discuss various aspects related to the development of experimental frameworks. Furthermore, ethical considerations regarding the treatment of individuals’ data are outlined. The main contributions of this survey are a comprehensive analysis of the proposed approaches for online mental state assessment on social media, a structured categorisation of the methods according to their design principles, lessons learnt over the years and a discussion of possible avenues for future research.
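As a generic baseline of the pattern the survey catalogues, the sketch below turns users' posts into linguistic features and trains a simple classifier; it is a representative example under assumed file and column names, not any specific approach from the reviewed studies.

# A generic baseline of the surveyed pattern: turn user-generated text into
# linguistic features and classify against known labels. File and column
# names are assumptions for illustration.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.metrics import classification_report

data = pd.read_csv("user_posts.csv")  # assumed columns: post_text, label (e.g., depression vs. control)

X_train, X_test, y_train, y_test = train_test_split(
    data["post_text"], data["label"], test_size=0.2, random_state=0, stratify=data["label"])

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2), min_df=5),
                    LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))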


Author(s):  
Marina M. Schoemaker
Suzanne Houwen

Abstract
Purpose of Review
(1) To give an overview of what is currently known about health-related quality of life (HRQoL) in three common and co-occurring developmental disorders: attention deficit hyperactivity disorder (ADHD), autism spectrum disorders (ASD), and developmental coordination disorder (DCD), and (2) to provide directions for future research.
Recent Findings
HRQoL is compromised in all three developmental disorders, affecting various domains of HRQoL, although some domains are more affected than others depending on the nature of the core deficits of the disorder. Overall, parents rate the HRQoL of their children lower than the children themselves do. Children with ASD and ADHD who have co-occurring disorders have lower HRQoL than those with a single disorder. Future studies in DCD are needed to investigate the effect of co-occurring disorders in this population.
Summary
Children with developmental disorders have lower HRQoL than typically developing children. Future research should focus on the effects of co-occurring disorders on HRQoL and on protective factors that may increase HRQoL. HRQoL should be part of clinical assessment, as it reveals the areas of life in which children are struggling and that could be targeted during intervention.


Trials
2021
Vol 22 (1)
Author(s):
Lauren E. Wisk
Russell G. Buhr

Abstract
Background
In response to the COVID-19 pandemic and the associated adoption of scarce resource allocation (SRA) policies, we sought to rapidly deploy a novel survey to ascertain community values and preferences for SRA and to test the utility of a brief intervention to improve knowledge of, and values alignment with, a new SRA policy. Given social distancing and the precipitous evolution of the pandemic, Internet-enabled recruitment was deemed the best method to engage a community-based sample. We quantify the efficiency and acceptability of this Internet-based recruitment for engaging a trial cohort and describe the approach used for implementing a health-related trial entirely online using off-the-shelf tools.
Methods
We recruited 1971 adult participants (≥ 18 years) via engagement with community partners and organizations and via outreach through direct and social media messaging. We quantified the response rate and participant characteristics of our sample, examined sample representativeness, and evaluated potential non-response bias.
Results
Recruitment was derived in similar proportions from direct referral by partner organizations and from broader social media-based outreach, with extremely low study entry from organic (non-invited) search activity. Of the social media platforms, Facebook was the highest-yield recruitment source. Bot activity was present but minimal and was identifiable through meta-data and engagement behavior. Recruited participants differed from broader populations in terms of sex, ethnicity, and education, but had a similar prevalence of chronic conditions. Retention was satisfactory, with 61% of those invited entering the first follow-up survey.
Conclusions
We demonstrate that rapid recruitment into a longitudinal intervention trial via social media is feasible, efficient, and acceptable. Recruitment in conjunction with community partners representing target populations, and with outreach across multiple platforms, is recommended to optimize sample size and diversity. Trial implementation, engagement tracking, and retention are feasible with off-the-shelf tools on preexisting platforms.
Trial registration
ClinicalTrials.gov NCT04373135. Registered on May 4, 2020.
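The abstract notes that bot activity was identifiable through meta-data and engagement behavior; the sketch below shows one plausible rule-based screen of that kind, with thresholds, column names, and flags that are entirely hypothetical rather than the trial's actual criteria.

# Sketch of a rule-based screen for bot-like survey entries using meta-data
# and engagement behaviour. Thresholds and column names are hypothetical.
import pandas as pd

responses = pd.read_csv("survey_entries.csv", parse_dates=["started_at", "submitted_at"])

completion_seconds = (responses["submitted_at"] - responses["started_at"]).dt.total_seconds()
duplicate_ip = responses.duplicated(subset="ip_hash", keep=False)
implausible_speed = completion_seconds < 120          # finished a long survey in under 2 minutes
straight_lining = responses.filter(like="item_").nunique(axis=1) <= 1  # identical answer to every item

responses["flag_bot"] = duplicate_ip | implausible_speed | straight_lining
print(responses["flag_bot"].value_counts())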

