Do tabloids poison the well of social media? Explaining democratically dysfunctional news sharing

2018 ◽  
Vol 20 (11) ◽  
pp. 4255-4274 ◽  
Author(s):  
Andrew Chadwick ◽  
Cristian Vaccari ◽  
Ben O’Loughlin

The use of social media for sharing political information and the status of news as an essential raw material for good citizenship are both generating increasing public concern. We add to the debates about misinformation, disinformation, and “fake news” using a new theoretical framework and a unique research design integrating survey data and analysis of observed news sharing behaviors on social media. Using a media-as-resources perspective, we theorize that there are elective affinities between tabloid news and misinformation and disinformation behaviors on social media. Integrating four data sets we constructed during the 2017 UK election campaign—individual-level data on news sharing (N = 1,525,748 tweets), website data (N = 17,989 web domains), news article data (N = 641 articles), and data from a custom survey of Twitter users (N = 1313 respondents)—we find that sharing tabloid news on social media is a significant predictor of democratically dysfunctional misinformation and disinformation behaviors. We explain the consequences of this finding for the civic culture of social media and the direction of future scholarship on fake news.

Author(s):  
V.T Priyanga ◽  
J.P Sanjanasri ◽  
Vijay Krishna Menon ◽  
E.A Gopalakrishnan ◽  
K.P Soman

The widespread use of social media like Facebook, Twitter, WhatsApp, etc. has changed the way news is created and published; accessing news has become easy and inexpensive. However, the scale of usage and the inability to moderate content have made social media a breeding ground for the circulation of fake news. Fake news is deliberately created either to increase readership or to disrupt social order for political and commercial benefit. It is of paramount importance to identify and filter out fake news, especially in democratic societies. Most existing methods for detecting fake news involve traditional supervised machine learning, which has been quite ineffective. In this paper, we analyze word embedding features that can tell apart fake news from true news. We use the LIAR and ISOT data sets. We extract highly correlated news data from the entire data set by using cosine similarity and other such metrics, in order to distinguish their domains based on central topics. We then employ auto-encoders to detect and differentiate between true and fake news while also exploring their separability through network analysis.
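The cosine-similarity filtering step described above can be sketched in Python. This is an illustrative bag-of-words version, not the authors' code; real word-embedding vectors would replace the term-frequency vectors used here, and the similarity threshold is an assumed parameter.

```python
from collections import Counter
import math

def bow_vector(text):
    # Simple bag-of-words term frequencies (a stand-in for richer embeddings).
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    # Cosine of the angle between the two term-frequency vectors.
    va, vb = bow_vector(a), bow_vector(b)
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(v * v for v in va.values()))
            * math.sqrt(sum(v * v for v in vb.values())))
    return dot / norm if norm else 0.0

def filter_correlated(articles, threshold=0.8):
    # Keep pairs of articles whose similarity exceeds the threshold,
    # approximating the "highly correlated news" selection step.
    pairs = []
    for i in range(len(articles)):
        for j in range(i + 1, len(articles)):
            s = cosine_similarity(articles[i], articles[j])
            if s >= threshold:
                pairs.append((i, j, s))
    return pairs
```

In practice the filtered, topically coherent subsets would then be fed to the auto-encoder stage.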


Author(s):  
Giandomenico Di Domenico ◽  
Annamaria Tuan ◽  
Marco Visentin

Abstract In the wake of the COVID-19 pandemic, unprecedented amounts of fake news and hoaxes spread on social media. In particular, conspiracy theories about the effects of specific new technologies like 5G and misinformation tarnished the reputation of brands like Huawei. Language plays a crucial role in understanding the motivational determinants of social media users in sharing misinformation, as people extract meaning from information based on their discursive resources and skill sets. In this paper, we analyze textual and non-textual cues from a panel of 4923 tweets containing the hashtags #5G and #Huawei during the first week of May 2020, when several countries were still adopting lockdown measures, to determine whether a tweet is retweeted and, if so, how much. Overall, through traditional logistic regression and machine learning, we found different effects of the textual and non-textual cues on the retweeting of a tweet and on its ability to accumulate retweets. In particular, the presence of misinformation plays an interesting role in spreading the tweet on the network. More importantly, the relative influence of the cues suggests that Twitter users actually read a tweet but do not necessarily understand or critically evaluate it before deciding to share it on the platform.
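Textual and non-textual cues of the kind the study relates to retweeting can be illustrated with a small feature extractor. The feature names below are assumptions chosen for illustration, not the authors' exact variable set.

```python
import re

def extract_cues(tweet):
    # Illustrative tweet-level features that could feed a logistic
    # regression predicting whether a tweet is retweeted.
    return {
        "length": len(tweet),                                    # textual cue
        "n_hashtags": len(re.findall(r"#\w+", tweet)),           # non-textual cue
        "n_mentions": len(re.findall(r"@\w+", tweet)),           # non-textual cue
        "has_url": int(bool(re.search(r"https?://\S+", tweet))), # non-textual cue
        "n_exclaims": tweet.count("!"),                          # stylistic cue
    }
```

Each tweet's feature dictionary would become one row of the design matrix for the regression or machine-learning models the abstract mentions.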


2019 ◽  
Vol 43 (1) ◽  
pp. 53-71 ◽  
Author(s):  
Ahmed Al-Rawi ◽  
Jacob Groshek ◽  
Li Zhang

Purpose The purpose of this paper is to examine one of the largest data sets on the hashtag use of #fakenews, comprising over 14m tweets sent by more than 2.4m users. Design/methodology/approach Tweets referencing the hashtag (#fakenews) were collected from January 3 to May 7, 2018. Bot detection tools were employed, and the most retweeted posts, most mentions and most hashtags, as well as the top 50 most active users in terms of the frequency of their tweets, were analyzed. Findings The majority of the top 50 Twitter users are more likely to be automated bots, while posts like those sent by President Donald Trump dominate the most retweeted posts, which consistently associate mainstream media with fake news. The most used words and hashtags show that major news organizations are frequently referenced, with a focus on CNN, which is often mentioned in negative ways. Research limitations/implications The research study is limited to the examination of Twitter data; ethnographic methods like interviews or surveys are needed to complement these findings. Though the data reported here do not prove direct effects, the implications of the research provide a vital framework for assessing and diagnosing the networked spammers and main actors that have been pivotal in shaping discourses around fake news on social media. These discourses, which are sometimes assisted by bots, can potentially influence audiences, their trust in mainstream media and their understanding of what fake news is. Originality/value This paper offers results from one of the first empirical research studies on the propagation of fake news discourse on social media, shedding light on the most active Twitter users who discuss and mention the term “#fakenews” in connection with news organizations, parties and related figures.
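The hashtag and most-active-user tallies described above can be sketched as frequency counts over a tweet corpus. This is a minimal illustration of the counting step, not the authors' pipeline, which also involved bot detection tools.

```python
from collections import Counter
import re

def top_hashtags(tweets, k=10):
    # Tally hashtags (case-insensitively) across a tweet corpus,
    # as in ranking the hashtags co-occurring with #fakenews.
    counts = Counter(tag.lower() for t in tweets for tag in re.findall(r"#\w+", t))
    return counts.most_common(k)

def top_users(records, k=50):
    # records: (user, tweet_text) pairs; rank users by tweet frequency,
    # analogous to finding the top 50 most active accounts.
    return Counter(user for user, _ in records).most_common(k)
```

The resulting ranked lists are the raw material for the manual inspection and bot-likelihood checks the abstract describes.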


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Staci L Benoit ◽  
Rachel F. Mauldin

Abstract Background Social media use has become a mainstay of communication, and with that comes the exchange of factual and non-factual information. Social media has given many people the opportunity to voice their opinions without repercussions and to create coalitions of like-minded people. This has also led to the development of a community known as anti-vaxxers, or vaccine deniers. This research explores the extent to which vaccine knowledge has spread on social media. Methods This cross-sectional study explored the relationship between the spread of information regarding vaccines and social media use. A sample of 2515 people over the age of 18 from around the world completed the survey via a link distributed on Twitter, Facebook and Instagram. A series of questions on vaccine knowledge and beliefs were combined to create an individual’s “knowledge score” and “belief score”. Knowledge scores ranged from low to high knowledge with increasing scores. Belief scores ranged from belief in myths to disbelief in myths with higher scores. These scores were then analysed across demographics and questions relating to social media use, using a Welch test and post hoc testing where applicable. Results Significant relationships were found in both the knowledge and belief categories, many of them similar across the two. North Americans had significantly lower knowledge and belief scores compared with all other continents. While the majority of people primarily use Facebook, Twitter users were significantly more knowledgeable. Higher education was also correlated with higher knowledge and belief scores. Conclusions Overall, these correlations are important in determining ways to intervene in the anti-vax movement through social media. Cross-demographic effects were not analysed in this study but could be in future studies.
To better understand the social media exposures related to vaccine information, a follow-up structured interview study would be beneficial. Note that due to the cross-sectional nature of this study, causal relationships could not be established.
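A Welch-style comparison of mean scores between two groups, without assuming equal variances, can be sketched as follows. This computes only the t statistic and Welch–Satterthwaite degrees of freedom; a p-value would additionally require a t-distribution CDF (e.g. from scipy.stats), and the paper's actual analysis may have used the multi-group Welch ANOVA rather than this two-sample form.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    # Welch's two-sample t statistic and Welch-Satterthwaite degrees of
    # freedom, e.g. for comparing knowledge scores between two demographics.
    n1, n2 = len(a), len(b)
    v1, v2 = variance(a) / n1, variance(b) / n2  # variance of each sample mean
    t = (mean(a) - mean(b)) / math.sqrt(v1 + v2)
    df = (v1 + v2) ** 2 / (v1 ** 2 / (n1 - 1) + v2 ** 2 / (n2 - 1))
    return t, df
```

Identical groups give t = 0; the unequal-variance correction shows up in the non-integer degrees of freedom for real data.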


Author(s):  
Feng Qian ◽  
Chengyue Gong ◽  
Karishma Sharma ◽  
Yan Liu

Fake news on social media is a major challenge, and studies have shown that fake news can propagate exponentially quickly in its early stages. We therefore focus on early detection of fake news and assume that only the news article text is available at the time of detection, since additional information such as user responses and propagation patterns can be obtained only after the news spreads. However, we find that historical user responses to previous articles are available and can be treated as soft semantic labels that enrich the binary label of an article by providing insight into why the article must be labeled as fake. We propose a novel Two-Level Convolutional Neural Network with User Response Generator (TCNN-URG), where the TCNN captures semantic information from article text by representing it at the sentence and word levels, and the URG learns a generative model of user responses to article text from historical user responses, which it can use to generate responses to new articles to assist fake news detection. We conduct experiments on one available dataset and a larger dataset we collected ourselves. Experimental results show that TCNN-URG outperforms baselines based on prior approaches that detect fake news from article text alone.


2020 ◽  
pp. 177-196
Author(s):  
Turgay Yerlikaya ◽  
Seca Toker

This article focuses on how virtual social networks affect socio-political life. The main theme of the article is how social networks such as Facebook and Twitter can direct voters’ electoral preferences, especially during election time, through the dissemination of manipulative content and fake news. The use of social media, which was initially thought to have a positive effect on democratization, has been extensively discussed in recent years as a threat to democracy. Examples from the 2016 U.S. presidential elections, France, Brexit, Germany, the UK and Turkey will be used to illustrate the risks that social networks pose to democracy, especially during election periods.


Author(s):  
Michael Bossetta

State-sponsored “bad actors” increasingly weaponize social media platforms to launch cyberattacks and disinformation campaigns during elections. Social media companies, due to their rapid growth and scale, struggle to prevent the weaponization of their platforms. This study conducts an automated spear phishing and disinformation campaign on Twitter ahead of the 2018 United States midterm elections. A fake news bot account — the @DCNewsReport — was created and programmed to automatically send customized tweets with a “breaking news” link to 138 Twitter users, before being restricted by Twitter. Overall, one in five users clicked the link, which could have potentially led to the downloading of ransomware or the theft of private information. However, the link in this experiment was non-malicious and redirected users to a Google Forms survey. In predicting users’ likelihood to click the link on Twitter, no statistically significant differences were observed between right-wing and left-wing partisans, or between Web users and mobile users. The findings signal that politically expressive Americans on Twitter, regardless of their party preferences or the devices they use to access the platform, are at risk of being spear phished on social media.


2021 ◽  
Vol 11 (11) ◽  
pp. 1489
Author(s):  
Chiara Scuotto ◽  
Ciro Rosario Ilardi ◽  
Francesco Avallone ◽  
Gianpaolo Maggi ◽  
Alfonso Ilardi ◽  
...  

Exposure to relevant social and/or historical events can increase the generation of false memories (FMs). The Coronavirus Disease 2019 (COVID-19) pandemic is a calamity challenging health, political, and journalistic bodies, with media generating confusion that has facilitated the spread of fake news. In this respect, our study aims to investigate the relationships between memories (true memories, TMs vs. FMs) for COVID-19-related news and different individual variables (i.e., use of traditional and social media, COVID-19 perceived and objective knowledge, fear of the disease, depression and anxiety symptoms, reasoning skills, and coping mechanisms). One hundred and seventy-one university students (131 females) were surveyed. Overall, our results suggested that depression and anxiety symptoms, reasoning skills, and coping mechanisms did not affect the formation of FMs. Conversely, the fear of loved ones contracting the infection was found to be negatively associated with FMs. This finding might be due to an empathy/prosociality-based positive bias boosting memory abilities, also explained by the young age of participants. Furthermore, objective knowledge (i) predicted an increase in TMs and a decrease in FMs and (ii) significantly mediated the relationships between the use of social media and the development of both TMs and FMs. In particular, higher levels of objective knowledge strengthened the formation of TMs and decreased the development of FMs following use of social media. These results may lead to reconsidering the idea of social media as the main source of fake news. This claim is further supported both by the lack of substantial differences between the use of traditional and social media among participants reporting FMs and by the positive association between use of social media and levels of objective knowledge. Knowledge about the topic, rather than the type of source, would make the difference in the process of memory formation.


2019 ◽  
Vol 2 (2) ◽  
pp. e20-e29 ◽  
Author(s):  
Kalyan Gudaru ◽  
Leonardo Tortolero Blanco ◽  
Daniele Castellani ◽  
Hegel Trujillo Santamaria ◽  
Marcela Pelayo-Nieto ◽  
...  

Background and Objectives There is increasing use of social media amongst the urological community. However, it is difficult to identify urological data on various social media platforms in an efficient manner. We proposed a hashtag, #UroSoMe, to be used when posting urology-related content on social media platforms. The objectives of this article are to describe how #UroSoMe was developed and to report the data from the first month of #UroSoMe. Material and Methods The hashtag #UroSoMe was introduced to the urological community. The #UroSoMe working group was formed, and its members actively invited and encouraged people to use the hashtag #UroSoMe when posting urology-related content. After the #UroSoMe (@so_uro) platform on Twitter had grown to more than 300 users, the first live event of online case discussion, i.e. #LiveCaseDiscussions, was conducted. A prospective observational study of #UroSoMe Twitter activity during the first month of its usage, from 14 December 2018 to 13 January 2019, was conducted. Outcome measures included number of users, number of tweets, user location, top tweeters, top hashtags used and interactions. Analysis was performed using NodeXL (Social Media Research Foundation; California, USA; https://www.smrfoundation.org/nodexl/), Symplur (https://www.symplur.com) and Twitonomy (https://www.twitonomy.com). Results The first month of #UroSoMe activity documented 1373 tweets/retweets by 1008 tweeters, with 17698 mentions and 1003 replies. The #LiveCaseDiscussions achieved a potential reach of 2,033,352 Twitter users. The top tweets mainly included cases presented by #UroSoMe working group members during #LiveCaseDiscussions. The Twitonomy map showed participation from 214 geographical locations. The major groups of participants using the hashtag #UroSoMe were ‘Researcher/Academic’ and ‘Doctor’. The Twitter account of #UroSoMe (@so_uro) has now grown to more than 1000 followers.
Conclusions Social media is an excellent platform for interaction amongst the urological community. The results demonstrate that #UroSoMe achieved widespread engagement from all over the world.


Author(s):  
Alberto Ardèvol-Abreu ◽  
Patricia Delponti ◽  
Carmen Rodríguez-Wangüemert

The main social media platforms have been implementing strategies to minimize fake news dissemination. These include identifying, labeling, and penalizing –via news feed ranking algorithms– fake publications. Part of the rationale behind this approach is that the negative effects of fake content arise only when social media users are deceived. Once debunked, fake posts and news stories should therefore become harmless. Unfortunately, the literature shows that the effects of misinformation are more complex and tend to persist and even backfire after correction. Furthermore, we still do not know much about how social media users evaluate content that has been fact-checked and flagged as false. More worryingly, previous findings suggest that some people may intentionally share made-up news on social media, although their motivations are not fully explained. To better understand users’ interaction with social media content identified or recognized as false, we analyze qualitative and quantitative data from five focus groups and a sub-national online survey (N = 350). Findings suggest that the label of ‘false news’ plays a role –although not necessarily a central one– in social media users’ evaluation of the content and their decision (not) to share it. Some participants showed distrust in fact-checkers and a lack of knowledge about the fact-checking process. We also found that fake news sharing is a two-dimensional phenomenon that includes intentional and unintentional behaviors. We discuss some of the reasons why some social media users may choose to distribute fake news content intentionally.

