A simulated cyberattack on Twitter: Assessing partisan vulnerability to spear phishing and disinformation ahead of the 2018 U.S. midterm elections

Author(s):  
Michael Bossetta

State-sponsored “bad actors” increasingly weaponize social media platforms to launch cyberattacks and disinformation campaigns during elections. Social media companies, due to their rapid growth and scale, struggle to prevent the weaponization of their platforms. This study conducts an automated spear phishing and disinformation campaign on Twitter ahead of the 2018 United States midterm elections. A fake news bot account, @DCNewsReport, was created and programmed to automatically send customized tweets with a “breaking news” link to 138 Twitter users before being restricted by Twitter. Overall, one in five users clicked the link, which could potentially have led to the downloading of ransomware or the theft of private information. However, the link in this experiment was non-malicious and redirected users to a Google Forms survey. In predicting users’ likelihood to click the link on Twitter, no statistically significant differences were observed between right-wing and left-wing partisans, or between Web users and mobile users. The findings signal that politically expressive Americans on Twitter, regardless of their party preferences or the devices they use to access the platform, are at risk of being spear phished on social media.
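For illustration, the partisanship and device comparisons could be run as a logistic regression of clicks on group indicators. The sketch below is a hypothetical reconstruction, not the study's materials; the data file and column names are assumptions:

```python
# Hypothetical sketch: model link clicks as a function of partisanship and
# device type, the kind of comparison the abstract reports as non-significant.
# Column names ("clicked", "right_wing", "mobile") are assumed, not the study's.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("click_data.csv")  # one row per targeted user (hypothetical file)

# clicked = 1 if the user clicked the "breaking news" link, else 0
model = smf.logit("clicked ~ right_wing + mobile", data=df).fit()
print(model.summary())  # insignificant coefficients would mirror the reported null result
```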

2021
Vol 118 (50)
pp. e2116310118
Author(s):  
Dominik Hangartner
Gloria Gennaro
Sary Alasiri
Nicholas Bahrich
Alexandra Bornhoft
...  

Despite heightened awareness of the detrimental impact of hate speech on social media platforms on affected communities and public discourse, there is little consensus on approaches to mitigate it. While content moderation, whether by governments or social media companies, can curb online hostility, such policies may suppress valuable as well as illicit speech and might disperse rather than reduce hate speech. As an alternative strategy, an increasing number of international and nongovernmental organizations (I/NGOs) are employing counterspeech to confront and reduce online hate speech. Despite their growing popularity, there is scant experimental evidence in the public domain on the effectiveness and design of counterspeech strategies. Modeling our interventions on current I/NGO practice, we randomly assign English-speaking Twitter users who have sent messages containing xenophobic (or racist) hate speech to one of three counterspeech strategies (empathy, warning of consequences, and humor) or a control group. Our intention-to-treat analysis of 1,350 Twitter users shows that empathy-based counterspeech messages can increase the retrospective deletion of xenophobic hate speech by 0.2 SD and reduce the prospective creation of xenophobic hate speech over a 4-wk follow-up period by 0.1 SD. We find, however, no consistent effects for strategies using humor or warning of consequences. Together, these results advance our understanding of the central role of empathy in reducing exclusionary behavior and inform the design of future counterspeech interventions.
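As a rough illustration, an intention-to-treat estimate with SD-scaled effects can be obtained by regressing a standardized outcome on treatment-arm dummies. This is an assumption-laden sketch (variable names and file are mine), not the authors' code:

```python
# Minimal intention-to-treat sketch: standardized outcome regressed on
# treatment-arm dummies. Variable names and data file are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("counterspeech_trial.csv")  # hypothetical file, one row per user

# Standardize the outcome so coefficients read as effect sizes in SD units
df["deletion_sd"] = (df["deletions"] - df["deletions"].mean()) / df["deletions"].std()

# "arm" assumed to take the levels: control, empathy, warning, humor
itt = smf.ols("deletion_sd ~ C(arm, Treatment('control'))", data=df).fit(cov_type="HC1")
print(itt.params)  # an empathy coefficient near 0.2 would match the reported effect
```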


2021
Author(s):  
Sahana Udupa

Recent trends of migration to smaller social media platforms among right-wing actors have raised a caution that an excessive focus on large, transnational social media companies might lose sight of the volatile spaces of homegrown and niche platforms, which have begun to offer diverse “alternative” avenues for extreme speech. Such trends, which drew global media attention during Trump supporters’ attempted exodus to Parler, are also gaining salience in Europe and the global South. Turning the focus to these developments, this article pries open three pertinent features of extreme speech on small platforms: its propensity to migrate between platforms, its embeddedness in domestic regulatory and technological innovations, and its evolving role in facilitating hateful language and disinformation in and through deep trust-based networks. Rather than assuming that smaller platforms are on an obvious trajectory toward progressive alternatives, I suggest that their diverse entanglements with exclusionary extreme speech should be an important focal point for policy measures.


2020
Author(s):  
Aleksandra Urman
Stefania Ionescu
David Garcia
Anikó Hannák

BACKGROUND: Since the beginning of the COVID-19 pandemic, scientists have been willing to share their results quickly to speed up the development of potential treatments and/or a vaccine. At the same time, traditional peer-review-based publication systems are not always able to process new research promptly. This has contributed to a surge in the number of medical preprints published since January 2020. In the absence of a vaccine, preventative measures such as social distancing are most helpful in slowing the spread of COVID-19. Their effectiveness can be undermined if the public does not comply with them. Hence, public discourse can have a direct effect on the progression of the pandemic. Research shows that social media discussions on COVID-19 are driven mainly by findings from preprints, not peer-reviewed papers, highlighting the need to examine the ways medical preprints are shared and discussed online.

OBJECTIVE: We examine the patterns of medRxiv preprint sharing on Twitter to establish (1) whether the number of tweets linking to medRxiv increased with the advent of the COVID-19 pandemic; (2) which medical preprints were mentioned on Twitter most often; (3) whether medRxiv sharing patterns on Twitter exhibit political partisanship; and (4) whether the discourse surrounding medical preprints among Twitter users has changed throughout the pandemic.

METHODS: The analysis is based on tweets (n=557,405) containing links to the medRxiv preprint repository that were posted between the creation of the repository in June 2019 and June 2020. The study relies on a combination of statistical techniques and text analysis methods.

RESULTS: Since January 2020, the number of tweets linking to medRxiv has increased drastically, peaking in April 2020 with a subsequent cool-down. Before the pandemic, preprints were shared predominantly by users we identify as medical professionals and scientists. After January 2020, other users, including politically engaged ones, increasingly started tweeting about medRxiv. Our findings indicate a political divide in the sharing patterns of the 10 most-tweeted preprints: all of them were shared more frequently by users who describe themselves as Republicans than by users who describe themselves as Democrats. Finally, we observe a change in the discourse around medRxiv preprints: pre-pandemic tweets linking to them predominantly used the word “preprint”, but in February 2020 it was overtaken by the word “study”. Our analysis suggests this change is at least partially driven by politically engaged users.

CONCLUSIONS: Widely shared medical preprints can have a direct effect on the public discourse around COVID-19, which in turn can affect society’s willingness to comply with preventative measures. This calls for increased responsibility in dealing with medical preprints from all parties involved: scientists, preprint repositories, media, politicians, and social media companies.
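Two of the descriptive steps, the monthly tweet volume and the “preprint” vs. “study” framing shift, could be computed along these lines. This is a sketch with assumed column names and file, not the authors' pipeline:

```python
# Sketch of two descriptive steps from the abstract: monthly volume of tweets
# linking to medRxiv, and the share of tweets using "preprint" vs. "study".
# Field names ("created_at", "text") follow common Twitter-export conventions
# and are assumptions here.
import pandas as pd

tweets = pd.read_csv("medrxiv_tweets.csv", parse_dates=["created_at"])  # hypothetical file

# 1) Monthly counts: a sharp rise after January 2020 with an April peak
monthly = tweets.set_index("created_at").resample("MS").size()
print(monthly)

# 2) Framing shift: fraction of tweets mentioning each term, per month
text = tweets.set_index("created_at")["text"].str.lower()
for term in ("preprint", "study"):
    share = text.str.contains(term).resample("MS").mean()
    print(term, share.round(3).to_dict())
```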


2020
Author(s):  
Ethan Kaji
Maggie Bushman

BACKGROUND: Adolescents with depression often turn to social media to express their feelings, seek support, and find educational material. Little is known about how Reddit, a forum-based platform, compares to Twitter, a newsfeed platform, when it comes to content surrounding depression.

OBJECTIVE: The purpose of this study is to identify differences between Reddit and Twitter in how depression is discussed and represented online.

METHODS: A content analysis of Reddit posts and Twitter posts, using r/depression and #depression, identified signs of depression according to the DSM-IV criteria. Posts were also coded for other youth-related topics, including School, Family, and Social Activity, and for the presence of medical or promotional content. The relative frequency of each code was then compared between platforms, as was the average DSM-IV score for each platform.

RESULTS: A total of 102 posts were included in this study, comprising 53 Reddit posts and 49 Twitter posts. Findings suggest that Reddit hosts a far greater share of content showing signs of depression (92%) than Twitter (24%). Medical content appeared in 28.3% of Reddit posts versus 18.4% of Twitter posts. Promotional content appeared in 53.1% of Twitter posts but in none of the Reddit posts.

CONCLUSIONS: Users with depression seem more willing to discuss their mental health on the subreddit r/depression than on Twitter. Twitter users also use #depression with a wider variety of topics, not all of which actually involve a case of depression.
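The headline platform comparison can be checked with a two-proportion test. The counts below are approximate back-calculations from the reported percentages (92% of 53 and 24% of 49), so treat the sketch as illustrative:

```python
# Two-proportion comparison of posts showing signs of depression.
# Counts are approximate back-calculations from the abstract's percentages.
from scipy.stats import chi2_contingency

# Rows: Reddit, Twitter; columns: signs of depression present, absent
table = [[49, 4],   # ~92% of 53 Reddit posts
         [12, 37]]  # ~24% of 49 Twitter posts
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p:.4g}")  # a very small p would confirm the platform gap
```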


2021
pp. 016344372110158
Author(s):  
Opeyemi Akanbi

Moving beyond the current focus on the individual as the unit of analysis in the privacy paradox, this article examines the misalignment between privacy attitudes and online behaviors at the level of society as a collective. I draw on Facebook’s market performance to show how, despite concerns about privacy, market structures drive user, advertiser, and investor behaviors that continue to reward corporate owners of social media platforms. In this market-oriented analysis, I introduce the metaphor of elasticity to capture the responsiveness of demand for social media to the data (price) charged by social media companies. Overall, this article positions social media as inelastic relative to privacy costs; highlights the role of the social collective in the privacy crisis; and ultimately underscores the need for structural interventions to address privacy risks.
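The elasticity metaphor maps onto the textbook price-elasticity-of-demand definition, with the “price” read as the personal data users surrender. A minimal rendering in my own notation, not the article's formula:

```latex
% Price elasticity of demand, with P read as the data "price" users pay.
% |E_d| < 1 means demand is inelastic: usage barely drops as privacy costs rise.
\[
  E_d \;=\; \frac{\%\,\Delta Q}{\%\,\Delta P}
  \;=\; \frac{\partial Q}{\partial P}\cdot\frac{P}{Q},
  \qquad |E_d| < 1 \;\Longrightarrow\; \text{inelastic demand}
\]
```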


Author(s):  
Giandomenico Di Domenico
Annamaria Tuan
Marco Visentin

In the wake of the COVID-19 pandemic, unprecedented amounts of fake news and hoaxes spread on social media. In particular, conspiracy theories about the effects of new technologies such as 5G circulated, and misinformation tarnished the reputation of brands like Huawei. Language plays a crucial role in understanding the motivational determinants of social media users in sharing misinformation, as people extract meaning from information based on their discursive resources and their skillset. In this paper, we analyze textual and non-textual cues from a panel of 4923 tweets containing the hashtags #5G and #Huawei during the first week of May 2020, when several countries were still adopting lockdown measures, to determine whether or not a tweet is retweeted and, if so, how many retweets it accumulates. Overall, using traditional logistic regression and machine learning, we found different effects of the textual and non-textual cues on whether a tweet is retweeted and on its ability to accumulate retweets. In particular, the presence of misinformation plays an interesting role in spreading the tweet through the network. More importantly, the relative influence of the cues suggests that Twitter users actually read a tweet, but do not necessarily understand or critically evaluate it, before deciding to share it on the platform.
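The two-part outcome (retweeted at all vs. how many retweets) could be modeled as below. This is a sketch under assumed feature names and data file, not the authors' specification:

```python
# Two-part sketch: logistic model for whether a tweet is retweeted at all,
# and a negative binomial count model for how many retweets it accumulates.
# Feature names ("has_misinfo", "has_url", "text_length") are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("tweets_5g_huawei.csv")  # hypothetical file
df["retweeted"] = (df["retweet_count"] > 0).astype(int)

# Part 1: is the tweet retweeted at all?
logit = smf.logit("retweeted ~ has_misinfo + has_url + text_length", data=df).fit()

# Part 2: among retweeted tweets, how many retweets accumulate?
nb = smf.negativebinomial(
    "retweet_count ~ has_misinfo + has_url + text_length",
    data=df[df["retweeted"] == 1],
).fit()
print(logit.params, nb.params, sep="\n")
```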


Symmetry
2021
Vol 13 (4)
pp. 556
Author(s):  
Thaer Thaher
Mahmoud Saheb
Hamza Turabieh
Hamouda Chantar

Fake or false information on social media platforms is a significant challenge that deliberately misleads users through rumors, propaganda, or deceptive information about a person, organization, or service. Twitter is one of the most widely used social media platforms, especially in the Arab region, where the number of users is steadily increasing, accompanied by an increase in the rate of fake news. This has drawn researchers’ attention to the need for a safe online environment free of misleading information. This paper proposes a smart classification model for the early detection of fake news in Arabic tweets utilizing Natural Language Processing (NLP) techniques, Machine Learning (ML) models, and the Harris Hawks Optimizer (HHO) as a wrapper-based feature selection approach. An Arabic Twitter corpus of 1862 previously annotated tweets was used to assess the efficiency of the proposed model. The Bag of Words (BoW) model is applied with different term-weighting schemes for feature extraction. Eight well-known learning algorithms are investigated with varying combinations of features, including user-profile, content-based, and word features. The reported results show that Logistic Regression (LR) with Term Frequency-Inverse Document Frequency (TF-IDF) achieves the best rank. Moreover, feature selection based on the binary HHO algorithm plays a vital role in reducing dimensionality, thereby enhancing the learning model’s performance for fake news detection. Interestingly, the proposed BHHO-LR model yields a 5% improvement over previous work on the same dataset.
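The best-ranked baseline (TF-IDF features with logistic regression) is straightforward to sketch in scikit-learn. The HHO wrapper is a metaheuristic with no standard library implementation, so it is only noted in a comment; data and labels below are placeholders:

```python
# Sketch of the TF-IDF + Logistic Regression baseline the abstract reports
# as best-ranked. Training data here is placeholder text, not the corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["placeholder credible tweet text", "placeholder fake news tweet text"]
labels = [0, 1]  # assumed coding: 1 = fake, 0 = credible

pipe = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
pipe.fit(texts, labels)
print(pipe.predict(["another placeholder tweet"]))

# A binary HHO wrapper would search over 0/1 feature-subset masks, refitting
# a model on each candidate subset and keeping the best cross-validated mask.
```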


2021
pp. 1-41
Author(s):  
Donato Vese

Governments around the world are strictly regulating information on social media in the interests of addressing fake news. There is, however, a risk that the uncontrolled spread of information could increase the adverse effects of the COVID-19 health emergency through the influence of false and misleading news. Yet governments may well use health emergency regulation as a pretext for implementing draconian restrictions on the right to freedom of expression, as well as increasing social media censorship (i.e., chilling effects). This article challenges the stringent legislative and administrative measures governments have recently put in place, analysing their negative implications for the right to freedom of expression and suggesting different regulatory approaches in the context of public law. These controversial government policies are discussed in order to clarify why freedom of expression cannot be allowed to be jeopardised in the process of trying to manage fake news. Firstly, an analysis of the legal definition of fake news in academia is presented in order to establish the essential characteristics of the phenomenon (Section II). Secondly, the legislative and administrative measures implemented by governments at both international (Section III) and European Union (EU) levels (Section IV) are assessed, showing how they may undermine a core human right by curtailing freedom of expression. Then, starting from the premise of social media as a “watchdog” of democracy and moving on to the contention that fake news is a phenomenon of “mature” democracy, the article argues that public law already protects freedom of expression and ensures its effectiveness at the international and EU levels through some fundamental rules (Section V). There follows a discussion of the key regulatory approaches, and, as alternatives to government intervention, self-regulation and especially empowering users are proposed as strategies to effectively manage fake news while mitigating the risks of undue interference by regulators in the right to freedom of expression (Section VI). The article concludes by offering some remarks on the proposed solution, in particular recommending the implementation of reliability ratings on social media platforms (Section VII).


2019
Vol 43 (1)
pp. 53-71
Author(s):  
Ahmed Al-Rawi
Jacob Groshek
Li Zhang

Purpose: The purpose of this paper is to examine one of the largest data sets on the use of the #fakenews hashtag, comprising over 14m tweets sent by more than 2.4m users.

Design/methodology/approach: Tweets referencing the hashtag (#fakenews) were collected for a period of over one year from January 3 to May 7 of 2018. Bot detection tools were employed, and the most retweeted posts, most mentions and most hashtags, as well as the top 50 most active users in terms of the frequency of their tweets, were analyzed.

Findings: The majority of the top 50 Twitter users are likely to be automated bots, while posts sent by certain users, such as President Donald Trump, dominate the most retweeted posts, which consistently associate mainstream media with fake news. The most used words and hashtags show that major news organizations are frequently referenced, with a focus on CNN, which is often mentioned in negative ways.

Research limitations/implications: The study is limited to the examination of Twitter data; ethnographic methods such as interviews or surveys are needed to complement these findings. Though the data reported here do not prove direct effects, the research provides a vital framework for assessing and diagnosing the networked spammers and main actors that have been pivotal in shaping discourses around fake news on social media. These discourses, which are sometimes assisted by bots, can potentially influence audiences’ trust in mainstream media and their understanding of what fake news is.

Originality/value: This paper offers one of the first empirical research studies on the propagation of fake news discourse on social media, shedding light on the most active Twitter users who discuss and mention the term “#fakenews” in connection with other news organizations, parties and related figures.
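The descriptive aggregations reported here (most active users, most retweeted posts, most used hashtags) reduce to a few pandas operations. A sketch with assumed column names and file, not the authors' code:

```python
# Sketch of the paper's descriptive aggregations over a #fakenews tweet dump.
# Column names ("user_screen_name", "text", "retweet_count") are assumptions.
import pandas as pd

tweets = pd.read_csv("fakenews_tweets.csv")  # hypothetical file

# Top 50 most active users by tweet frequency
top_users = tweets["user_screen_name"].value_counts().head(50)

# Ten most retweeted posts
top_retweeted = tweets.nlargest(10, "retweet_count")[["text", "retweet_count"]]

# Most used co-occurring hashtags
top_hashtags = (
    tweets["text"].str.lower().str.findall(r"#\w+").explode().value_counts().head(10)
)
print(top_users, top_retweeted, top_hashtags, sep="\n\n")
```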

