Fake News And Tampered Image Detection In Social Networks Using Machine Learning

Author(s):  
S Devi ◽  
V Karthik ◽  
S Baga Vathi Bavatharani ◽  
K Indhumadhi
Technologies ◽  
2020 ◽  
Vol 8 (4) ◽  
pp. 64
Author(s):  
Panagiotis Kantartopoulos ◽  
Nikolaos Pitropakis ◽  
Alexios Mylonas ◽  
Nicolas Kylilis

Social media has become very popular and important in people’s lives, as personal ideas, beliefs and opinions are expressed and shared through it. Unfortunately, social networks, and specifically Twitter, suffer from the massive presence and perpetual creation of fake users. Their goal is to deceive other users through various methods, or even to create a stream of fake news and opinions in order to push a particular view on a specific subject, thus impairing the platform’s integrity. As such, machine learning techniques have been widely used in social networks to address this type of threat by automatically identifying fake accounts. Nonetheless, threat actors update their arsenal and launch a range of sophisticated attacks to undermine this detection procedure, either during the training or the test phase, rendering machine learning algorithms vulnerable to adversarial attacks. Our work examines the propagation of adversarial attacks against AdaBoost-based machine learning detection of fake Twitter accounts. Moreover, we propose and evaluate the use of k-NN as a countermeasure to remedy the effects of the adversarial attacks that we have implemented.
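The k-NN countermeasure idea can be sketched as label sanitization: an adversary who flips training labels (a poisoning attack) can be counteracted by relabeling each training sample with the majority label of its nearest neighbours. This is a minimal illustration of the general idea only; the account features, dataset, and the exact sanitization rule are illustrative assumptions, not the paper's actual pipeline.

```python
# Sketch: a label-flipping poisoning attack and a k-NN-based label
# sanitization step. Features and thresholds are synthetic assumptions.
from collections import Counter

def knn_relabel(X, y, k=3):
    """Relabel each sample by the majority label of its k nearest
    neighbours (excluding itself), smoothing out flipped labels."""
    cleaned = []
    for i, xi in enumerate(X):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(xi, xj)), y[j])
            for j, xj in enumerate(X) if j != i
        )
        votes = Counter(label for _, label in dists[:k])
        cleaned.append(votes.most_common(1)[0][0])
    return cleaned

# Synthetic account features: (followers/following ratio, tweets/day).
# Label 1 = fake, 0 = genuine.
X = [(0.1, 200), (0.2, 180), (0.15, 210), (0.12, 190),  # bot-like cluster
     (5.0, 3), (4.0, 5), (6.0, 2), (5.5, 4)]            # genuine cluster
y = [1, 1, 1, 1, 0, 0, 0, 0]

poisoned = list(y)
poisoned[1] = 0          # adversary flips one training label
cleaned = knn_relabel(X, poisoned, k=3)
print(cleaned)           # the flipped label is restored by its neighbours
```

A downstream classifier (AdaBoost in the paper) would then be trained on the sanitized labels rather than the poisoned ones.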


Author(s):  
Nisha P. Shetty ◽  
Balachandra Muniyal ◽  
Arshia Anand ◽  
Sushant Kumar

Sybil accounts are proliferating on popular social networking sites such as Twitter and Facebook, owing to cheap subscriptions and easy access to large audiences. A malicious person creates multiple fake identities to extend and grow his network. People blindly trust their online connections and fall into traps set up by these fake perpetrators. Sybil nodes exploit an OSN’s ready-made connectivity to spread fake news, send spam, influence polls, recommendations and advertisements, masquerade to obtain critical information, launch phishing attacks, and so on. Such accounts are surging on a wide scale, so it has become vital to detect them effectively. In this research a new classifier (a combination of SybilGuard, Twitter engagement rate and a profile-statistics analyser) is developed to combat such Sybil nodes. The proposed classifier overcomes the limitations of structure-based, machine-learning-based and behaviour-based classifiers and proves more accurate and robust than the base SybilGuard algorithm.
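Two of the three signals combined above (engagement rate and profile statistics) can be sketched as follows. The thresholds, field names and combination rule are illustrative assumptions, not the paper's actual parameters.

```python
# Sketch of an engagement-rate signal and a profile-statistics score.
# All thresholds are made-up illustrations.

def engagement_rate(likes, retweets, replies, followers):
    """Interactions per follower; Sybil accounts tend to score low."""
    if followers == 0:
        return 0.0
    return (likes + retweets + replies) / followers

def profile_score(account):
    """Count suspicious profile traits (more traits -> more Sybil-like)."""
    flags = 0
    if account["followers"] < 30:            # tiny audience
        flags += 1
    if account["following"] > 20 * max(account["followers"], 1):
        flags += 1                           # follows far more than followed
    if not account["has_profile_image"]:
        flags += 1
    return flags

suspect = {"followers": 10, "following": 900, "has_profile_image": False}
rate = engagement_rate(likes=2, retweets=0, replies=1, followers=10)
is_sybil = profile_score(suspect) >= 2 and rate < 0.5
print(is_sybil)   # True
```

A combined classifier would fuse these scores with a structure-based signal such as SybilGuard's random-route test rather than rely on either alone.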


2021 ◽  
Author(s):  
Jaouhar Fattahi ◽  
Mohamed Mejri ◽  
Marwa Ziadia

Propaganda, defamation, abuse, insults, disinformation and fake news are not new phenomena and have been around for several decades. However, with the advent of the Internet and social networks, their magnitude has grown, and the damage caused to individuals and corporate entities is increasingly severe, sometimes irreparable. In this paper, we tackle the detection of text-based cyberpropaganda using Machine Learning and NLP techniques. We use the eXtreme Gradient Boosting (XGBoost) algorithm for learning and detection, in tandem with Bag-of-Words (BoW) and Term Frequency-Inverse Document Frequency (TF-IDF) for text vectorization. We highlight the contribution of gradient boosting and regularization mechanisms to the performance of the explored model.
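The TF-IDF vectorization step can be sketched from scratch on toy documents. In practice one would use a library vectorizer and feed the resulting matrix to `xgboost.XGBClassifier`; the documents and tokenization below are illustrative assumptions.

```python
# Minimal TF-IDF sketch: term frequency scaled by inverse document
# frequency, so terms common across documents are down-weighted.
import math
from collections import Counter

def tfidf(docs):
    """Return a list of {term: tf-idf weight} dicts, one per document."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    df = Counter(t for doc in tokenized for t in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({
            t: (tf[t] / len(doc)) * math.log(n / df[t])
            for t in tf
        })
    return vectors

docs = [
    "shocking secret cure revealed",
    "officials confirm new policy",
    "shocking hoax spreads online",
]
vecs = tfidf(docs)
# "shocking" appears in 2 of 3 documents, so it carries less weight
# than terms unique to a single document.
print(vecs[0]["shocking"] < vecs[0]["secret"])  # True
```

The gradient-boosted trees then learn which weighted terms discriminate propaganda from legitimate text, with XGBoost's regularization penalizing overly complex trees.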


Author(s):  
S. W. Kwon ◽  
I. S. Song ◽  
S. W. Lee ◽  
J. S. Lee ◽  
J. H. Kim ◽  
...  

Author(s):  
V.T Priyanga ◽  
J.P Sanjanasri ◽  
Vijay Krishna Menon ◽  
E.A Gopalakrishnan ◽  
K.P Soman

The widespread use of social media like Facebook, Twitter, WhatsApp, etc. has changed the way news is created and published; accessing news has become easy and inexpensive. However, the scale of usage and the inability to moderate content have made social media a breeding ground for the circulation of fake news. Fake news is deliberately created either to increase readership or to disrupt social order for political and commercial benefit. It is of paramount importance to identify and filter out fake news, especially in democratic societies. Most existing methods for detecting fake news involve traditional supervised machine learning, which has been quite ineffective. In this paper, we analyze word embedding features that can tell apart fake news from true news. We use the LIAR and ISOT datasets. We extract highly correlated news items from the entire data set using cosine similarity and other such metrics, in order to distinguish their domains based on central topics. We then employ auto-encoders to detect and differentiate between true and fake news while also exploring their separability through network analysis.
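The cosine-similarity filtering step can be sketched as follows. The vectors here are toy embeddings and the threshold is an assumption; the paper derives its vectors from word-embedding features of the LIAR and ISOT data.

```python
# Sketch: keep only items whose embedding is highly similar to an
# anchor vector, i.e. items sharing the same central topic.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def filter_correlated(vectors, anchor, threshold=0.9):
    """Indices of vectors whose cosine similarity to the anchor
    exceeds the threshold."""
    return [i for i, v in enumerate(vectors)
            if cosine(v, anchor) > threshold]

embeddings = [
    (0.9, 0.1, 0.0),    # politics-like item
    (0.85, 0.2, 0.05),  # politics-like item
    (0.0, 0.1, 0.95),   # sports-like item
]
anchor = (1.0, 0.1, 0.0)
print(filter_correlated(embeddings, anchor))  # [0, 1]
```

The auto-encoder stage would then be trained on such topic-coherent subsets, so reconstruction error reflects fake-vs-true structure rather than topic shift.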


2021 ◽  
Vol 13 (3) ◽  
pp. 76
Author(s):  
Quintino Francesco Lotito ◽  
Davide Zanella ◽  
Paolo Casari

The pervasiveness of online social networks has reshaped the way people access information. Online social networks make it common for users to inform themselves online and share news among their peers, but also favor the spreading of reliable and fake news alike. Because fake news may have a profound impact on society at large, realistically simulating its spreading process helps evaluate the most effective countermeasures to adopt. It is customary to model the spreading of fake news via the same epidemic models used for common diseases; however, these models often miss concepts and dynamics that are peculiar to fake news spreading. In this paper, we fill this gap by enriching typical epidemic models for fake news spreading with network topologies and dynamics that are typical of realistic social networks. Specifically, we introduce agents with the role of influencers and bots in the model and consider the effects of dynamical network access patterns, time-varying engagement, and different degrees of trust in the sources of circulating information. These factors together make the simulations more realistic. Among other results, we show that influencers that share fake news help the spreading process reach nodes that would otherwise remain unaffected. Moreover, we emphasize that bots dramatically speed up the spreading process and that time-varying engagement and network access change the effectiveness of fake news spreading.
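The influencer effect can be sketched with a deliberately simplified, deterministic epidemic step on a toy graph: seeding an influencer lets the news reach an audience that an ordinary spreader can never touch. The graph and the all-neighbours infection rule are illustrative assumptions, far simpler than the paper's stochastic model.

```python
# Sketch: deterministic SI-style spreading, with and without an
# influencer seed. Graph topology is a made-up illustration.

def spread(adj, seeds, steps):
    """Every neighbour of an infected node becomes infected each step."""
    infected = set(seeds)
    for _ in range(steps):
        infected |= {v for u in infected for v in adj[u]}
    return infected

# Node 0 is a low-degree user on a chain; node 4 is an influencer
# connected to an otherwise unreachable audience {5, 6, 7}.
adj = {
    0: [1], 1: [0, 2], 2: [1, 3], 3: [2],
    4: [5, 6, 7], 5: [4], 6: [4], 7: [4],
}

without_influencer = spread(adj, {0}, steps=3)
with_influencer = spread(adj, {0, 4}, steps=3)
print(sorted(with_influencer - without_influencer))  # [4, 5, 6, 7]
```

A fuller model would add per-edge trust, time-varying engagement probabilities, and bot agents that re-seed the infection, as the paper describes.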


Author(s):  
Giandomenico Di Domenico ◽  
Annamaria Tuan ◽  
Marco Visentin

In the wake of the COVID-19 pandemic, unprecedented amounts of fake news and hoaxes spread on social media. In particular, conspiracy theories about the effects of new technologies such as 5G, together with misinformation, tarnished the reputation of brands like Huawei. Language plays a crucial role in understanding the motivational determinants of social media users in sharing misinformation, as people extract meaning from information based on their discursive resources and their skillset. In this paper, we analyze textual and non-textual cues from a panel of 4923 tweets containing the hashtags #5G and #Huawei during the first week of May 2020, when several countries were still adopting lockdown measures, to determine whether a tweet is retweeted and, if so, how much. Overall, through traditional logistic regression and machine learning, we found different effects of the textual and non-textual cues on the retweeting of a tweet and on its ability to accumulate retweets. In particular, the presence of misinformation plays an interesting role in spreading the tweet on the network. More importantly, the relative influence of the cues suggests that Twitter users actually read a tweet but do not necessarily understand or critically evaluate it before deciding to share it on the platform.
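The retweeted-or-not stage can be sketched as a logistic regression over textual and non-textual cues. The cue names and weights below are made-up illustrations; the paper estimates its coefficients from the panel of 4923 tweets.

```python
# Sketch: logistic regression scoring of a tweet's retweet probability.
# Features and weights are illustrative assumptions only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def p_retweet(features, weights, bias):
    """Probability that a tweet is retweeted at least once."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return sigmoid(z)

# Hypothetical cues: [has_misinformation, has_url, log(author_followers)]
weights = [0.8, 0.5, 0.6]
bias = -3.0

quiet = p_retweet([0, 0, 2.0], weights, bias)
viral = p_retweet([1, 1, 5.0], weights, bias)
print(viral > quiet)  # True
```

A second-stage count model (how many retweets, given at least one) would complete the two-part structure the abstract describes.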

