An Adaptive Deep Transfer Learning Model for Rumor Detection without Sufficient Identified Rumors

2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Meicheng Guo ◽  
Zhiwei Xu ◽  
Limin Liu ◽  
Mengjie Guo ◽  
Yujun Zhang

With the extensive use of social media platforms, spam information, especially rumors, has become a serious problem on social networks. Rumors make it difficult for people to obtain credible information from the Internet and can cause social panic. Existing detection methods typically rely on a large amount of training data; however, the number of identified rumors is often insufficient for training a stable detection model. To handle this problem, we propose a deep transfer model that achieves accurate rumor detection on social media platforms. Specifically, an adaptive parameter-tuning method is proposed to mitigate negative transfer during the parameter-transfer process. Experiments on real-world datasets demonstrate that the proposed model achieves more accurate rumor detection and significantly outperforms state-of-the-art rumor detection models.
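
A minimal sketch of the parameter-transfer idea in PyTorch (layer sizes, source task, and learning rates are illustrative assumptions, not the authors' implementation): the rumor detector is initialized from a source-domain model and fine-tuned with smaller learning rates on the transferred lower layers, one simple way to limit negative transfer.

```python
import torch
from torch import nn

# hypothetical source-domain classifier, assumed pre-trained on a related,
# label-rich task (e.g., general spam detection); sizes are illustrative
source_model = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 2))

# the rumor detector starts from the transferred parameters
target_model = nn.Sequential(nn.Linear(300, 128), nn.ReLU(), nn.Linear(128, 2))
target_model.load_state_dict(source_model.state_dict())

# layer-wise learning rates: the transferred lower layer adapts slowly,
# the task-specific output layer adapts quickly, reducing negative transfer
optimizer = torch.optim.Adam([
    {"params": target_model[0].parameters(), "lr": 1e-5},
    {"params": target_model[2].parameters(), "lr": 1e-3},
])
```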

PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0256039
Author(s):  
Jiho Choi ◽  
Taewook Ko ◽  
Younhyuk Choi ◽  
Hyungho Byun ◽  
Chong-kwon Kim

Social media has become an ideal platform for the propagation of rumors, fake news, and misinformation. Rumors on social media not only mislead online users but also affect the real world immensely. Thus, detecting rumors and preventing their spread has become an essential task. Some recent deep-learning-based rumor detection methods, such as Bi-Directional Graph Convolutional Networks (Bi-GCN), represent a rumor using the completed stage of its diffusion and try to learn structural information from it. However, these methods are limited to representing rumor propagation as a static graph, which is not optimal for capturing the dynamics of rumor spread. In this study, we propose a novel graph convolutional network with an attention mechanism, named Dynamic GCN, for rumor detection. We first represent rumor posts together with their responsive posts as dynamic graphs, using temporal information to generate a sequence of graph snapshots. Representation learning over these snapshots with an attention mechanism captures both the structural and the temporal information of rumor spread. Experiments on three real-world datasets demonstrate the superiority of Dynamic GCN over state-of-the-art methods on the rumor detection task.
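
A minimal sketch of the snapshot-construction step, assuming timestamped reply/retweet edges (hypothetical variable names, not the Dynamic GCN code): the cascade is cut into cumulative snapshots whose adjacency matrices can then be fed to a GCN with attention.

```python
import numpy as np

def graph_snapshots(edges, times, n_nodes, n_snapshots=3):
    """Split a propagation cascade into cumulative temporal snapshots.

    edges : list of (parent, child) node-index pairs for replies/retweets
    times : posting time (in seconds) of each child post, aligned with edges
    """
    cuts = np.quantile(times, np.linspace(0.0, 1.0, n_snapshots + 1)[1:])
    snapshots = []
    for cut in cuts:
        adj = np.zeros((n_nodes, n_nodes))
        for (u, v), t in zip(edges, times):
            if t <= cut:
                adj[u, v] = adj[v, u] = 1.0  # undirected for simplicity
        snapshots.append(adj)
    return snapshots

# toy cascade: post 0 is the source, posts 1-3 respond over time
print(graph_snapshots([(0, 1), (0, 2), (1, 3)], [10, 60, 300], n_nodes=4)[-1])
```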


Author(s):  
Xiaoyu Yang ◽  
Yuefei Lyu ◽  
Tian Tian ◽  
Yifei Liu ◽  
Yudong Liu ◽  
...  

The wide spread of rumors on social media has had tremendous effects in both the online and offline worlds. In addition to text information, recent detection methods have begun to exploit the graph structure of the propagation network. However, without a rigorous design, rumors may evade such graph models through various camouflage strategies that perturb the structured data. Our focus in this work is to develop a robust graph-based detector that identifies rumors on social media from an adversarial perspective. We first build a heterogeneous information network to model the rich information among users, posts, and user comments for detection. We then propose a graph adversarial learning framework in which the attacker dynamically adds intentional perturbations to the graph structure to fool the detector, while the detector learns more distinctive structural features to resist such perturbations. In this way, our model is enhanced in both robustness and generalization. Experiments on real-world datasets demonstrate that our model achieves better results than state-of-the-art methods.
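
A minimal sketch of the min-max idea on a toy graph (PyTorch; the hand-rolled GCN-style layer, synthetic data, and FGSM-style structural perturbation are assumptions, not the authors' framework): the attacker perturbs the adjacency matrix to raise the detector's loss, and the detector trains on both the clean and the attacked structure.

```python
import torch
import torch.nn.functional as F

def normalize(a):
    a = a + torch.eye(a.size(0))                # add self-loops
    d_inv = torch.diag(a.sum(1).pow(-0.5))
    return d_inv @ a @ d_inv                    # symmetric normalization

class Detector(torch.nn.Module):                # one GCN-style layer
    def __init__(self, n_feats, n_classes):
        super().__init__()
        self.lin = torch.nn.Linear(n_feats, n_classes, bias=False)
    def forward(self, a_norm, x):
        return self.lin(a_norm @ x)

n, f = 30, 16
x, y = torch.randn(n, f), torch.randint(0, 2, (n,))   # synthetic node features/labels
adj = torch.triu((torch.rand(n, n) < 0.1).float(), 1)
adj = adj + adj.t()

det = Detector(f, 2)
opt = torch.optim.Adam(det.parameters(), lr=0.01)

for _ in range(50):
    # attacker: FGSM-style perturbation of the structure to increase detector loss
    # (kept as a relaxed, possibly asymmetric perturbation for simplicity)
    pert = torch.zeros_like(adj, requires_grad=True)
    att_loss = F.cross_entropy(det(normalize(adj + pert), x), y)
    grad, = torch.autograd.grad(att_loss, pert)
    attacked = (adj + 0.1 * grad.sign()).clamp(0, 1).detach()

    # detector: learn to resist by training on clean and attacked graphs
    opt.zero_grad()
    loss = F.cross_entropy(det(normalize(adj), x), y) \
         + F.cross_entropy(det(normalize(attacked), x), y)
    loss.backward()
    opt.step()
```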


2021 ◽  
Author(s):  
Hansi Hettiarachchi ◽  
Mariam Adedoyin-Olowe ◽  
Jagdev Bhogal ◽  
Mohamed Medhat Gaber

Social media is becoming a primary medium for discussing what is happening around the world, so the data generated by social media platforms contain rich information describing ongoing events. Further, the timeliness of these data facilitates immediate insights. However, given the dynamic nature and high volume of social media data streams, it is impractical to filter events manually, and automated event detection mechanisms are therefore invaluable to the community. Apart from a few notable exceptions, most previous research on automated event detection has focused only on statistical and syntactical features of the data and has lacked the underlying semantics, which are important for effective information retrieval from text because they represent the connections between words and their meanings. In this paper, we propose a novel method, termed Embed2Detect, for event detection in social media that combines the characteristics of word embeddings with hierarchical agglomerative clustering. The adoption of word embeddings gives Embed2Detect the ability to incorporate powerful semantic features into event detection and overcome a major limitation of previous approaches. We evaluated our method on two recent real-world social media datasets covering the sports and political domains and compared the results with several state-of-the-art methods. The results show that Embed2Detect is capable of effective and efficient event detection and outperforms recent event detection methods. On the sports dataset, Embed2Detect achieved a 27% higher F-measure than the best-performing baseline, and on the political dataset the increase was 29%.
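
A minimal sketch of the core combination, word embeddings plus hierarchical agglomerative clustering, using toy two-dimensional vectors and scikit-learn (not the Embed2Detect implementation): words whose embeddings cluster together form candidate event terms for a time window.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# toy embeddings; in practice the vectors come from models trained/updated on the stream
word_vecs = {"goal": [0.9, 0.1], "score": [0.8, 0.2],
             "election": [0.1, 0.9], "vote": [0.2, 0.8]}
words = list(word_vecs)
X = np.array([word_vecs[w] for w in words])

# hierarchical agglomerative clustering over the embedding space;
# each resulting cluster is a candidate event for the current time window
labels = AgglomerativeClustering(n_clusters=2, linkage="average").fit_predict(X)
for w, label in zip(words, labels):
    print(w, "-> event cluster", label)
```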


2020 ◽  
Vol 8 (4) ◽  
pp. 47-62
Author(s):  
Francisca Oladipo ◽  
Ogunsanya, F. B ◽  
Musa, A. E. ◽  
Ogbuju, E. E ◽  
Ariwa, E.

The social media space has evolved into a vast labyrinth of information exchange, and with the growth in adoption of different social media platforms there has been an increasing wave of interest in sentiment analysis as a paradigm for mining and analysing users' opinions and sentiments from their posts. In this paper, we present a review of contextual sentiment analysis of social media entries, with a specific focus on Twitter. Sentiment analysis comprises two broad approaches: the machine learning approach, which uses classification techniques to classify text and is further categorised into supervised and unsupervised learning; and the lexicon-based approach, which uses a dictionary and, unlike the machine learning approach, requires no training or test dataset.
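
For example, a minimal lexicon-based scorer (with a made-up polarity dictionary; real systems use resources such as SentiWordNet or VADER) needs no training data at all:

```python
# illustrative polarity lexicon; real lexicons are far larger
LEXICON = {"good": 1, "great": 2, "love": 2, "bad": -1, "terrible": -2, "hate": -2}

def lexicon_sentiment(tweet):
    score = sum(LEXICON.get(token, 0) for token in tweet.lower().split())
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(lexicon_sentiment("I love this great phone"))    # positive
print(lexicon_sentiment("terrible battery, hate it"))  # negative
```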


2021 ◽  
Vol 13 (10) ◽  
pp. 244
Author(s):  
Mohammed N. Alenezi ◽  
Zainab M. Alqenaei

Social media platforms such as Facebook, Instagram, and Twitter are an inevitable part of our daily lives. They are effective tools for disseminating news, photos, and other types of information, but alongside their convenience they are often used to propagate malicious data or information. This misinformation may misguide users and even have a dangerous impact on a society's culture, economy, and healthcare. The propagation of such an enormous amount of misinformation is difficult to counter; in particular, the spread of misinformation about the COVID-19 pandemic and its treatment and vaccination may create severe challenges for each country's frontline workers. It is therefore essential to build effective machine-learning (ML) models for identifying misinformation about COVID-19. In this paper, we propose three misinformation detection models: a long short-term memory (LSTM) network, which is a special type of recurrent neural network; a multichannel convolutional neural network (MC-CNN); and k-nearest neighbors (KNN). Simulations were conducted to evaluate the performance of the proposed models in terms of various evaluation metrics, and the proposed models obtained superior results to those reported in the literature.
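
A minimal sketch of an LSTM text classifier of the kind described, in Keras (the hyperparameters and preprocessing are illustrative assumptions, not the paper's settings):

```python
from tensorflow.keras import layers, models

vocab_size = 20000                           # assumed tokenizer vocabulary size
model = models.Sequential([
    layers.Embedding(vocab_size, 128),       # token ids -> dense word vectors
    layers.LSTM(64),                         # sequence encoder
    layers.Dense(1, activation="sigmoid"),   # 1 = misinformation, 0 = reliable
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(padded_token_ids, labels, epochs=5, validation_split=0.1)
```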


Author(s):  
Panpan Zheng ◽  
Shuhan Yuan ◽  
Xintao Wu

Many online platforms have deployed anti-fraud systems to detect and prevent fraudulent activities. However, there is usually a gap between the time a user commits a fraudulent action and the time the user is suspended by the platform, so detecting fraudsters in time is a challenging problem. Most existing approaches adopt classifiers that predict fraudsters from their activity sequences over time; the main drawback of classification models is that predictions at consecutive timestamps are often inconsistent. In this paper, we propose a survival-analysis-based fraud early detection model, SAFE, which maps dynamic user activities to survival probabilities that are guaranteed to be monotonically decreasing over time. SAFE adopts a recurrent neural network (RNN) to handle user activity sequences and directly outputs a hazard value at each timestamp; the survival probability derived from these hazard values is then used to produce consistent predictions. Because the training data contain only the time a user was suspended, not the time of the fraudulent activity, we revise the loss function of the standard survival model to achieve fraud early detection. Experimental results on two real-world datasets demonstrate that SAFE outperforms both a survival analysis model and a recurrent neural network model alone, as well as state-of-the-art fraud early detection approaches.
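
For illustration, discrete-time hazards can be turned into a monotonically non-increasing survival curve as follows (toy numbers and threshold, not the SAFE model itself):

```python
import numpy as np

# per-timestamp hazard values, e.g. emitted by an RNN over a user's activity sequence
hazards = np.array([0.05, 0.08, 0.20, 0.35])

# discrete-time survival: S(t) = prod_{i <= t} (1 - h_i); non-increasing by construction
survival = np.cumprod(1.0 - hazards)
print(survival.round(3))                   # [0.95  0.874 0.699 0.454]

# raise a fraud alert the first time S(t) falls below a chosen threshold
print(int(np.argmax(survival < 0.7)))      # 2
```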


Symmetry ◽  
2020 ◽  
Vol 12 (11) ◽  
pp. 1806
Author(s):  
Zunwang Ke ◽  
Zhe Li ◽  
Chenzhi Zhou ◽  
Jiabao Sheng ◽  
Wushour Silamu ◽  
...  

Social media has had a revolutionary impact because it provides an ideal platform for sharing information; however, it also enables the publication and spread of rumors. Existing rumor detection methods have relied on cues from user-generated content, user profiles, or the structure of wide propagation alone, and previous work has ignored the organic combination of wide-dispersion propagation structures and text semantics. To this end, we propose KZWANG, a rumor detection framework that provides sufficient domain knowledge to classify rumors accurately by symmetrically fusing semantic information with a heterogeneous propagation graph. We utilize an attention mechanism to learn a semantic representation of the text and introduce a graph convolutional network (GCN) to capture the global and local relationships among all source microblogs, reposts, and users. The organic combination of text semantics and the heterogeneous propagation graph is then used to train a rumor detection classifier. Experiments on the Sina Weibo, Twitter15, and Twitter16 rumor detection datasets demonstrate the proposed model's superiority over baseline methods. We also conduct an ablation study to understand the relative contributions of the various aspects of the proposed method.
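
A minimal sketch of the attention-based text representation step (NumPy, with hypothetical shapes; the propagation-graph GCN side is omitted): token embeddings are pooled with softmax attention weights, and the resulting text vector would then be fused with the graph representation.

```python
import numpy as np

def attention_pool(token_vecs, query):
    """Attention-weighted average of token embeddings -> one text vector."""
    scores = token_vecs @ query                  # (T,) unnormalized attention scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # softmax over tokens
    return weights @ token_vecs                  # (d,) semantic representation

tokens = np.random.randn(12, 64)                 # 12 tokens, 64-dim embeddings
query = np.random.randn(64)                      # a learned query vector in practice
text_vec = attention_pool(tokens, query)

# in the full model this vector would be fused (e.g., concatenated) with a GCN output
graph_vec = np.random.randn(64)
fused = np.concatenate([text_vec, graph_vec])    # input to the rumor classifier
```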


2020 ◽  
Vol 34 (01) ◽  
pp. 516-523 ◽  
Author(s):  
Yaqing Wang ◽  
Weifeng Yang ◽  
Fenglong Ma ◽  
Jin Xu ◽  
Bin Zhong ◽  
...  

Today, social media has become a primary source of news. Via social media platforms, fake news travels at unprecedented speed, reaches global audiences, and puts users and communities at great risk, so it is extremely important to detect fake news as early as possible. Recently, deep-learning-based approaches have shown improved performance in fake news detection. However, training such models requires a large amount of labeled data, and manual annotation is time-consuming and expensive. Moreover, due to the dynamic nature of news, annotated samples may quickly become outdated and fail to represent articles about newly emerged events. How to obtain fresh, high-quality labeled samples is therefore the major challenge in employing deep learning models for fake news detection. To tackle this challenge, we propose a reinforced weakly supervised fake news detection framework, WeFEND, which leverages users' reports as weak supervision to enlarge the amount of training data. The proposed framework consists of three main components: the annotator, the reinforced selector, and the fake news detector. The annotator automatically assigns weak labels to unlabeled news based on users' reports. The reinforced selector uses reinforcement learning techniques to choose high-quality samples from the weakly labeled data and filter out low-quality ones that may degrade the detector's prediction performance. The fake news detector identifies fake news based on the news content. We tested the proposed framework on a large collection of news articles published via WeChat official accounts, together with the associated user reports. Extensive experiments on this dataset show that WeFEND achieves the best performance compared with state-of-the-art methods.
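
A minimal sketch of the weak-annotation idea (the report-count threshold is a hypothetical rule, not the WeFEND annotator): unlabeled articles receive weak labels from user reports and would then be screened by the selector before training the detector.

```python
# hypothetical weak annotator: user reports -> weak labels for unlabeled news
def weak_label(article_id, reports, min_reports=3):
    """Weakly label an article as fake (1) if enough users reported it, else real (0).

    reports: dict mapping article_id -> list of user report texts
    """
    return 1 if len(reports.get(article_id, [])) >= min_reports else 0

reports = {"news_42": ["this is fabricated", "fake", "made-up story"]}
print(weak_label("news_42", reports))   # 1 -> candidate for the weakly labeled pool
print(weak_label("news_07", reports))   # 0
```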


2020 ◽  
Vol 10 (10) ◽  
pp. 2446-2451
Author(s):  
Hussain Ahmad ◽  
Muhammad Zubair Asghar ◽  
Fahad M. Alotaibi ◽  
Ibrahim A. Hameed

In social media, depression identification can be regarded as a complex task because of the complicated nature of mental disorders. This research area has evolved in recent times with the growing popularity of social media platforms, which have become a fundamental part of people's day-to-day lives; because of the close relationship between platforms and their users, users' personal lives are reflected on these platforms at several levels. Beyond the inherent complexity of recognising mental illness via social media, supervised machine learning approaches such as deep neural networks have yet to be adopted at large scale because of the difficulty of procuring sufficient quantities of annotated training data. For these reasons, we set out to identify the most effective deep learning model from among selected architectures with a successful track record in supervised learning. The selected model is employed to recognise online users who display signs of depression, given the limited unstructured text data that can be extracted from Twitter.


2021 ◽  
Vol 40 ◽  
pp. 03003
Author(s):  
Prasad Kulkarni ◽  
Suyash Karwande ◽  
Rhucha Keskar ◽  
Prashant Kale ◽  
Sumitra Iyer

In this modern age, where the internet is pervasive, everyone depends on various online resources for news. With the increased use of social media platforms such as Facebook and Twitter, news spreads quickly among millions of users in a short time. The consequences of fake news are far-reaching, from swaying election outcomes in favor of certain candidates to creating biased opinions. WhatsApp, Instagram, and many other social media platforms are major channels for spreading fake news. This work provides a solution by introducing a fake news detection model built with machine learning. The model requires data extracted from various news websites; a web scraping technique is used for data extraction, and the extracted data are used to create datasets. The data are divided into two major categories: a true dataset and a false dataset. The classifiers used are Random Forest, Logistic Regression, Decision Tree, KNN, and Gradient Boosting. Based on the classifier output, an item is labeled as true or false, and the user can check on the web server whether a given news item is fake.
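
A minimal sketch of such a pipeline with scikit-learn (toy sentences stand in for the scraped datasets; any of the listed classifiers can be swapped into the final step):

```python
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# toy corpus standing in for the scraped true/false datasets
texts = ["official results confirm the vote count",
         "celebrity secretly replaced by a clone, sources say",
         "city council approves new budget",
         "miracle cure hidden by doctors goes viral"]
labels = [0, 1, 0, 1]                     # 0 = true news, 1 = fake news

clf = Pipeline([("tfidf", TfidfVectorizer()),       # text -> TF-IDF features
                ("model", LogisticRegression())])   # or RandomForest, KNN, etc.
clf.fit(texts, labels)
print(clf.predict(["doctors hide miracle cure, insiders claim"]))   # likely [1]
```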

