Multi-view learning with distinguishable feature fusion for rumor detection

2022 ◽  
pp. 108085
Author(s):  
Xueqin Chen ◽  
Fan Zhou ◽  
Goce Trajcevski ◽  
Marcello Bonsangue
2020 ◽  
Vol 1601 ◽  
pp. 032032
Author(s):  
Li Tan ◽  
Zihao Ma ◽  
Juan Cao ◽  
Xinyue Lv

Information ◽  
2022 ◽  
Vol 13 (1) ◽  
pp. 25
Author(s):  
Changsong Bing ◽  
Yirong Wu ◽  
Fangmin Dong ◽  
Shouzhi Xu ◽  
Xiaodi Liu ◽  
...  

Social media has become increasingly popular due to the wide use of instant messaging. Nevertheless, rumor propagation on social media has become an increasingly important issue. The purpose of this study is to investigate the impact of various social media features on rumor detection, to propose a dual co-attention-based multi-feature fusion method for rumor detection, and to explore the capability of the proposed method in early rumor detection tasks. The proposed BERT-based Dual Co-attention Neural Network (BDCoNN) method uses BERT for word embedding and simultaneously integrates features from three sources: publishing user profiles, source tweets, and comments. In BDCoNN, discrete user features and identity descriptors in user profiles are extracted with a one-dimensional convolutional neural network (CNN) and TextCNN, respectively. A bidirectional gated recurrent unit network (BiGRU) with a hierarchical attention mechanism learns hidden representations of the tweet and comment sequences, and a dual collaborative attention mechanism explores the correlations among publishing user profiles, tweet content, and comments. The fused feature vector is then fed into a classifier to identify the implicit differences between rumor spreaders and non-rumor spreaders. We conducted several experiments on the Weibo and CED datasets collected from microblogs. The results show that the proposed method achieves state-of-the-art performance compared with baseline methods: it is 5.2% and 5% higher than dEFEND on the two datasets, and its F1 value increases by 4.4% and 4%, respectively. In addition, experiments on early rumor detection verify that the proposed method detects rumors more quickly and accurately than its competitors.
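The dual co-attention step described above can be sketched as follows. This is a minimal, illustrative NumPy sketch of co-attention between two views (for example, tweet hidden states and comment hidden states); the bilinear affinity form, the max-pooling over the affinity matrix, and all shapes and weights are assumptions for illustration, not the paper's exact BDCoNN architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(T, C, W):
    """Co-attention between two views.

    T : (n, d) hidden states of view 1 (e.g., tweet words)
    C : (m, d) hidden states of view 2 (e.g., comments)
    W : (d, d) learnable bilinear weights (random here)
    Returns one attention-pooled d-dim summary vector per view.
    """
    L = T @ W @ C.T                 # affinity matrix, (n, m)
    a_t = softmax(L.max(axis=1))    # attention over view-1 positions
    a_c = softmax(L.max(axis=0))    # attention over view-2 positions
    return a_t @ T, a_c @ C         # two d-dim summaries

d = 8
T = rng.normal(size=(5, d))   # 5 tweet-word hidden states (toy data)
C = rng.normal(size=(7, d))   # 7 comment hidden states (toy data)
W = rng.normal(size=(d, d))
t_vec, c_vec = co_attention(T, C, W)
print(t_vec.shape, c_vec.shape)  # → (8,) (8,)
```

In the full model, the two pooled summaries (together with the user-profile features) would be concatenated and passed to the classifier.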


IEEE Access ◽  
2021 ◽  
pp. 1-1
Author(s):  
Zhirui Luo ◽  
Qingqing Li ◽  
Jun Zheng

2019 ◽  
Vol 63 (5) ◽  
pp. 50402-1-50402-9 ◽  
Author(s):  
Ing-Jr Ding ◽  
Chong-Min Ruan

Abstract Acoustic-based automatic speech recognition (ASR) is a mature technique widely used in numerous applications. However, acoustic-based ASR does not maintain standard performance for disabled speakers with atypical facial characteristics, that is, atypical eye or mouth geometry. To address this problem, this article develops a three-dimensional (3D) sensor lip-image-based pronunciation recognition system, in which the 3D sensor efficiently captures the variations in a speaker's lip shape during pronunciation. In this work, two different types of 3D lip features for pronunciation recognition are presented: the 3D-(x, y, z) coordinate lip feature and 3D geometry lip feature parameters. For the 3D-(x, y, z) coordinate lip feature, 18 location points around the outer and inner lips are defined, each with 3D coordinates. For the 3D geometry lip features, eight types of features capturing the geometrical characteristics of the inner lip are developed. In addition, feature fusion combining the 3D-(x, y, z) coordinate and 3D geometry lip features is considered. The performance and effectiveness of the presented 3D sensor lip-image features are evaluated using a principal component analysis (PCA)-based classification approach. Experimental results on pronunciation recognition for two different datasets, Mandarin syllables and Mandarin phrases, demonstrate the competitive performance of the presented system.
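As a rough illustration of the PCA-based classification described above, the sketch below projects hypothetical 54-dimensional lip-coordinate vectors (18 points × x, y, z, matching the paper's coordinate feature dimensionality) onto a few principal components and classifies by nearest class centroid. The synthetic data, the number of components, and the centroid classifier are assumptions for illustration, not the paper's exact calculation approach.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: each sample is 18 lip points × (x, y, z) = 54 dims,
# with three synthetic pronunciation classes.
n_per_class, dim, n_classes = 20, 54, 3
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# PCA: project mean-centred data onto the top-k principal components.
mean = X.mean(axis=0)
Xc = X - mean
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 5
Z = Xc @ Vt[:k].T                      # (60, k) projected features

# Classification: nearest class centroid in the PCA subspace.
centroids = np.array([Z[y == c].mean(axis=0) for c in range(n_classes)])

def classify(sample):
    z = (sample - mean) @ Vt[:k].T     # project a new sample
    return int(np.argmin(np.linalg.norm(centroids - z, axis=1)))

acc = np.mean([classify(x) == c for x, c in zip(X, y)])
print(f"training accuracy: {acc:.2f}")
```

Fusing the coordinate and geometry features would amount to concatenating the two vectors per sample before the PCA projection.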


2010 ◽  
Vol 30 (3) ◽  
pp. 643-645 ◽  
Author(s):  
Wei ZENG ◽  
Gui-bin ZHU ◽  
Jie CHEN ◽  
Ding-ding TANG
