Linguistic-Based Detection of Fake News in Social Media

2021, Vol 11 (1), pp. 99
Author(s): Mohammad Mahyoob, Jeehaan Algaraady, Musaad Alrahaili

The tremendous growth and impact of fake news have attracted public attention and threatened public safety in recent years, making it an active research field. A wide range of approaches has been developed to detect fake content, both human-based and machine-based, yet both have shown inadequacies and limitations, especially fully automatic approaches. The purpose of this analytic study of media news language is to investigate and identify linguistic features and their contribution to analyzing data in order to detect, filter, and differentiate between fake and authentic news texts. The study outlines promising uses of linguistic indicators and adds a rather unconventional outlook to the prior literature. It uses qualitative and quantitative data analysis to identify systematic nuances between fake and factual news, detecting and comparing 16 attributes under three main categories of linguistic features (lexical, grammatical, and syntactic) assigned manually to news texts. The data sets consist of publicly available fact-checked documents from the PolitiFact website and a raw (test) data set collected randomly from news posts on Facebook pages. The results show that linguistic features, especially grammatical features, help identify untrustworthy texts, and they indicate that most of the test news items tend to be unreliable articles.
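
The abstract does not enumerate the 16 attributes, so the sketch below is only a hypothetical illustration of how a few lexical and grammatical indicators of the kind described (word length, lexical diversity, modal-verb and pronoun rates) might be computed for one news text; the feature names and lists are assumptions, not the authors' feature set.

```python
# Hypothetical sketch: a few lexical/grammatical indicators of the kind the study
# describes. The chosen features are illustrative, not the paper's 16 attributes.
import re
from collections import Counter

MODALS = {"can", "could", "may", "might", "must", "shall", "should", "will", "would"}
FIRST_PERSON = {"i", "we", "me", "us", "my", "our"}

def linguistic_profile(text: str) -> dict:
    """Return a small dictionary of lexical/grammatical ratios for one news text."""
    tokens = re.findall(r"[A-Za-z']+", text.lower())
    n = max(len(tokens), 1)
    counts = Counter(tokens)
    return {
        "avg_word_length": sum(len(t) for t in tokens) / n,
        "type_token_ratio": len(counts) / n,                    # lexical diversity
        "modal_verb_rate": sum(counts[m] for m in MODALS) / n,  # grammatical cue
        "first_person_rate": sum(counts[p] for p in FIRST_PERSON) / n,
        "exclamation_rate": text.count("!") / n,                # punctuation cue
    }

if __name__ == "__main__":
    sample = "We must act now! Experts say the shocking truth will change everything."
    print(linguistic_profile(sample))
```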


2017
Author(s): João C. Marques, Michael B. Orger

Abstract: How to partition a data set into a set of distinct clusters is a ubiquitous and challenging problem. The fact that data vary widely in features such as cluster shape, cluster number, density distribution, background noise, outliers, and degree of overlap makes it difficult to find a single algorithm that can be broadly applied. One recent method, clusterdp, based on a search for density peaks, can be applied successfully to cluster many kinds of data, but it is not fully automatic and fails on some simple data distributions. We propose an alternative approach, clusterdv, which estimates density dips between points and allows robust determination of cluster number and distribution across a wide range of data without any manual parameter adjustment. We show that this method is able to solve a range of synthetic and experimental data sets, where the underlying structure is known, and identifies consistent and meaningful clusters in new behavioral data.

Author summary: It is common for natural phenomena to produce groupings, or clusters, in data that can reveal the underlying processes. However, the form of these clusters can vary arbitrarily, making it challenging to find a single algorithm that identifies their structure correctly without prior knowledge of the number of groupings or their distribution. We describe a simple clustering algorithm that is fully automatic and able to correctly identify the number and shape of groupings in data of many types. We expect this algorithm to be useful in finding unknown natural phenomena present in data from a wide range of scientific fields.
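
clusterdv's actual dip estimation is not specified in this abstract, so the following is only a minimal sketch of the underlying idea: estimate a kernel density and check whether the density sampled along the segment between two points falls well below the density at either endpoint. The kernel bandwidth, sampling resolution, and dip measure are assumptions, not the published algorithm.

```python
# Minimal sketch of a "density dip" test between two points, in the spirit of clusterdv
# (not the authors' algorithm). Uses a Gaussian KDE and samples density along the segment.
import numpy as np
from scipy.stats import gaussian_kde

def density_dip(x, a, b, kde=None, n_steps=50):
    """Return the relative dip of the KDE along the segment from point a to point b."""
    kde = kde or gaussian_kde(x.T)
    line = np.linspace(a, b, n_steps)        # points along the connecting segment
    dens = kde(line.T)
    endpoint_min = min(dens[0], dens[-1])
    return 1.0 - dens.min() / endpoint_min   # ~0: no dip, ~1: deep valley between a and b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = np.vstack([rng.normal(0, 0.3, (200, 2)), rng.normal(3, 0.3, (200, 2))])
    print("dip across clusters:", density_dip(x, x[0], x[250]))  # expect a large dip
    print("dip within cluster:", density_dip(x, x[0], x[10]))    # expect a small dip
```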


2022, Vol ahead-of-print (ahead-of-print)
Author(s): Krishnadas Nanath, Supriya Kaitheri, Sonia Malik, Shahid Mustafa

Purpose: The purpose of this paper is to examine the factors that significantly affect the prediction of fake news from the virality theory perspective. The paper looks at a mix of emotion-driven content, sentimental resonance, topic modeling, and linguistic features of news articles to predict the probability of fake news.

Design/methodology/approach: A data set of over 12,000 articles was chosen to develop a model for fake news detection. Machine learning algorithms and natural language processing techniques were used to handle big data efficiently. Lexicon-based emotion analysis provided eight kinds of emotions present in the article text. Clusters of topics were extracted using topic modeling (five topics), while sentiment analysis provided the resonance between the title and the text. Linguistic features were added to the coding outcomes to develop a logistic regression predictive model for testing the significant variables. Other machine learning algorithms were also executed and compared.

Findings: The results revealed that positive emotions in a text lower the probability of the news being fake. Sensational content, such as illegal activities and crime-related content, was associated with fake news. News in which the title and the text exhibited similar sentiments had a lower chance of being fake. Titles with more words and body text with fewer words were found to significantly affect fake news detection.

Practical implications: Several systems and social media platforms today are trying to implement fake news detection methods to filter content. This research provides parameters from a virality theory perspective that could help develop automated fake news detectors.

Originality/value: While several studies have explored fake news detection, this study uses a new perspective on virality theory. It also introduces new parameters, such as sentimental resonance, that could help predict fake news. The study deals with an extensive data set and uses advanced natural language processing to automate the coding techniques in developing the prediction model.
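
The paper's exact lexicon, topic model, and coding scheme are not given here, so the block below is only a schematic sketch of the modelling step: combining hypothetical emotion scores, a title-text sentiment-resonance value, and simple length features in a logistic regression. The feature names and the example data are invented for illustration.

```python
# Schematic sketch of the modelling step described in the abstract: a logistic regression
# over emotion scores, title-text sentiment resonance, and length features.
# The feature values below are random placeholders, not the paper's data set.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500

# Hypothetical per-article features (in the real study these come from lexicon-based
# emotion analysis, topic modeling, and sentiment analysis of title vs. body text).
X = np.column_stack([
    rng.uniform(0, 1, n),        # positive-emotion score
    rng.uniform(0, 1, n),        # negative-emotion score
    rng.uniform(-1, 1, n),       # sentiment resonance between title and body
    rng.integers(5, 20, n),      # title word count
    rng.integers(100, 900, n),   # body word count
])
y = rng.integers(0, 2, n)        # 1 = fake, 0 = real (placeholder labels)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
print("coefficients:", dict(zip(
    ["pos_emotion", "neg_emotion", "resonance", "title_len", "body_len"],
    model.coef_[0].round(3))))
```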


2019, Vol 16 (7), pp. 808-817
Author(s): Laxmi Banjare, Sant Kumar Verma, Akhlesh Kumar Jain, Suresh Thareja

Background: In spite of the availability of various treatment approaches, including surgery, radiotherapy, and hormonal therapy, steroidal aromatase inhibitors (SAIs) play a significant role as chemotherapeutic agents for the treatment of estrogen-dependent breast cancer, with the benefit of a reduced risk of recurrence. However, due to the greater toxicity and side effects associated with currently available anti-breast cancer agents, there is an emergent requirement to develop target-specific AIs with a safer anti-breast cancer profile.

Methods: Designing target-specific and less toxic SAIs is a challenging task, although molecular modeling tools, namely molecular docking simulations and QSAR, have been used for more than two decades for the fast and efficient design of novel, selective, potent, and safe molecules against various biological targets. In order to design novel and selective SAIs, structure-guided, molecular-docking-assisted, alignment-dependent 3D-QSAR studies were performed on a data set comprising 22 molecules bearing a steroidal scaffold with a wide range of aromatase inhibitory activity.

Results: The 3D-QSAR model developed using the molecular-weighted (MW) extent alignment approach showed better statistical quality and predictive ability than the model developed using the moments of inertia (MI) alignment approach.

Conclusion: The explored binding interactions and the generated pharmacophoric features (steric and electrostatic) of the steroidal molecules could be exploited for the further design, direct synthesis, and development of new, potentially safer SAIs that could help reduce the mortality and morbidity associated with breast cancer.
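
The abstract reports only the relative statistical quality of the MW- and MI-aligned models; the sketch below merely illustrates, with made-up descriptor data, how a QSAR-style model is commonly scored with a PLS regression and a leave-one-out cross-validated q². It is not the authors' 3D-QSAR workflow, and the descriptor matrix and activities are placeholders.

```python
# Illustrative scoring of a QSAR-style model: PLS regression on a descriptor matrix
# with leave-one-out q^2. Descriptors and activities are random placeholders,
# not the 22-molecule steroidal data set used in the paper.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(1)
X = rng.normal(size=(22, 50))      # 22 molecules x 50 field/descriptor values
y = rng.uniform(4, 9, size=22)     # hypothetical pIC50 activities

pls = PLSRegression(n_components=3)
pls.fit(X, y)
r2 = pls.score(X, y)               # fitted r^2

y_loo = cross_val_predict(pls, X, y, cv=LeaveOneOut()).ravel()
q2 = 1 - ((y - y_loo) ** 2).sum() / ((y - y.mean()) ** 2).sum()  # predictive q^2

print(f"r2 = {r2:.3f}, LOO q2 = {q2:.3f}")
```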


Author(s): Eun-Young Mun, Anne E. Ray

Integrative data analysis (IDA) is a promising new approach in psychological research and has been well received in the field of alcohol research. This chapter provides a larger unifying research synthesis framework for IDA. Major advantages of IDA of individual participant-level data include better and more flexible ways to examine subgroups, model complex relationships, deal with methodological and clinical heterogeneity, and examine infrequently occurring behaviors. However, between-study heterogeneity in measures, designs, and samples and systematic study-level missing data are significant barriers to IDA and, more broadly, to large-scale research synthesis. Based on the authors’ experience working on the Project INTEGRATE data set, which combined individual participant-level data from 24 independent college brief alcohol intervention studies, it is also recognized that IDA investigations require a wide range of expertise and considerable resources and that some minimum standards for reporting IDA studies may be needed to improve transparency and quality of evidence.
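
Project INTEGRATE's actual models are not described in this chapter summary, so the block below is only a generic sketch of the core IDA idea: pool individual participant-level records from several studies and let a study-level random effect absorb between-study heterogeneity. The variable names and simulated data are placeholders, not the alcohol-intervention data set.

```python
# Generic sketch of integrative data analysis (IDA): pool participant-level rows from
# several studies and fit a mixed-effects model with a random intercept per study.
# Data and variable names are simulated placeholders, not Project INTEGRATE data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
frames = []
for study_id in range(1, 6):                    # five hypothetical studies
    n = rng.integers(80, 150)
    frames.append(pd.DataFrame({
        "study": study_id,
        "intervention": rng.integers(0, 2, n),  # 1 = brief intervention, 0 = control
        "baseline_drinks": rng.poisson(8, n),
        "followup_drinks": rng.poisson(7, n),
    }))
pooled = pd.concat(frames, ignore_index=True)   # the pooled IDA data set

# The random intercept for 'study' absorbs between-study heterogeneity.
model = smf.mixedlm("followup_drinks ~ intervention + baseline_drinks",
                    data=pooled, groups=pooled["study"]).fit()
print(model.summary())
```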


Electronics, 2021, Vol 10 (3), pp. 348
Author(s): Choongsang Cho, Young Han Lee, Jongyoul Park, Sangkeun Lee

Semantic image segmentation has a wide range of applications. In medical image segmentation, accuracy is even more important than in other areas, because the results directly inform disease diagnosis, surgical planning, and history monitoring. The state-of-the-art models in medical image segmentation are variants of the encoder-decoder architecture known as U-Net. To effectively reflect spatial features in the feature maps of an encoder-decoder architecture, we propose a spatially adaptive weighting scheme for medical image segmentation. Specifically, the spatial feature is estimated from the feature maps, and the learned weighting parameters are obtained from the computed map, since segmentation results are predicted from the feature map through a convolutional layer. In the proposed networks, the convolutional block for extracting the feature map is replaced with widely used convolutional frameworks: VGG, ResNet, and bottleneck ResNet structures. In addition, a bilinear up-sampling method replaces the up-convolutional layer to increase the resolution of the feature map. For the performance evaluation of the proposed architecture, we used three data sets covering different medical imaging modalities. Experimental results show that the network with the proposed self-spatially adaptive weighting block based on the ResNet framework gave the highest IoU and DICE scores on the three tasks compared with other methods. In particular, the segmentation network combining the proposed self-spatially adaptive block with the ResNet framework recorded the largest improvements, 3.01% in IoU and 2.89% in DICE, on the Nerve data set. Therefore, we believe that the proposed scheme can be a useful tool for image segmentation tasks based on the encoder-decoder architecture.
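
The published block design is not reproduced here; the snippet below is only one plausible reading of a "spatially adaptive weighting" module: a 1×1 convolution estimates a per-pixel weight map from the feature maps, the features are rescaled by that map, and bilinear upsampling (rather than an up-convolution) restores resolution. Layer sizes and the module structure are assumptions, not the authors' architecture.

```python
# One plausible sketch (PyTorch) of a spatially adaptive weighting block for an
# encoder-decoder segmenter: estimate a per-pixel weight map from the features,
# rescale them, then upsample bilinearly instead of using an up-convolution.
# This is an interpretation of the abstract, not the authors' published block.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialWeightingBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv producing a single-channel spatial map, squashed to (0, 1)
        self.weight_map = nn.Sequential(nn.Conv2d(channels, 1, kernel_size=1),
                                        nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        w = self.weight_map(x)   # (N, 1, H, W) per-pixel weights
        return x * w             # re-weight every spatial position

class TinyDecoderStage(nn.Module):
    """Decoder stage: spatial weighting + bilinear upsampling + 3x3 conv."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.weighting = SpatialWeightingBlock(in_ch)
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.weighting(x)
        x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=False)
        return F.relu(self.conv(x))

if __name__ == "__main__":
    feats = torch.randn(1, 64, 32, 32)   # e.g. an encoder output
    out = TinyDecoderStage(64, 32)(feats)
    print(out.shape)                     # torch.Size([1, 32, 64, 64])
```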


Plants, 2021, Vol 10 (7), pp. 1443
Author(s): Yoshiaki Kamiyama, Sotaro Katagiri, Taishi Umezawa

Reversible phosphorylation is a major mechanism for regulating protein function and controls a wide range of cellular functions including responses to external stimuli. The plant-specific SNF1-related protein kinase 2s (SnRK2s) function as central regulators of plant growth and development, as well as tolerance to multiple abiotic stresses. Although the activity of SnRK2s is tightly regulated in a phytohormone abscisic acid (ABA)-dependent manner, recent investigations have revealed that SnRK2s can be activated by group B Raf-like protein kinases independently of ABA. Furthermore, evidence is accumulating that SnRK2s modulate plant growth through regulation of target of rapamycin (TOR) signaling. Here, we summarize recent advances in knowledge of how SnRK2s mediate plant growth and osmotic stress signaling and discuss future challenges in this research field.


2021, Vol 11 (7), pp. 3153
Author(s): Saifeddine Benhadhria, Mohamed Mansouri, Ameni Benkhlifa, Imed Gharbi, Nadhem Jlili

Multirotor drones are currently used widely in many areas of life. Their suitable size and the tasks they can perform are their main advantages. However, to the best of our knowledge, they must be controlled via a remote control to fly from one point to another, and they can only be used for a specific mission (tracking, searching, computing, and so on). In this paper, we present an autonomous UAV based on Raspberry Pi and Android. Android offers a wide range of applications for direct use by the UAV depending on the context of the assigned mission. These applications cover many areas, such as object identification, facial recognition, and counting objects such as panels and people. In addition, the proposed UAV calculates optimal trajectories, navigates autonomously without external control, detects obstacles, and provides live streaming during the mission. Experiments were carried out to test the above-mentioned capabilities.
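
The navigation stack (Raspberry Pi, Android, obstacle detection, streaming) is only summarized above, so the snippet below sketches just one of the pieces it names, trajectory planning, using a simple nearest-neighbour ordering of GPS waypoints. The haversine distance and the waypoint list are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch of one piece mentioned in the abstract: ordering GPS waypoints
# into a short tour with a nearest-neighbour heuristic (a simple "optimal trajectory"
# approximation). Not the authors' planner; the waypoints are made up.
import math

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def nearest_neighbour_route(start, waypoints):
    """Visit every waypoint, always flying to the closest unvisited one next."""
    route, current, remaining = [start], start, list(waypoints)
    while remaining:
        nxt = min(remaining, key=lambda w: haversine_km(current, w))
        remaining.remove(nxt)
        route.append(nxt)
        current = nxt
    return route

if __name__ == "__main__":
    home = (36.45, 10.73)                                 # hypothetical launch point
    targets = [(36.46, 10.75), (36.44, 10.72), (36.47, 10.70)]
    for lat, lon in nearest_neighbour_route(home, targets):
        print(f"fly to {lat:.4f}, {lon:.4f}")
```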


Author(s): Pawan Kumar Verma, Prateek Agrawal, Ivone Amorim, Radu Prodan

Author(s): V.T Priyanga, J.P Sanjanasri, Vijay Krishna Menon, E.A Gopalakrishnan, K.P Soman

The widespread use of social media like Facebook, Twitter, WhatsApp, etc. has changed the way news is created and published; accessing news has become easy and inexpensive. However, the scale of usage and the inability to moderate content have made social media a breeding ground for the circulation of fake news. Fake news is deliberately created either to increase readership or to disrupt order in society for political and commercial benefit. It is of paramount importance to identify and filter out fake news, especially in democratic societies. Most existing methods for detecting fake news involve traditional supervised machine learning, which has been quite ineffective. In this paper, we analyze word embedding features that can tell apart fake news from true news. We use the LIAR and ISOT data sets. We extract highly correlated news data from the entire data set using cosine similarity and other such metrics, in order to distinguish their domains based on central topics. We then employ auto-encoders to detect and differentiate between true and fake news while also exploring their separability through network analysis.
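
The LIAR/ISOT pipeline itself is not detailed in this abstract, so the block below only sketches the two steps it names: filtering topically related articles with cosine similarity over TF-IDF vectors, and scoring articles by the reconstruction error of a small autoencoder fitted on true-news vectors. The toy corpus, hidden-layer size, and error interpretation are assumptions, not the authors' setup.

```python
# Sketch of the two steps named in the abstract: (1) keep topically related articles via
# cosine similarity over TF-IDF vectors, (2) score articles by the reconstruction error
# of a small autoencoder fitted on true-news vectors. Corpus and sizes are toy
# assumptions, not the LIAR/ISOT pipeline.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.neural_network import MLPRegressor

true_news = ["The central bank kept interest rates unchanged on Tuesday.",
             "Parliament passed the annual budget after a long debate.",
             "The health ministry reported a drop in seasonal flu cases."]
candidates = ["Parliament approved the budget following lengthy debate.",
              "Aliens secretly control the central bank, insiders claim."]

vec = TfidfVectorizer().fit(true_news + candidates)
X_true = vec.transform(true_news).toarray()
X_cand = vec.transform(candidates).toarray()

# Step 1: keep candidates that are topically close to the reference corpus.
sims = cosine_similarity(X_cand, X_true).max(axis=1)
print("max similarity per candidate:", sims.round(2))

# Step 2: crude autoencoder (input -> small hidden layer -> input) trained on true news;
# a large reconstruction error suggests the text does not resemble the true-news domain.
ae = MLPRegressor(hidden_layer_sizes=(4,), max_iter=5000, random_state=0).fit(X_true, X_true)
errors = np.mean((ae.predict(X_cand) - X_cand) ** 2, axis=1)
print("reconstruction error per candidate:", errors.round(4))
```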

