Semantic Graph Neural Network: A Conversion from Spam Email Classification to Graph Classification

2022 ◽  
Vol 2022 ◽  
pp. 1-8
Author(s):  
Weisen Pan ◽  
Jian Li ◽  
Lisa Gao ◽  
Liexiang Yue ◽  
Yan Yang ◽  
...  

In this study, we propose a method named Semantic Graph Neural Network (SGNN) to address the challenging task of email classification. The method converts the email classification problem into a graph classification problem by projecting each email into a graph and applying the SGNN model for classification. The email features are generated from the semantic graph; hence, there is no need to embed the words into a numerical vector representation. The method is evaluated on several public datasets, where it achieves high classification accuracy and outperforms state-of-the-art deep-learning-based methods for spam classification.
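The abstract does not give implementation details for the email-to-graph projection. Purely as an illustration of the general idea, the sketch below builds a simple word co-occurrence graph from an email's text with networkx; the tokenization, window size, and graph type are assumptions, not the paper's semantic graph construction.

```python
# Hypothetical sketch: project an email's text into a word co-occurrence graph,
# one possible interpretation of the "email -> graph" step described above.
import networkx as nx

def email_to_graph(text: str, window: int = 2) -> nx.Graph:
    """Nodes are words; edges connect words that appear within `window` tokens of each other."""
    tokens = text.lower().split()
    g = nx.Graph()
    g.add_nodes_from(set(tokens))
    for i, w in enumerate(tokens):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            if tokens[j] != w:           # skip self-loops from repeated words
                g.add_edge(w, tokens[j])
    return g

g = email_to_graph("claim your free prize now click the link now")
print(g.number_of_nodes(), g.number_of_edges())
```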

Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5283 ◽  
Author(s):  
Muhammad Tariq Sadiq ◽  
Xiaojun Yu ◽  
Zhaohui Yuan ◽  
Muhammad Zulkifal Aziz

The development of fast and robust brain–computer interface (BCI) systems requires non-complex and efficient computational tools. The modern procedures adopted for this purpose are complex, which limits their use in practical applications. In this study, for the first time and to the best of our knowledge, a successive decomposition index (SDI)-based feature extraction approach is utilized for the classification of motor and mental imagery electroencephalography (EEG) tasks. First, the public datasets IVa, IVb, and V from BCI competition III were denoised using multiscale principal component analysis (MSPCA), and an SDI feature was then computed for each trial of the data. Finally, six benchmark machine learning and neural network classifiers were used to evaluate the performance of the proposed method. All experiments were performed on the motor and mental imagery datasets in binary and multiclass settings using 10-fold cross-validation. Furthermore, a system for computerized automatic detection of motor and mental imagery using SDI (CADMMI-SDI) was developed to demonstrate the proposed approach in practice. The experimental results show that the highest classification accuracies of 97.46% (Dataset IVa), 99.52% (Dataset IVb), and 99.33% (Dataset V) were obtained using a feedforward neural network classifier. Moreover, a series of experiments, covering statistical analysis, channel variation, classifier parameter variation, processed versus unprocessed data, and computational complexity, was performed, from which it is concluded that SDI is robust to noise and is a non-complex, efficient biomarker for the development of fast and accurate motor and mental imagery BCI systems.
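The SDI formula itself is defined in the paper and is not reproduced here. The sketch below only illustrates the surrounding evaluation pipeline (per-trial feature extraction followed by 10-fold cross-validation with a feedforward network); `sdi_feature` is a hypothetical placeholder statistic, and the synthetic arrays stand in for the BCI competition III trials.

```python
# Pipeline sketch only: per-trial feature -> 10-fold CV with a feedforward classifier.
# `sdi_feature` is a placeholder, NOT the paper's successive decomposition index.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

def sdi_feature(trial: np.ndarray) -> np.ndarray:
    # Placeholder per-channel summary statistic standing in for the SDI.
    return np.log(np.var(trial, axis=-1) + 1e-12)

rng = np.random.default_rng(0)
X_trials = rng.standard_normal((200, 22, 512))   # trials x channels x samples (synthetic)
y = rng.integers(0, 2, size=200)                 # binary motor-imagery labels (synthetic)

X = np.stack([sdi_feature(t) for t in X_trials])
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
print(cross_val_score(clf, X, y, cv=10).mean())
```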


2020 ◽  
Vol 9 (2) ◽  
pp. 285
Author(s):  
Putu Wahyu Tirta Guna ◽  
Luh Arida Ayu Ayu Rahning Putri

Not many people know that endek cloth has four known variants. Nowadays, computational classification algorithms can be applied to this problem, taking extracted feature data as input, and this computing power can be used to digitalize endek patterns. The feature extraction algorithm used in this research is the gray-level co-occurrence matrix (GLCM); the resulting features serve as input to a neural network model. Many optimizer algorithms are available for the backpropagation phase. In this research we use Adam, one of the newest and most popular optimizers, and compare its performance against SGD, an older but widely used optimizer. We find that the Adam-trained model reaches 33% accuracy, which is better than the 23% accuracy obtained with SGD. Training for more epochs also affects overall model accuracy.
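A hedged sketch of the described pipeline: GLCM texture features fed to a small neural network, trained once with Adam and once with SGD. The images here are synthetic stand-ins for the endek cloth photographs, and the chosen GLCM properties and network size are assumptions; function names follow scikit-image 0.19+ (older releases spell them greycomatrix/greycoprops).

```python
# GLCM texture features + a small neural network, comparing the Adam and SGD optimizers.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neural_network import MLPClassifier

def glcm_features(img_u8: np.ndarray) -> np.ndarray:
    glcm = graycomatrix(img_u8, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.array([graycoprops(glcm, p)[0, 0] for p in props])

rng = np.random.default_rng(0)
imgs = rng.integers(0, 256, size=(40, 64, 64), dtype=np.uint8)  # synthetic stand-ins for endek images
y = rng.integers(0, 4, size=40)                                  # four cloth variants

X = np.stack([glcm_features(im) for im in imgs])
for solver in ("adam", "sgd"):
    clf = MLPClassifier(hidden_layer_sizes=(16,), solver=solver,
                        max_iter=1000, random_state=0).fit(X, y)
    print(solver, clf.score(X, y))
```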


Sensors ◽  
2021 ◽  
Vol 21 (21) ◽  
pp. 7414
Author(s):  
Jing Li ◽  
Haowen Zhang ◽  
Yabo Dong ◽  
Tongbin Zuo ◽  
Duanqing Xu

Traditional supervised time series classification (TSC) tasks assume that all training data are labeled. However, in practice, manually labelling all unlabeled data can be very time-consuming and often requires the participation of skilled domain experts. In this paper, we are concerned with the positive unlabeled time series classification (PUTSC) problem, which refers to automatically labelling a large unlabeled set U based on a small positive labeled set PL. Self-training (ST) is the most widely used method for solving the PUTSC problem and has attracted increased attention due to its simplicity and effectiveness. Existing ST methods simply employ the one-nearest-neighbor (1NN) rule to determine which unlabeled time series should be labeled. Nevertheless, we note that the 1NN rule might not be optimal for PUTSC tasks because it may be sensitive to initial labeled data located near the boundary between the positive and negative classes. To overcome this issue, in this paper we propose an exploratory methodology called ST-average. Unlike conventional ST-based approaches, ST-average uses the average sequence calculated by the DTW barycenter averaging (DBA) technique to label the data. Compared with any individual in the PL set, the average sequence is more representative. Our proposal is insensitive to the initial labeled data and is more reliable than existing ST-based methods. In addition, we demonstrate that ST-average can naturally be implemented alongside many techniques used in the original ST. Experimental results on public datasets show that ST-average performs better than related popular methods.
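A simplified, single-iteration illustration of the ST-average idea using tslearn: the DBA average of the positive-labeled (PL) set is computed, and the unlabeled series closest to it under DTW is labeled positive. The full method iterates this step with stopping criteria described in the paper; the data below are synthetic.

```python
# One iteration of the ST-average idea: label the unlabeled series whose DTW distance
# to the DBA average of the PL set is smallest.
import numpy as np
from tslearn.barycenters import dtw_barycenter_averaging
from tslearn.metrics import dtw

rng = np.random.default_rng(0)
PL = [np.sin(np.linspace(0, 6, 100)) + 0.1 * rng.standard_normal(100) for _ in range(5)]
U = [rng.standard_normal(100) for _ in range(20)]          # unlabeled pool (synthetic)

avg = dtw_barycenter_averaging(np.stack(PL))               # representative average sequence
dists = [dtw(avg.ravel(), u) for u in U]
nearest = int(np.argmin(dists))
print("label U[%d] as positive (DTW distance %.3f)" % (nearest, dists[nearest]))
```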


2020 ◽  
Vol 2020 (8) ◽  
pp. 186-1-186-11
Author(s):  
Xiaoyu Xiang ◽  
Yang Cheng ◽  
Shaoyuan Xu ◽  
Qian Lin ◽  
Jan Allebach

Evolving algorithms for 2D facial landmark detection empower applications such as face recognition and facial expression analysis. However, existing methods still suffer from unstable facial landmarks when applied to videos. Because previous research shows that the instability of facial landmarks is caused by inconsistent labeling quality among the public datasets, we want to better understand the influence of annotation noise in them. In this paper, we make the following contributions: 1) we propose two metrics that quantitatively measure the stability of detected facial landmarks, 2) we model the annotation noise in an existing public dataset, and 3) we investigate the influence of different types of noise in training face alignment neural networks and propose corresponding solutions. Our results demonstrate improvements in both the accuracy and the stability of detected facial landmarks.
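The paper's two stability metrics are not specified in the abstract. As a rough illustration of what such a metric can look like, the snippet below measures mean frame-to-frame landmark displacement over a clip of a near-static face; lower jitter suggests more stable detections. This is an assumed example, not necessarily one of the paper's metrics.

```python
# Illustrative stability measure: mean frame-to-frame landmark displacement over a video clip.
import numpy as np

def landmark_jitter(landmarks: np.ndarray) -> float:
    """landmarks: (num_frames, num_points, 2) array of detected 2D landmarks."""
    step = np.diff(landmarks, axis=0)                   # per-frame displacement vectors
    return float(np.linalg.norm(step, axis=-1).mean())  # average point movement in pixels

rng = np.random.default_rng(0)
clip = 100 + rng.normal(scale=0.5, size=(30, 68, 2))    # synthetic near-static 68-point detections
print(landmark_jitter(clip))
```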


2012 ◽  
Vol 22 (01) ◽  
pp. 1250013 ◽  
Author(s):  
TUBA AYHAN ◽  
MÜŞTAK E. YALÇIN

Many biological networks are constructed with both regular and random connections between neurons. Bio-inspired systems should preserve this mixed topology of biological networks while the artificial system remains realizable. In this work, a bio-inspired network with many analog realizations, the Cellular Neural Network (CNN), is investigated with random connections added to its regular connections, yielding the Small-World Cellular Neural Network (SWCNN). The antennal lobe, an organ in the olfactory system of insects, is modeled with an SWCNN by extending the network to use two types of processors on the same network. The model is combined with an SVM classifier, and the overall system is tested on a five-class odor classification problem. Since all neurons in a CNN are already connected to each other through direct or indirect connections, the short-cuts do not improve classification performance, but the results show that the fault tolerance of the SWCNN is better than that of the classical CNN.
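The SWCNN model itself targets analog realizations and is not reproduced here. Purely as a topology illustration, the snippet below contrasts a regular ring lattice with a Watts-Strogatz small-world graph in which a fraction of edges are rewired into random shortcuts, analogous to adding random connections to a regular CNN grid; parameters are arbitrary.

```python
# Topology illustration only: regular lattice vs. small-world graph with random shortcuts.
import networkx as nx

regular = nx.connected_watts_strogatz_graph(n=64, k=4, p=0.0)      # purely regular neighborhood coupling
small_world = nx.connected_watts_strogatz_graph(n=64, k=4, p=0.1)  # 10% of edges rewired to random shortcuts

# Shortcuts sharply reduce the average path length while keeping most local structure.
print(nx.average_shortest_path_length(regular),
      nx.average_shortest_path_length(small_world))
```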


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5236
Author(s):  
Bosheng Qin ◽  
Dongxiao Li

The rapid worldwide spread of Coronavirus Disease 2019 (COVID-19) has resulted in a global pandemic. Correct facemask wearing is valuable for infectious disease control, but the effectiveness of facemasks has been diminished, mostly due to improper wearing. However, there have not been any published reports on the automatic identification of facemask-wearing conditions. In this study, we develop a new facemask-wearing condition identification method by combining image super-resolution and classification networks (SRCNet), which addresses a three-category classification problem based on unconstrained 2D facial images. The proposed algorithm contains four main steps: image pre-processing, facial detection and cropping, image super-resolution, and facemask-wearing condition identification. Our method was trained and evaluated on the public Medical Masks Dataset containing 3835 images: 671 with no facemask, 134 with an incorrectly worn facemask, and 3030 with a correctly worn facemask. The proposed SRCNet achieved 98.70% accuracy and outperformed traditional end-to-end deep learning image classification methods without image super-resolution by over 1.5% in kappa. Our findings indicate that the proposed SRCNet can achieve high-accuracy identification of facemask-wearing conditions, and thus has potential applications in epidemic prevention involving COVID-19.
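A structural sketch of the four-step pipeline listed above. The Haar-cascade face detector, the bicubic upscaling stand-in for the super-resolution network, and the zeroed classifier output are placeholders, not the SRCNet components described in the paper.

```python
# Structural sketch of the four-step pipeline; the detector, SR step, and classifier are placeholders.
import numpy as np
import cv2

CLASSES = ["no_facemask", "incorrect_facemask", "correct_facemask"]
detector = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_facemask(image_bgr: np.ndarray) -> str:
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)                # 1. pre-processing
    faces = detector.detectMultiScale(gray, 1.1, 5)                   # 2. facial detection and cropping
    if len(faces) == 0:
        return "no_face_found"
    x, y, w, h = faces[0]
    crop = image_bgr[y:y + h, x:x + w]
    sr = cv2.resize(crop, (224, 224), interpolation=cv2.INTER_CUBIC)  # 3. upscaling stand-in for the SR network
    logits = np.zeros(3)                                              # 4. placeholder for the classifier applied to `sr`
    return CLASSES[int(np.argmax(logits))]

print(classify_facemask(np.zeros((480, 640, 3), dtype=np.uint8)))
```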


Diabetes is a rapidly increasing disease found worldwide and in people of all ages. Diabetic retinopathy is a retinal abnormality caused by diabetes, which can lead to permanent vision loss or blindness. Because diabetic retinopathy damages the retina without early symptoms, regular retinal screening and detection of retinopathy are very important. Ophthalmologists identify retinopathy manually, which is time-consuming and error-prone. Hence, there is a need for early and accurate automatic detection of diabetic retinopathy. Much research on detection has been done using image processing, artificial intelligence, neural networks, and machine learning. This paper presents a review of diabetic retinopathy (DR) detection systems. The review highlights the public datasets available for evaluating detection systems, and discusses and analyzes the different segmentation and classification techniques used in DR detection.


Author(s):  
Guolong Wang ◽  
Junchi Yan ◽  
Zheng Qin

The ever-increasing volume of visual images has stimulated the demand for organizing such data by aesthetic quality. Automatic, and especially learning-based, aesthetic assessment methods have shown potential in recent works. However, existing image aesthetic prediction is often user-agnostic, ignoring the fact that the rating of an image can be inherently individual. We fill this gap by formulating the personalized image aesthetic assessment problem with a novel learning method. Specifically, we collect user-image textual reviews in addition to the visual images from the public dataset to organize a review-augmented benchmark. Using this enriched dataset, we devise a deep neural network with a user/image relation encoding input for collaborative filtering. Meanwhile, an attention mechanism is designed to capture the user-specific taste for image semantic tags and regions of interest by fusing the image and the user's review. Extensive and promising experimental results on the review-augmented benchmark corroborate the efficacy of our approach.
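A schematic PyTorch model for the collaborative-filtering portion only: a learned user embedding is concatenated with an image feature vector to predict a personalized score. The paper's full network additionally encodes the user's textual reviews with an attention mechanism; the dimensions and layer sizes here are assumptions.

```python
# Schematic user/image collaborative-filtering head for personalized aesthetic scores.
import torch
import torch.nn as nn

class PersonalizedAesthetic(nn.Module):
    def __init__(self, num_users: int, img_dim: int = 512, emb_dim: int = 64):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, emb_dim)
        self.head = nn.Sequential(
            nn.Linear(img_dim + emb_dim, 128), nn.ReLU(), nn.Linear(128, 1)
        )

    def forward(self, user_ids: torch.Tensor, img_feats: torch.Tensor) -> torch.Tensor:
        u = self.user_emb(user_ids)                       # user-specific taste vector
        return self.head(torch.cat([img_feats, u], dim=-1)).squeeze(-1)

model = PersonalizedAesthetic(num_users=1000)
scores = model(torch.tensor([3, 7]), torch.randn(2, 512))  # predicted ratings for two (user, image) pairs
print(scores.shape)
```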


Author(s):  
Subhan Panji Cipta

Weather and climate information is one consideration for decision makers, because such information has economic value in a variety of activities, ranging from agriculture to flood control. The data obtained imply that current rainfall predictions are not very accurate; the forecasts regularly given to the public concern the weather in general, not the amount of rainfall. This study uses the Evolving Neural Network (ENN) algorithm as an approach to predict rainfall, with data processing and calculations carried out in MATLAB 2009b. The parameters used in this study are time, rainfall, humidity, and temperature. The results are also compared with the test results of a backpropagation neural network (BPNN) and with BMKG predictions. From the research conducted, from the early stage through testing and measurement, the ENN application predicts rainfall more accurately than the BPNN and the BMKG prediction algorithms.
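The abstract does not describe the ENN in detail. The following minimal sketch conveys the general "evolving" idea under stated assumptions: the weights of a tiny feedforward network are selected by a simple evolutionary loop instead of backpropagation. The input ordering and the synthetic data are placeholders, not BMKG rainfall records or the study's actual algorithm.

```python
# Minimal evolving-neural-network sketch: weights selected by an evolutionary loop, not backprop.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 3))                                              # [time, humidity, temperature] in [0, 1]
y = 0.6 * X[:, 1] + 0.3 * X[:, 2] + 0.05 * rng.standard_normal(200)  # synthetic rainfall target

def predict(w, X):
    W1, b1, W2, b2 = w
    return np.tanh(X @ W1 + b1) @ W2 + b2                 # one hidden layer, linear output

def random_net():
    return [rng.standard_normal((3, 8)), np.zeros(8), rng.standard_normal(8), 0.0]

def mutate(w):
    return [p + 0.1 * rng.standard_normal(np.shape(p)) for p in w]

population = [random_net() for _ in range(30)]
for _ in range(50):                                       # evolve: keep the fittest, mutate offspring
    errors = [np.mean((predict(w, X) - y) ** 2) for w in population]
    parents = [population[i] for i in np.argsort(errors)[:10]]
    population = parents + [mutate(parents[i % 10]) for i in range(20)]

best = min(population, key=lambda w: np.mean((predict(w, X) - y) ** 2))
print("MSE:", np.mean((predict(best, X) - y) ** 2))
```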


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Qifei Zhang ◽  
Lingjian Fu ◽  
Linyue Gu

Motion artifacts and myoelectrical noise are common issues complicating the collection and processing of dynamic electrocardiogram (ECG) signals. Recent signal quality studies have utilized a binary classification metric in which ECG samples are determined to be either clean or noisy. However, the clinical use of dynamic ECGs requires specific noise level classification for varying applications. Conventional signal processing methods, including waveform discrimination, are limited in their ability to remove motion artifacts and myoelectrical noise from dynamic ECGs. As such, a novel cascaded convolutional neural network (CNN) is proposed and demonstrated for a five-class problem (low interference, mild motion artifacts, mild myoelectrical noise, severe motion artifacts, and severe myoelectrical noise). To meet clinical requirements, the proposed CNN ultimately categorizes dynamic ECG signals into three levels (low, mild, and severe). The network includes two components: the first is used to distinguish the signal interference type, while the second is used to distinguish the signal interference level. The model does not require feature engineering, has powerful nonlinear mapping capabilities, and is robust to varying noise types. The experimental data are composed of a private dataset and a public dataset, acquired from 90,000 four-second dynamic ECG signals and the MIT-BIH Arrhythmia Database, respectively. Experiments produced an overall recognition rate of 92.7% on the private dataset and 91.8% on the public dataset. These results suggest that the proposed technique is a valuable new tool for dynamic ECG auxiliary diagnosis.
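A schematic of the two-stage cascade described above, not the paper's exact architecture: a first 1-D CNN predicts the interference type and, for non-clean signals, a second 1-D CNN grades the severity, together yielding the five categories. Layer sizes and the synthetic input are assumptions.

```python
# Schematic two-stage cascade: stage 1 predicts interference type, stage 2 grades severity.
import torch
import torch.nn as nn

def small_cnn(num_classes: int) -> nn.Module:
    return nn.Sequential(
        nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.AdaptiveAvgPool1d(1),
        nn.Flatten(), nn.Linear(16, num_classes),
    )

type_net = small_cnn(3)     # low interference / motion artifact / myoelectrical noise
level_net = small_cnn(2)    # mild / severe

ecg = torch.randn(1, 1, 1000)                     # one four-second ECG segment (synthetic)
noise_type = type_net(ecg).argmax(dim=1).item()
if noise_type == 0:
    label = "low interference"
else:
    level = level_net(ecg).argmax(dim=1).item()
    label = ("mild" if level == 0 else "severe") + \
            (" motion artifacts" if noise_type == 1 else " myoelectrical noise")
print(label)
```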

