Research on the Garbage Classification Problem Based on Convolutional Neural Network

2021 ◽  
Vol 2078 (1) ◽  
pp. 012056
Author(s):  
Shuang Wu ◽  
Zeyu Li ◽  
Xinqiong Chen ◽  
Peiwen Zhong ◽  
Liangcai Mei ◽  
...  

Abstract In order to better promote garbage classification, machine learning models are used to identify and solve garbage classification problems. First, factor analysis is applied to a field investigation and data analysis of residents' perceptions of waste classification. Second, a convolutional neural network (CNN) is used to classify and recognize garbage images, assisting the judgment of garbage classification. Finally, reasonable classification suggestions are put forward to better promote garbage classification.
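A minimal sketch of the image-classification component, assuming a small CNN over fixed-size RGB photos and an illustrative set of four waste categories; the layer sizes and class count are assumptions, not the authors' architecture:

```python
# Minimal sketch of a CNN garbage-image classifier; sizes are illustrative.
import torch
import torch.nn as nn

class GarbageCNN(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)                 # (N, 32, 16, 16) for 64x64 input
        return self.classifier(x.flatten(1))

model = GarbageCNN()
logits = model(torch.randn(8, 3, 64, 64))    # dummy batch of 8 images
print(logits.shape)                          # torch.Size([8, 4])
```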

2020 ◽  
Vol 36 (3) ◽  
pp. 1166-1187 ◽  
Author(s):  
Shohei Naito ◽  
Hiromitsu Tomozawa ◽  
Yuji Mori ◽  
Takeshi Nagata ◽  
Naokazu Monma ◽  
...  

This article presents a method for detecting damaged buildings in the event of an earthquake using machine learning models and aerial photographs. We initially created training data for the machine learning models using aerial photographs captured around the town of Mashiki immediately after the main shock of the 2016 Kumamoto earthquake. All buildings are classified into one of four damage levels by visual interpretation. Subsequently, two damage discrimination models are developed: a bag-of-visual-words model and a model based on a convolutional neural network. Results are compared and validated in terms of accuracy, revealing that the latter model is preferable. Moreover, for the convolutional neural network model, the target areas are expanded and the recalls of damage classification at the four levels range from approximately 66% to 81%.
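A rough sketch of the bag-of-visual-words representation described above, assuming local descriptors have already been extracted from each aerial photograph; the vocabulary size and descriptor dimensionality are illustrative assumptions:

```python
# Bag-of-visual-words sketch: cluster local descriptors into a vocabulary,
# then describe each image as a normalized histogram of visual words.
import numpy as np
from sklearn.cluster import KMeans

def build_vocabulary(descriptor_list, n_words=50):
    """Cluster all local descriptors into a visual vocabulary."""
    all_desc = np.vstack(descriptor_list)
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)

def bovw_histogram(descriptors, vocab):
    """Represent one image as a normalized histogram of visual words."""
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-9)

# Example with random stand-in descriptors (3 images, 128-D features).
rng = np.random.default_rng(0)
descs = [rng.normal(size=(200, 128)) for _ in range(3)]
vocab = build_vocabulary(descs)
features = np.stack([bovw_histogram(d, vocab) for d in descs])
print(features.shape)  # (3, 50) -> input to a damage-level classifier
```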


2021 ◽  
Vol 12 (6) ◽  
pp. 1-24
Author(s):  
Shaojie Qiao ◽  
Nan Han ◽  
Jianbin Huang ◽  
Kun Yue ◽  
Rui Mao ◽  
...  

Bike-sharing systems are becoming popular and generate a large volume of trajectory data. In a bike-sharing system, users can borrow and return bikes at different stations. In particular, a bike-sharing system will be affected by weather, the time period, and other dynamic factors, which challenges the scheduling of shared bikes. In this article, a new shared-bike demand forecasting model based on dynamic convolutional neural networks, called SDF, is proposed to predict the demand of shared bikes. SDF chooses the most relevant weather features from real weather data by using the Pearson correlation coefficient and transforms them into a two-dimensional dynamic feature matrix, taking into account the states of stations from historical data. The feature information in the matrix is extracted, learned, and trained with a newly proposed dynamic convolutional neural network to predict the demand of shared bikes in a dynamic and intelligent fashion. The parameter-update phase is optimized from three aspects: the loss function, the optimization algorithm, and the learning rate. Then, an accurate shared-bike demand forecasting model is designed based on the basic idea of minimizing the loss value. Compared with classical machine learning models, the weight sharing strategy employed by SDF reduces the complexity of the network. It allows a high prediction accuracy to be achieved within a relatively short period of time. Extensive experiments are conducted on real-world bike-sharing datasets to evaluate SDF. The results show that SDF significantly outperforms classical machine learning models in prediction accuracy and efficiency.
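A hedged illustration of the weather feature-selection step, ranking candidate weather variables by the absolute Pearson correlation with observed demand; the column names, threshold, and synthetic data are assumptions for demonstration only:

```python
# Select weather features whose |Pearson r| with demand exceeds a threshold.
import numpy as np
import pandas as pd

def select_weather_features(df, target="demand", threshold=0.3):
    """Return weather columns strongly correlated with the demand column."""
    corr = df.corr()[target].drop(target)
    return corr[corr.abs() >= threshold].index.tolist()

# Toy example with synthetic hourly records.
rng = np.random.default_rng(1)
n = 500
temp = rng.normal(20, 5, n)
humidity = rng.uniform(30, 90, n)
wind = rng.normal(10, 3, n)
demand = 5 * temp - 1.0 * humidity + rng.normal(0, 10, n)
df = pd.DataFrame({"temperature": temp, "humidity": humidity,
                   "wind_speed": wind, "demand": demand})
print(select_weather_features(df))  # e.g., ['temperature', 'humidity']
```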


2021 ◽  
Author(s):  
Chayaporn Suphavilai ◽  
Hatairat Yingtaweesittikul

Background: Transcriptomic profiles have become crucial information in understanding diseases and improving treatments. While dysregulated gene sets are identified via pathway analysis, various machine learning models have been proposed for predicting phenotypes such as disease type and drug response based on gene expression patterns. However, these models still lack interpretability, as well as the ability to integrate prior knowledge from a protein-protein interaction network. Results: We propose Grandline, a graph convolutional neural network that can integrate gene expression data and the structure of the protein interaction network to predict a specific phenotype. Transforming the interaction network into a spectral domain enables convolution of neighbouring genes and pinpointing of high-impact subnetworks, which allows better interpretability of deep learning models. Grandline achieves high phenotype prediction accuracy (67-85% in 8 use cases), comparable to state-of-the-art machine learning models, while requiring a smaller number of parameters, allowing it to learn complex but interpretable gene expression patterns from biological datasets. Conclusion: To improve the interpretability of phenotype prediction based on gene expression patterns, we developed Grandline, which uses a graph convolutional neural network technique to integrate protein interaction information. We focus on improving the ability to learn nonlinear relationships between gene expression patterns and a given phenotype and on incorporating prior knowledge, which are the main challenges of machine learning models for biological datasets. The graph convolution allows us to aggregate information from relevant genes and reduces the number of trainable parameters, facilitating model training for a small-sized biological dataset.
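A minimal sketch of the graph-convolution idea, aggregating gene expression values over protein-interaction neighbours; this is a generic symmetrically normalized GCN layer, not Grandline's exact spectral formulation:

```python
# Generic graph-convolution layer over a protein-interaction adjacency matrix.
import torch
import torch.nn as nn

class GraphConvLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # x: (num_genes, in_dim) node features, adj: (num_genes, num_genes)
        a_hat = adj + torch.eye(adj.size(0))          # add self-loops
        deg = a_hat.sum(dim=1)
        d_inv_sqrt = torch.diag(deg.pow(-0.5))
        a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # symmetric normalization
        return torch.relu(self.linear(a_norm @ x))

# Toy usage: 5 genes, scalar expression per gene, a small interaction graph.
adj = torch.tensor([[0, 1, 0, 0, 1], [1, 0, 1, 0, 0], [0, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1], [1, 0, 0, 1, 0]], dtype=torch.float32)
expr = torch.randn(5, 1)
layer = GraphConvLayer(1, 8)
print(layer(expr, adj).shape)  # torch.Size([5, 8])
```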


2021 ◽  
Vol 12 ◽  
Author(s):  
Hang Yang ◽  
Xin-Rong Hu ◽  
Ling Sun ◽  
Dian Hong ◽  
Ying-Yi Zheng ◽  
...  

Background: Noonan syndrome (NS), a genetically heterogeneous disorder, presents with hypertelorism, ptosis, dysplastic pulmonary valve stenosis, hypertrophic cardiomyopathy, and small stature. Early detection and assessment of NS are crucial to formulating an individualized treatment protocol. However, the diagnostic rate of pediatricians and pediatric cardiologists is limited. To overcome this challenge, we propose an automated facial recognition model to identify NS using a novel deep convolutional neural network (DCNN) with a loss function called additive angular margin loss (ArcFace). Methods: The proposed automated facial recognition models were trained on a dataset that included 127 NS patients, 163 healthy children, and 130 children with several other dysmorphic syndromes. The photo dataset contained only one frontal face image from each participant. A novel DCNN framework with the ArcFace loss function (DCNN-Arcface model) was constructed. Two traditional machine learning models and a DCNN model with a cross-entropy loss function (DCNN-CE model) were also constructed. Transfer learning and data augmentation were applied in the training process. The identification performance of the facial recognition models was assessed by five-fold cross-validation. Comparisons of the DCNN-Arcface model with the two traditional machine learning models, the DCNN-CE model, and six physicians were performed. Results: At distinguishing NS patients from healthy children, the DCNN-Arcface model achieved an accuracy of 0.9201 ± 0.0138 and an area under the receiver operating characteristic curve (AUC) of 0.9797 ± 0.0055. At distinguishing NS patients from children with several other genetic syndromes, it achieved an accuracy of 0.8171 ± 0.0074 and an AUC of 0.9274 ± 0.0062. In both cases, the DCNN-Arcface model outperformed the two traditional machine learning models, the DCNN-CE model, and the six physicians. Conclusion: This study shows that the proposed DCNN-Arcface model is a promising way to screen NS patients and can improve the NS diagnosis rate.
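A hedged sketch of the additive angular margin (ArcFace) loss used to separate the facial embeddings; the scale and margin values are common defaults, not those reported in the study:

```python
# ArcFace-style loss: add an angular margin to the target-class angle before
# the softmax cross-entropy, pushing embeddings of different classes apart.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcFaceLoss(nn.Module):
    def __init__(self, emb_dim, num_classes, scale=30.0, margin=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, emb_dim))
        self.scale, self.margin = scale, margin

    def forward(self, embeddings, labels):
        # cosine similarity between L2-normalized embeddings and class weights
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1 + 1e-7, 1 - 1e-7))
        # add the angular margin only to the target-class angle
        target = F.one_hot(labels, cosine.size(1)).float()
        logits = torch.cos(theta + self.margin * target) * self.scale
        return F.cross_entropy(logits, labels)

loss_fn = ArcFaceLoss(emb_dim=128, num_classes=3)
emb = torch.randn(4, 128)            # embeddings from the DCNN backbone
labels = torch.tensor([0, 1, 2, 0])  # NS / healthy / other syndrome
print(loss_fn(emb, labels).item())
```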


2019 ◽  
Vol 2019 ◽  
pp. 1-9 ◽  
Author(s):  
Young-Seob Jeong ◽  
Jiyoung Woo ◽  
Ah Reum Kang

With the increasing amount of data, the threat of malware keeps growing. Malicious actions embedded in nonexecutable documents (e.g., PDF files) can be especially dangerous, because they are difficult to detect and most users are not aware of this type of malicious attack. In this paper, we design a convolutional neural network to tackle malware detection on PDF files. We collect malicious and benign PDF files and manually label the byte sequences within the files. We intensively examine the structure of the input data and illustrate how we design the proposed network based on the characteristics of the data. The proposed network is designed to interpret high-level patterns among collectable spatial clues, thereby predicting whether the given byte sequence has malicious actions or not. Through experimental results, we demonstrate that the proposed network outperforms several representative machine-learning models as well as other networks with different settings.
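An illustrative sketch of a byte-sequence classifier in the spirit described above: raw bytes are embedded and passed through 1-D convolutions to capture local spatial clues before a malicious/benign prediction. The sequence length and layer sizes are assumptions, not the paper's configuration:

```python
# 1-D CNN over embedded byte values of a (truncated/padded) PDF byte sequence.
import torch
import torch.nn as nn

class ByteSequenceCNN(nn.Module):
    def __init__(self, emb_dim=8):
        super().__init__()
        self.embed = nn.Embedding(256, emb_dim)       # one vector per byte value
        self.conv = nn.Sequential(
            nn.Conv1d(emb_dim, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),                  # global max over positions
        )
        self.out = nn.Linear(32, 2)                   # malicious / benign

    def forward(self, byte_ids):
        x = self.embed(byte_ids).transpose(1, 2)      # (N, emb_dim, seq_len)
        return self.out(self.conv(x).squeeze(-1))

model = ByteSequenceCNN()
fake_bytes = torch.randint(0, 256, (4, 1024))         # 4 dummy byte sequences
print(model(fake_bytes).shape)                        # torch.Size([4, 2])
```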


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5238
Author(s):  
Anthony N. Turner ◽  
Carl Wheldon ◽  
Tzany Kokalova Wheldon ◽  
Mark R. Gilbert ◽  
Lee W. Packer ◽  
...  

Improvements in Radio-Isotope IDentification (RIID) algorithms have seen a resurgence in interest with the increased accessibility of machine learning models. Convolutional Neural Network (CNN)-based models have been developed to identify arbitrary mixtures of unstable nuclides from gamma spectra. In service of this, methods for the simulation and pre-processing of training data were also developed. The implementation of 1D multi-class, multi-label CNNs demonstrated good generalisation to real spectra with poor statistics and significant gain shifts. It is also shown that even basic CNN architectures prove reliable for RIID under the challenging conditions of heavy shielding and close source geometries, and may be extended to generalised solutions for pragmatic RIID.
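A sketch of a 1-D multi-class, multi-label CNN of the kind described: each output unit scores one nuclide independently through a sigmoid, so arbitrary mixtures map to multiple active labels. The spectrum length, channel counts, and nuclide count are illustrative assumptions:

```python
# Multi-label 1-D CNN over a gamma spectrum; one independent output per nuclide.
import torch
import torch.nn as nn

class SpectrumCNN(nn.Module):
    def __init__(self, n_channels=1024, n_nuclides=12):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Linear(32 * (n_channels // 16), n_nuclides)

    def forward(self, spectrum):
        x = self.body(spectrum.unsqueeze(1))   # (N, 32, n_channels/16)
        return self.head(x.flatten(1))         # raw logits, one per nuclide

model = SpectrumCNN()
spectra = torch.rand(2, 1024)                  # two dummy spectra
targets = torch.zeros(2, 12)
targets[0, [1, 4]] = 1.0                       # example mixture of two nuclides
targets[1, 7] = 1.0                            # single-nuclide example
loss = nn.BCEWithLogitsLoss()(model(spectra), targets)
print(torch.sigmoid(model(spectra)).shape)     # torch.Size([2, 12])
```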


Sensors ◽  
2019 ◽  
Vol 19 (1) ◽  
pp. 210 ◽  
Author(s):  
Zied Tayeb ◽  
Juri Fedjaev ◽  
Nejla Ghaboosi ◽  
Christoph Richter ◽  
Lukas Everding ◽  
...  

Non-invasive, electroencephalography (EEG)-based brain-computer interfaces (BCIs) for motor imagery translate the subject’s motor intention into control signals by classifying the EEG patterns caused by different imagination tasks, e.g., hand movements. This type of BCI has been widely studied and used as an alternative mode of communication and environmental control for disabled patients, such as those suffering from a brainstem stroke or a spinal cord injury (SCI). Notwithstanding the success of traditional machine learning methods in classifying EEG signals, these methods still rely on hand-crafted features. The extraction of such features is a difficult task due to the high non-stationarity of EEG signals, which is a major cause of the stagnating progress in classification performance. Remarkable advances in deep learning methods allow end-to-end learning without any feature engineering, which could benefit BCI motor imagery applications. We developed three deep learning models: (1) a long short-term memory network (LSTM); (2) a spectrogram-based convolutional neural network model (CNN); and (3) a recurrent convolutional neural network (RCNN), for decoding motor imagery movements directly from raw EEG signals without any manual feature engineering. Results were evaluated on our own publicly available EEG dataset collected from 20 subjects and on an existing dataset, the 2b EEG dataset from “BCI Competition IV”. Overall, better classification performance was achieved with the deep learning models compared to state-of-the-art machine learning techniques, which could chart a route ahead for developing new robust techniques for EEG signal decoding. We underpin this point by demonstrating the successful real-time control of a robotic arm using our CNN-based BCI.
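A minimal sketch of one of the listed model families, an LSTM decoding motor imagery directly from raw multichannel EEG; the channel count, trial length, hidden size, and class count are assumptions, not the authors' configuration:

```python
# LSTM over raw EEG samples; the final hidden state is classified per trial.
import torch
import torch.nn as nn

class EEGLSTM(nn.Module):
    def __init__(self, n_channels=3, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, eeg):                  # eeg: (N, time_steps, n_channels)
        _, (h_n, _) = self.lstm(eeg)
        return self.fc(h_n[-1])              # logits per imagery class

model = EEGLSTM()
trials = torch.randn(4, 750, 3)              # 4 trials, 3 s at 250 Hz, 3 electrodes
print(model(trials).shape)                   # torch.Size([4, 2])
```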


2021 ◽  
Author(s):  
Mohammed Ayub ◽  
SanLinn Kaka

Abstract Manual first-break picking from a large volume of seismic data is extremely tedious and costly. Deployment of machine learning models makes the process fast and cost effective. However, these machine learning models require highly representative and effective features for accurate automatic picking. Therefore, a First-Break (FB) picking classification model that uses a minimal number of effective features and promises performance efficiency is proposed. Variants of Recurrent Neural Networks (RNNs), such as Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), can retain contextual information from long previous time steps. We exploit this advantage for FB picking, as seismic traces are amplitude values of vibration along the time axis. We use the behavioral fluctuation of amplitude as input features for LSTM and GRU. The models are trained on noisy data and tested for generalization on original traces not seen during the training and validation process. In order to analyze real-time suitability, the performance is benchmarked using accuracy, F1-measure, and three other established metrics. We have trained two RNN models and two deep neural network models for FB classification using only amplitude values as features. Both LSTM and GRU achieve an accuracy and F1-measure of 94.20%. With the same features, a Convolutional Neural Network (CNN) has an accuracy of 93.58% and an F1-score of 93.63%. Likewise, a Deep Neural Network (DNN) model has scores of 92.83% and 92.59% for accuracy and F1-measure, respectively. From the experiment results, we see significantly superior performance of LSTM and GRU over CNN and DNN when the same features are used. To test the robustness of the LSTM and GRU models, their performance is compared with a DNN model trained using nine features derived from seismic traces, and the performance superiority of the RNN models is again observed. Therefore, it is safe to conclude that RNN models (LSTM and GRU) are capable of classifying FB events efficiently even when using a minimum number of features that are not computationally expensive. The novelty of our work is the capability of automatic FB classification with RNN models that incorporate contextual behavioral information without the need for sophisticated feature extraction or engineering techniques, which in turn can help reduce cost and make the classification model more robust and faster.
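A hedged sketch of the underlying idea of feeding raw amplitude values to a recurrent model and scoring every time sample as first-break or background; the trace length and hidden size are illustrative assumptions, not the proposed model:

```python
# GRU over raw amplitude values; one first-break/background score per sample.
import torch
import torch.nn as nn

class FirstBreakGRU(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)      # first-break vs. background

    def forward(self, traces):                # traces: (N, time_steps)
        out, _ = self.gru(traces.unsqueeze(-1))
        return self.head(out)                 # per-time-step logits

model = FirstBreakGRU()
traces = torch.randn(8, 500)                  # 8 noisy traces, 500 samples each
logits = model(traces)                        # (8, 500, 2) per-sample FB scores
print(logits.shape)
```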


2018 ◽  
Vol 8 (12) ◽  
pp. 2663 ◽  
Author(s):  
Davy Preuveneers ◽  
Vera Rimmer ◽  
Ilias Tsingenopoulos ◽  
Jan Spooren ◽  
Wouter Joosen ◽  
...  

The adoption of machine learning and deep learning is on the rise in the cybersecurity domain where these AI methods help strengthen traditional system monitoring and threat detection solutions. However, adversaries too are becoming more effective in concealing malicious behavior amongst large amounts of benign behavior data. To address the increasing time-to-detection of these stealthy attacks, interconnected and federated learning systems can improve the detection of malicious behavior by joining forces and pooling together monitoring data. The major challenge that we address in this work is that in a federated learning setup, an adversary has many more opportunities to poison one of the local machine learning models with malicious training samples, thereby influencing the outcome of the federated learning and evading detection. We present a solution where contributing parties in federated learning can be held accountable and have their model updates audited. We describe a permissioned blockchain-based federated learning method where incremental updates to an anomaly detection machine learning model are chained together on the distributed ledger. By integrating federated learning with blockchain technology, our solution supports the auditing of machine learning models without the necessity to centralize the training data. Experiments with a realistic intrusion detection use case and an autoencoder for anomaly detection illustrate that the increased complexity caused by blockchain technology has a limited performance impact on the federated learning, varying between 5 and 15%, while providing full transparency over the distributed training process of the neural network. Furthermore, our blockchain-based federated learning solution can be generalized and applied to more sophisticated neural network architectures and other use cases.
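A conceptual sketch (not the paper's implementation) of the two ingredients combined here: clients submit local model updates, the server averages them as in federated learning, and each update is recorded in a hash-chained ledger so contributions remain auditable:

```python
# Hash-chained audit log of client updates plus simple federated averaging.
import hashlib
import json
import numpy as np

def record_update(ledger, client_id, weights):
    """Append a block linking this client's update to the previous block."""
    digest = hashlib.sha256(weights.tobytes()).hexdigest()
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    block = {"prev": prev_hash, "client": client_id, "digest": digest}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    ledger.append(block)

def federated_average(updates):
    """Combine local model weights by simple averaging."""
    return np.mean(np.stack(updates), axis=0)

ledger, updates = [], []
rng = np.random.default_rng(0)
for client in ["A", "B", "C"]:
    local_weights = rng.normal(size=10)       # stand-in for a client's update
    record_update(ledger, client, local_weights)
    updates.append(local_weights)

global_weights = federated_average(updates)
print(len(ledger), global_weights.shape)      # 3 (10,)
```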

