Fault severity classification of ball bearing using SinGAN and deep convolutional neural network

Author(s):  
P Akhenia ◽  
K Bhavsar ◽  
J Panchal ◽  
V Vakharia

Condition monitoring and diagnosis of bearings are very important for any rotating machine, as they govern safety while the machine is operating. For vibration-based condition monitoring, selecting suitable signal processing techniques to construct the feature vector is a challenge. In the proposed methodology, the Short-Time Fourier Transform (STFT), the Walsh-Hadamard Transform (WHT), and Variational Mode Decomposition (VMD) are used to generate 2-D time-frequency spectrograms from the various fault conditions of the bearing. When deep learning techniques are applied to fault diagnosis, a large dataset is required to train the model. To overcome this issue, a single-image Generative Adversarial Network (SinGAN) is used as a data augmentation technique to generate additional 2-D time-frequency spectrograms from the various fault conditions of the ball bearing. To detect fault severity, four deep learning models, ResNet34, ResNet50, VGG16, and MobileNetV2, are used as classifiers. Experiments are conducted on the rolling bearing dataset provided by the bearing data center of Case Western Reserve University (CWRU) to validate the utility of the proposed methodology. Results show that the proposed methodology detects fault severity levels with high classification accuracy.
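
As an illustration of the spectrogram-plus-transfer-learning pipeline described above (a minimal sketch, not the authors' code; the sampling rate, window length, and number of severity classes are assumed values), a vibration segment can be turned into an STFT image and fed to a pretrained ResNet34:

```python
# Minimal sketch, assuming a 12 kHz vibration segment and four severity classes.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft
from torchvision import models

def vibration_to_spectrogram(segment, fs=12_000, nperseg=256):
    """Return a log-magnitude STFT spectrogram scaled to [0, 1]."""
    _, _, Zxx = stft(segment, fs=fs, nperseg=nperseg)
    spec = np.log1p(np.abs(Zxx))
    return (spec - spec.min()) / (spec.max() - spec.min() + 1e-8)

# Fake batch: a 3-channel image as expected by ImageNet-pretrained backbones.
spec = vibration_to_spectrogram(np.random.randn(12_000))
img = torch.tensor(spec, dtype=torch.float32).unsqueeze(0).repeat(3, 1, 1).unsqueeze(0)

num_classes = 4                                # assumed: healthy plus three severity levels
model = models.resnet34(pretrained=True)       # ResNet50, VGG16, or MobileNetV2 would slot in the same way
model.fc = nn.Linear(model.fc.in_features, num_classes)

logits = model(img)                            # shape: (1, num_classes)
print(logits.shape)
```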

Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 249
Author(s):  
Xin Jin ◽  
Yuanwen Zou ◽  
Zhongbing Huang

The cell cycle is an important process in cellular life. In recent years, several image processing methods have been developed to determine the cell cycle stage of individual cells. However, in most of these methods, cells have to be segmented and their features extracted, and some important information may be lost during feature extraction, resulting in lower classification accuracy. We therefore used a deep learning method that retains all cell features. To address the insufficient number and imbalanced distribution of original images, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation, and a residual network (ResNet), one of the most widely used deep learning classification networks, for image classification. With our method, the classification accuracy of cell cycle images reached 83.88%, compared with 79.40% in previous experiments, an increase of 4.48%. On another dataset used to verify the model, accuracy increased by 12.52% over previous results. The results show that our cell cycle image classification system based on WGAN-GP and ResNet is useful for classifying imbalanced images. Moreover, our method could help solve the low classification accuracy in biomedical images caused by insufficient numbers and imbalanced distributions of original images.
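
The distinguishing term in WGAN-GP is the gradient penalty on the critic. The sketch below (illustrative only, not the authors' code; the penalty weight is the commonly used default) shows that term in isolation:

```python
# Minimal sketch of the WGAN-GP gradient penalty, assuming image batches of shape (N, C, H, W).
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    """Penalize the critic's gradient norm on random interpolates between real and fake images."""
    eps = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(interp)
    grads, = torch.autograd.grad(outputs=scores.sum(), inputs=interp, create_graph=True)
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```

The penalty is added to the usual Wasserstein critic loss at every critic update, which is what stabilizes training enough to generate useful augmentation images from a small, imbalanced set.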


2020 ◽  
Vol 12 (22) ◽  
pp. 3715 ◽  
Author(s):  
Minsoo Park ◽  
Dai Quoc Tran ◽  
Daekyo Jung ◽  
Seunghee Park

To minimize the damage caused by wildfires, deep learning-based wildfire-detection technologies that extract features and patterns from surveillance camera images have been developed. However, many studies on deep learning-based wildfire-image classification have highlighted the imbalance between wildfire-image data and forest-image data, which degrades model performance. In this study, wildfire images were generated using a cycle-consistent generative adversarial network (CycleGAN) to eliminate the data imbalance. In addition, a densely connected convolutional networks (DenseNet)-based framework was proposed and its performance compared with pre-trained models. When trained on a training set containing GAN-generated images, the proposed DenseNet-based model achieved the best performance among the compared models, with an accuracy of 98.27% and an F1 score of 98.16 on the test dataset. Finally, the trained model was applied to high-quality drone images of wildfires. The experimental results show that the proposed framework achieves high wildfire-detection accuracy.
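
The core of CycleGAN is the cycle-consistency constraint that lets unpaired forest images be translated into wildfire images. The following is an illustrative sketch of that objective only (the generators G and F are assumed placeholders, not the authors' networks):

```python
# Minimal sketch of CycleGAN's cycle-consistency loss, assuming G: forest -> wildfire and F: wildfire -> forest.
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G, F, forest_batch, wildfire_batch, lambda_cyc=10.0):
    """Images translated to the other domain should map back to their originals."""
    fake_wildfire = G(forest_batch)
    fake_forest = F(wildfire_batch)
    return lambda_cyc * (l1(F(fake_wildfire), forest_batch) + l1(G(fake_forest), wildfire_batch))
```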


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Jianghua Nie ◽  
Yongsheng Xiao ◽  
Lizhen Huang ◽  
Feng Lv

Aiming at the problem of radar target recognition from High-Resolution Range Profiles (HRRP) under low signal-to-noise ratio conditions, a recognition method based on the Constrained Naive Least-Squares Generative Adversarial Network (CN-LSGAN), the Short-Time Fourier Transform (STFT), and a Convolutional Neural Network (CNN) is proposed. Combining the Least-Squares Generative Adversarial Network (LSGAN) with the Wasserstein Generative Adversarial Network with Gradient Penalty (WGAN-GP), the CN-LSGAN is presented and applied to HRRP denoising. The frequency-domain and phase features of the HRRP are obtained by the STFT in order to facilitate feature learning and to match the input data format of the CNN. The experimental results show that the CN-LSGAN has better data augmentation performance and can effectively avoid mode collapse compared to the generative adversarial network (GAN) and the LSGAN. The method also has better recognition performance than the one-dimensional CNN method and the Long Short-Term Memory (LSTM) network method.
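
For orientation, the least-squares component of the objective can be written as below (an illustrative sketch only; the paper's CN-LSGAN additionally applies a gradient-penalty constraint, omitted here for brevity):

```python
# Minimal sketch of the LSGAN objectives, assuming d_real and d_fake are discriminator scores.
import torch

def lsgan_discriminator_loss(d_real, d_fake):
    """Push scores on real samples toward 1 and on generated samples toward 0 with squared error."""
    return 0.5 * (((d_real - 1) ** 2).mean() + (d_fake ** 2).mean())

def lsgan_generator_loss(d_fake):
    """Push the discriminator's score on generated samples toward 1."""
    return 0.5 * ((d_fake - 1) ** 2).mean()
```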


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 256
Author(s):  
Thierry Pécot ◽  
Alexander Alekseyenko ◽  
Kristin Wallace

Deep learning has revolutionized the automatic processing of images. While deep convolutional neural networks have demonstrated astonishing segmentation results for many biological objects acquired with microscopy, this good performance relies on large training datasets. In this paper, we present a strategy to minimize the time spent manually annotating images for segmentation. It combines an efficient, open-source annotation tool, artificial enlargement of the training set through data augmentation, the creation of an artificial dataset with a conditional generative adversarial network, and the combination of semantic and instance segmentation. We evaluate the impact of each of these approaches on the segmentation of nuclei in 2D widefield images of human precancerous polyp biopsies in order to define an optimal strategy.
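
The data-augmentation step can be as simple as applying the same random geometric transform to an image and its annotation mask, as in this illustrative sketch (assumed, not the authors' pipeline):

```python
# Minimal sketch: geometric augmentation that keeps a widefield image and its nucleus mask aligned.
import numpy as np

def augment_pair(image, mask, rng=np.random.default_rng()):
    """Random 90-degree rotations and flips applied identically to image and mask."""
    k = rng.integers(0, 4)
    image, mask = np.rot90(image, k), np.rot90(mask, k)
    if rng.random() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)
    if rng.random() < 0.5:
        image, mask = np.flipud(image), np.flipud(mask)
    return image.copy(), mask.copy()
```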


2021 ◽  
Vol 11 (4) ◽  
pp. 1798
Author(s):  
Jun Yang ◽  
Huijuan Yu ◽  
Tao Shen ◽  
Yaolian Song ◽  
Zhuangfei Chen

Since an electroencephalogram (EEG) can measure the real-time electrodynamics of the human brain, signal processing techniques, particularly deep learning, can both provide novel solutions for learning and optimize robust representations from EEG signals. Given the limited data that can be collected and the inadequate concentration of subjects during testing, it is essential to obtain sufficient training data and useful features for a potential end-user of a brain-computer interface (BCI) system. In this paper, we combined a conditional variational auto-encoder network (CVAE) with a generative adversarial network (GAN) to learn latent representations from EEG brain signals. By updating the fine-tuned parameters fed into the resulting generative model, we can synthesize EEG signals for a specific category. We employed an encoder network to obtain distributed samples of the EEG signal and applied an adversarial learning mechanism to continuously optimize the parameters of the generator, discriminator, and classifier. The CVAE was adopted to make the synthetic samples better approximate the real sample class. Finally, we demonstrate that our approach takes advantage of both statistic and feature matching to make the training process converge faster and more stably, and that it addresses the small-dataset problem in deep learning applications for motor imagery tasks through data augmentation. The augmented training datasets produced by our proposed CVAE-GAN method significantly enhance the performance of MI-EEG recognition.
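
The class-conditional sampling at the heart of a CVAE-style generator can be sketched as follows (illustrative only; the feature dimension, latent size, and number of motor imagery classes are assumed, not taken from the paper):

```python
# Minimal sketch of conditional reparameterization: sample a latent code from encoder features and a class label.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalLatentSampler(nn.Module):
    def __init__(self, feat_dim=128, latent_dim=32, num_classes=2):
        super().__init__()
        self.num_classes = num_classes
        self.mu = nn.Linear(feat_dim + num_classes, latent_dim)
        self.logvar = nn.Linear(feat_dim + num_classes, latent_dim)

    def forward(self, features, labels):
        # Concatenate encoder features with a one-hot class label, then sample z = mu + sigma * eps.
        cond = torch.cat([features, F.one_hot(labels, self.num_classes).float()], dim=1)
        mu, logvar = self.mu(cond), self.logvar(cond)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar

sampler = ConditionalLatentSampler()
z, mu, logvar = sampler(torch.randn(8, 128), torch.randint(0, 2, (8,)))   # z feeds the class-conditioned generator
```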


Author(s):  
Yongqing Wang ◽  
Mengmeng Niu ◽  
Kuo Liu ◽  
Honghui Wang ◽  
Mingrui Shen ◽  
...  

In parts machining, the tool condition data that can be collected are limited by the real working conditions and the data acquisition equipment. Moreover, because the tool usually operates in the normal state, the resulting dataset tends to be unbalanced, which restricts the accuracy of tool condition monitoring. Aiming at this problem, this paper proposes a tool condition monitoring method that uses a generative adversarial network (GAN) for data augmentation. Specifically, original samples are first collected during machining under different tool conditions; the collected samples are then fed into the GAN, whose generator produces new samples with a distribution similar to that of the original tool condition signals; finally, the real and generated samples are combined to train a deep learning network that predicts tool conditions. Experimental results show that the proposed method significantly improves the accuracy of tool condition monitoring. This paper compares and visualizes the impact of the training dataset on the classification ability of the deep learning model. In addition, several traditional methods are used for comparison, and the F1 measure is introduced to evaluate the quality of the results. The results show that this method outperforms Adaptive Synthetic Sampling (ADASYN), add-noise, and resampling.
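
The "combine real and generated samples" step can be pictured as topping up the under-represented tool conditions with synthetic signals, as in this illustrative sketch (assumed, not the authors' code; the generator interface is a placeholder):

```python
# Minimal sketch: pad minority tool-condition classes with GAN-generated signals before training.
import numpy as np

def balance_with_generated(real_by_class, generator, target_per_class):
    """real_by_class: {condition: array of signals}; generator(condition, n) returns n synthetic signals."""
    balanced = {}
    for condition, signals in real_by_class.items():
        deficit = max(0, target_per_class - len(signals))
        synthetic = generator(condition, deficit) if deficit else np.empty((0,) + signals.shape[1:])
        balanced[condition] = np.concatenate([signals, synthetic], axis=0)
    return balanced
```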


2019 ◽  
Vol 9 (19) ◽  
pp. 4166 ◽  
Author(s):  
Yung-Chien Chou ◽  
Cheng-Ju Kuo ◽  
Tzu-Ting Chen ◽  
Gwo-Jiun Horng ◽  
Mao-Yuan Pai ◽  
...  

In the production process from green beans to packaged coffee beans, defective bean removal (in short, defect removal) is one of the most labor-consuming stages, and many companies investigate automating this stage to minimize human effort. In this paper, we propose a deep-learning-based defective bean inspection scheme (DL-DBIS), together with a GAN (generative adversarial network)-structured automated labeled-data augmentation method (GALDAM) for enhancing the proposed scheme, so that the degree of automation of bean removal with robotic arms can be further improved for the coffee industry. The proposed scheme aims to provide an effective model to a deep-learning-based object detection module for accurately identifying defects among dense beans. The proposed GALDAM can greatly reduce labor costs, since data labeling is the most labor-intensive work in this sort of solution. Our proposed scheme brings two main benefits to intelligent agriculture. First, it can be easily adopted by industry, as the human effort needed to label coffee beans is minimized; users can customize their own defective-bean model without spending a great amount of time labeling small and dense objects. Second, the scheme can inspect all classes of defective beans categorized by the SCAA (Specialty Coffee Association of America) at the same time and can easily be extended if more classes of defective beans are added. These two advantages increase the degree of automation in the coffee industry. A prototype of the proposed scheme was developed for integrated tests. Testing results of a case study reveal that the proposed scheme can efficiently and effectively generate models that identify defective beans with accuracy and precision values of up to 80%.


2020 ◽  
Author(s):  
Erdi Acar ◽  
Engin Şahin ◽  
İhsan Yılmaz

Computerized Tomography (CT) has a prognostic role in the early diagnosis of COVID-19 because it gives both fast and accurate results. This is very important for helping clinicians decide on quick isolation and appropriate patient treatment. In this study, we combine methods such as segmentation, data augmentation, and the generative adversarial network (GAN) to improve the effectiveness of learning models. We obtain the best performance, 99% accuracy, for lung segmentation. Using these improvements, we obtain the highest rates in this paper in terms of accuracy (99.8%), precision (99.8%), recall (99.8%), F1-score (99.8%), and ROC AUC (99.9979%) with deep learning methods. We also compare popular deep learning-based frameworks such as VGG16, VGG19, Xception, ResNet50, ResNet50V2, DenseNet121, DenseNet169, InceptionV3, and InceptionResNetV2 for automatic COVID-19 classification. Among these deep convolutional neural networks, DenseNet169 achieves the best performance with 99.8% accuracy. The second-best learner is InceptionResNetV2 with an accuracy of 99.65%, and the third-best learners are Xception and InceptionV3, each with an accuracy of 99.60%.
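
Comparing pretrained backbones typically means swapping the feature extractor behind a common classification head, as in this illustrative sketch covering a subset of the listed models (assumed, not the authors' code):

```python
# Minimal sketch: ImageNet-pretrained backbones with a shared two-class head (COVID-19 vs. non-COVID).
import torch.nn as nn
from torchvision import models

def build_classifier(name="densenet169"):
    if name == "densenet169":
        net = models.densenet169(pretrained=True)
        net.classifier = nn.Linear(net.classifier.in_features, 2)
    elif name == "resnet50":
        net = models.resnet50(pretrained=True)
        net.fc = nn.Linear(net.fc.in_features, 2)
    elif name == "vgg16":
        net = models.vgg16(pretrained=True)
        net.classifier[-1] = nn.Linear(net.classifier[-1].in_features, 2)
    else:
        raise ValueError(f"unhandled backbone: {name}")
    return net

classifiers = {n: build_classifier(n) for n in ["densenet169", "resnet50", "vgg16"]}
```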


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4365
Author(s):  
Kwangyong Jung ◽  
Jae-In Lee ◽  
Nammoon Kim ◽  
Sunjin Oh ◽  
Dong-Wook Seo

Radar target classification is an important task in missile defense systems. State-of-the-art studies using the micro-Doppler frequency have been conducted to classify space object targets. However, existing studies rely heavily on feature extraction methods, so the generalization performance of the classifier is limited and there is room for improvement. Recently, popular approaches to improving classification performance are to build a convolutional neural network (CNN) architecture with the help of transfer learning and to use a generative adversarial network (GAN) to enlarge the training datasets. However, these methods still have drawbacks. First, they use only one feature to train the network, so the existing methods cannot guarantee that the classifier learns more robust target characteristics. Second, it is difficult to obtain large amounts of data that accurately mimic real-world target features by performing data augmentation via a GAN instead of simulation. To mitigate these problems, we propose a transfer learning-based parallel network that takes the spectrogram and the cadence velocity diagram (CVD) as inputs. In addition, we obtain an EM simulation-based dataset: the radar-received signal is simulated for a variety of dynamics using the concept of shooting and bouncing rays with relative aspect angles rather than the scattering center reconstruction method. Evaluated on our generated dataset, the proposed method achieved about 0.01 to 0.39% higher accuracy than the pre-trained networks with a single input feature.
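
A parallel two-input network of the kind described can be sketched as two pretrained branches whose features are concatenated before a shared classifier (illustrative only; the backbone choice, input sizes, and class count are assumptions, not the authors' architecture):

```python
# Minimal sketch: one branch consumes the spectrogram, the other the cadence velocity diagram (CVD).
import torch
import torch.nn as nn
from torchvision import models

class ParallelRadarNet(nn.Module):
    def __init__(self, num_classes=4):
        super().__init__()
        self.spec_branch = models.resnet18(pretrained=True)
        self.cvd_branch = models.resnet18(pretrained=True)
        feat = self.spec_branch.fc.in_features
        self.spec_branch.fc = nn.Identity()      # keep pooled features, drop the ImageNet head
        self.cvd_branch.fc = nn.Identity()
        self.classifier = nn.Linear(2 * feat, num_classes)

    def forward(self, spectrogram, cvd):
        fused = torch.cat([self.spec_branch(spectrogram), self.cvd_branch(cvd)], dim=1)
        return self.classifier(fused)

model = ParallelRadarNet()
out = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))   # -> (2, num_classes)
```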

