HEp-2 CELL CLASSIFICATION BY ADAPTIVE CONVOLUTIONAL LAYER BASED CONVOLUTIONAL NEURAL NETWORK

2019 ◽  
Vol 31 (06) ◽  
pp. 1950044
Author(s):  
C. C. Manju ◽  
M. Victor Jose

Objective: The antinuclear antibodies (ANA) present in human serum are linked with various autoimmune diseases. Human Epithelial type-2 (HEp-2) cells act as the substrate in the Indirect Immunofluorescence (IIF) test for diagnosing these autoimmune diseases. In recent times, computer-aided diagnosis of autoimmune diseases through HEp-2 cell classification has drawn increasing interest. However, the task poses challenges such as large intra-class and small inter-class variations. Hence, various efforts have been made to automate the procedure of HEp-2 cell classification. To overcome these problems, this research work proposes a new HEp-2 classification process. Materials and Methods: The process integrates two stages, namely segmentation and classification. Initially, the segmentation of the HEp-2 cells is carried out by deploying morphological operations; two such operations, opening and closing, are used. The classification stage is then performed by a modified Convolutional Neural Network (CNN). The main objective is to classify the HEp-2 cells effectively into six classes (Centromere, Golgi, Homogeneous, Nucleolar, NuMem, and Speckled), which is achieved by exploiting an optimization concept. This is implemented through a new algorithm called the Distance Sorting Lion Algorithm (DSLA), which selects the optimal convolutional layer in the CNN. Results: Through the performance analysis, the proposed model for test case 1 at a learning percentage of 60 is 3.84%, 1.79%, 6.22%, 1.69%, and 5.53% better than PSO, FF, GWO, WOA, and LA, respectively. At a learning percentage of 80, the proposed model is 5.77%, 6.46%, 3.95%, 3.24%, and 5.55% better than PSO, FF, GWO, WOA, and LA, respectively. Hence, the superiority of the proposed work over other models is demonstrated under different measures. Conclusion: Finally, the performance is evaluated by comparing it with other conventional algorithms in terms of accuracy, sensitivity, specificity, precision, FPR, FNR, NPV, MCC, F1-score, and FDR, which proves the efficacy of the proposed model.
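As a rough illustration of the segmentation stage described above, the following sketch applies morphological opening and closing to a fluorescence image with OpenCV; the Otsu threshold and the 5×5 elliptical structuring element are illustrative assumptions, not the paper's exact parameters.

```python
# A minimal sketch of morphological segmentation, assuming 8-bit grayscale IIF images.
import cv2
import numpy as np

def segment_hep2(image_gray, kernel_size=5):
    """Segment HEp-2 cells with morphological opening and closing."""
    # Binarize the fluorescence image (Otsu picks the threshold automatically).
    _, binary = cv2.threshold(image_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE,
                                       (kernel_size, kernel_size))
    # Opening removes small bright speckles outside the cells.
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # Closing fills small dark holes inside the cell bodies.
    closed = cv2.morphologyEx(opened, cv2.MORPH_CLOSE, kernel)
    # Keep only the foreground pixels of the original image for the classifier.
    return cv2.bitwise_and(image_gray, image_gray, mask=closed)
```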

2020 ◽  
Author(s):  
Yakoop Razzaz Hamoud Qasim ◽  
Habeb Abdulkhaleq Mohammed Hassan ◽  
Abdulelah Abdulkhaleq Mohammed Hassan

In this paper, we present a Convolutional Neural Network consisting of NASNet and MobileNet in parallel (concatenated) to classify three classes, COVID-19, normal, and pneumonia, based on a dataset of 1083 X-ray images divided into 361 images per class. VGG16 and RESNet152-v2 models were also prepared and trained on the same dataset to compare their performance with that of the proposed model. After training the networks and evaluating their performance, we obtained an overall accuracy of 96.91% for the proposed model, 92.59% for the VGG16 model, and 94.14% for RESNet152-v2. For the COVID-19 class, the proposed model achieved accuracy, sensitivity, specificity, and precision of 99.69%, 99.07%, 100%, and 100%, respectively. These results were better than those of the other models. We conclude that neural networks built from models in parallel are most effective when the data available for training are small and the features of different classes are similar.
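A minimal Keras sketch of such a parallel (concatenated) backbone is given below; the choice of the NASNetMobile and MobileNetV2 variants, the 224×224 input size, and the dense head are assumptions rather than the authors' exact configuration.

```python
# Two ImageNet-pretrained backbones share one input; their pooled features are
# concatenated and fed to a small classification head for the three classes.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import NASNetMobile, MobileNetV2

inputs = layers.Input(shape=(224, 224, 3))
nasnet = NASNetMobile(include_top=False, weights="imagenet",
                      input_tensor=inputs, pooling="avg")
mobilenet = MobileNetV2(include_top=False, weights="imagenet",
                        input_tensor=inputs, pooling="avg")

# Concatenate the two feature vectors and classify into the three classes.
merged = layers.Concatenate()([nasnet.output, mobilenet.output])
x = layers.Dense(256, activation="relu")(merged)
x = layers.Dropout(0.5)(x)
outputs = layers.Dense(3, activation="softmax")(x)

model = Model(inputs=inputs, outputs=outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```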


2021 ◽  
Vol 11 (6) ◽  
pp. 2838
Author(s):  
Nikitha Johnsirani Venkatesan ◽  
Dong Ryeol Shin ◽  
Choon Sung Nam

In the pharmaceutical field, early detection of lung nodules is indispensable for increasing patient survival. The quality of medical images can be enhanced by intensifying the radiation dose, but a high radiation dose provokes cancer, which forces experts to use limited radiation; the reduced dose in turn generates noise in CT scans. We propose an optimal Convolutional Neural Network model in which Gaussian noise is removed for better classification and increased training accuracy. Experimental demonstration on the LUNA16 dataset of size 160 GB shows that our proposed method exhibits superior results. Classification accuracy, specificity, sensitivity, precision, recall, F1 measure, and area under the ROC curve (AUC) are taken as evaluation metrics. We conducted a performance comparison of our proposed model on numerous platforms, such as Apache Spark, GPU, and CPU, to reduce the training time without compromising accuracy. Our results show that Apache Spark, integrated with a deep learning framework, is suitable for parallel training computation with high accuracy.
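As a rough sketch of the noise-removal preprocessing step described above, the snippet below smooths each CT slice with a Gaussian filter before classification; the 5×5 kernel and sigma are illustrative assumptions, not the authors' exact denoising scheme.

```python
# Simple Gaussian-noise suppression applied slice by slice to a CT volume.
import numpy as np
import cv2

def denoise_slice(ct_slice, sigma=1.0):
    """Suppress Gaussian-like noise in a single CT slice."""
    slice_f = ct_slice.astype(np.float32)
    # A Gaussian blur attenuates the high-frequency noise of low-dose scans.
    return cv2.GaussianBlur(slice_f, ksize=(5, 5), sigmaX=sigma)

def preprocess_volume(volume):
    """Denoise every axial slice of a CT volume shaped (depth, height, width)."""
    return np.stack([denoise_slice(s) for s in volume], axis=0)
```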


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2648
Author(s):  
Muhammad Aamir ◽  
Tariq Ali ◽  
Muhammad Irfan ◽  
Ahmad Shaf ◽  
Muhammad Zeeshan Azam ◽  
...  

Natural disasters not only disturb the human ecological system but also destroy the properties and critical infrastructures of human societies and can even lead to permanent change in the ecosystem. Disasters can be caused by naturally occurring events such as earthquakes, cyclones, floods, and wildfires. Many deep learning techniques have been applied by various researchers to detect and classify natural disasters and reduce losses in ecosystems, but detection of natural disasters still faces issues due to the complex and imbalanced structures of images. To tackle this problem, we propose a multilayered deep convolutional neural network. The proposed model works in two blocks: the Block-I convolutional neural network (B-I CNN) detects the occurrence of disasters, and the Block-II convolutional neural network (B-II CNN) classifies natural disaster intensity types with different filters and parameters. The model is tested on 4428 natural images, and its performance is expressed as the following statistical values: sensitivity (SE), 97.54%; specificity (SP), 98.22%; accuracy rate (AR), 99.92%; precision (PRE), 97.79%; and F1-score (F1), 97.97%. The overall accuracy for the whole model is 99.92%, which is competitive and comparable with state-of-the-art algorithms.
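The two-block structure described above can be sketched roughly as follows; the filter counts, the 128×128 input size, and the four intensity classes are assumptions made for illustration, not the paper's exact architecture.

```python
# Block-I decides whether a disaster is present; Block-II classifies intensity.
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_block(filters):
    return [layers.Conv2D(filters, 3, activation="relu", padding="same"),
            layers.MaxPooling2D()]

# Block-I CNN: binary disaster / no-disaster detector.
block1 = models.Sequential(
    [layers.Input(shape=(128, 128, 3))]
    + conv_block(32) + conv_block(64)
    + [layers.Flatten(),
       layers.Dense(64, activation="relu"),
       layers.Dense(1, activation="sigmoid")])

# Block-II CNN: intensity classifier, run only on images Block-I flags as disasters.
block2 = models.Sequential(
    [layers.Input(shape=(128, 128, 3))]
    + conv_block(64) + conv_block(128)
    + [layers.Flatten(),
       layers.Dense(128, activation="relu"),
       layers.Dense(4, activation="softmax")])  # four illustrative intensity types
```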


Author(s):  
Young Hyun Kim ◽  
Eun-Gyu Ha ◽  
Kug Jin Jeon ◽  
Chena Lee ◽  
Sang-Sun Han

Objectives: This study aimed to develop a fully automated human identification method based on a convolutional neural network (CNN) with a large-scale dental panoramic radiograph (DPR) dataset. Methods: In total, 2,760 DPRs from 746 subjects who had 2 to 17 DPRs with various changes in image characteristics due to various dental treatments (tooth extraction, oral surgery, prosthetics, orthodontics, or tooth development) were collected. The test dataset included the latest DPR of each subject (746 images), and the other DPRs (2,014 images) were used for model training. A modified VGG16 model with two fully connected layers was applied for human identification. The proposed model was evaluated with rank-1, rank-3, and rank-5 accuracies, running time, and gradient-weighted class activation mapping (Grad-CAM)-applied images. Results: The model achieved rank-1, rank-3, and rank-5 accuracies of 82.84%, 89.14%, and 92.23%, respectively. All rank-1 accuracy values of the proposed model were above 80% regardless of changes in image characteristics. The average running time to train the proposed model was 60.9 sec per epoch, and the prediction time for the 746 test DPRs was short (3.2 sec/image). The Grad-CAM technique verified that the model automatically identified humans by focusing on identifiable dental information. Conclusion: The proposed model showed good performance in fully automatic human identification despite differing image characteristics of DPRs acquired from the same patients. Our model is expected to assist experts in fast and accurate identification by comparing large numbers of images and proposing identification candidates at high speed.
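A minimal Keras sketch of a VGG16 backbone followed by two fully connected layers, in the spirit of the modified model above; framing identification as a 746-way softmax over subjects and the 1024-unit hidden layer are assumptions for illustration.

```python
# VGG16 feature extractor plus two fully connected layers for subject identification.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16

NUM_SUBJECTS = 746  # one class per subject in the DPR dataset

base = VGG16(include_top=False, weights="imagenet", input_shape=(224, 224, 3))
x = layers.Flatten()(base.output)
x = layers.Dense(1024, activation="relu")(x)              # first FC layer
x = layers.Dense(NUM_SUBJECTS, activation="softmax")(x)   # second FC layer
model = Model(inputs=base.input, outputs=x)

# Rank-1/3/5 candidates can be read off the softmax probabilities at inference time.
```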


Memory management is an essential task for large-scale storage systems; on mobile platforms, storage errors arise from insufficient memory as well as additional task overhead. Many existing systems have presented different solutions for such issues, such as load balancing and load rebalancing. Many applications already installed on a mobile platform are rarely accessed by the user, yet they still occupy memory space in device storage. In the proposed research work, we describe dynamic resource allocation for mobile platforms using a deep learning approach. In real-world mobile systems, users may install different kinds of applications that are required only on an ad-hoc basis. Such applications may affect the execution performance and space complexity of the system, and sometimes they also affect the performance of other runnable applications. To eliminate such issues, we propose an approach to allocate runtime resources for data storage on mobile platforms. When the system is connected to a cloud data server, it stores the complete file system on a remote Virtual Machine (VM), and whenever a single application is required, it is immediately installed from the remote server to the local device. To develop the proposed system, we implemented a deep learning based Convolutional Neural Network (CNN); the algorithm was used within a TensorFlow environment, which reduces the time complexity for data storage as well as extraction.


2021 ◽  
Vol 16 ◽  
Author(s):  
Di Gai ◽  
Xuanjing Shen ◽  
Haipeng Chen

Background: Effective classification of the melting curve is conducive to measuring the specificity of the amplified products and excluding the influence of invalid data on subsequent experiments. Objective: In this paper, a convolutional neural network (CNN) classification model based on a dynamic filter is proposed, which can categorize the number of peaks in the melting curve image and distinguish the pollution data represented by noise peaks. Method: The main advantage of the proposed model is that it adopts a filter that changes with the input; this dynamic filter captures more information in the image, making network learning more accurate. In addition, a residual module is used to extract the characteristics of the melting curve, and the pooling operation is replaced with an atrous convolution to prevent the loss of context information. Result: In order to train the proposed model, a novel melting curve dataset is created, which includes a balanced dataset and an unbalanced dataset. The proposed method is compared with seven representative deep learning methods using six classification-based assessment criteria. Experimental results show that the proposed method not only markedly outperforms the other state-of-the-art methods in accuracy but also requires much less running time. Conclusion: This evidently proves that the proposed method is suitable for judging the specificity of amplification products according to the melting curve. Simultaneously, it overcomes the difficulties of manual selection, namely low efficiency and artificial bias.
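The residual module with atrous convolution in place of pooling can be sketched as below; the filter counts, dilation rate, and input size are illustrative assumptions, and the dynamic-filter mechanism itself is omitted for brevity.

```python
# Residual block that widens the receptive field with dilated (atrous) convolutions
# instead of shrinking the feature map with pooling.
import tensorflow as tf
from tensorflow.keras import layers

def atrous_residual_block(x, filters, dilation_rate=2):
    """Residual block using atrous convolution rather than pooling."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same",
                      dilation_rate=dilation_rate, activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same",
                      dilation_rate=dilation_rate)(y)
    # Match channel counts on the shortcut branch if needed.
    if shortcut.shape[-1] != filters:
        shortcut = layers.Conv2D(filters, 1, padding="same")(shortcut)
    y = layers.Add()([shortcut, y])
    return layers.Activation("relu")(y)

inputs = layers.Input(shape=(96, 96, 1))  # grayscale melting-curve image (assumed size)
x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
x = atrous_residual_block(x, 64)
```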


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Defeng Lv ◽  
Huawei Wang ◽  
Changchang Che

Purpose The purpose of this study is to achieve accurate intelligent fault diagnosis of rolling bearings. Design/methodology/approach To extract deep features of the original vibration signal and improve the generalization ability and robustness of the fault diagnosis model, this paper proposes a fault diagnosis method for rolling bearings based on a multiscale convolutional neural network (MCNN) and decision fusion. The original vibration signals are normalized and matrixed to form grayscale image samples. In addition, multiscale samples are obtained by convolving these samples with different convolution kernels. Subsequently, the MCNN is constructed for fault diagnosis, and its results are fed into a decision fusion model to obtain comprehensive fault diagnosis results. Findings Bearing data sets containing multiple multivariate time series are used to verify the effectiveness of the proposed method. The proposed model achieves 99.8% fault diagnosis accuracy; based on MCNN and decision fusion, the accuracy is improved by 0.7%–3.4% compared with other models. Originality/value The proposed model extracts deep general features of vibration signals with the MCNN and obtains robust fault diagnosis results with the decision fusion model. For long time series of vibration signals with noise, the proposed model can still achieve accurate fault diagnosis.
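As a rough illustration of the preprocessing described above, the sketch below normalizes a vibration segment into a grayscale image and builds multiscale views by smoothing it with kernels of different sizes; the 64×64 image size and the box-filter scales are assumptions, not the authors' exact kernels.

```python
# Turn a 1-D vibration segment into a grayscale image and derive multiscale views.
import numpy as np
from scipy.ndimage import uniform_filter

def signal_to_image(segment, size=64):
    """Normalize a vibration segment to [0, 1] and reshape it into a square image."""
    seg = segment[: size * size]
    seg = (seg - seg.min()) / (seg.max() - seg.min() + 1e-12)
    return seg.reshape(size, size)

def multiscale_views(image, scales=(1, 3, 5)):
    """Stack the original image with box-filtered copies at different kernel sizes."""
    return np.stack([image if s == 1 else uniform_filter(image, size=s)
                     for s in scales], axis=-1)
```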


2021 ◽  
Vol 55 (4) ◽  
pp. 88-98
Author(s):  
Maria Inês Pereira ◽  
Pedro Nuno Leite ◽  
Andry Maykol Pinto

Abstract The maritime industry has been following the paradigm shift toward the automation of typically intelligent procedures, with research regarding autonomous surface vehicles (ASVs) having seen an upward trend in recent years. However, this type of vehicle cannot be employed on a full scale until a few challenges are solved. For example, the docking process of an ASV is still a demanding task that currently requires human intervention. This research work proposes a volumetric convolutional neural network (vCNN) for the detection of docking structures from 3-D data, developed according to a balance between precision and speed. Another contribution of this article is a set of synthetically generated data regarding the context of docking structures. The dataset is composed of LiDAR point clouds, stereo images, GPS, and Inertial Measurement Unit (IMU) information. Several robustness tests carried out with different levels of Gaussian noise demonstrated an average accuracy of 93.34% and a deviation of 5.46% for the worst case. Furthermore, the system was fine-tuned and evaluated in a real commercial harbor, achieving an accuracy of over 96%. The developed classifier is able to detect different types of structures and works faster than other state-of-the-art methods that establish their performance in real environments.
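As a rough sketch of how 3-D LiDAR data can be prepared for a volumetric CNN such as the vCNN above, the snippet below rasterizes a point cloud into a binary occupancy grid; the 32×32×32 resolution and bounding box are assumptions, not the authors' preprocessing.

```python
# Convert an (N, 3) LiDAR point cloud into a voxel occupancy grid for 3-D convolution.
import numpy as np

def voxelize(points, grid_size=32, bounds=(-10.0, 10.0)):
    """Map (N, 3) LiDAR points into a (grid_size, grid_size, grid_size, 1) grid."""
    low, high = bounds
    # Normalize coordinates to [0, 1) inside the bounding box and drop outliers.
    norm = (points - low) / (high - low)
    mask = np.all((norm >= 0.0) & (norm < 1.0), axis=1)
    idx = (norm[mask] * grid_size).astype(int)
    grid = np.zeros((grid_size, grid_size, grid_size), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid[..., np.newaxis]  # add a channel axis for Conv3D layers
```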


2022 ◽  
pp. 155-170
Author(s):  
Lap-Kei Lee ◽  
Kwok Tai Chui ◽  
Jingjing Wang ◽  
Yin-Chun Fung ◽  
Zhanhui Tan

Our daily-life dependence on the Internet is ever-growing, which provides opportunities to discover valuable and subjective information using advanced techniques such as natural language processing and artificial intelligence. In this chapter, the research focus is a convolutional neural network for three-class (positive, neutral, and negative) cross-domain sentiment analysis. The model is enhanced in two ways. First, a similarity-label method facilitates alignment between the source and target domains to generate more labelled data. Second, term frequency-inverse document frequency (TF-IDF) and latent semantic indexing (LSI) are employed to compute the similarity between the source and target domains. Performance evaluation is conducted using three datasets: beauty reviews, toy reviews, and phone reviews. The proposed method enhances accuracy by 4.3–7.6% and reduces training time by 50%. The limitations of the research work are discussed and serve as rationales for future research directions.
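A minimal scikit-learn sketch of the TF-IDF based similarity computation between source- and target-domain reviews is shown below; the example texts are invented, and the LSI step and the similarity-label rule are omitted.

```python
# Cross-domain similarity via TF-IDF vectors and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source_reviews = ["this moisturizer feels great", "the lipstick colour is dull"]
target_reviews = ["the toy truck feels sturdy", "this puzzle is dull and boring"]

# Fit one vocabulary over both domains so the vectors live in the same space.
vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(source_reviews + target_reviews)
src, tgt = tfidf[: len(source_reviews)], tfidf[len(source_reviews):]

# Pairwise cosine similarity between every source review and every target review.
similarity = cosine_similarity(src, tgt)
print(similarity.shape)  # (2, 2)
```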


2019 ◽  
Vol 2019 ◽  
pp. 1-12 ◽  
Author(s):  
Jianfang Cao ◽  
Chenyan Wu ◽  
Lichao Chen ◽  
Hongyan Cui ◽  
Guoqing Feng

In today’s society, image resources are everywhere, and the number of available images can be overwhelming. Determining how to rapidly and effectively query, retrieve, and organize image information has become a popular research topic, and automatic image annotation is the key to text-based image retrieval. If the annotated semantic images are not balanced among the training samples, the labeling accuracy for low-frequency labels can be poor. In this study, a dual-channel convolutional neural network (DCCNN) was designed to improve the accuracy of automatic labeling. The model integrates two convolutional neural network (CNN) channels with different structures. One channel is trained on the low-frequency samples, increasing the proportion of low-frequency samples seen by the model, and the other is trained on the full training set. In the labeling process, the outputs of the two channels are fused to obtain a labeling decision. We verified the proposed model on the Caltech-256, Pascal VOC 2007, and Pascal VOC 2012 standard datasets. On the Pascal VOC 2012 dataset, the proposed DCCNN model achieves an overall labeling accuracy of up to 93.4% after 100 training iterations: 8.9% higher than the CNN and 15% higher than the traditional method. A similar accuracy can be achieved by the CNN only after 2,500 training iterations. On the 50,000-image dataset from Caltech-256 and Pascal VOC 2012, the performance of the DCCNN is relatively stable; it achieves an average labeling accuracy above 93%. In contrast, the CNN reaches an accuracy of only 91% even after extended training. Furthermore, the proposed DCCNN achieves a labeling accuracy for low-frequency words approximately 10% higher than that of the CNN, which further verifies the reliability of the proposed model.
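The dual-channel fusion idea can be sketched in Keras as follows; the layer sizes, the 256-way output, and averaging the two softmax outputs are assumptions for illustration rather than the paper's exact fusion rule.

```python
# Two CNN channels with separate training regimes; their outputs are fused at inference.
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_LABELS = 256  # e.g. the Caltech-256 categories

def make_channel():
    return models.Sequential([
        layers.Input(shape=(128, 128, 3)),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(NUM_LABELS, activation="softmax")])

low_freq_channel = make_channel()   # trained on a low-frequency-enriched sample
full_channel = make_channel()       # trained on the whole training set

inputs = layers.Input(shape=(128, 128, 3))
fused = layers.Average()([low_freq_channel(inputs), full_channel(inputs)])
dccnn = tf.keras.Model(inputs=inputs, outputs=fused)
```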

