An Automatic Mass Screening System for Cervical Cancer Detection Based on Convolutional Neural Network

2020 ◽  
Vol 2020 ◽  
pp. 1-14
Author(s):  
Aziz-ur-Rehman ◽  
Nabeel Ali ◽  
Imtiaz A. Taj ◽  
Muhammad Sajid ◽  
Khasan S. Karimov

Cervical cancer is the fourth most common type of cancer and a leading cause of mortality among women across the world. Various screening tests are used for its diagnosis, but the most popular one is the Papanicolaou smear test, in which cell cytology is carried out. It is a reliable tool for early identification of cervical cancer, but there is always a chance of misdiagnosis because of possible errors in human observation. In this paper, an auto-assisted cervical cancer screening system is proposed that uses a convolutional neural network trained on a cervical cell database. The network is trained through transfer learning, with the initial weights obtained from training on the ImageNet dataset. After fine-tuning the network on the cervical cell database, a feature vector is extracted from its last fully connected layer. For the final classification/screening of the cell samples, three different classifiers are proposed: softmax regression (SR), support vector machine (SVM), and a GentleBoost ensemble of decision trees (GEDT). The performance of the proposed screening system is evaluated under two testing protocols, a 2-class problem and a 7-class problem, on the Herlev database. The classification accuracies of SR, SVM, and GEDT for the 2-class problem are 98.8%, 99.5%, and 99.6%, respectively, and for the 7-class problem they are 97.21%, 98.12%, and 98.85%, respectively. These results show that the proposed system outperforms its previous counterparts under various testing conditions.
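As an illustration of the final classification stage, here is a minimal numpy sketch of the softmax regression (SR) variant, assuming feature vectors have already been extracted from the network's last fully connected layer. The feature dimension, learning rate, and data are hypothetical, not taken from the paper:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def train_softmax_regression(X, y, n_classes, lr=0.1, epochs=200):
    """Fit a linear softmax classifier on CNN feature vectors X (n, d)."""
    n, d = X.shape
    W = np.zeros((d, n_classes))
    b = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]            # one-hot labels
    for _ in range(epochs):
        P = softmax(X @ W + b)          # class probabilities
        grad = P - Y                    # gradient of the cross-entropy loss
        W -= lr * X.T @ grad / n
        b -= lr * grad.mean(axis=0)
    return W, b

def predict(X, W, b):
    return softmax(X @ W + b).argmax(axis=1)
```

The SVM and GentleBoost variants would consume the same feature vectors; only the classifier on top of the CNN features changes.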

2020 ◽  
Vol 13 (5) ◽  
pp. 2219-2239 ◽  
Author(s):  
Georgios Touloupas ◽  
Annika Lauber ◽  
Jan Henneberger ◽  
Alexander Beck ◽  
Aurélien Lucchi

Abstract. During typical field campaigns, millions of cloud particle images are captured with imaging probes. Our interest lies in classifying these particles in order to compute the statistics needed for understanding clouds. Given the large volume of collected data, this raises the need for an automated classification approach. Traditional classification methods that require manually extracted features (e.g., decision trees and support vector machines) show reasonable performance when trained and tested on data coming from a single dataset. However, they often have difficulty generalizing to test sets from other datasets where the distribution of the features might be significantly different. In practice, we found that with those methods, each new holographic-imager dataset requires hand-labeling a huge amount of data. Convolutional neural networks have the potential to overcome this problem due to their ability to learn complex nonlinear models directly from the images instead of from pre-engineered features, as well as by relying on powerful regularization techniques. We show empirically that a convolutional neural network trained on cloud particles from holographic imagers generalizes well to unseen datasets. Moreover, fine-tuning the same network with a small number (256) of training images improves the classification accuracy. Thus, automated classification with a convolutional neural network not only reduces the hand-labeling effort for new datasets but is also no longer the main error source for the classification of small particles.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Haoyan Yang ◽  
Jiangong Ni ◽  
Jiyue Gao ◽  
Zhongzhi Han ◽  
Tao Luan

Abstract. Crop variety identification is an essential link in seed detection, phenotype collection, and scientific breeding. This paper takes the peanut as an example to explore a new method for crop variety identification. Peanut is a crucial oil and cash crop; the yield and quality of different peanut varieties differ, so it is necessary to identify and classify different varieties. Traditional image processing methods for peanut variety identification need to extract many features and suffer from defects such as strong subjectivity and insufficient generalization ability. Based on deep learning, this paper improves the deep convolutional neural network VGG16 and applies the improved VGG16 to the identification and classification of 12 peanut varieties. First, peanut pod images of the 12 varieties obtained by a scanner were preprocessed with gray-scaling, binarization, and ROI extraction to form a peanut pod dataset of 3365 images. A series of improvements were then made to VGG16: the F6 and F7 fully connected layers were removed; a Conv6 layer and a global average pooling layer were added; the three convolutional layers of conv5 were changed into a depth concatenation; and Batch Normalization (BN) layers were added to the model. In addition, fine-tuning was carried out on the improved VGG16: the positions of the BN layers and the number of filters in Conv6 were adjusted. Finally, the training and test results of the improved VGG16 were compared with those of the classic models AlexNet, VGG16, GoogLeNet, ResNet18, ResNet50, SqueezeNet, DenseNet201, and MobileNetv2 to verify its superiority. The average accuracy of the improved VGG16 on the peanut pod test set was 96.7%, which was 8.9% higher than that of VGG16 and 1.6–12.3% higher than that of the other classical models. Supplementary experiments were also carried out to prove the robustness and generality of the improved VGG16: applied with the same method to the identification and classification of seven corn grain varieties, it achieved an average accuracy of 90.1%. The experimental results show that the improved VGG16 proposed in this paper can identify and classify peanut pods of different varieties, proving the feasibility of convolutional neural networks for variety identification and classification. The model proposed in this experiment has positive significance for exploring the identification and classification of other crop varieties.
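The effect of swapping the dense F6/F7 layers for global average pooling can be sketched in a few lines of numpy. The (512, 7, 7) feature-map shape is what standard VGG16 produces for 224x224 input, but the rest is a hypothetical illustration rather than the authors' code:

```python
import numpy as np

def global_average_pool(feature_maps):
    """Collapse each (H, W) feature map to one scalar.

    feature_maps: array of shape (channels, H, W), e.g. the output of
    the last convolutional block. The resulting (channels,) vector
    feeds the classifier directly, replacing the parameter-heavy
    F6/F7 fully connected layers with a parameter-free operation.
    """
    return feature_maps.mean(axis=(1, 2))

# VGG16's final conv block emits a (512, 7, 7) tensor for 224x224 input;
# global average pooling turns it into a 512-dimensional descriptor.
vec = global_average_pool(np.ones((512, 7, 7)))
assert vec.shape == (512,)
```

Besides shrinking the parameter count, this makes the classification head independent of the input resolution, which helps when fine-tuning on a new dataset.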


2021 ◽  
Vol 13 (5) ◽  
pp. 347-360
Author(s):  
Qinqing Kang ◽  
Xiong Ding

Based on the case images in the smart city management system (hereinafter "Smart City Management"), this paper proposes an improved deep convolutional neural network algorithm that exploits deep learning's ability to learn image features on its own, so that case images can be classified quickly and accurately and case classification in the city management system is automated. ZCA (zero-phase component analysis) whitening is used to reduce the correlation between image data features; an eight-layer convolutional neural network model is built to classify the whitened images; rectified linear units (ReLU) in the convolutional layers accelerate the training process; and dropout in the pooling layers prevents the algorithm from overfitting. The back-propagation (BP) algorithm is used for optimization in the network fine-tuning stage, improving the robustness of the algorithm. With this method, two-class experiments were conducted on the two case-image types of road traffic and city appearance environment: the accuracy reached 97.5% and the F1-score reached 0.98, exceeding LSVM (Lagrangian support vector machine), SAE (sparse autoencoder), and a traditional CNN (convolutional neural network). The method was also tested in four-class experiments on four case types: electric vehicles, littering, illegal parking of motor vehicles, and mess around garbage bins. The accuracy is 90.5% and the F1-score is 0.91, still exceeding LSVM, SAE, a traditional CNN, and other methods.
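ZCA whitening decorrelates the input features while keeping the result as close as possible to the original image space, which is why it is often preferred over plain PCA whitening for images. A minimal numpy sketch (the epsilon regularizer is a common default, not a value from the paper):

```python
import numpy as np

def zca_whiten(X, eps=1e-5):
    """ZCA-whiten the rows of X (n_samples, n_features).

    Centers the data, then applies U diag(1/sqrt(S + eps)) U^T, where
    U, S come from the SVD of the covariance matrix. This removes
    correlations between features while staying as close as possible
    to the original data (unlike plain PCA whitening, which rotates it).
    """
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / Xc.shape[0]
    U, S, _ = np.linalg.svd(cov)
    W = U @ np.diag(1.0 / np.sqrt(S + eps)) @ U.T
    return Xc @ W
```

After whitening, the empirical covariance of the output is approximately the identity matrix, so the subsequent convolutional layers see decorrelated inputs.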




Symmetry ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2140
Author(s):  
Jing Chen ◽  
Qi Liu ◽  
Lingwang Gao

Due to the benefits of convolutional neural networks (CNNs) in image classification, they have been extensively used in the computerized classification and recognition of crop pests. The aim of the present study is to develop a deep convolutional neural network that automatically identifies 14 species of tea pests possessing symmetry properties. (1) As there are not enough tea pest images available to train a deep convolutional neural network from scratch, we propose to classify tea pest images by fine-tuning the VGGNET-16 deep convolutional neural network. (2) The performance of our method is evaluated through comparison with the traditional machine learning algorithms support vector machine (SVM) and multi-layer perceptron (MLP). (3) All three methods can identify tea tree pests: the proposed convolutional neural network reaches a classification accuracy of up to 97.75%, while MLP and SVM reach 76.07% and 68.81%, respectively. Our proposed method performs best among the assessed recognition algorithms. The experimental results also show that fine-tuning is a powerful and efficient tool for small datasets in practical problems.


2019 ◽  
Vol 24 (3) ◽  
pp. 220-228
Author(s):  
Gusti Alfahmi Anwar ◽  
Desti Riminarsih

Panthera is a genus of the cat family with four popular species: tiger, jaguar, leopard, and lion. The lion has a golden color and no coat pattern; the tiger has a striped pattern with long stripes; the jaguar has a larger body than the leopard and wider rosettes; while the leopard has a slightly slimmer body than the jaguar and narrower rosettes. In this study, the Panthera genus (tiger, jaguar, leopard, and lion) is classified using a convolutional neural network. The convolutional neural network model used has 1 input layer, 5 convolutional layers, and 2 fully connected layers. The dataset consists of images of tigers, jaguars, leopards, and lions: 3840 training images, 960 validation images, and 800 test images. The model achieved an accuracy of 92.31% on training and 81.88% on validation, and testing the model on the test set yielded 68%. The prediction accuracies, taken from the F1-scores on the test set, were 78% for tiger, 70% for jaguar, 37% for leopard, and 74% for lion. The leopard obtained the lowest accuracy of the four animals, but better than the results of previous research.
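The per-class accuracies above are F1-scores, the harmonic mean of precision and recall. A small self-contained sketch of that computation (the labels are hypothetical):

```python
def f1_score(y_true, y_pred, positive):
    """Per-class F1: harmonic mean of precision and recall."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because F1 is computed per class, a class that is frequently confused with another can score far below the overall accuracy, which is how the leopard class here ends up at 37%.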


Author(s):  
Niha Kamal Basha ◽  
Aisha Banu Wahab

Absence seizure is a type of brain disorder in which the subject experiences sudden lapses in attention, that is, sudden changes in brain stimulation. This disorder is found mostly in children (5-18 years). Electroencephalogram (EEG) signals are captured with a long-term monitoring system and analyzed individually. In this paper, a convolutional neural network is proposed that, after pre-processing, extracts single-channel EEG seizure features (power, log sum of the wavelet transform, cross-correlation, and mean phase variance of each frame in a window) and classifies them into the normal or absence seizure class, empowering the monitoring system with automatic detection of absence seizures. The training data are electroencephalograms collected from normal subjects and subjects with absence seizures. The objective is automatic detection of absence seizures using a single-channel electroencephalogram signal as input; the data are used to train the proposed convolutional neural network to extract and classify absence seizures. The convolutional neural network consists of three layers: (1) a convolutional layer, which extracts the features in the form of a vector; (2) a pooling layer, which reduces the dimensionality of the convolutional layer's output; and (3) a fully connected layer, in which the softmax activation function produces the probability distribution over the output classes. This paper describes the automatic detection of absence seizures in detail and provides a comparative analysis of classification between a support vector machine and the convolutional neural network. The proposed approach outperforms the support vector machine by 80% in automatic detection of absence seizures, validated using a confusion matrix.
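As an illustration of the frame-wise feature extraction described above, here is a minimal numpy sketch computing the power of each frame in a sliding window over a single-channel signal. The frame length, hop size, and signal are hypothetical; the wavelet, cross-correlation, and phase features would be computed per frame in the same fashion:

```python
import numpy as np

def frame_power(signal, frame_len, hop):
    """Mean squared amplitude (power) of each frame of a 1-D signal."""
    n_frames = 1 + (len(signal) - frame_len) // hop
    powers = np.empty(n_frames)
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        powers[i] = np.mean(frame ** 2)
    return powers
```

The resulting per-frame feature vectors, stacked over a window, form the input that a classifier (CNN or SVM) labels as normal or absence seizure.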


2021 ◽  
Author(s):  
Satoshi Suzuki ◽  
Shoichiro Takeda ◽  
Ryuichi Tanida ◽  
Hideaki Kimata ◽  
Hayaru Shouno

Author(s):  
Wanli Wang ◽  
Botao Zhang ◽  
Kaiqi Wu ◽  
Sergey A Chepinskiy ◽  
Anton A Zhilenkov ◽  
...  

In this paper, a hybrid method based on deep learning is proposed to visually classify terrains encountered by mobile robots. Considering the limited computing resources on mobile robots and the requirement for high classification accuracy, the proposed hybrid method combines a convolutional neural network with a support vector machine to keep a high classification accuracy while improving efficiency. The key idea is that the convolutional neural network performs a multi-class classification while, simultaneously, the support vector machine performs a two-class classification aimed at the one kind of terrain that users are most concerned with. The results of the two classifications are consolidated to obtain the final classification result. The convolutional neural network used in this method is modified for on-board use on mobile robots: to enhance efficiency, it has a simple architecture. The convolutional neural network and the support vector machine are trained and tested on RGB images of six kinds of common terrain. Experimental results demonstrate that this method helps robots classify terrains accurately and efficiently; the proposed method therefore has significant potential for on-board use on mobile robots.
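The consolidation step can be illustrated as a simple decision rule in which the SVM's binary verdict on the critical terrain overrides the CNN's multi-class output when the two disagree. This is a hypothetical sketch: the fusion rule and the class names are assumptions, as the abstract does not detail them:

```python
def fuse_predictions(cnn_label, svm_is_critical, critical_label="mud"):
    """Combine a CNN multi-class result with an SVM two-class result.

    If the SVM flags the critical terrain, trust it over the CNN;
    if the SVM rejects it but the CNN predicted it, fall back to an
    'unknown' marker so the robot can react conservatively.
    """
    if svm_is_critical:
        return critical_label
    if cnn_label == critical_label:
        return "unknown"
    return cnn_label
```

Dedicating a binary classifier to the one terrain users care most about trades a little extra computation for higher recall on that class, which fits the safety-oriented setting of mobile robots.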

