Single-cell conventional Pap smear image classification using pre-trained deep neural network architectures

2021 ◽  
Vol 3 (1) ◽  
Author(s):  
Mohammed Aliy Mohammed ◽  
Fetulhak Abdurahman ◽  
Yodit Abebe Ayalew

Abstract Background Automating cytology-based cervical cancer screening could alleviate the shortage of skilled pathologists in developing countries. To date, computer vision researchers have attempted numerous semi- and fully automated approaches to address this need, and leveraging the astonishing accuracy and reproducibility of deep neural networks has become common practice. In this regard, the purpose of this study is to classify single-cell Pap smear (cytology) images using pre-trained deep convolutional neural network (DCNN) image classifiers. We fine-tuned the top ten pre-trained DCNN image classifiers and evaluated them on five-class single-cell Pap smear images from the SIPaKMeD dataset. The pre-trained DCNN image classifiers were selected from Keras Applications based on their top-1 accuracy. Results Our experiments demonstrated that, among the selected top ten pre-trained DCNN image classifiers, DenseNet169 performed best, with an average accuracy, precision, recall, and F1-score of 0.990, 0.974, 0.974, and 0.974, respectively. Moreover, it surpassed the benchmark accuracy reported by the creators of the dataset by 3.70%. Conclusions Although DenseNet169 is small compared with the other pre-trained DCNN image classifiers evaluated, it is still not suitable for mobile or edge devices. Further experimentation with mobile or small-size DCNN image classifiers is required to extend the applicability of the models to real-world demands. In addition, since all experiments used the SIPaKMeD dataset, further experiments with new datasets will be needed to improve the generalizability of the models.

2021 ◽  
Author(s):  
Mohammed Aliy Mohammed ◽  
Fetulhak Abdurahman ◽  
Yodit Abebe Ayalew

Abstract Background: Automating Pap smear based cervical cancer screening could alleviate the shortage of skilled personnel, mainly pathologists, in developing countries. The astonishing accuracy and reproducibility of deep neural networks in computer vision have made them a frontline candidate to tackle such challenges. In this regard, the main purpose of this research project is to classify single-cell Pap smear microscopic images using the architectures of pre-trained deep convolutional neural network image classifiers. We trained the top ten pre-trained image-classification architectures on five-class single-cell Pap smear images from SIPaKMeD. The architectures were selected from Keras Applications based on their top-1 accuracy. Then, the output layers of the selected architectures were fine-tuned from 1000 classes to five classes and the networks were retrained. Results: Our experiments demonstrated that DenseNet169 outperformed the other selected pre-trained architectures, with an average accuracy, precision, recall, and F1-score of 0.990, 0.974, 0.974, and 0.974, respectively. Moreover, it surpassed the benchmark accuracy reported by the creators of the dataset by 3.70%. Conclusions: Although DenseNet169 is small compared with the other architectures evaluated, it is still not suitable for mobile or edge devices. Further experimentation with mobile or small-size image-classification architectures is required.
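Below is a minimal Keras sketch of the fine-tuning step described above: a DenseNet169 backbone from Keras Applications whose 1000-class ImageNet head is replaced by a five-class softmax for the SIPaKMeD cell categories. The input size, dropout, optimizer, and training settings are assumptions for illustration, not the authors' exact configuration.

```python
import tensorflow as tf

# DenseNet169 from Keras Applications, without the 1000-class ImageNet head.
backbone = tf.keras.applications.DenseNet169(
    include_top=False, weights="imagenet", input_shape=(224, 224, 3), pooling="avg"
)

# New five-class softmax head for the SIPaKMeD single-cell categories.
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_ds, validation_data=val_ds, epochs=20)
```

The same pattern applies to the other nine backbones by swapping the constructor from Keras Applications.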


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Anand Ramachandran ◽  
Steven S. Lumetta ◽  
Eric W. Klee ◽  
Deming Chen

Abstract Background Modern Next Generation- and Third Generation- Sequencing methods such as Illumina and PacBio Circular Consensus Sequencing platforms provide accurate sequencing data. Parallel developments in Deep Learning have enabled the application of Deep Neural Networks to variant calling, surpassing the accuracy of classical approaches in many settings. DeepVariant, arguably the most popular among such methods, transforms the problem of variant calling into one of image recognition where a Deep Neural Network analyzes sequencing data that is formatted as images, achieving high accuracy. In this paper, we explore an alternative approach to designing Deep Neural Networks for variant calling, where we use meticulously designed Deep Neural Network architectures and customized variant inference functions that account for the underlying nature of sequencing data instead of converting the problem to one of image recognition. Results Results from 27 whole-genome variant calling experiments spanning Illumina, PacBio and hybrid Illumina-PacBio settings suggest that our method allows vastly smaller Deep Neural Networks to outperform the Inception-v3 architecture used in DeepVariant for indel and substitution-type variant calls. For example, our method reduces the number of indel call errors by up to 18%, 55% and 65% for Illumina, PacBio and hybrid Illumina-PacBio variant calling respectively, compared to a similarly trained DeepVariant pipeline. In these cases, our models are between 7 and 14 times smaller. Conclusions We believe that the improved accuracy and problem-specific customization of our models will enable more accurate pipelines and further method development in the field. HELLO is available at https://github.com/anands-repo/hello
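As a rough illustration of the design direction described above, operating on sequencing-data tensors rather than rendered images, the following hypothetical sketch builds a small 1D convolutional network over per-position pileup features. The window size, feature set, and label scheme are assumptions; this is not the HELLO architecture.

```python
import tensorflow as tf

WINDOW = 101      # assumed number of reference positions around a candidate site
N_FEATURES = 10   # assumed per-position features (base counts, strand, qualities, ...)
N_CLASSES = 3     # simplified genotype call: hom-ref / het / hom-alt

# A deliberately small 1D CNN over pileup-style features instead of an image classifier.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.Conv1D(32, 5, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(64, 5, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
print(f"parameters: {model.count_params():,}")  # orders of magnitude below Inception-v3
```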


2019 ◽  
Vol 77 (4) ◽  
pp. 1340-1353 ◽  
Author(s):  
Geoff French ◽  
Michal Mackiewicz ◽  
Mark Fisher ◽  
Helen Holah ◽  
Rachel Kilburn ◽  
...  

Abstract We report on the development of a computer vision system that analyses video from CCTV systems installed on fishing trawlers for the purpose of monitoring and quantifying discarded fish catch. Our system is designed to operate in spite of the challenging computer vision problem posed by conditions on-board fishing trawlers. We describe the approaches developed for isolating and segmenting individual fish and for species classification. We present an analysis of the variability of manual species identification performed by expert human observers and contrast the performance of our species classifier against this benchmark. We also quantify the effect of the domain gap on the performance of modern deep neural network-based computer vision systems.


Author(s):  
Mohammad Javad Shafiee ◽  
Paul Fieguth ◽  
Alexander Wong

Deep neural networks have been shown to outperform conventional state-of-the-art approaches in several structured prediction applications. While high-performance computing devices such as GPUs have made developing very powerful deep neural networks possible, it is not feasible to run these networks on low-cost, low-power computing devices such as embedded CPUs or even embedded GPUs. As such, there has been a lot of recent interest in producing efficient deep neural network architectures that can be run on small computing devices. Motivated by this, the idea of StochasticNets was introduced, where deep neural networks are formed by leveraging random graph theory. It has been shown that StochasticNets can form new networks with 2X or 3X architectural efficiency while maintaining modeling accuracy. Motivated by these promising results, here we investigate the idea of StochasticNet in StochasticNet (SiS), where highly efficient deep neural networks with Network in Network (NiN) architectures are formed in a stochastic manner. Such networks have an intertwining structure composed of convolutional layers and micro neural networks to boost the modeling accuracy. The experimental results show that SiS can form deep neural networks with NiN architectures that have 4X greater architectural efficiency with only a 2% drop in accuracy for the CIFAR10 dataset. The results are even more promising for the SVHN dataset, where SiS formed deep neural networks with NiN architectures that have 11.5X greater architectural efficiency with only a 1% decrease in modeling accuracy.
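The following is a minimal sketch of the random-graph idea behind StochasticNets, under the simplifying assumption that each possible connection in a layer is realized independently with a fixed probability; it is illustrative only and not the authors' network-formation procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_dense_mask(n_in, n_out, keep_prob=0.5):
    """Erdos-Renyi-style connectivity: each possible synapse is realized
    independently with probability keep_prob (illustrative assumption)."""
    return (rng.random((n_in, n_out)) < keep_prob).astype(np.float32)

# Weights are only trained/used where the random graph placed an edge.
mask = stochastic_dense_mask(256, 128, keep_prob=0.5)   # roughly 2X fewer connections
weights = rng.normal(scale=0.05, size=(256, 128)).astype(np.float32) * mask

x = rng.normal(size=(1, 256)).astype(np.float32)
hidden = np.maximum(0.0, x @ weights)                   # ReLU over the sparse layer
print(f"realized connections: {int(mask.sum())} of {mask.size}")
```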


2020 ◽  
Vol 44 (6) ◽  
pp. 968-977
Author(s):  
M.O. Kalinina ◽  
P.L. Nikolaev

Nowadays, deep neural networks play a significant part in various fields of human activity. They are especially beneficial in areas dealing with large amounts of data and with lengthy pipelines for obtaining and processing information from the visual environment. This article deals with the development of a convolutional neural network based on the YOLO architecture, intended for real-time book recognition. The creation of an original dataset and the training of the deep neural network are described. The structure of the resulting neural network is presented, and the metrics most frequently used for estimating the quality of network performance are considered. A brief review of the existing types of neural network architectures is also given. The YOLO architecture possesses a number of advantages that allow it to compete successfully with other models and make it the most suitable choice for an object-detection network, since it significantly mitigates some common weaknesses of such networks (for example, in recognizing similar-looking, same-color book covers or slanted books). The results obtained from training the deep neural network allow it to serve as a basis for developing book-spine recognition software.
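As an illustration of the kind of metric mentioned above for judging detection quality, here is a generic intersection-over-union (IoU) function for axis-aligned boxes; it is a standard definition, not the authors' evaluation code.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero if the boxes do not intersect).
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A predicted book-spine box vs. its ground-truth box.
print(iou((10, 20, 110, 320), (15, 25, 115, 310)))  # ~0.86
```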


2021 ◽  
Vol 15 (1) ◽  
pp. 141-148
Author(s):  
Suprava Patnaik ◽  
Sourodip Ghosh ◽  
Richik Ghosh ◽  
Shreya Sahay

Skeletal maturity estimation is routinely performed by pediatricians and radiologists to assess growth and hormonal disorders. Methods that rely on regression techniques are incompatible with low-resolution digital samples and introduce bias when the evaluation protocols are applied to feature assessment on coarse hand X-ray images. This paper presents a comparative analysis of two deep neural network architectures built on pre-trained Inception-ResNet-V2 and Xception base models. On 12,611 hand X-ray images from the RSNA Bone Age database, the Inception-ResNet-V2 and Xception models achieved R-squared values of 0.935 and 0.942, respectively. The corresponding MAEs achieved by the two models are 12.583 and 13.299, even when trained on very few instances with negligible risk of overfitting.
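A minimal Keras sketch of the transfer-learning regression setup this abstract implies is shown below: a pre-trained Xception backbone with a single-output head trained with an MAE loss. The input size, head layout, and optimizer are assumptions rather than the authors' configuration.

```python
import tensorflow as tf

# Pre-trained Xception backbone without its ImageNet classification head
# (global average pooling gives one feature vector per image).
backbone = tf.keras.applications.Xception(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3), pooling="avg"
)

# Single-output regression head: predicted bone age (e.g. in months).
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4), loss="mae", metrics=["mae"])
# model.fit(train_images, train_ages, validation_split=0.1, epochs=10)
```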


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Fardin Ghorbani ◽  
Sina Beyraghi ◽  
Javad Shabanpour ◽  
Homayoon Oraizi ◽  
Hossein Soleimani ◽  
...  

Abstract Beyond the scope of conventional metasurface design, which requires substantial computational resources and time, an inverse design approach using machine learning algorithms promises an effective way to design metasurfaces. In this paper, benefiting from a Deep Neural Network (DNN), an inverse design procedure for a metasurface in an ultra-wide working frequency band is presented, in which the output unit cell structure can be computed directly from a specified design target. To reach the highest working frequency for training the DNN, we consider 8 ring-shaped patterns to generate resonant notches across a wide range of working frequencies from 4 to 45 GHz. We propose two network architectures. In one architecture, we restrict the output of the DNN so that the network can only generate the metasurface structure from the input of 8 ring-shaped patterns. This approach drastically reduces the computational time while keeping the network’s accuracy above 91%. We show that our DNN-based model can satisfactorily generate the output metasurface structure with an average accuracy of over 90% in both network architectures. Direct determination of the metasurface structure without time-consuming optimization procedures, an ultra-wide working frequency band, and high average accuracy provide an inspiring platform for engineering projects without the need for complex electromagnetic theory.
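A hypothetical Keras sketch of the restricted inverse-design mapping described above follows: a fully connected network that takes a sampled design target over 4-45 GHz and outputs presence probabilities for the 8 ring-shaped patterns. The input sampling, layer sizes, and loss are assumptions, not the authors' architecture.

```python
import tensorflow as tf

N_FREQ_SAMPLES = 256   # assumed sampling of the 4-45 GHz design target
N_RINGS = 8            # the 8 ring-shaped patterns mentioned in the abstract

# Fully connected inverse-design network: target response -> ring pattern.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(N_FREQ_SAMPLES,)),
    tf.keras.layers.Dense(512, activation="relu"),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(N_RINGS, activation="sigmoid"),  # presence of each ring
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["binary_accuracy"])
# model.fit(target_responses, ring_labels, epochs=50, batch_size=64)
```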


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Florian Stelzer ◽  
André Röhm ◽  
Raul Vicente ◽  
Ingo Fischer ◽  
Serhiy Yanchuk

Abstract Deep neural networks are among the most widely applied machine learning tools, showing outstanding performance in a broad range of tasks. We present a method for folding a deep neural network of arbitrary size into a single neuron with multiple time-delayed feedback loops. This single-neuron deep neural network comprises only a single nonlinearity and appropriately adjusted modulations of the feedback signals. The network states emerge in time as a temporal unfolding of the neuron’s dynamics. By adjusting the feedback modulations within the loops, we adapt the network’s connection weights. These connection weights are determined via a back-propagation algorithm, where both the delay-induced and local network connections must be taken into account. Our approach can fully represent standard Deep Neural Networks (DNN), encompasses sparse DNNs, and extends the DNN concept toward dynamical systems implementations. The new method, which we call Folded-in-time DNN (Fit-DNN), exhibits promising performance in a set of benchmark tasks.
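The temporal-unfolding idea can be caricatured in a few lines of discrete-time code: a single nonlinearity is reused step by step, and the activations of a hidden layer appear sequentially in time, each driven by delayed, modulation-weighted copies of earlier states. This toy ignores the continuous-time dynamics and the delay-aware back-propagation of Fit-DNN and is only meant to convey the folding concept.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def folded_dense_layer(prev_states, weights, biases):
    """Emulate one dense layer with a single reused nonlinearity: the j-th
    activation is produced at "time step" j from delayed copies of the
    previous layer's states, weighted by the feedback modulations."""
    n_hidden = weights.shape[1]
    states = np.empty(n_hidden)
    for j in range(n_hidden):                # one time step per hidden node
        drive = prev_states @ weights[:, j] + biases[j]
        states[j] = relu(drive)              # the same single nonlinearity each step
    return states

rng = np.random.default_rng(0)
x = rng.normal(size=4)                                             # input signal
h = folded_dense_layer(x, rng.normal(size=(4, 6)), np.zeros(6))    # hidden layer, unfolded in time
y = folded_dense_layer(h, rng.normal(size=(6, 2)), np.zeros(2))    # output layer, unfolded in time
print(y)
```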


2021 ◽  
Author(s):  
Luke Gundry ◽  
Gareth Kennedy ◽  
Alan Bond ◽  
Jie Zhang

The use of Deep Neural Networks (DNNs) for the classification of electrochemical mechanisms based on training with simulations of the initial cycle of potential have been reported. In this paper,...


2021 ◽  
pp. 1-15
Author(s):  
Wenjun Tan ◽  
Luyu Zhou ◽  
Xiaoshuo Li ◽  
Xiaoyu Yang ◽  
Yufei Chen ◽  
...  

BACKGROUND: The distribution of pulmonary vessels in computed tomography (CT) and computed tomography angiography (CTA) images of the lung is important for diagnosing disease, formulating surgical plans and pulmonary research. PURPOSE: Based on the pulmonary vascular segmentation task of the International Symposium on Image Computing and Digital Medicine 2020 challenge, this paper reviews 12 different pulmonary vascular segmentation algorithms for lung CT and CTA images and then objectively evaluates and compares their performance. METHODS: First, we present the annotated reference dataset of lung CT and CTA images. A subset of the dataset consisting of 7,307 slices for training and 3,888 slices for testing was made available to participants. Second, by analyzing the performance of the convolutional neural networks submitted by 12 different institutions for pulmonary vascular segmentation, the reasons for certain defects and possible improvements are summarized. The models are mainly based on U-Net, attention mechanisms, GANs, and multi-scale fusion networks. Performance is measured in terms of the Dice coefficient, over-segmentation ratio and under-segmentation rate. Finally, we discuss several proposed methods to improve pulmonary vessel segmentation results using deep neural networks. RESULTS: Compared with the annotated ground truth from both lung CT and CTA images, most of the 12 deep neural network algorithms do an admirable job of pulmonary vascular extraction and segmentation, with Dice coefficients ranging from 0.70 to 0.85. The Dice coefficients of the top three algorithms are about 0.80. CONCLUSIONS: Study results show that integrating methods that consider spatial information, fuse multi-scale feature maps, or apply effective post-processing into the deep neural network training and optimization process is significant for further improving the accuracy of pulmonary vascular segmentation.
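For reference, a minimal implementation of the Dice coefficient for binary masks is given below, together with one simple (assumed, not the challenge's official) definition of over- and under-segmentation relative to the reference mask.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice overlap between two binary masks: 2|P∩T| / (|P| + |T|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)

def over_under_segmentation(pred, truth, eps=1e-8):
    """Assumed simple definition: voxels predicted outside the reference (over)
    and reference voxels missed by the prediction (under), both relative to
    the reference size."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    over = np.logical_and(pred, ~truth).sum() / (truth.sum() + eps)
    under = np.logical_and(~pred, truth).sum() / (truth.sum() + eps)
    return over, under

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
truth = np.zeros((64, 64), dtype=bool); truth[12:42, 12:42] = True
print(dice_coefficient(pred, truth), over_under_segmentation(pred, truth))
```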

