Deep Learning-Based Microscopic Diagnosis of Odontogenic Keratocysts and Non-Keratocysts in Haematoxylin and Eosin-Stained Incisional Biopsies

Diagnostics ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. 2184
Author(s):  
Roopa S. Rao ◽  
Divya B. Shivanna ◽  
Kirti S. Mahadevpur ◽  
Sinchana G. Shivaramegowda ◽  
Spoorthi Prakash ◽  
...  

Background: The goal of the study was to create a histopathology image classification automation system that could identify odontogenic keratocysts in hematoxylin and eosin-stained jaw cyst sections. Methods: From 54 odontogenic keratocysts, 23 dentigerous cysts, and 20 radicular cysts, about 2657 microscopic images at 400× magnification were obtained. The images were annotated by a pathologist and categorized into epithelium, cystic lumen, and stroma of keratocysts and non-keratocysts. Preprocessing was performed in two steps: first, data augmentation, since deep learning techniques (DLT) improve their performance with increased data size; second, selection of the epithelial region as the region of interest. Results: Four experiments were conducted using the DLT. In the first, a pre-trained VGG16 was employed for classification after image augmentation. In the second, DenseNet-169 was implemented for image classification on the augmented images. In the third, DenseNet-169 was trained on the two-step preprocessed images. In the last experiment, the results of the second and third experiments were averaged to obtain an accuracy of 93% on OKC and non-OKC images. Conclusions: The proposed algorithm may fit into an automation system for OKC and non-OKC diagnosis. Utmost care was taken in the manual process of image acquisition (a minimum of 28–30 images/slide at 40× magnification covering the entire stretch of the epithelium and stromal component). Further, there is scope to improve the accuracy rate and make it free of human bias by using a whole-slide imaging scanner for image acquisition from slides.
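The averaging step of the final experiment can be illustrated with a short, hedged sketch: two DenseNet-169 classifiers (as in experiments two and three) have their softmax outputs averaged. This is not the authors' code; the class count, weight paths, and framework (PyTorch/torchvision) are assumptions.

```python
# Illustrative sketch only: averaging the class probabilities of two
# DenseNet-169 models fine-tuned on different preprocessing pipelines,
# as the abstract describes for the final experiment.
import torch
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 2  # OKC vs. non-OKC (assumption)

def load_densenet169(weights_path: str) -> torch.nn.Module:
    # Build DenseNet-169 with a two-class head and load fine-tuned weights.
    model = models.densenet169(weights=None)
    model.classifier = torch.nn.Linear(model.classifier.in_features, NUM_CLASSES)
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    return model.eval()

@torch.no_grad()
def ensemble_predict(batch: torch.Tensor, model_a, model_b) -> torch.Tensor:
    # Average the softmax outputs of the two models (experiments two and three).
    probs_a = F.softmax(model_a(batch), dim=1)
    probs_b = F.softmax(model_b(batch), dim=1)
    return (probs_a + probs_b) / 2
```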

Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 249
Author(s):  
Xin Jin ◽  
Yuanwen Zou ◽  
Zhongbing Huang

The cell cycle is an important process in cellular life. In recent years, some image processing methods have been developed to determine the cell cycle stages of individual cells. However, in most of these methods, cells have to be segmented and their features need to be extracted. During feature extraction, some important information may be lost, resulting in lower classification accuracy. Thus, we used a deep learning method to retain all cell features. To address the insufficient number of original images and their imbalanced distribution, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation. At the same time, a residual network (ResNet), one of the most widely used deep learning classification networks, was used for image classification. Our method classified cell cycle images more effectively, reaching an accuracy of 83.88%; compared with the 79.40% achieved in previous experiments, this is an improvement of 4.48 percentage points. Another dataset was used to verify the effect of our model, and accuracy increased by 12.52 percentage points over previous results. The results show that our new cell cycle image classification system based on WGAN-GP and ResNet is useful for the classification of imbalanced images. Moreover, our method could potentially alleviate the low classification accuracy in biomedical images caused by insufficient numbers of original images and their imbalanced distribution.
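For readers unfamiliar with WGAN-GP, the gradient penalty term that distinguishes it from a standard WGAN can be sketched as follows; the critic architecture, the penalty coefficient, and the full training loop are omitted here and would be assumptions in any case.

```python
# Minimal sketch of the WGAN-GP gradient penalty used for augmentation;
# the critic and the batch of real/generated cell images are assumed inputs.
import torch

def gradient_penalty(critic, real, fake, device="cpu"):
    # Interpolate randomly between real and generated samples.
    eps = torch.rand(real.size(0), 1, 1, 1, device=device)
    mixed = (eps * real + (1 - eps) * fake).requires_grad_(True)
    scores = critic(mixed)
    # Gradient of critic scores w.r.t. the interpolated images.
    grads = torch.autograd.grad(
        outputs=scores, inputs=mixed,
        grad_outputs=torch.ones_like(scores),
        create_graph=True, retain_graph=True,
    )[0]
    grads = grads.view(grads.size(0), -1)
    # Penalize deviation of the gradient norm from 1 (1-Lipschitz constraint).
    return ((grads.norm(2, dim=1) - 1) ** 2).mean()
```

The penalty is added to the critic loss, typically scaled by a coefficient (often 10), to enforce the Lipschitz constraint without weight clipping.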


Author(s):  
Nassima Dif ◽  
Zakaria Elberrichi

Deep learning methods are characterized by their capacity to learn data representations, in contrast to traditional machine learning algorithms. However, these methods are prone to overfitting on small volumes of data. The objective of this research is to overcome this limitation by improving generalization in the proposed deep learning framework through various techniques: data augmentation, small models, optimizer selection, and ensemble learning. For ensembling, the authors selected models from different checkpoints and combined them using both voting and unweighted averaging. The experimental study on the lymphoma histopathology dataset highlights the efficiency of the MobileNetV2 network combined with the stochastic gradient descent (SGD) optimizer in terms of generalization. The best results were achieved by combining the best three checkpoint models (98.67% accuracy). These findings provide important insights into the efficiency of the checkpoint ensemble learning method for histopathological image classification.
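A minimal sketch of the checkpoint-ensembling step described above: predictions from models restored at different checkpoints are combined either by unweighted averaging of probabilities or by majority voting. The list of checkpoint models is a placeholder, not the authors' setup.

```python
# Hedged sketch of checkpoint ensembling: combine k checkpoint models by
# unweighted probability averaging or by hard majority voting.
import torch
import torch.nn.functional as F

@torch.no_grad()
def checkpoint_ensemble(batch, models, method="average"):
    # Stack per-checkpoint class probabilities: shape (k, N, C).
    probs = torch.stack([F.softmax(m(batch), dim=1) for m in models])
    if method == "average":
        # Unweighted average of probabilities, then pick the top class.
        return probs.mean(dim=0).argmax(dim=1)
    # Majority vote over the hard predictions of each checkpoint.
    votes = probs.argmax(dim=2)          # shape (k, N)
    return votes.mode(dim=0).values      # most frequent class per sample
```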


2020 ◽  
Vol 10 (21) ◽  
pp. 7755 ◽  
Author(s):  
Liangliang Chen ◽  
Ning Yan ◽  
Hongmai Yang ◽  
Linlin Zhu ◽  
Zongwei Zheng ◽  
...  

Deep learning technology is outstanding in visual inspection. However, in actual industrial production, using deep learning for visual inspection requires a large amount of training data covering different acquisition scenarios. At present, acquiring such datasets is very time-consuming and labor-intensive, which limits the further development of deep learning in industrial production. To address the difficulty of acquiring image data for deep learning in industrial production, this paper proposes a data augmentation method based on multi-degree-of-freedom (DOF) automatic image acquisition and designs a multi-DOF automatic image acquisition system for deep learning. By designing random acquisition angles and random illumination conditions, different acquisition scenes from actual production are simulated. By optimizing the image acquisition path, a large amount of accurate data can be obtained in a short time. To verify the performance of the dataset collected by the system, fabric was selected as the research object after the system was built, and a dataset comparison experiment was carried out. The comparison confirms that the dataset obtained by the system is rich and close to the real application environment, which, to a certain extent, solves the problem of insufficient datasets when applying deep learning.
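Purely as an illustration of "random acquisition angles and random illumination conditions", the sketch below samples hypothetical acquisition parameters for such a rig; the axis names, angle ranges, and illumination bounds are assumptions, not the authors' system parameters.

```python
# Illustrative sketch only: sample random camera poses and illumination
# levels to simulate varied acquisition scenes for a multi-DOF rig.
import random

def sample_acquisition_pose():
    return {
        "pan_deg":  random.uniform(-45.0, 45.0),    # assumed camera pan range
        "tilt_deg": random.uniform(-30.0, 30.0),    # assumed camera tilt range
        "roll_deg": random.uniform(0.0, 360.0),     # assumed sample rotation
        "lux":      random.uniform(200.0, 1200.0),  # assumed illumination range
    }

# Generate a schedule of randomized acquisition configurations.
poses = [sample_acquisition_pose() for _ in range(1000)]
```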


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yong Liang ◽  
Qi Cui ◽  
Xing Luo ◽  
Zhisong Xie

Rock classification is a significant branch of geology that can help in understanding the formation and evolution of the planet, searching for mineral resources, and so on. Traditionally, rock classification is done based on the experience of a professional, a method with problems such as low efficiency and susceptibility to subjective factors. Therefore, it is of great significance to establish a simple, fast, and accurate rock classification model. This paper proposes a fine-grained image classification network combining an image cutting method and the SBV algorithm to improve classification performance on a small number of fine-grained rock samples. The method uses image cutting to achieve data augmentation without adding additional datasets and uses image block voting scoring to obtain richer complementary information, thereby improving the accuracy of image classification. The classification accuracy on 32 test images is 75%, 68.75%, and 75%, which is 34.375%, 18.75%, and 43.75% higher than that of the original algorithm. These results show that the proposed method significantly improves image classification accuracy, verify the effectiveness of the algorithm, and demonstrate that deep learning has great application value in the field of geology.
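A hedged sketch of the cutting-and-voting idea: the rock image is cut into blocks, each block is classified independently, and the per-block class scores are accumulated so the majority class wins. The block size, the classifier, and the scoring rule are assumptions; the paper's SBV algorithm may differ in detail.

```python
# Sketch of block-wise voting: cut the image into non-overlapping blocks,
# classify each block, and sum the softmax scores to pick the final class.
import torch
import torch.nn.functional as F

@torch.no_grad()
def classify_by_block_voting(image: torch.Tensor, model, block: int = 224):
    # image: (3, H, W) tensor; block size is an assumption.
    _, h, w = image.shape
    scores = None
    for top in range(0, h - block + 1, block):
        for left in range(0, w - block + 1, block):
            patch = image[:, top:top + block, left:left + block].unsqueeze(0)
            p = F.softmax(model(patch), dim=1).squeeze(0)
            scores = p if scores is None else scores + p
    return int(scores.argmax())  # class with the highest accumulated score
```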


2020 ◽  
Vol 2020 ◽  
pp. 1-11 ◽  
Author(s):  
Qinghe Zheng ◽  
Mingqiang Yang ◽  
Xinyu Tian ◽  
Nan Jiang ◽  
Deqiang Wang

Nowadays, deep learning has achieved remarkable results in many computer vision related tasks, for which the support of big data is essential. In this paper, we propose a full-stage data augmentation framework to improve the accuracy of deep convolutional neural networks, which can also play the role of an implicit model ensemble without introducing additional model training costs. Simultaneous data augmentation during the training and testing stages can ensure network optimization and enhance its generalization ability. Augmentation in the two stages needs to be consistent to ensure the accurate transfer of specific domain information. Furthermore, the framework is universal for any network architecture and data augmentation strategy and can therefore be applied to a variety of deep learning based tasks. Finally, experimental results on image classification on the coarse-grained dataset CIFAR-10 (93.41%) and the fine-grained dataset CIFAR-100 (70.22%) demonstrate the effectiveness of the framework by comparison with state-of-the-art results.
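One plausible reading of consistent augmentation in both stages is sketched below: the same transform family used during training is reapplied at test time, and predictions over several augmented views are averaged, acting as an implicit ensemble. The CIFAR-style transforms are assumptions, not the paper's exact augmentation policy.

```python
# Hedged sketch: apply the same augmentation family at train and test time,
# and average the predictions over several augmented test views.
import torch
import torch.nn.functional as F
from torchvision import transforms

# Assumed CIFAR-style augmentation, used identically in both stages.
augment = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

@torch.no_grad()
def predict_with_test_augmentation(pil_image, model, n_views: int = 8):
    # Build several augmented views of the same test image.
    views = torch.stack([augment(pil_image) for _ in range(n_views)])
    # Average the class probabilities over the views (implicit ensemble).
    return F.softmax(model(views), dim=1).mean(dim=0)
```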


Author(s):  
Oleksii Denysenko

Image classification is one of the most fundamental applications in the field of computer vision and continues to arouse enormous interest. People can recognize a large number of objects in images with little effort, even though many of their characteristics can change, and even when the objects are only partially visible. At the same time, an algorithmic description of the recognition problem suitable for computer implementation remains an open problem. Existing methods are effective only for certain cases (for example, geometric objects, human faces, road signs, printed or handwritten symbols) and only under certain conditions. A model designed to identify and classify objects must be able to determine their location, as well as distinguish various features of objects, such as edges, corners, color differences, etc. Deep convolutional neural networks have demonstrated the best performance when working with images, sometimes exceeding the capabilities of human vision. However, even with this significant improvement, there are still issues with overfitting and vanishing gradients. To address them, some well-known methods are used: data augmentation, batch normalization, and dropout; however, modern classification models do not perform color space conversion of the original RGB images. Studying the use of different color spaces for image classification is therefore a topical problem in deep learning, and it determines the relevance of this study. Solving this problem would improve the performance of the models used.
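The experiment the abstract motivates, converting RGB images to alternative color spaces before classification, can be sketched as follows; the chosen spaces (HSV, LAB, YCrCb) are common candidates assumed here, not necessarily the ones studied.

```python
# Sketch: convert an RGB image to an alternative color space before
# feeding it to a classifier, using standard OpenCV conversions.
import cv2
import numpy as np

CONVERSIONS = {
    "rgb":   None,                   # leave the image unchanged
    "hsv":   cv2.COLOR_RGB2HSV,
    "lab":   cv2.COLOR_RGB2LAB,
    "ycrcb": cv2.COLOR_RGB2YCrCb,
}

def to_color_space(rgb_image: np.ndarray, space: str) -> np.ndarray:
    # rgb_image: HxWx3 uint8 array in RGB order (assumption).
    code = CONVERSIONS[space]
    return rgb_image if code is None else cv2.cvtColor(rgb_image, code)
```

Training one model per color space on otherwise identical data would then allow a direct comparison of classification performance across spaces.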


2020 ◽  
Vol 58 (12) ◽  
pp. 3049-3061
Author(s):  
Christoph Hoog Antink ◽  
Joana Carlos Mesquita Ferreira ◽  
Michael Paul ◽  
Simon Lyra ◽  
Konrad Heimann ◽  
...  

Photoplethysmography imaging (PPGI) for non-contact monitoring of preterm infants in the neonatal intensive care unit (NICU) is a promising technology, as it could reduce medical adhesive-related skin injuries and associated complications. For practical implementations of PPGI, a region of interest has to be detected automatically in real time. As neonates' body proportions differ significantly from adults', existing approaches cannot be used in a straightforward way, and color-based skin detection requires RGB data, thus prohibiting the use of less-intrusive near-infrared (NIR) acquisition. In this paper, we present a deep learning-based method for segmentation of neonatal video data. We augmented an existing encoder-decoder semantic segmentation method with a modified version of the ResNet-50 encoder. This reduced the computational time by a factor of 7.5, so that 30 frames per second can be processed at 960 × 576 pixels. The method was developed and optimized on publicly available databases with segmentation data from adults. For evaluation, a comprehensive dataset consisting of RGB and NIR video recordings from 29 neonates with various skin tones, recorded in two NICUs in Germany and India, was used. From all recordings, 643 frames were manually segmented. After pre-training the model on the public adult data, parts of the neonatal data were used for additional learning, and left-out neonates were used for cross-validated evaluation. On the RGB data, the head is segmented well (82% intersection over union, 88% accuracy), and performance is comparable with that achieved on large, public, non-neonatal datasets. Performance on the NIR data, on the other hand, was inferior. By employing data augmentation to generate additional virtual NIR data for training, results could be improved, and the head could be segmented with 62% intersection over union and 65% accuracy. The method is, in theory, capable of performing segmentation in real time and may thus provide a useful tool for future PPGI applications.
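As an illustrative stand-in only, the sketch below is not the authors' modified architecture; it uses a standard torchvision DeepLabV3 with a ResNet-50 backbone and two output classes (background vs. head) to show the general encoder-decoder segmentation setup.

```python
# Stand-in sketch: a ResNet-50-based encoder-decoder segmentation network
# producing a per-pixel background/head mask for a single video frame.
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

# Two classes assumed: background (0) and head (1).
model = deeplabv3_resnet50(weights=None, num_classes=2).eval()

@torch.no_grad()
def segment_frame(frame: torch.Tensor) -> torch.Tensor:
    # frame: (3, H, W) normalized RGB or pseudo-NIR tensor (assumption).
    logits = model(frame.unsqueeze(0))["out"]   # (1, 2, H, W)
    return logits.argmax(dim=1).squeeze(0)      # per-pixel class mask
```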

