Deep learning based automated diagnosis of bone metastases with SPECT thoracic bone images

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Qiang Lin ◽  
Tongtong Li ◽  
Chuangui Cao ◽  
Yongchun Cao ◽  
Zhengxing Man ◽  
...  

Abstract
SPECT nuclear medicine imaging is widely used for diagnosing, treating, evaluating and preventing various serious diseases. The automated classification of medical images is becoming increasingly important in developing computer-aided diagnosis systems. Deep learning, particularly convolutional neural networks, has been widely applied to the classification of medical images. To reliably classify SPECT bone images for the automated diagnosis of bone metastasis, in this paper we present several deep classifiers. Specifically, original SPECT images are cropped to extract the thoracic region, followed by geometric transformations that augment the original data. We then construct deep classifiers based on the widely used networks VGG, ResNet and DenseNet, either by fine-tuning their parameters and structures or by defining new network structures. Experiments on a set of real-world SPECT bone images show that the proposed classifiers perform well in identifying bone metastasis. The best classifier achieves 0.9807, 0.9900, 0.9830, 0.9890, 0.9802 and 0.9933 for accuracy, precision, recall, specificity, F-1 score and AUC, respectively, on test samples from the augmented dataset without normalization.
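The abstract does not specify which geometric transformations are used for augmentation; a minimal sketch of one common choice, generating the eight dihedral variants (rotations plus mirrors) of a cropped image, might look like this. The function name and the toy 2×2 "crop" are illustrative, not from the paper:

```python
import numpy as np

def augment_geometric(image):
    """Return the eight dihedral variants (rotations and flips) of a 2-D image."""
    variants = []
    for k in range(4):                       # 0, 90, 180, 270 degree rotations
        rotated = np.rot90(image, k)
        variants.append(rotated)
        variants.append(np.fliplr(rotated))  # horizontal mirror of each rotation
    return variants

# A toy 2x2 array stands in for a cropped thoracic SPECT image.
crop = np.array([[1, 2],
                 [3, 4]])
augmented = augment_geometric(crop)
print(len(augmented))  # 8 variants per original image
```

Each original image thus yields an eight-fold enlarged training set before any intensity-based augmentation.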

2020 ◽  
Author(s):  
Eiichiro Uchino ◽  
Kanata Suzuki ◽  
Noriaki Sato ◽  
Ryosuke Kojima ◽  
Yoshinori Tamada ◽  
...  

Abstract
Background: Automated classification of glomerular pathological findings is potentially beneficial in establishing an efficient and objective diagnosis in renal pathology. While previous studies have verified artificial intelligence (AI) models for the classification of global sclerosis and glomerular cell proliferation, several other glomerular pathological findings are required for diagnosis, and comprehensive models for the classification of these major findings have not yet been reported. Whether cooperation between these AI models and clinicians improves diagnostic performance also remains unknown. Here, we developed AI models to classify glomerular images for the major findings required for pathological diagnosis and investigated whether those models could improve the diagnostic performance of nephrologists.
Methods: We used a dataset of 283 kidney biopsy cases comprising 15,888 glomerular images annotated by a total of 25 nephrologists. AI models to classify seven pathological findings (global sclerosis, segmental sclerosis, endocapillary proliferation, mesangial matrix accumulation, mesangial cell proliferation, crescent, and basement membrane structural changes) were constructed using deep learning by fine-tuning the InceptionV3 convolutional neural network. Subsequently, we compared the agreement with truth labels of the majority decision among nephrologists with and without the AI model as an additional voter.
Results: Our model for global sclerosis showed high performance (area under the curve: periodic acid-Schiff, 0.986; periodic acid methenamine silver, 0.983); the models for the other findings also showed performance close to that of nephrologists. Adding the AI model's output to the majority decision among nephrologists significantly improved sensitivity and specificity in 9 of the 14 constructed models compared with nephrologists alone.
Conclusion: Our study provides a proof of concept for the classification of multiple glomerular findings in a comprehensive deep learning method and suggests its potential effectiveness in improving the diagnostic accuracy of clinicians.
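The "AI model as a voter" scheme can be sketched as a simple majority decision over binary finding labels. The tie-breaking rule below is an assumption for illustration; the abstract does not state how ties are resolved:

```python
from collections import Counter

def majority_with_ai(clinician_votes, ai_vote):
    """Majority decision over binary finding labels (1 = present, 0 = absent),
    counting the AI model's output as one additional voter."""
    votes = list(clinician_votes) + [ai_vote]
    counts = Counter(votes)
    # With an even number of voters a tie is possible; break ties toward
    # 'present' (an assumption -- the paper does not state its rule).
    return 1 if counts[1] >= counts[0] else 0

# Three nephrologists split 2-1 against a finding; the AI vote creates a tie.
print(majority_with_ai([0, 0, 1], ai_vote=1))
```

The study's comparison is then between this augmented vote and the clinicians-only majority, evaluated against the truth labels.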


Healthcare ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1579
Author(s):  
Wansuk Choi ◽  
Seoyoon Heo

The purpose of this study was to classify upper limb tension test (ULTT) videos through transfer learning with pre-trained deep learning models and to compare the performance of the models. We conducted transfer learning by combining a pre-trained convolutional neural network (CNN) model into a Python-based deep learning process. Videos were collected from YouTube, and 103,116 frames converted from the video clips were analyzed. In the modeling implementation, the steps of importing the required modules, performing the necessary data preprocessing for training, defining the model, compiling, model creation, and model fitting were applied in sequence. The comparative models were Xception, InceptionV3, DenseNet201, NASNetMobile, DenseNet121, VGG16, VGG19, and ResNet101, and fine-tuning was performed. They were trained in a high-performance computing environment, and validation accuracy and validation loss were measured as comparative indicators of performance. Relatively low validation loss and high validation accuracy were obtained from the Xception, InceptionV3, and DenseNet201 models, indicating excellent performance compared with the other models. On the other hand, VGG16, VGG19, and ResNet101 produced relatively high validation loss and low validation accuracy. There was a narrow range of difference between the validation accuracy and the validation loss of the Xception, InceptionV3, and DenseNet201 models. This study suggests that training applied with transfer learning can classify ULTT videos, and that there is a difference in performance between models.
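The model comparison amounts to ranking the candidate backbones by their validation indicators. The numbers below are purely illustrative placeholders (the abstract reports only the relative ranking, not the measured values):

```python
# Hypothetical validation results -- illustrative only, not the study's data.
results = {
    "Xception":    {"val_loss": 0.08, "val_acc": 0.97},
    "InceptionV3": {"val_loss": 0.10, "val_acc": 0.96},
    "DenseNet201": {"val_loss": 0.11, "val_acc": 0.96},
    "VGG16":       {"val_loss": 0.45, "val_acc": 0.84},
    "ResNet101":   {"val_loss": 0.52, "val_acc": 0.81},
}

# Rank candidate backbones by validation loss (ascending), one of the two
# comparative indicators used in the study alongside validation accuracy.
ranked = sorted(results, key=lambda name: results[name]["val_loss"])
print(ranked[0])  # backbone with the lowest validation loss
```

With such a table, the selection of the best-performing backbone reduces to a one-line sort over the comparative indicator.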


2020 ◽  
Vol 10 (11) ◽  
pp. 3755
Author(s):  
Eun Kyeong Kim ◽  
Hansoo Lee ◽  
Jin Yong Kim ◽  
Sungshin Kim

Deep learning is applied in various manufacturing domains. To train a deep learning network, we must collect a sufficient amount of training data. However, it is difficult to collect the image datasets required to train networks for object recognition, especially because the target items to be classified are generally excluded from existing databases, and the manual collection of images poses certain limitations. Therefore, to overcome the data deficiency present in many domains including manufacturing, we propose a method of generating new training images via a sequence of pre-processing steps: background elimination; target extraction that maintains the object-size ratio of the original image; color perturbation constrained by a predefined similarity between the original and generated images; geometric transformations; and transfer learning. Specifically, to demonstrate the color perturbation and geometric transformations, we compare and analyze experiments for each color space and each geometric transformation. The experimental results show that the proposed method can effectively augment the original data, correctly classify similar items, and improve image classification accuracy. They also demonstrate that an effective data augmentation method is crucial when the amount of training data is small.
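The color-perturbation step with a similarity constraint can be sketched as below. The similarity measure (mean absolute pixel difference) and the function name are assumptions for illustration; the paper defines its own similarity criterion:

```python
import numpy as np

def perturb_color(image, shift, max_dist):
    """Shift each RGB channel by `shift`, keeping the result only if it stays
    within a predefined similarity (mean absolute difference) of the original."""
    perturbed = np.clip(image.astype(float) + shift, 0, 255)
    if np.abs(perturbed - image).mean() <= max_dist:
        return perturbed.astype(np.uint8)
    return None  # too dissimilar to the original -- discard this candidate

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)  # toy RGB patch
out = perturb_color(img, shift=np.array([5.0, -5.0, 0.0]), max_dist=10.0)
print(out is not None)  # small shift passes the similarity check
```

Candidates that drift too far from the original are rejected, so the augmented set stays visually close to the source item while still varying in color.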


2018 ◽  
Vol 63 (18) ◽  
pp. 185012 ◽  
Author(s):  
Faisal Mahmood ◽  
Richard Chen ◽  
Sandra Sudarsky ◽  
Daphne Yu ◽  
Nicholas J Durr

2021 ◽  
Vol 14 (1) ◽  
pp. 171
Author(s):  
Qingyan Wang ◽  
Meng Chen ◽  
Junping Zhang ◽  
Shouqiang Kang ◽  
Yujing Wang

Hyperspectral image (HSI) classification often faces a scarcity of labeled samples, which is considered one of the major challenges in the field of remote sensing. Although active deep networks have been successfully applied to semi-supervised classification tasks to address this problem, their performance inevitably hits a bottleneck due to labeling cost. To address this issue, this paper proposes a semi-supervised classification method for hyperspectral images that improves on active deep learning. Specifically, the proposed model introduces a random multi-graph algorithm and replaces expert annotation in active learning with an anchor-graph algorithm, which can label a considerable amount of unlabeled data precisely and automatically. In this way, a large number of pseudo-labeled samples are added to the training subsets, so the model can be fine-tuned and its generalization performance improved without extra manual labeling effort. Experiments on three standard HSIs demonstrate that the proposed model achieves better performance than conventional methods and also outperforms the other studied algorithms in the case of a small training set.
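The core pseudo-labeling idea, selecting unlabeled samples whose predicted class is confident enough and adding them to the training subset, can be sketched as below. Note this uses a simple confidence threshold for illustration; the paper's actual labeler is the anchor-graph / random multi-graph algorithm, not a threshold rule:

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Select unlabeled samples whose top class probability exceeds a
    confidence threshold; return their indices and pseudo-labels."""
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    keep = confidence >= threshold
    return np.flatnonzero(keep), labels[keep]

# Softmax outputs for four unlabeled pixels over three land-cover classes.
probs = np.array([[0.98, 0.01, 0.01],
                  [0.40, 0.35, 0.25],
                  [0.02, 0.96, 0.02],
                  [0.60, 0.20, 0.20]])
idx, lab = pseudo_label(probs)
print(idx.tolist(), lab.tolist())  # [0, 2] [0, 1]
```

The selected samples are appended to the labeled pool and the network is fine-tuned, repeating until the unlabeled pool no longer yields confident candidates.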


Author(s):  
Kasikrit Damkliang ◽  
Thakerng Wongsirichot ◽  
Paramee Thongsuksai

Since the introduction of image pattern recognition and computer vision processing, the classification of cancer tissues has been a challenge at the pixel, slide, and patient levels. Conventional machine learning techniques have given way to Deep Learning (DL), a contemporary, state-of-the-art approach to texture classification and localization of cancer tissues. Colorectal Cancer (CRC) is the third-ranked cause of death from cancer worldwide. This paper proposes image-level texture classification of a CRC dataset by deep convolutional neural networks (CNNs). Simple DL techniques consisting of transfer learning and fine-tuning were exploited. VGG-16, a Keras pre-trained model with initial weights from ImageNet, was applied. A transfer-learning architecture and methods corresponding to VGG-16 are proposed. The training, validation, and testing sets included 5000 images of 150 × 150 pixels. The application set for detection and localization contained 10 large original images of 5000 × 5000 pixels. The model achieved an F1-score and accuracy of 0.96 and 0.99, respectively, and produced a false positive rate of 0.01. AUC-based evaluation was also measured. The model classified the ten large previously unseen images from the application set, represented as false-color maps. The reported results show the satisfactory performance of the model. The simplicity of the architecture, configuration, and implementation also contributes to the outcome of this work.
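The reported metrics (F1 0.96, accuracy 0.99, false positive rate 0.01) are all derived from a binary confusion matrix. A minimal sketch of those derivations follows; the confusion-matrix counts are illustrative, since the paper does not publish them:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, precision, recall, F1 and false positive rate from
    binary confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "fpr": fp / (fp + tn),
    }

# Illustrative counts only -- not the study's actual confusion matrix.
m = classification_metrics(tp=95, fp=5, tn=895, fn=5)
print(round(m["accuracy"], 2), round(m["f1"], 2))
```

Computing all three reported figures from the same counts is a quick consistency check when reproducing such results.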


Author(s):  
Vinit Kumar Gunjan ◽  
Rashmi Pathak ◽  
Omveer Singh

This article describes how to establish a neural network technique for various image groupings through convolutional neural network (CNN) training. In addition, it presents initial classification results using CNN-learned features for the classification of images from different categories. To determine a suitable architecture, we explore a transfer learning technique, fine-tuning of a deep network, on a dataset used to provide solutions for individually classified image classes.

