Development of in-house fully residual deep convolutional neural network-based segmentation software for the male pelvic CT

2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Hideaki Hirashima ◽  
Mitsuhiro Nakamura ◽  
Pascal Baillehache ◽  
Yusuke Fujimoto ◽  
Shota Nakagawa ◽  
...  

Abstract Background This study aimed to (1) develop fully residual deep convolutional neural network (CNN)-based segmentation software for computed tomography images of the male pelvic region and (2) demonstrate its efficiency in that region. Methods A total of 470 prostate cancer patients who had undergone intensity-modulated radiotherapy or volumetric-modulated arc therapy were enrolled. Our model was based on FusionNet, a fully residual deep CNN developed for semantic segmentation of biological images. To develop the segmentation software, 450 patients were randomly selected and separated into training, validation, and testing groups (270, 90, and 90 patients, respectively). In Experiment 1, to determine the optimal model, we first assessed segmentation accuracy as a function of training-set size (90, 180, and 270 patients). In Experiment 2, the effect of varying the number of training labels on segmentation accuracy was evaluated. After determining the optimal model, in Experiment 3, the developed software was applied to the remaining 20 datasets to assess segmentation accuracy. The volumetric Dice similarity coefficient (DSC) and the 95th-percentile Hausdorff distance (95%HD) were calculated for each organ in Experiment 3. Results In Experiment 1, the median DSC for the prostate was 0.61 for dataset 1 (90 patients), 0.86 for dataset 2 (180 patients), and 0.86 for dataset 3 (270 patients). The median DSCs for all organs increased significantly when the number of training cases increased from 90 to 180 but did not improve upon a further increase from 180 to 270. The number of labels applied during training had little effect on the DSCs in Experiment 2. The optimal model was therefore built from 270 patients and four organ labels.
In Experiment 3, the median DSC and 95%HD values were 0.82 and 3.23 mm for the prostate, 0.71 and 3.82 mm for the seminal vesicles, 0.89 and 2.65 mm for the rectum, and 0.95 and 4.18 mm for the bladder, respectively. Conclusions We have developed CNN-based segmentation software for the male pelvic region and demonstrated its efficiency in that region.
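The two evaluation metrics reported above can be computed directly from binary organ masks. As a generic illustration (not the authors' code; function names and the brute-force distance computation are my own), a minimal sketch:

```python
import numpy as np

def dice(a, b):
    """Volumetric Dice similarity coefficient of two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hd95(a, b):
    """95th-percentile symmetric Hausdorff distance between the voxel
    coordinates of two binary masks (brute-force pairwise distances,
    fine for small masks; real pipelines use distance transforms)."""
    pa = np.argwhere(a)
    pb = np.argwhere(b)
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1)
    fwd = d.min(axis=1)   # each point of a to its nearest point of b
    bwd = d.min(axis=0)   # each point of b to its nearest point of a
    return np.percentile(np.concatenate([fwd, bwd]), 95)
```

A DSC of 1.0 and 95%HD of 0 mm indicate a perfect match; the paper's per-organ medians (e.g. 0.82 and 3.23 mm for the prostate) are aggregated over the 20 test patients.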

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Pei Yang ◽  
Yong Pi ◽  
Tao He ◽  
Jiangming Sun ◽  
Jianan Wei ◽  
...  

Abstract Background 99mTc-pertechnetate thyroid scintigraphy is a valuable complementary tool for evaluating thyroid disease in the clinic. Although the image features of thyroid scintigrams are relatively simple, their interpretation still shows only moderate consistency among physicians. We therefore aimed to develop an artificial intelligence (AI) system to automatically classify the four common patterns of thyroid scintigrams. Methods We collected 3087 thyroid scintigrams from center 1 to construct the training dataset (n = 2468) and internal validation dataset (n = 619), and another 302 cases from center 2 as the external validation dataset. Four pre-trained neural networks, ResNet50, DenseNet169, InceptionV3, and InceptionResNetV2, were used to construct the AI models. The models were trained separately with transfer learning. We evaluated each model's performance with the following metrics: accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), recall, precision, and F1-score. Results The overall accuracy of all four pre-trained networks in classifying the four common uptake patterns exceeded 90%, with InceptionV3 standing out from the others. It reached the highest performance, with an overall accuracy of 92.73% for internal validation and 87.75% for external validation. For each category of thyroid scintigram, the area under the receiver operating characteristic curve (AUC) in internal validation was 0.986 for 'diffusely increased,' 0.997 for 'diffusely decreased,' 0.998 for 'focal increased,' and 0.945 for 'heterogeneous uptake.' The corresponding external validation values were 0.939, 1.000, 0.974, and 0.915, respectively.
Conclusions The deep convolutional neural network-based AI model showed considerable performance in the classification of thyroid scintigrams and may help physicians interpret them more consistently and efficiently.
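The per-class metrics listed in the Methods are all derived one-vs-rest from confusion counts. A minimal sketch (generic, not the authors' evaluation code; the function name is my own):

```python
def binary_metrics(tp, fp, tn, fn):
    """Per-class metrics computed one-vs-rest from confusion counts.
    Note sensitivity == recall and PPV == precision, which is why the
    abstract's metric list contains overlapping names."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    acc = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * ppv * sens / (ppv + sens)
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "PPV": ppv, "NPV": npv, "F1": f1}
```

For a four-class problem such as the uptake patterns above, these counts are tallied separately for each class against the rest.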


2020 ◽  
Author(s):  
Yuwei Sun ◽  
Hideya Ochiai ◽  
Hiroshi Esaki

Abstract This article presents a method for visualizing network traffic in a LAN based on the Hilbert curve structure together with array exchange and projection, using the communication frequencies of nine protocol types as discriminators; we call the resulting images feature maps of network events. Several known scan cases were simulated in LANs, and network traffic was collected to generate feature maps for each case. To solve this multi-label classification task, we adopted and trained a deep convolutional neural network (DCNN) in two different network environments, with the feature maps as input data and the scan cases as labels. We split the datasets 4:1 into training and validation sets. Based on the micro and macro scores of the validation, we evaluated the performance of the scheme, achieving macro-F-measure scores of 0.982 and 0.975 and micro-F-measure scores of 0.976 and 0.965 in the two LANs, respectively.
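The appeal of the Hilbert curve here is that it maps a 1-D sequence (e.g. host addresses) onto a 2-D grid while preserving locality, so nearby indices become nearby pixels in the feature map. A standard index-to-coordinate sketch (the classic iterative algorithm, not the authors' implementation):

```python
def hilbert_d2xy(order, d):
    """Map a 1-D index d to (x, y) on a 2**order x 2**order Hilbert
    curve, so adjacent indices land on adjacent grid cells."""
    x = y = 0
    t = d
    s = 1
    while s < (1 << order):
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                 # rotate the quadrant
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y
```

Writing each protocol's per-host frequency into the cell returned by this mapping yields an image-like array suitable as DCNN input.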


Author(s):  
Devon Livingstone ◽  
Aron S. Talai ◽  
Justin Chau ◽  
Nils D. Forkert

Abstract Background Otologic diseases are often difficult for primary care providers to diagnose accurately. Deep learning methods have been applied with great success in many areas of medicine, often outperforming well-trained human observers. The aim of this work was to develop and evaluate an automatic software prototype that identifies otologic abnormalities using a deep convolutional neural network. Material and methods A database of 734 unique otoscopic images of various ear pathologies, including 63 cerumen impactions, 120 tympanostomy tubes, and 346 normal tympanic membranes, was acquired. 80% of the images were used to train a convolutional neural network and the remaining 20% were used for algorithm validation. Image augmentation was applied to the training dataset to increase the number of training images. The general network architecture consisted of three convolutional layers plus batch normalization and dropout layers to avoid overfitting. Results Validation on 45 datasets not used for model training revealed that the proposed deep convolutional neural network can identify and differentiate between normal tympanic membranes, tympanostomy tubes, and cerumen impactions with an overall accuracy of 84.4%. Conclusion Our study shows that deep convolutional neural networks hold immense potential as a diagnostic adjunct for otologic disease management.
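Image augmentation of the kind mentioned above typically means generating geometric variants of each training image. A toy sketch with plain nested lists (the abstract does not specify which transforms were used; flips and rotations are my assumption):

```python
def augment(image):
    """Return simple geometric variants of a 2-D image (list of rows)
    to enlarge a small training set: the original, a horizontal flip,
    and a 90-degree clockwise rotation."""
    flipped = [row[::-1] for row in image]          # mirror each row
    rot90 = [list(r) for r in zip(*image[::-1])]    # rotate 90 deg CW
    return [image, flipped, rot90]
```

In practice, libraries apply such transforms (plus brightness and crop jitter) on the fly during training rather than materializing the variants.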


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Yanyan Pan ◽  
Huiping Zhang ◽  
Jinsuo Yang ◽  
Jing Guo ◽  
Zhiguo Yang ◽  
...  

This study aimed to explore the value of multimodal magnetic resonance imaging (MRI) based on deep convolutional neural networks (Conv.Net) in the diagnosis of stroke. Specifically, four automatic segmentation algorithms were proposed to segment multimodal MRI images of stroke patients, and the segmentation results were evaluated using the Dice coefficient, accuracy, sensitivity, and a segmentation distance coefficient. It was found that although the two-dimensional (2D) fully convolutional network-based algorithm could locate and segment the lesion, its accuracy was low; the three-dimensional one exhibited higher accuracy, with various objective indicators improved, and segmentation accuracies of 0.93 on the training set and 0.79 on the test set, meeting the needs of automatic diagnosis. The asymmetric 3D residual U-Net had good convergence and high segmentation accuracy, and the 3D deep residual network built on it achieved good segmentation coefficients, ensuring segmentation accuracy while avoiding network degradation. In conclusion, the Conv.Net model can accurately segment the lesions of patients with ischemic stroke and is recommended for clinical use.


2021 ◽  
Author(s):  
Piotr Wzorek ◽  
Tomasz Kryjak

This paper presents a method for automatically generating a training dataset for a deep convolutional neural network used for playing card detection. The solution makes it possible to skip the time-consuming processes of manually collecting images and labelling the recognised objects. The YOLOv4 network trained on the generated dataset achieved an efficiency of 99.8% in the card detection task. The proposed method is part of a project that aims to automate the broadcasting of duplicate bridge competitions using a vision system and neural networks.
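Automatic label generation is straightforward when the generator itself places each card, because the bounding box is known by construction. A generic sketch of emitting one YOLO-format annotation line (the standard format YOLOv4 consumes; the function name and pixel values are illustrative, not from the paper):

```python
def yolo_label(cls, x, y, w, h, img_w, img_h):
    """One YOLO annotation line for an object pasted at pixel box
    (x, y, w, h): class index plus centre and size normalised to [0, 1]."""
    cx = (x + w / 2) / img_w
    cy = (y + h / 2) / img_h
    return f"{cls} {cx:.6f} {cy:.6f} {w / img_w:.6f} {h / img_h:.6f}"
```

A generator that composites card images onto random backgrounds can write one such line per pasted card, replacing manual labelling entirely.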


2018 ◽  
Vol 10 (12) ◽  
pp. 2053 ◽  
Author(s):  
Yunfeng Hu ◽  
Qianli Zhang ◽  
Yunzhi Zhang ◽  
Huimin Yan

Land cover and its dynamics are the basis for characterizing surface conditions, supporting land resource management and optimization, and assessing the impacts of climate change and human activities. In land cover information extraction, the traditional convolutional neural network (CNN) method has several problems, such as inapplicability to multispectral and hyperspectral satellite imagery, weak generalization ability, and the difficulty of automating the construction of a training database. To solve these problems, this study proposes a new type of deep convolutional neural network based on Landsat-8 Operational Land Imager (OLI) imagery. The network integrates cascaded cross-channel parametric pooling and average pooling layers, applies a hierarchical sampling strategy to construct the training dataset automatically, determines the technical scheme for model-related parameters, and finally performs automatic classification of remote sensing images. This study used the new deep convolutional neural network to extract land cover information for Qinhuangdao City, Hebei Province, and compared the experimental results with those obtained by traditional methods. The results show that: (1) The proposed deep convolutional neural network (DCNN) model can automatically construct the training dataset and classify images. The model classifies multispectral and hyperspectral satellite images with deep neural networks, which improves its generalization ability and simplifies its application. (2) The proposed DCNN model provides the best classification results in the Qinhuangdao area. The overall accuracy of the land cover data obtained is 82.0%, and the kappa coefficient is 0.76. The overall accuracy is improved by 5% and 14% compared to the support vector machine method and the maximum likelihood classification method, respectively.
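The two accuracy figures quoted above are standard remote-sensing measures derived from a confusion matrix. A minimal sketch (generic, not the authors' code):

```python
def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a square confusion
    matrix cm[true][pred] (rows: reference labels, columns: classified)."""
    n = sum(sum(row) for row in cm)
    po = sum(cm[i][i] for i in range(len(cm))) / n        # observed agreement
    pe = sum(sum(row) * sum(col)                          # chance agreement
             for row, col in zip(cm, zip(*cm))) / n ** 2
    return po, (po - pe) / (1 - pe)
```

Kappa corrects the raw accuracy for agreement expected by chance, which is why a map can have 82.0% overall accuracy but a kappa of only 0.76.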


Author(s):  
Yuri Galindo ◽  
Ana Carolina Lorena

In this paper, a pre-trained deep convolutional neural network is applied to the problem of detecting meteors. Trained with limited data, the best model achieved an error rate of 0.04 and an F1-score of 0.94. Different approaches to transfer learning were tested, revealing that the choice of a proper pre-training dataset can provide better off-the-shelf features and lead to better results, and that using very deep representations for transfer learning does not worsen performance for deep residual networks.



2020 ◽  
Vol 15 (2) ◽  
pp. 94-108
Author(s):  
R. Kala ◽  
P. Deepa

Background: Accurate detection of a brain tumor and its severity is a challenging task in the medical field, so there is a need for brain tumor detection algorithms, an emerging area for diagnosis, treatment planning, and outcome evaluation. Materials and Methods: A brain tumor segmentation method using deep learning classification and multi-modal composition has been developed using deep convolutional neural networks. The different MRI modalities, T1, FLAIR, T1C, and T2, are given as input to the proposed method. The MR images from the different modalities are used in proportion to the information content of each modality. The weights for the different modalities are calculated blockwise, with the standard deviation of the block taken as a proxy for its information content. Each of the T1, FLAIR, T1C, and T2 images is then convolved with its corresponding weight, and the weighted modalities are summed to obtain a new composite image, which is given as input to the deep convolutional neural network. The network performs segmentation through its layers, with different filter operations in each layer, to obtain enhanced classification and spatially consistent segmentation results. The analysis of the proposed method shows that the discriminatory information from the different modalities is effectively combined to increase the overall segmentation accuracy. Results: The proposed deep convolutional neural network for brain tumor segmentation was analysed using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013).
The complete, core, and enhancing regions were validated with the Dice similarity coefficient and Jaccard similarity index for the Challenge, Leaderboard, and Synthetic datasets. To evaluate classification rates, metrics such as accuracy, precision, sensitivity, specificity, under-segmentation, incorrect segmentation, and over-segmentation were also evaluated and compared with existing methods. Experimental results exhibit a higher degree of precision in the segmentation compared to existing methods. Conclusion: In this work, a deep convolutional neural network with different MR image modalities is used to detect brain tumors. The new input image was created by convolving the input images of the different modalities with their weights, which are determined from the blockwise standard deviation. Segmentation accuracy is high, with good appearance and spatial consistency, and the segmented images are evaluated using well-established metrics. In future work, the proposed method will be evaluated on other databases and the segmentation accuracy analysed in the presence of different kinds of noise.
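The blockwise standard-deviation weighting described above can be sketched roughly as follows. This is not the authors' code: the function name and the uniform-weight fallback for completely flat blocks are my assumptions, and the real method applies this per block before the summation.

```python
import numpy as np

def fuse_modalities(blocks):
    """Combine same-position blocks from several MR modalities into one
    composite block, weighting each by its standard deviation (used here
    as a proxy for the block's information content)."""
    blocks = [np.asarray(b, float) for b in blocks]
    w = np.array([b.std() for b in blocks])
    # Assumed fallback: if every block is flat, weight them equally.
    w = w / w.sum() if w.sum() > 0 else np.full(len(blocks), 1 / len(blocks))
    return sum(wi * b for wi, b in zip(w, blocks))
```

A high-contrast block in one modality thus dominates the composite at that location, while flat, uninformative blocks contribute little.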


2020 ◽  
Vol 2020 (10) ◽  
pp. 181-1-181-7
Author(s):  
Takahiro Kudo ◽  
Takanori Fujisawa ◽  
Takuro Yamaguchi ◽  
Masaaki Ikehara

Image deconvolution has recently been an important issue. It has two kinds of approaches: non-blind and blind. Non-blind deconvolution is a classic image deblurring problem, which assumes that the PSF is known and spatially invariant. Recently, convolutional neural networks (CNNs) have been used for non-blind deconvolution. Although CNNs can deal with complex changes in unknown images, some conventional CNN-based methods can only handle small PSFs and do not consider the large PSFs that occur in the real world. In this paper we propose a CNN-based non-blind deconvolution framework that can remove large-scale ringing from a deblurred image. Our method has three key points. The first is a network architecture that preserves both large and small features in the image. The second is a training dataset created to preserve details. The third is that we extend the images to minimize the effect of large ringing at the image borders. In our experiments, we used three kinds of large PSFs and observed high-precision results from our method both quantitatively and qualitatively.
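The third key point, extending the image before deconvolution, is commonly done by mirror-padding so that border ringing falls into the padding and is cropped away afterwards. A minimal sketch under that assumption (the paper does not specify its extension scheme; reflection padding is my choice for illustration):

```python
import numpy as np

def extend_image(img, pad):
    """Extend a 2-D image by mirror reflection so that ringing caused by
    the borders lands in the padded margin rather than the image proper."""
    return np.pad(img, pad, mode="reflect")

def crop_back(img, pad):
    """Remove the padded margin after deconvolution."""
    return img[pad:-pad, pad:-pad]
```

The padding width is typically chosen on the order of the PSF support, so that the wrap-around artifacts of FFT-based deconvolution stay inside the margin.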

