Using spatial-temporal ensembles of convolutional neural networks for lumen segmentation in ureteroscopy

Author(s):  
Jorge F. Lazo ◽  
Aldo Marzullo ◽  
Sara Moccia ◽  
Michele Catellani ◽  
Benoit Rosa ◽  
...  

Abstract
Purpose: Ureteroscopy is an efficient minimally invasive endoscopic technique for the diagnosis and treatment of upper tract urothelial carcinoma. During ureteroscopy, automatic segmentation of the hollow lumen is of primary importance, since it indicates the path that the endoscope should follow. To obtain an accurate segmentation of the hollow lumen, this paper presents an automatic method based on convolutional neural networks (CNNs).
Methods: The proposed method is based on an ensemble of four parallel CNNs that simultaneously process single-frame and multi-frame information. Two architectures serve as core models, namely a U-Net based on residual blocks ($m_1$) and Mask-RCNN ($m_2$), which are fed with single still frames $I(t)$. The other two models ($M_1$, $M_2$) are modifications of the former, adding a stage that uses 3D convolutions to process temporal information. $M_1$ and $M_2$ are fed with triplets of frames ($I(t-1)$, $I(t)$, $I(t+1)$) to produce the segmentation for $I(t)$.
Results: The proposed method was evaluated on a custom dataset of 11 videos (2673 frames) collected and manually annotated from 6 patients. It achieves a Dice similarity coefficient of 0.80, outperforming previous state-of-the-art methods.
Conclusion: The results show that spatial-temporal information can be effectively exploited by the ensemble model to improve hollow lumen segmentation in ureteroscopic images. The method is also effective in the presence of poor visibility, occasional bleeding, or specular reflections.
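The abstract does not give the exact layer configuration, so the following is a minimal PyTorch sketch of the idea: a small 3D-convolutional stage that collapses the frame triplet into a single feature map for a 2D segmentation backbone, plus a simple averaging rule for fusing the ensemble's probability maps. All names and sizes here are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class TemporalFusion3D(nn.Module):
    """Hypothetical temporal stage: 3D convolutions over a frame triplet
    (I(t-1), I(t), I(t+1)) that collapse the time axis before a 2D
    segmentation backbone, loosely following the abstract's description."""
    def __init__(self, in_ch=3, out_ch=3):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(in_ch, 16, kernel_size=(3, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
        )

    def forward(self, clip):           # clip: (B, C, T=3, H, W)
        x = self.conv3d(clip)          # time axis shrinks 3 -> 1
        return x.squeeze(2)            # (B, C, H, W) for the 2D model

def ensemble_masks(prob_maps, threshold=0.5):
    """Average per-model probability maps and binarise; the actual
    fusion rule used in the paper may differ."""
    return (torch.stack(prob_maps).mean(dim=0) > threshold).float()
```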

2020 ◽  
Author(s):  
Abhinav Sagar ◽  
J Dheeba

Abstract
In this work, we address the problem of skin cancer classification using convolutional neural networks. Many cancer cases are misdiagnosed early on as something else, leading to severe consequences, including the death of the patient. There are also cases in which patients have some other problem but doctors suspect skin cancer, leading to unnecessary time and money spent on further diagnosis. In this work, we address both of these problems using deep neural networks and a transfer learning architecture. We use the publicly available ISIC database for both training and testing our model. Our work achieves an accuracy of 0.935, precision of 0.94, recall of 0.77, F1 score of 0.85, and ROC-AUC of 0.861, which is better than previous state-of-the-art approaches.
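As a hedged illustration of the transfer-learning setup described (the abstract does not name the backbone), the following PyTorch sketch freezes an ImageNet-pre-trained ResNet50 and trains only a new classification head; the choice of ResNet50 is an assumption.

```python
import torch.nn as nn
from torchvision import models

def build_skin_classifier(num_classes=2):
    """Generic transfer-learning sketch for binary skin-lesion
    classification; the backbone is an illustrative assumption."""
    backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    for p in backbone.parameters():   # freeze the pretrained features
        p.requires_grad = False
    # Replace the ImageNet head with a fresh, trainable classifier.
    backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)
    return backbone
```

Only `backbone.fc` receives gradients, so training reduces to fitting a linear classifier on frozen ImageNet features, which is the cheapest form of transfer learning.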


2021 ◽  
Vol 10 ◽  
Author(s):  
Zhikai Liu ◽  
Fangjie Liu ◽  
Wanqi Chen ◽  
Xia Liu ◽  
Xiaorong Hou ◽  
...  

Background: This study aims to construct and validate a model based on convolutional neural networks (CNNs) that can automatically segment the clinical target volumes (CTVs) of breast cancer for radiotherapy.
Methods: Computed tomography (CT) scans of 110 patients who underwent modified radical mastectomies were collected. The CTV contours were confirmed by two experienced oncologists. A novel CNN was constructed to automatically delineate the CTV. Quantitative evaluation metrics were calculated, and a clinical evaluation was conducted to assess the performance of the model.
Results: The mean Dice similarity coefficient (DSC) of the proposed model was 0.90, and the 95th-percentile Hausdorff distance (95HD) was 5.65 mm. The evaluation by the two clinicians showed that 99.3% of the chest wall CTV slices were acceptable to clinician A and 98.9% to clinician B. In addition, 9/10 patients had all slices accepted by clinician A, compared with 7/10 for clinician B. The score differences between the AI (artificial intelligence) group and the GT (ground truth) group were not statistically significant for either clinician. However, the scores in the AI group differed significantly between the two clinicians, with a Kappa consistency index of 0.259. Delineating the chest wall CTV with the model took 3.45 s.
Conclusion: Our model can automatically generate CTVs for breast cancer. The AI-generated structures of the proposed model were comparable to, or even better than, human-generated structures. Additional multicentre evaluations should be performed for adequate validation before the model is applied in clinical practice.
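For reference, the two reported metrics can be computed from binary masks as follows; this is a standard NumPy/SciPy formulation, not necessarily the authors' implementation.

```python
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hd95(pred, gt, spacing=1.0):
    """95th-percentile Hausdorff distance (in mm, given voxel spacing)
    between the surfaces of two binary masks."""
    surf_p = pred ^ binary_erosion(pred)   # boundary voxels of prediction
    surf_g = gt ^ binary_erosion(gt)       # boundary voxels of ground truth
    dist_to_g = distance_transform_edt(~surf_g, sampling=spacing)
    dist_to_p = distance_transform_edt(~surf_p, sampling=spacing)
    # Symmetric set of surface-to-surface distances, robust 95th percentile.
    d = np.concatenate([dist_to_g[surf_p], dist_to_p[surf_g]])
    return np.percentile(d, 95)
```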


Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 624
Author(s):  
Stefan Rohrmanstorfer ◽  
Mikhail Komarov ◽  
Felix Mödritscher

With the ever-increasing amount of image data, it has become necessary to automatically search for and process the information contained in these images. As fashion is captured in images, the fashion sector provides the perfect foundation for a service or application built on an image classification model. In this article, the state of the art for image classification is analyzed and discussed. Based on this knowledge, four different approaches were implemented to extract features from fashion data. For this purpose, a human-worn fashion dataset with 2567 images was created and then significantly enlarged through image augmentation operations. The results show that convolutional neural networks are the undisputed standard for classifying images, and that TensorFlow is the best library to build them. Moreover, through the introduction of dropout layers, data augmentation, and transfer learning, model overfitting was successfully prevented, and the validation accuracy on the created dataset improved incrementally from an initial 69% to a final 84%. More distinct apparel, such as trousers, shoes, and hats, was classified better than other upper-body clothes.
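A minimal tf.keras sketch of the three countermeasures named above (dropout, data augmentation, transfer learning); the MobileNetV2 backbone, layer sizes, and 10-class head are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf

# Data augmentation layers are active only during training.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.1),
    tf.keras.layers.RandomZoom(0.1),
])

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # transfer learning: reuse frozen ImageNet features

model = tf.keras.Sequential([
    augment,
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),                       # dropout against overfitting
    tf.keras.layers.Dense(10, activation="softmax"),    # e.g. 10 apparel classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```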


Author(s):  
Sebastian Nowak ◽  
Narine Mesropyan ◽  
Anton Faron ◽  
Wolfgang Block ◽  
Martin Reuter ◽  
...  

Abstract
Objectives: To investigate the diagnostic performance of deep transfer learning (DTL) for detecting liver cirrhosis from clinical MRI.
Methods: The dataset for this retrospective analysis consisted of 713 (343 female) patients who underwent liver MRI between 2017 and 2019. In total, 553 of these subjects had a confirmed diagnosis of liver cirrhosis, while the remainder had no history of liver disease. T2-weighted MRI slices at the level of the caudate lobe were manually exported for DTL analysis. Data were randomly split into training, validation, and test sets (70%/15%/15%). A ResNet50 convolutional neural network (CNN) pre-trained on the ImageNet archive was used for cirrhosis detection with and without upstream liver segmentation. Classification performance was compared to that of two radiologists with different levels of experience (a 4th-year resident and a board-certified radiologist). Segmentation was performed using a U-Net architecture built on a pre-trained ResNet34 encoder. Differences in classification accuracy were assessed by the χ²-test.
Results: Dice coefficients for automatic segmentation were above 0.98 for both validation and test data. The classification accuracy for liver cirrhosis on validation (vACC) and test (tACC) data for the DTL pipeline with upstream liver segmentation (vACC = 0.99, tACC = 0.96) was significantly higher than that of the resident (vACC = 0.88, p < 0.01; tACC = 0.91, p = 0.01) and of the board-certified radiologist (vACC = 0.96, p < 0.01; tACC = 0.90, p < 0.01).
Conclusion: This proof-of-principle study demonstrates the potential of DTL for detecting cirrhosis from standard T2-weighted MRI, with expert-level classification accuracy.
Key Points:
• A pipeline of two convolutional neural networks (CNNs) pre-trained on an extensive natural-image database (the ImageNet archive) enables detection of liver cirrhosis on standard T2-weighted MRI.
• High classification accuracy can be achieved even without altering the pre-trained parameters of the CNNs.
• When the network was trained on unsegmented images, abdominal structures other than the liver were also relevant for detection.
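A rough PyTorch sketch of such a two-stage pipeline follows; `segmentation_models_pytorch` is an assumed helper library (the paper does not name its tooling), and the masking and channel handling are illustrative simplifications.

```python
import torch
import torch.nn as nn
from torchvision import models
import segmentation_models_pytorch as smp  # assumed library, not named by the authors

# Stage 1: liver segmentation (U-Net with a pre-trained ResNet34 encoder).
seg_net = smp.Unet(encoder_name="resnet34", encoder_weights="imagenet",
                   in_channels=1, classes=1)

# Stage 2: cirrhosis classifier (ResNet50 pre-trained on ImageNet, frozen).
clf = models.resnet50(weights="IMAGENET1K_V1")
for p in clf.parameters():
    p.requires_grad = False                  # keep pre-trained parameters unchanged
clf.fc = nn.Linear(clf.fc.in_features, 2)    # cirrhosis vs. no liver disease

def predict(slice_1ch):
    """Mask the T2w slice with the predicted liver, then classify."""
    mask = torch.sigmoid(seg_net(slice_1ch)) > 0.5
    masked = (slice_1ch * mask).repeat(1, 3, 1, 1)  # 1 -> 3 channels for ResNet50
    return clf(masked).softmax(dim=1)
```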


2020 ◽  
Vol 2 (1) ◽  
pp. 23-36
Author(s):  
Syed Aamir Ali Shah ◽  
Muhammad Asif Manzoor ◽  
Abdul Bais

Forest structure estimation is very important in geological, ecological, and environmental studies. It provides the basis for carbon stock estimation and for the effective assessment of carbon sequestration, sources, and sinks. Multiple parameters are used to estimate forest structure, such as above-ground biomass, leaf area index, and diameter at breast height. Among these parameters, vegetation height has a unique standing: in addition to supporting forest structure estimation, it provides insight into long-term historical changes and allows the stand age of forests to be estimated. Multiple techniques are available for estimating canopy height. Light detection and ranging (LiDAR) based methods, while accurate and useful, are very expensive to acquire and lack global coverage. There is therefore a need for a mechanism to estimate canopy height from freely available satellite imagery, such as Landsat images. Several studies contribute to this area, the majority using Landsat images with random forest models. Although random forest models are widely used in remote sensing applications, they cannot exploit the spatial association of neighboring pixels in the modeling process. In this work, we define a Convolutional Neural Network based model and analyze it under three test configurations. We replicate the random forest based setup of Grant et al., a comparable state-of-the-art study, and show that convolutional neural network (CNN) based models not only capture the spatial association of neighboring pixels but also outperform the state of the art.
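The following PyTorch sketch illustrates the key difference from a per-pixel random forest: the convolutions see a whole neighbourhood of Landsat pixels around the target pixel. Band count, patch size, and layer widths are assumptions, not the study's configuration.

```python
import torch.nn as nn

class CanopyHeightCNN(nn.Module):
    """Illustrative patch-based regressor: unlike a per-pixel random
    forest, the convolutions aggregate a neighbourhood (e.g. 15x15
    Landsat pixels) around each target pixel."""
    def __init__(self, bands=6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # pool the spatial neighbourhood
        )
        self.head = nn.Linear(64, 1)      # canopy height in metres

    def forward(self, x):                 # x: (B, bands, patch, patch)
        return self.head(self.features(x).flatten(1))
```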


2021 ◽  
Vol 159 (6) ◽  
pp. 824-835.e1
Author(s):  
Rosalia Leonardi ◽  
Antonino Lo Giudice ◽  
Marco Farronato ◽  
Vincenzo Ronsivalle ◽  
Silvia Allegrini ◽  
...  

2017 ◽  
Vol 25 (1) ◽  
pp. 93-98 ◽  
Author(s):  
Yuan Luo ◽  
Yu Cheng ◽  
Özlem Uzuner ◽  
Peter Szolovits ◽  
Justin Starren

Abstract We propose Segment Convolutional Neural Networks (Seg-CNNs) for classifying relations from clinical notes. Seg-CNNs use only word-embedding features without manual feature engineering. Unlike typical CNN models, relations between 2 concepts are identified by simultaneously learning separate representations for text segments in a sentence: preceding, concept1, middle, concept2, and succeeding. We evaluate Seg-CNN on the i2b2/VA relation classification challenge dataset. We show that Seg-CNN achieves a state-of-the-art micro-average F-measure of 0.742 for overall evaluation, 0.686 for classifying medical problem–treatment relations, 0.820 for medical problem–test relations, and 0.702 for medical problem–medical problem relations. We demonstrate the benefits of learning segment-level representations. We show that medical domain word embeddings help improve relation classification. Seg-CNNs can be trained quickly for the i2b2/VA dataset on a graphics processing unit (GPU) platform. These results support the use of CNNs computed over segments of text for classifying medical relations, as they show state-of-the-art performance while requiring no manual feature engineering.
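A condensed PyTorch sketch of the segment-level idea follows; the filter sizes, filter counts, and number of relation classes are illustrative assumptions rather than the paper's exact hyperparameters.

```python
import torch
import torch.nn as nn

class SegCNN(nn.Module):
    """Sketch of Seg-CNN: each of the five segments (preceding, concept1,
    middle, concept2, succeeding) is convolved and max-pooled separately,
    then the segment representations are concatenated for classification."""
    def __init__(self, vocab_size, emb_dim=200, n_filters=100, n_classes=8):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.convs = nn.ModuleList(
            [nn.Conv1d(emb_dim, n_filters, kernel_size=3, padding=1)
             for _ in range(5)])          # one convolution per segment
        self.fc = nn.Linear(5 * n_filters, n_classes)

    def forward(self, segments):          # list of 5 LongTensors (B, L_i)
        reps = []
        for seg, conv in zip(segments, self.convs):
            e = self.emb(seg).transpose(1, 2)              # (B, emb_dim, L)
            reps.append(conv(e).relu().max(dim=2).values)  # max over positions
        return self.fc(torch.cat(reps, dim=1))
```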

