lumen segmentation
Recently Published Documents


TOTAL DOCUMENTS: 84 (FIVE YEARS: 22)
H-INDEX: 12 (FIVE YEARS: 2)

Author(s): Jorge F. Lazo, Aldo Marzullo, Sara Moccia, Michele Catellani, Benoit Rosa, ...

Abstract
Purpose: Ureteroscopy is an efficient, minimally invasive endoscopic technique for the diagnosis and treatment of upper tract urothelial carcinoma. During ureteroscopy, automatic segmentation of the hollow lumen is of primary importance, since it indicates the path the endoscope should follow. To obtain an accurate segmentation of the hollow lumen, this paper presents an automatic method based on convolutional neural networks (CNNs).
Methods: The proposed method is based on an ensemble of four parallel CNNs that simultaneously process single-frame and multi-frame information. Two architectures are taken as core models, namely a U-Net based on residual blocks ($m_1$) and Mask-RCNN ($m_2$), which are fed with single still frames $I(t)$. The other two models ($M_1$, $M_2$) are modifications of the former, consisting of an additional stage that uses 3D convolutions to process temporal information. $M_1$ and $M_2$ are fed with triplets of frames ($I(t-1)$, $I(t)$, $I(t+1)$) to produce the segmentation for $I(t)$. A minimal sketch of these two ingredients is given below.
Results: The proposed method was evaluated on a custom dataset of 11 videos (2673 frames) collected and manually annotated from 6 patients. We obtained a Dice similarity coefficient of 0.80, outperforming previous state-of-the-art methods.
Conclusion: The obtained results show that spatio-temporal information can be effectively exploited by the ensemble model to improve hollow lumen segmentation in ureteroscopic images. The method is also effective in the presence of poor visibility, occasional bleeding, or specular reflections.
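The sketch below illustrates the two ideas named in the abstract: a 3D-convolution stage that fuses the triplet ($I(t-1)$, $I(t)$, $I(t+1)$) into a single-frame representation before a 2D segmentation backbone, and a simple ensemble over the four models' outputs. It is an illustrative assumption, not the authors' implementation: the layer sizes, the averaging fusion rule, and the module names are all hypothetical.

```python
# Minimal sketch (PyTorch) of (1) a 3D-convolution temporal fusion stage and
# (2) a simple averaging ensemble over per-model probability maps.
# Layer sizes and the averaging rule are assumptions, not the paper's exact design.
import torch
import torch.nn as nn


class TemporalFusion3D(nn.Module):
    """Collapse a 3-frame stack into one feature map with 3D convolutions."""

    def __init__(self, in_channels: int = 3, out_channels: int = 3):
        super().__init__()
        self.conv3d = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=(3, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, out_channels, kernel_size=(1, 3, 3), padding=(0, 1, 1)),
        )

    def forward(self, triplet: torch.Tensor) -> torch.Tensor:
        # triplet: (B, C, T=3, H, W) -> (B, C, H, W); the temporal axis collapses
        # because the first kernel spans all three frames with no temporal padding.
        return self.conv3d(triplet).squeeze(2)


def ensemble_masks(prob_maps: list[torch.Tensor], threshold: float = 0.5) -> torch.Tensor:
    """Average the per-model probability maps and binarise (hypothetical fusion rule)."""
    return (torch.stack(prob_maps).mean(dim=0) > threshold).float()
```

In this sketch, the core 2D models ($m_1$, $m_2$) would consume $I(t)$ directly, while the temporal variants ($M_1$, $M_2$) would first pass the triplet through TemporalFusion3D before a 2D backbone; the four probability maps are then combined by ensemble_masks.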



Author(s): Yaolei Qi, Han Xu, Yuting He, Guanyu Li, Zehang Li, ...

2020, Vol 1 (1), pp. 75-82
Author(s): Paulo G P Ziemer, Carlos A Bulant, José I Orlando, Gonzalo D Maso Talou, Luis A Mansilla Álvarez, ...

Abstract
Aims: Assessment of minimum lumen areas in intravascular ultrasound (IVUS) pullbacks is time-consuming and demands adequately trained personnel. In this work, we introduce a novel, fully automated pipeline to segment the lumen boundary in IVUS datasets.
Methods and results: First, an automated gating is applied to select end-diastolic frames and bypass saw-tooth artefacts. Second, within a machine learning (ML) environment, we automatically segment the lumen boundary using a multi-frame (MF) convolutional neural network (MFCNN). Finally, we use the theory of Gaussian processes (GPs) to regress the final lumen boundary. The dataset consisted of 85 IVUS pullbacks (52 patients), partitioned at the pullback level into 73 pullbacks for training (20 586 frames), 6 pullbacks for validation (1692 frames), and 6 for testing (1692 frames). The median (interquartile range, IQR) degree of overlap between the ground-truth and ML contours systematically increased from 0.896 (0.874–0.933) for MF1 to 0.925 (0.911–0.948) for MF11. The median (IQR) distance error was also reduced, from 3.83 (2.94–4.98)% for MF1 to 3.02 (2.25–3.95)% for MF11-GP. The corresponding median (IQR) lumen area error remained between 5.49 (2.50–10.50)% for MF1 and 5.12 (2.15–9.00)% for MF11-GP. The dispersion of the relative distance and area errors consistently decreased as the number of frames increased, and also when the GP regressor was coupled to the MFCNN output.
Conclusion: These results demonstrate that the proposed ML approach is suitable to effectively segment the lumen boundary in IVUS scans, reducing the burden of costly and time-consuming manual delineation.
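As a concrete illustration of the final GP step, the sketch below regresses a smooth lumen contour from noisy boundary points extracted from a segmentation mask. The polar parameterisation (radius as a function of angle), the RBF-plus-white-noise kernel, and the function name are assumptions for illustration only; the abstract does not prescribe them.

```python
# Minimal sketch of a Gaussian-process regression of a lumen contour.
# Assumptions: polar parameterisation around the contour centroid and an
# RBF + white-noise kernel; neither is specified in the abstract.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def smooth_lumen_contour(points_xy: np.ndarray, n_out: int = 360) -> np.ndarray:
    """points_xy: (N, 2) boundary points from the CNN mask; returns (n_out, 2) smoothed contour."""
    centre = points_xy.mean(axis=0)
    d = points_xy - centre
    theta = np.arctan2(d[:, 1], d[:, 0])   # angle of each boundary point
    radius = np.hypot(d[:, 0], d[:, 1])    # distance from the centroid

    gp = GaussianProcessRegressor(
        kernel=RBF(length_scale=0.5) + WhiteKernel(noise_level=0.1),
        normalize_y=True,
    )
    gp.fit(theta.reshape(-1, 1), radius)

    # Predict a dense, evenly spaced contour in angle and map back to Cartesian coordinates.
    theta_out = np.linspace(-np.pi, np.pi, n_out).reshape(-1, 1)
    r_out = gp.predict(theta_out)
    return centre + np.column_stack(
        (r_out * np.cos(theta_out.ravel()), r_out * np.sin(theta_out.ravel()))
    )
```

A plain RBF kernel does not enforce continuity across the ±π angle wrap; a periodic kernel such as scikit-learn's ExpSineSquared (with periodicity 2π) would be a natural alternative if that matters in practice.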


Complexity, 2020, Vol 2020, pp. 1-16
Author(s): Kun Zhang, JunHong Fu, Liang Hua, Peijian Zhang, Yeqin Shao, ...

Histological assessment of glands is one of the major concerns in colon cancer grading. Considering that poorly differentiated colorectal glands cannot be accurately segmented, we propose an approach for the segmentation of glands in colon cancer images based on the characteristics of lumens and rough gland boundaries. First, we use a U-net for stain separation to obtain H-stain, E-stain, and background stain intensity maps. Subsequently, epithelial nuclei are identified on the histopathology images, and lumen segmentation is performed on the background intensity map. Then, we use similar triangles based on the axis of least inertia as spatial descriptors of lumens and epithelial nuclei, and a triangle membership is used to select glandular contour candidates from the epithelial nuclei. By connecting lumens and epithelial nuclei, a more accurate gland segmentation is obtained from the rough gland boundary. The proposed stain separation approach is unsupervised; it makes the category information contained in the H&E image easier to identify and copes with uneven stain intensity and inconspicuous stain differences. In this project, we use deep learning to achieve stain separation by predicting the stain coefficients, and we design a stain coefficient interval model within the deep learning framework to improve stain generalization performance. Another innovation is the combination of the internal lumen contour of the adenoma with the outer contour of the epithelial cells to obtain a precise gland contour. We compare the performance of the proposed algorithm against several state-of-the-art methods on publicly available datasets. The results show that the segmentation approach combining the characteristics of lumens and rough gland boundaries achieves better segmentation accuracy.
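For readers who want to experiment with the pipeline's first step without training a network, the sketch below uses classical fixed-matrix colour deconvolution (scikit-image's rgb2hed, a Beer-Lambert baseline) to produce haematoxylin and eosin intensity maps, with the third deconvolved channel standing in for the background map. This is only a conventional stand-in, not the paper's method, which learns the stain coefficients with a U-net.

```python
# Classical baseline for H&E stain separation via fixed-matrix colour deconvolution.
# Not the paper's learned stain-coefficient model; the third channel (DAB in the
# standard HED basis) is used here as a rough proxy for the background map.
import numpy as np
from skimage import io
from skimage.color import rgb2hed


def separate_stains(image_path: str):
    rgb = io.imread(image_path)[..., :3]   # drop alpha channel if present
    hed = rgb2hed(rgb)                     # (H, W, 3): haematoxylin, eosin, residual
    h_map, e_map, background = hed[..., 0], hed[..., 1], hed[..., 2]
    return h_map, e_map, background
```

Lumen candidates could then be obtained by thresholding the background map, which is where the abstract states its lumen segmentation is performed.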

