dice similarity coefficient
Recently Published Documents


TOTAL DOCUMENTS

349
(FIVE YEARS 303)

H-INDEX

11
(FIVE YEARS 5)

Author(s):  
Nermeen Elmenabawy ◽  
Mervat El-Seddek ◽  
Hossam El-Din Moustafa ◽  
Ahmed Elnakib

A pipelined framework is proposed for accurate, automated, simultaneous segmentation of the liver and hepatic tumors from computed tomography (CT) images. The proposed framework is composed of three pipelined stages. First, two different transfer-learned deep convolutional neural networks (CNNs) are applied to extract high-level compact features from the CT images. Second, a pixel-wise classifier is used to obtain two classified output maps, one for each CNN model. Finally, a fusion neural network (FNN) is used to integrate the two maps. Experiments performed on the MICCAI 2017 liver tumor segmentation (LITS) challenge database result in a Dice similarity coefficient (DSC) of 93.5% for segmentation of the liver and 74.4% for segmentation of the lesion, using a 5-fold cross-validation scheme. Comparisons with state-of-the-art techniques on the same data show the competitive performance of the proposed framework for simultaneous liver and tumor segmentation.
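The Dice similarity coefficient reported above measures the overlap between a predicted binary mask and the ground truth. A minimal sketch of its computation (NumPy, for illustration; not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: two 4x4 masks whose 3-pixel foregrounds overlap on 2 pixels.
a = np.zeros((4, 4)); a[0, 0:3] = 1
b = np.zeros((4, 4)); b[0, 1:4] = 1
print(round(float(dice_coefficient(a, b)), 4))  # → 0.6667
```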


2022 ◽  
Vol 3 (1) ◽  
pp. 1-16
Author(s):  
Bradley Feiger ◽  
Erick Lorenzana-Saldivar ◽  
Colin Cooke ◽  
Roarke Horstmeyer ◽  
Muath Bishawi ◽  
...  

Segmentation and reconstruction of arteries is important for a variety of medical and engineering fields, such as surgical planning and physiological modeling. However, manual methods can be laborious and subject to a high degree of human variability. In this work, we developed various convolutional neural network (CNN) architectures to segment Stanford type B aortic dissections (TBADs), characterized by a tear in the descending aortic wall creating a normal channel of blood flow called the true lumen and a pathologic channel within the wall called the false lumen. We introduced several variations of the two-dimensional (2D) and three-dimensional (3D) U-Net, in which small stacks of slices were input to the networks instead of individual slices or whole geometries. We compared these variations with a variety of CNN segmentation architectures and found that stacking the input data slices in the upward direction with the 2D U-Net improved segmentation accuracy, as measured by the Dice similarity coefficient (DC) and point-by-point average distance (AVD), by more than 15%. Our optimal architecture produced DC scores of 0.94, 0.88, and 0.90 and AVD values of 0.074, 0.22, and 0.11 for the whole aorta, true lumen, and false lumen, respectively. Altogether, the predicted reconstructions closely matched the manual reconstructions.
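Feeding small stacks of adjacent slices into a 2D network, as described above, amounts to treating neighboring slices as extra input channels. A hedged sketch (NumPy; the stack size of 3 and the padding convention are illustrative assumptions, not the authors' exact configuration):

```python
import numpy as np

def stack_slices(volume, k=1):
    """For each slice z, stack slices [z-k, ..., z+k] as channels.

    volume: (Z, H, W) array; returns (Z, 2k+1, H, W).
    Edge slices are handled by repeating the boundary slice.
    """
    z = volume.shape[0]
    padded = np.pad(volume, ((k, k), (0, 0), (0, 0)), mode="edge")
    return np.stack([padded[i:i + z] for i in range(2 * k + 1)], axis=1)

vol = np.random.rand(10, 64, 64)   # toy CT volume: 10 slices
batch = stack_slices(vol, k=1)     # each sample now has 3 channels
print(batch.shape)                 # → (10, 3, 64, 64)
```

The middle channel of each sample is the original slice, so the network sees each slice together with its immediate neighbors.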


Biology ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 134
Author(s):  
Xiang Yu ◽  
Shui-Hua Wang ◽  
Juan Manuel Górriz ◽  
Xian-Wei Jiang ◽  
David S. Guttery ◽  
...  

As an important imaging modality, mammography is considered the global gold standard for early detection of breast cancer. Computer-aided (CAD) systems have played a crucial role in facilitating quicker diagnostic procedures, which could otherwise take weeks if only radiologists were involved. Some of these CAD systems require breast pectoral segmentation to partition the breast region from the pectoral muscle for specific analysis tasks. Therefore, accurate and efficient breast pectoral muscle segmentation frameworks are in high demand. Here, we propose a novel deep learning framework, code-named PeMNet, for breast pectoral muscle segmentation in mammography images. In the proposed PeMNet, we integrate a novel attention module called the Global Channel Attention Module (GCAM), which can effectively improve the segmentation performance of Deeplabv3+ with minimal parameter overhead. In GCAM, channel attention maps (CAMs) are first extracted by concatenating feature maps after parallel global average pooling and global maximum pooling operations. The CAMs are then refined and scaled up by a multi-layer perceptron (MLP) for elementwise multiplication with the CAMs of the next feature level. By iteratively repeating this procedure, the global CAMs (GCAMs) are formed and multiplied elementwise with the final feature maps to produce the final segmentation. In this way, CAMs from the early stages of a deep convolutional network can be effectively passed on to later stages, leading to better use of information. Experiments on a merged dataset derived from two datasets, INbreast and OPTIMAM, showed that PeMNet greatly outperformed state-of-the-art methods, achieving an IoU of 97.46%, a global pixel accuracy of 99.48%, a Dice similarity coefficient of 96.30%, and a Jaccard index of 93.33%.
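The channel-attention idea in GCAM, parallel global average and max pooling followed by a shared MLP and elementwise rescaling, can be sketched roughly as follows (NumPy, single feature map; the hidden width and random weights are illustrative assumptions, not PeMNet's actual parameters):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(feat, w1, w2):
    """feat: (C, H, W). Returns the feature map rescaled per channel."""
    avg = feat.mean(axis=(1, 2))   # global average pooling -> (C,)
    mx = feat.max(axis=(1, 2))     # global max pooling     -> (C,)

    # Shared two-layer MLP applied to both pooled vectors, then summed.
    def mlp(v):
        return np.maximum(v @ w1, 0) @ w2

    cam = sigmoid(mlp(avg) + mlp(mx))   # channel attention map, values in (0, 1)
    return feat * cam[:, None, None]    # elementwise rescale of each channel

C, H, W = 8, 16, 16
feat = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // 2)); w2 = rng.standard_normal((C // 2, C))
out = channel_attention(feat, w1, w2)
print(out.shape)  # → (8, 16, 16)
```

Because the attention values lie in (0, 1), each channel is attenuated rather than amplified; informative channels keep weights near 1.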


2022 ◽  
Vol 9 ◽  
Author(s):  
Jinqiang You ◽  
Qingxin Wang ◽  
Ruoxi Wang ◽  
Qin An ◽  
Jing Wang ◽  
...  

Purpose: The aim of this study is to develop a practicable automatic clinical target volume (CTV) delineation method for radiotherapy of breast cancer after modified radical mastectomy.

Methods: Unlike breast-conserving surgery, the radiotherapy CTV for modified radical mastectomy involves several regions, including the CTV of the chest wall (CTVcw), the supra- and infra-clavicular region (CTVsc), and the internal mammary lymphatic region (CTVim). For accurate and efficient segmentation of these CTVs, a multi-scale convolutional neural network with an orientation attention mechanism is proposed to capture the corresponding features in different perception fields. A channel-specific local Dice loss, alongside several data augmentation methods, is also designed specifically to stabilize model training and improve the generalization performance of the model. The segmentation performance is quantitatively evaluated by statistical metrics and qualitatively evaluated by clinicians in terms of consistency and time efficiency.

Results: The proposed method is trained and evaluated on a self-collected dataset, which contains 110 computed tomography scans from patients with breast cancer who underwent modified radical mastectomy. The experimental results show that the proposed segmentation method achieved superior performance in terms of Dice similarity coefficient (DSC), Hausdorff distance (HD), and average symmetric surface distance (ASSD) compared with baseline approaches.

Conclusion: Both quantitative and qualitative evaluation results demonstrate that the specifically designed method is practical and effective for automatic contouring of CTVs for radiotherapy of breast cancer after modified radical mastectomy. By employing this method, clinicians can save significant time on manual delineation while obtaining contouring results with high consistency.
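The channel-specific Dice loss mentioned above can be thought of as a soft Dice computed per output channel (one channel per CTV region) and then averaged. A minimal sketch under that assumption (NumPy; the "local" windowing of the authors' loss is omitted for brevity):

```python
import numpy as np

def soft_dice_loss_per_channel(prob, target, eps=1e-6):
    """prob, target: (C, H, W); prob in [0, 1], target binary.

    Returns 1 - mean over channels of soft Dice, so 0 means a perfect match.
    """
    inter = (prob * target).sum(axis=(1, 2))
    denom = prob.sum(axis=(1, 2)) + target.sum(axis=(1, 2))
    dice = (2 * inter + eps) / (denom + eps)
    return 1.0 - dice.mean()

# Three toy target channels, e.g. CTVcw, CTVsc, CTVim regions.
t = np.zeros((3, 8, 8)); t[0, :4] = 1; t[1, 4:] = 1; t[2, :, :4] = 1
print(soft_dice_loss_per_channel(t, t))   # perfect prediction → loss 0.0
```

Averaging per-channel Dice prevents a large region from dominating the gradient of a small one, which is one common motivation for channel-wise formulations.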


2022 ◽  
Author(s):  
Jing Shen ◽  
Yinjie TAO ◽  
Hui GUAN ◽  
Hongnan ZHEN ◽  
Lei HE ◽  
...  

Abstract Purpose: Clinical target volumes (CTV) and organs at risk (OAR) can be auto-contoured to save workload. The goal of this study was to assess a convolutional neural network (CNN) for fully automatic and accurate delineation of the CTV and OARs in prostate cancer, and to compare predicted treatment plans based on the auto-contoured CTV with clinical plans.

Methods: From January 2013 to January 2019, 217 computed tomography (CT) scans of patients with locally advanced prostate cancer treated at our hospital were collected and analyzed. The CTV and OARs were delineated with a deep-learning-based method named CUNet. The performance of this strategy was evaluated using the mean Dice similarity coefficient (DSC), the 95th percentile Hausdorff distance (95HD), and subjective evaluation. Treatment plans were graded using predetermined evaluation criteria, and percentage errors for clinical doses to the planning target volume (PTV) and organs at risk (OARs) were calculated.

Results: The delineated CTVs had mean DSC and 95HD values of 0.84 and 5.04 mm, respectively. The average delineation time was less than 15 seconds per patient's CT scan. When CTV contours from CUNet were blindly compared with the ground truth (GT), the overall selection rates (CUNet vs GT) for clinicians A and B were 53.15% vs 46.85% and 54.05% vs 45.95%, respectively (P>0.05), demonstrating that our deep learning model performed as well as or better than human delineation. Furthermore, 8 test patients were chosen at random to design predicted plans based on the auto-contoured CTV and OARs, showing acceptable agreement with the clinical plans: average absolute dose differences of D2, D50, D98, and Dmean for the PTV are within 0.74%, and average absolute volume differences of V45 and V50 for the OARs are within 3.4%. The predicted results are comparable to the clinical ground truth, without statistical significance (p>0.05).

Conclusion: The experimental results show that the CTV and OARs delineated by CUNet for prostate cancer were very close to the ground truth. CUNet has the potential to cut radiation oncologists' contouring time in half. When compared with clinical plans, the differences between estimated doses to the CTV and OARs based on auto-contouring were small, with no statistical significance, indicating that treatment planning for prostate cancer based on auto-contouring is promising.
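The 95th percentile Hausdorff distance (95HD) used above replaces the maximum surface-to-surface distance with the 95th percentile, making the metric robust to a few outlier contour points. A pure-NumPy sketch on point sets (contour extraction is omitted; inputs are assumed to be (N, 2) coordinate arrays):

```python
import numpy as np

def hd95(points_a, points_b):
    """Symmetric 95th percentile Hausdorff distance between two 2D point sets."""
    # Pairwise Euclidean distances, shape (len(a), len(b)).
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)   # for each point in A, distance to nearest point in B
    b_to_a = d.min(axis=0)   # and vice versa
    return max(np.percentile(a_to_b, 95), np.percentile(b_to_a, 95))

a = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])
b = a + np.array([0.0, 1.0])   # same contour shifted by 1 unit
print(hd95(a, b))              # → 1.0
```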


Mathematics ◽  
2022 ◽  
Vol 10 (2) ◽  
pp. 206
Author(s):  
Yanshan Zhang ◽  
Yuru Tian

Image segmentation technology is currently devoted to segmenting images with intensity inhomogeneity. In this paper, we propose a new method that incorporates a fractional varying-order differential and local fitting energy to construct a new variational level-set active contour model. The energy function in this paper mainly includes three parts: a local term, a regular term, and a penalty term. The local term, combined with the fractional varying-order differential, can capture more details of the image. The regular term is used to regularize the length of the image contour. The penalty term is used to keep the evolution curve smooth. The true positive (TP) rate, false positive (FP) rate, precision (P), Jaccard similarity coefficient (JSC), and Dice similarity coefficient (DSC) are employed as comparative measures for the segmentation results. Experimental results on both synthetic and real images show that our method produces more accurate segmentation results than other models and is robust to intensity inhomogeneity and noise.


Author(s):  
Enrica Cavedo ◽  
Philippe Tran ◽  
Urielle Thoprakarn ◽  
Jean-Baptiste Martini ◽  
Antoine Movschin ◽  
...  

Abstract Objectives: QyScore® is an imaging analysis tool certified in Europe (CE marked) and the US (FDA cleared) for the automatic volumetry of grey and white matter (GM and WM, respectively), the hippocampus (HP), the amygdala (AM), and white matter hyperintensities (WMH). Here we compare the performance of QyScore® with the consensus of expert neuroradiologists.

Methods: The Dice similarity coefficient (DSC) and the relative volume difference (RVD) for GM and WM volumes were calculated on 50 3DT1 images. The DSC and the F1 metric were calculated for WMH on 130 3DT1 and FLAIR images. For each index, we identified thresholds of reliability based on a review of the current literature. We hypothesized that DSC/F1 scores obtained using QyScore® markers would be higher than the threshold, whereas RVD scores would be lower. Regression analysis and Bland–Altman plots were obtained to evaluate the performance of QyScore® against the consensus of three expert neuroradiologists.

Results: The lower bound of the DSC/F1 confidence intervals was higher than the threshold for the GM, WM, HP, AM, and WMH, and the upper bound of the RVD confidence interval was below the threshold for the WM, GM, HP, and AM. Compared with the consensus of three expert neuroradiologists, QyScore® provides reliable automatic segmentation of the GM, WM, HP, AM, and WMH volumes.

Conclusions: QyScore® is a reliable medical device in comparison with the consensus of expert neuroradiologists. Therefore, QyScore® could be implemented in clinical trials and clinical routine to support the diagnosis and longitudinal monitoring of neurological diseases.

Key Points:
• QyScore® provides reliable automatic segmentation of brain structures in comparison with the consensus of three expert neuroradiologists.
• QyScore® automatic segmentation can be performed on MRI images acquired with different vendors and protocols. In addition, the fast segmentation process saves time over manual and semi-automatic methods.
• QyScore® could be implemented in clinical trials and clinical routine to support the diagnosis and longitudinal monitoring of neurological diseases.
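The relative volume difference (RVD) used alongside DSC above compares total segmented volumes without regard to overlap, so the two indices are complementary: a segmentation can match in volume yet be misplaced. A hedged sketch of one common definition (the exact sign and normalization convention used by the authors is not stated here):

```python
import numpy as np

def relative_volume_difference(pred, ref):
    """RVD = (|pred| - |ref|) / |ref|; 0 means identical total volumes."""
    v_pred = np.count_nonzero(pred)
    v_ref = np.count_nonzero(ref)
    return (v_pred - v_ref) / v_ref

ref = np.zeros((10, 10)); ref[2:8, 2:8] = 1     # 36 voxels
pred = np.zeros((10, 10)); pred[2:8, 2:9] = 1   # 42 voxels: oversegmented
print(round(relative_volume_difference(pred, ref), 4))  # → 0.1667
```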


Tomography ◽  
2022 ◽  
Vol 8 (1) ◽  
pp. 45-58
Author(s):  
Bing Li ◽  
Chuang Liu ◽  
Shaoyong Wu ◽  
Guangqing Li

Due to the complex shape of the vertebrae and a background containing a great deal of interfering information, it is difficult to accurately segment the vertebrae from a computed tomography (CT) volume by manual segmentation. This paper proposes a convolutional neural network for vertebra segmentation, named Verte-Box. Firstly, in order to enhance feature representation and suppress interfering information, this paper places a robust attention mechanism at the central processing stage of the network, comprising a channel attention module and a dual attention module. The channel attention module is used to explore and emphasize the interdependence between channel maps of the low-level features. The dual attention module is used to enhance features along the location and channel dimensions. Secondly, we add a multi-scale convolution block to the network, which makes full use of different combinations of receptive field sizes and significantly improves the network's perception of the shape and size of the vertebrae. In addition, we combine the rough segmentation prediction maps generated from each feature in the feature box to produce the final fine prediction result, so the deeply supervised network can effectively capture vertebra information. We evaluated our method on the publicly available dataset of the CSI 2014 Vertebral Segmentation Challenge and achieved a mean Dice similarity coefficient of 92.18 ± 0.45%, an intersection over union of 87.29 ± 0.58%, and a 95% Hausdorff distance of 7.7107 ± 0.5958, outperforming other algorithms.
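The final step described above, combining rough per-level prediction maps into one fine result, can be sketched as a simple fusion of upsampled probability maps (NumPy; averaging and nearest-neighbour upsampling are illustrative assumptions, not necessarily Verte-Box's exact fusion rule):

```python
import numpy as np

def upsample_nearest(p, size):
    """Nearest-neighbour upsample a square 2D map p to (size, size)."""
    reps = size // p.shape[0]
    return np.kron(p, np.ones((reps, reps)))

def fuse_predictions(rough_maps, size=16, threshold=0.5):
    """Average upsampled rough probability maps, then threshold to a binary mask."""
    fused = np.mean([upsample_nearest(p, size) for p in rough_maps], axis=0)
    return (fused > threshold).astype(np.uint8)

rng = np.random.default_rng(1)
rough = [rng.random((s, s)) for s in (4, 8, 16)]   # maps from three feature levels
mask = fuse_predictions(rough)
print(mask.shape)  # → (16, 16)
```

Supervising each rough map directly (deep supervision) gives intermediate layers their own loss signal, which typically stabilizes training of deep segmentation networks.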


Electronics ◽  
2022 ◽  
Vol 11 (1) ◽  
pp. 130
Author(s):  
Shuangcai Yin ◽  
Hongmin Deng ◽  
Zelin Xu ◽  
Qilin Zhu ◽  
Junfeng Cheng

Due to the outbreak of lung infections caused by the coronavirus disease (COVID-19), humans have had to face an unprecedented and devastating global health crisis. Since chest computed tomography (CT) images of COVID-19 patients contain abundant pathological features closely related to this disease, rapid detection and diagnosis based on CT images is of great significance for the treatment of patients and for blocking the spread of the disease. In particular, segmentation of the COVID-19 lung-infected area in CT can quantify and evaluate the severity of the disease. However, due to the blurred boundaries and low contrast between the infected and non-infected areas in COVID-19 CT images, manual segmentation of the COVID-19 lesion is laborious and places high demands on the operator. Quick and accurate segmentation of COVID-19 lesions from CT images based on deep learning has therefore drawn increasing attention. To effectively improve the segmentation of COVID-19 lung infection, a modified UNet that combines squeeze-and-attention (SA) and dense atrous spatial pyramid pooling (Dense ASPP) modules, named SD-UNet, is proposed, fusing global context and multi-scale information. Specifically, the SA module is introduced to strengthen the attention of pixel grouping and fully exploit the global context information, allowing the network to better mine the differences and connections between pixels. The Dense ASPP module is utilized to capture multi-scale information of COVID-19 lesions. Moreover, to eliminate the interference of background noise outside the lungs and highlight the texture features of the lung lesion area, we extract the lung area from the CT images in a pre-processing stage. Finally, we evaluate our method on binary-class and multi-class COVID-19 lung infection segmentation datasets.
The experimental results show that the Sensitivity, Dice similarity coefficient, Accuracy, Specificity, and Jaccard similarity are 0.8988 (0.6169), 0.8696 (0.5936), 0.9906 (0.9821), 0.9932 (0.9907), and 0.7702 (0.4788), respectively, for the binary-class (multi-class) segmentation task with the proposed SD-UNet. The COVID-19 lung infection areas segmented by SD-UNet are closer to the ground truth than those of several existing models, such as CE-Net, DeepLab v3+, and UNet++, which further shows that our method achieves more accurate segmentation. It has the potential to assist doctors in making more accurate and rapid diagnoses and quantitative assessments of COVID-19.
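The Dense ASPP module above relies on atrous (dilated) convolution, which enlarges the receptive field without adding parameters by spacing out the kernel taps. A minimal 1D illustration (NumPy; a real ASPP applies parallel 2D dilated convolutions at several rates):

```python
import numpy as np

def dilated_conv1d(x, kernel, rate):
    """'Valid' 1D convolution with dilation `rate`: kernel taps are `rate` apart."""
    span = (len(kernel) - 1) * rate + 1   # effective receptive field
    out_len = len(x) - span + 1
    return np.array([
        sum(kernel[j] * x[i + j * rate] for j in range(len(kernel)))
        for i in range(out_len)
    ])

x = np.arange(10, dtype=float)
k = np.array([1.0, 1.0, 1.0])
print(dilated_conv1d(x, k, rate=1))  # receptive field 3
print(dilated_conv1d(x, k, rate=3))  # receptive field 7, same 3 parameters
```

Stacking branches with increasing rates, as Dense ASPP does, lets one layer see lesions of very different sizes at once.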


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 245
Author(s):  
Seok Oh ◽  
Young-Jae Kim ◽  
Young-Taek Park ◽  
Kwang-Gi Kim

Automatic segmentation of pancreatic cyst lesions (PCLs) is essential for the automated diagnosis of pancreatic cyst lesions on endoscopic ultrasonography (EUS) images. In this study, we proposed a deep-learning approach for PCL segmentation on EUS images. We employed the Attention U-Net model for automatic PCL segmentation and compared it with the Basic U-Net, Residual U-Net, and U-Net++ models. The Attention U-Net showed better Dice similarity coefficient (DSC) and intersection over union (IoU) scores than the other models on the internal test. Although the Basic U-Net showed higher DSC and IoU scores on the external test than the Attention U-Net, the difference was not statistically significant. On the internal test of the cross-over study, the Attention U-Net showed the highest DSC and IoU scores; however, there was no significant difference between the Attention U-Net and the Residual U-Net or between the Attention U-Net and U-Net++. On the external test of the cross-over study, none of the models showed a significant difference from the others. To the best of our knowledge, this is the first study to implement segmentation of PCLs on EUS images using a deep-learning approach. Our experimental results show that a deep-learning approach can be applied successfully to PCL segmentation on EUS images.
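The Attention U-Net used above augments skip connections with additive attention gates that suppress irrelevant encoder features before concatenation. A rough pixel-wise sketch (NumPy, random weights; the dimensions and the 1×1-projection form are illustrative assumptions, not the study's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, wx, wg, psi):
    """x: skip features (C, H, W); g: gating signal from the decoder (C, H, W).

    Additive attention: alpha = sigmoid(psi · relu(Wx x + Wg g)); output = x * alpha.
    """
    q = np.maximum(np.einsum("dc,chw->dhw", wx, x) +
                   np.einsum("dc,chw->dhw", wg, g), 0)   # ReLU of summed projections
    alpha = sigmoid(np.einsum("d,dhw->hw", psi, q))      # spatial attention map in (0, 1)
    return x * alpha[None, :, :]                         # gate the skip features

C, D, H, W = 4, 2, 8, 8
x = rng.standard_normal((C, H, W)); g = rng.standard_normal((C, H, W))
wx = rng.standard_normal((D, C)); wg = rng.standard_normal((D, C))
psi = rng.standard_normal(D)
out = attention_gate(x, g, wx, wg, psi)
print(out.shape)  # → (4, 8, 8)
```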

