Paraspinal Muscle Segmentation Based on Deep Neural Network

Sensors ◽  
2019 ◽  
Vol 19 (12) ◽  
pp. 2650 ◽  
Author(s):  
Haixing Li ◽  
Haibo Luo ◽  
Yunpeng Liu

The accurate segmentation of the paraspinal muscles in Magnetic Resonance (MR) images is a critical step in the automated analysis of lumbar diseases such as chronic low back pain, disc herniation and lumbar spinal stenosis. However, the automatic segmentation of the multifidus and erector spinae has not yet been achieved, owing to three particular challenges: (1) the muscle boundaries are unclear; (2) the gray-level histogram of the target overlaps with that of the background; (3) the shape varies within and between patients. We propose to tackle the automatic segmentation of the paravertebral muscles with a modified U-Net consisting of two main modules: a residual module and a feature pyramid attention (FPA) module. The residual module passes the gradient directly through a shortcut while preserving image detail, making the model easier to train. The FPA module fuses context information at different scales and provides salient features for the high-level feature maps. In this paper, 120 cases were used for the experiments, provided and labeled by the spine surgery department of Shengjing Hospital of China Medical University. The experimental results show that the model achieves strong predictive capability: the Dice coefficient of the multifidus is 0.949 with a Hausdorff distance of 4.62 mm, and the Dice coefficient of the erector spinae is 0.913 with a Hausdorff distance of 7.89 mm. This work contributes to the development of an automatic measurement system for the paraspinal muscles, which is of great significance for the treatment of spinal diseases.
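
To illustrate the two building blocks the abstract names, a minimal PyTorch sketch of a residual block and a feature pyramid attention module follows; the channel counts, pooling scales and wiring are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch of a residual block and a feature pyramid attention (FPA)
# head, as named in the abstract. Sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with an identity shortcut so gradients can
    bypass the convolutional path and image detail is preserved."""
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # shortcut connection

class FeaturePyramidAttention(nn.Module):
    """Fuses context pooled at several scales and uses it to re-weight the
    high-level feature map."""
    def __init__(self, channels):
        super().__init__()
        self.scales = (2, 4, 8)
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in self.scales]
        )
        self.out = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        h, w = x.shape[2:]
        context = 0
        for scale, conv in zip(self.scales, self.branches):
            pooled = F.avg_pool2d(x, kernel_size=scale)
            context = context + F.interpolate(
                conv(pooled), size=(h, w), mode="bilinear", align_corners=False
            )
        attention = torch.sigmoid(self.out(context))
        return x * attention  # emphasise salient regions

# Example: a bottleneck feature map from a U-Net-style encoder
feats = torch.randn(1, 64, 32, 32)
feats = ResidualBlock(64)(feats)
feats = FeaturePyramidAttention(64)(feats)
print(feats.shape)  # torch.Size([1, 64, 32, 32])
```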

Author(s):  
Guangmin Sun ◽  
Zhongxiang Zhang ◽  
Junjie Zhang ◽  
Meilong Zhu ◽  
Xiao-rong Zhu ◽  
...  

Abstract Automatic segmentation of the optic disc (OD) and optic cup (OC) is an essential task in analysing colour fundus images. In clinical practice, accurate OD and OC segmentation assists ophthalmologists in diagnosing glaucoma. In this paper, we propose a unified convolutional neural network, named ResFPN-Net, which learns the boundary features and the inner relation between OD and OC for automatic segmentation. The proposed ResFPN-Net is mainly composed of a multi-scale feature extractor, a multi-scale segmentation transition and an attention pyramid architecture. The multi-scale feature extractor performs the feature encoding of fundus images and captures boundary representations. The multi-scale segmentation transition is employed to retain features at different scales. Moreover, an attention pyramid architecture is proposed to learn rich representations and the mutual connection between the OD and OC. To verify the effectiveness of the proposed method, we conducted extensive experiments on two public datasets. On the Drishti-GS database, we achieved Dice coefficients of 97.59% and 89.87%, accuracies of 99.21% and 98.77%, and averaged Hausdorff distances of 0.099 and 0.882 for OD and OC segmentation, respectively. On the RIM-ONE database, we achieved Dice coefficients of 96.41% and 83.91%, accuracies of 99.30% and 99.24%, and averaged Hausdorff distances of 0.166 and 1.210 for OD and OC segmentation, respectively. Comprehensive results show that the proposed method outperforms other competitive OD and OC segmentation methods and is more adaptable in cross-dataset scenarios. The introduced multi-scale loss function achieved significantly lower training loss and higher accuracy compared with other loss functions. Furthermore, the proposed method was validated on the OC-to-OD ratio calculation task and achieved the best MAE of 0.0499 and 0.0630 on the Drishti-GS and RIM-ONE datasets, respectively. Finally, we evaluated the effectiveness of glaucoma screening on the Drishti-GS and RIM-ONE datasets, achieving AUCs of 0.8947 and 0.7964. These results prove that the proposed ResFPN-Net is effective in analysing fundus images for glaucoma screening and can be applied in other related biomedical image segmentation applications.
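
The reported OC-to-OD ratio can in principle be read directly from the two segmentation masks. The NumPy sketch below uses the vertical extents of the masks; whether the authors compute a vertical or an area ratio is not stated in the abstract, so this definition is an assumption.

```python
# Sketch of deriving a cup-to-disc ratio from binary OD and OC masks.
# The vertical-extent definition below is an assumption.
import numpy as np

def vertical_extent(mask: np.ndarray) -> int:
    """Number of rows that contain at least one foreground pixel."""
    rows = np.where(mask.any(axis=1))[0]
    return 0 if rows.size == 0 else int(rows.max() - rows.min() + 1)

def cup_to_disc_ratio(od_mask: np.ndarray, oc_mask: np.ndarray) -> float:
    disc = vertical_extent(od_mask)
    return vertical_extent(oc_mask) / disc if disc else 0.0

# Toy example: a 10-row disc containing a 4-row cup -> ratio 0.4
od = np.zeros((64, 64), bool); od[20:30, 20:40] = True
oc = np.zeros((64, 64), bool); oc[23:27, 25:35] = True
print(round(cup_to_disc_ratio(od, oc), 2))  # 0.4
```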


2021 ◽  
Vol 7 (1) ◽  
pp. 96-100
Author(s):  
Lennart Bargsten ◽  
Katharina A. Riedl ◽  
Tobias Wissel ◽  
Fabian J. Brunner ◽  
Klaus Schaefers ◽  
...  

Abstract Knowing the shape of vascular calcifications is crucial for appropriate planning and conduct of percutaneous coronary interventions. The clinical workflow can therefore benefit from automatic segmentation of calcified plaques in intravascular ultrasound (IVUS) images. To solve segmentation problems with convolutional neural networks (CNNs), large datasets are usually required. However, datasets are often rather small in the medical domain. Hence, developing and investigating methods for increasing CNN performance on small datasets can help on the way towards clinically relevant results. We compared two state-of-the-art CNN architectures for segmentation, U-Net and DeepLabV3, and investigated how incorporating auxiliary image data with vessel wall and lumen annotations improves calcium segmentation performance when these are used either for pre-training or for multi-task training. DeepLabV3 outperforms U-Net by up to 6.3% in terms of the Dice coefficient and 36.5% in terms of the average Hausdorff distance. Using auxiliary data improves the segmentation performance in both cases, with the multi-task approach outperforming the pre-training approach. The improvement of the multi-task approach over not using auxiliary data at all is 5.7% for the Dice coefficient and 42.9% for the average Hausdorff distance. Automatic segmentation of calcified plaques in IVUS images is a demanding task due to their relatively small size compared to the image dimensions and due to visual ambiguities with other image structures. We showed that this problem can generally be tackled by CNNs. Furthermore, we were able to improve the performance with a multi-task learning approach using auxiliary segmentation data.
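
A hedged PyTorch sketch of the multi-task setup described above follows: a shared backbone with a calcium head and an auxiliary wall/lumen head, trained on a weighted sum of the two losses. The tiny backbone, head widths and 0.5 weighting are illustrative assumptions rather than the authors' configuration.

```python
# Multi-task segmentation sketch: one shared backbone, two heads,
# one combined loss. Architecture details are illustrative assumptions.
import torch
import torch.nn as nn

class MultiTaskSegNet(nn.Module):
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.backbone = nn.Sequential(          # stand-in for U-Net/DeepLabV3
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.calcium_head = nn.Conv2d(feat, 1, 1)   # main task: calcium mask
        self.aux_head = nn.Conv2d(feat, 3, 1)       # auxiliary: bg/wall/lumen

    def forward(self, x):
        f = self.backbone(x)
        return self.calcium_head(f), self.aux_head(f)

model = MultiTaskSegNet()
bce, ce = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss()

ivus = torch.randn(4, 1, 128, 128)
calcium_gt = torch.randint(0, 2, (4, 1, 128, 128)).float()
wall_lumen_gt = torch.randint(0, 3, (4, 128, 128))

calcium_logits, aux_logits = model(ivus)
loss = bce(calcium_logits, calcium_gt) + 0.5 * ce(aux_logits, wall_lumen_gt)
loss.backward()   # both heads update the shared backbone
```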


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
C. A. Neves ◽  
E. D. Tran ◽  
I. M. Kessler ◽  
N. H. Blevins

Abstract Middle- and inner-ear surgery is a vital treatment option for hearing loss, infections, and tumors of the lateral skull base. Segmentation of otologic structures from computed tomography (CT) has many potential applications for improving surgical planning but can be an arduous and time-consuming task. We propose an end-to-end solution for the automated segmentation of temporal bone CT using convolutional neural networks (CNN). Using 150 manually segmented CT scans, 3 CNN models (AH-Net, U-Net, ResNet) were compared with respect to Dice coefficient, Hausdorff distance, and speed of segmentation of the inner ear, ossicles, facial nerve and sigmoid sinus. Using AH-Net, the Dice coefficient was 0.91 for the inner ear; 0.85 for the ossicles; 0.75 for the facial nerve; and 0.86 for the sigmoid sinus. The average Hausdorff distance was 0.25, 0.21, 0.24 and 0.45 mm, respectively. Blinded experts assessed the accuracy of both techniques, and there was no statistical difference between the ratings for the two methods (p = 0.93). Objective and subjective assessment confirm good agreement between automated segmentation of otologic structures and manual segmentation performed by a specialist. This end-to-end automated segmentation pipeline can help to advance the systematic application of augmented reality, simulation, and automation in otologic procedures.
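
The two reported metrics can be computed directly from a pair of binary masks. The NumPy/SciPy sketch below shows one common way to do so; isotropic voxel spacing and the use of whole-mask point sets (rather than extracted surfaces) are simplifying assumptions.

```python
# Dice coefficient and symmetric Hausdorff distance for two binary volumes.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a: np.ndarray, b: np.ndarray) -> float:
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def hausdorff(a: np.ndarray, b: np.ndarray, spacing_mm=1.0) -> float:
    pa = np.argwhere(a) * spacing_mm          # foreground coordinates in mm
    pb = np.argwhere(b) * spacing_mm
    return max(directed_hausdorff(pa, pb)[0],
               directed_hausdorff(pb, pa)[0])

pred = np.zeros((32, 32, 32), bool); pred[10:20, 10:20, 10:20] = True
gt   = np.zeros((32, 32, 32), bool); gt[11:21, 10:20, 10:20] = True
print(round(dice(pred, gt), 3), round(hausdorff(pred, gt, 0.25), 2))  # 0.9 0.25
```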


2020 ◽  
Vol 4 (1) ◽  
Author(s):  
Francesco Rizzetto ◽  
Francesca Calderoni ◽  
Cristina De Mattia ◽  
Arianna Defeudis ◽  
Valentina Giannini ◽  
...  

Abstract Background Radiomics is expected to improve the management of metastatic colorectal cancer (CRC). We aimed to evaluate the impact of liver lesion contouring as a source of variability in radiomic features (RFs). Methods After Ethics Committee approval, 70 liver metastases in 17 CRC patients were segmented on contrast-enhanced computed tomography scans by two residents and checked by experienced radiologists. RFs from grey level co-occurrence and run length matrices were extracted from three-dimensional (3D) regions of interest (ROIs) and the largest two-dimensional (2D) ROIs. Inter-reader variability was evaluated with the Dice coefficient and Hausdorff distance, whilst its impact on RFs was assessed using the mean relative change (MRC) and intraclass correlation coefficient (ICC). For the main lesion of each patient, one reader also segmented a circular ROI on the same image used for the 2D ROI. Results The best inter-reader contouring agreement was observed for 2D ROIs according to both the Dice coefficient (median 0.85, interquartile range 0.78–0.89) and the Hausdorff distance (0.21 mm, 0.14–0.31 mm). Comparing RF values, MRC ranged from 0% to 752% for 2D and from 0% to 1567% for 3D. For 24/32 RFs (75%), MRC was lower for 2D than for 3D. An ICC > 0.90 was observed for more RFs for 2D (53%) than for 3D (34%). Only 2/32 RFs (6%) showed a variability between 2D and circular ROIs higher than the inter-reader variability. Conclusions A 2D contouring approach may help mitigate overall inter-reader variability, although stable RFs can be extracted from both 3D and 2D segmentations of CRC liver metastases.
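
A short NumPy sketch of the mean relative change (MRC) used above follows; normalising the absolute inter-reader difference by the mean of the two readings is an assumption, since the abstract does not give the exact formula.

```python
# Per-feature mean relative change (%) between two readers' radiomic features.
import numpy as np

def mean_relative_change(reader1: np.ndarray, reader2: np.ndarray) -> np.ndarray:
    """MRC in percent for each feature, averaged over all lesions."""
    diff = np.abs(reader1 - reader2)
    mean = (reader1 + reader2) / 2.0
    return 100.0 * np.mean(diff / mean, axis=0)

# Toy data: 70 lesions x 32 radiomic features per reader
rng = np.random.default_rng(0)
r1 = rng.uniform(1.0, 10.0, size=(70, 32))
r2 = r1 * rng.normal(1.0, 0.05, size=(70, 32))   # reader 2 differs by ~5%
print(mean_relative_change(r1, r2).round(1)[:5])
```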


Author(s):  
V. K. Deepak ◽  
R. Sarath

In medical image processing, brain tumor segmentation is a quintessential task, and early diagnosis improves the chance of survival. Manually processing large numbers of MRI images is complex and time-consuming, so an automatic brain tumor segmentation process is needed. This paper gives a comparative study of MRI-based brain tumor segmentation. Recent automatic segmentation methods combined with advanced techniques yield improved results and address the problem better than earlier methods. The paper therefore compares three models, a Deformable Model with Fuzzy C-Means clustering (DMFCM), Adaptive Clustering with Super Pixel Segmentation (ACSP) and Grey Wolf Optimization based ACSP (GWO_ACSP), tested on The Cancer Imaging Archive, a training database containing high-grade and low-grade astrocytoma tumors. Metrics including accuracy, Dice coefficient, Jaccard score and MCC are assessed to produce the results. From this evaluation, Grey Wolf Optimization based ACSP (GWO_ACSP) gives the best solution to the brain tumor segmentation problem.
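
The four evaluation metrics listed above can be computed on flattened binary masks, as in the hedged scikit-learn sketch below (the Dice coefficient is written out explicitly because scikit-learn has no built-in Dice helper).

```python
# Accuracy, Dice, Jaccard and MCC for a predicted vs. reference binary mask.
import numpy as np
from sklearn.metrics import accuracy_score, jaccard_score, matthews_corrcoef

def evaluate(pred: np.ndarray, gt: np.ndarray) -> dict:
    p, g = pred.ravel().astype(int), gt.ravel().astype(int)
    inter = np.logical_and(p, g).sum()
    return {
        "accuracy": accuracy_score(g, p),
        "dice": 2.0 * inter / (p.sum() + g.sum()),
        "jaccard": jaccard_score(g, p),
        "mcc": matthews_corrcoef(g, p),
    }

pred = np.zeros((64, 64), bool); pred[20:40, 20:40] = True
gt   = np.zeros((64, 64), bool); gt[22:42, 20:40] = True
print({k: round(v, 3) for k, v in evaluate(pred, gt).items()})
```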


2021 ◽  
pp. 20210141
Author(s):  
Anne Schomöller ◽  
Lucie Risch ◽  
Hannes Kaplick ◽  
Monique Wochatz ◽  
Tilman Engel ◽  
...  

Objective: To assess the reliability of measurements of paraspinal muscle transverse relaxation times (T2 times) between two observers and within one observer at different time points. Methods: 14 participants (9f/5m, 33 ± 5 years, 176 ± 10 cm, 73 ± 12 kg) underwent 2 consecutive MRI scans (M1, M2) on the same day, followed by 1 MRI scan 13–14 days later (M3) in a mobile 1.5 Tesla MRI. T2 times were calculated from T2-weighted turbo spin-echo sequences at the spinal level of the third lumbar vertebra (11 slices, 2 mm slice thickness, 1 mm interslice gap, echo times: 20, 40, 60, 80, 100 ms) for the M. erector spinae (ES) and M. multifidus (MF). The following reliability parameters were calculated for the agreement of T2 times between two different investigators (OBS1 & OBS2) on the same MRI (inter-rater reliability, IR) and by one investigator between different MRIs of the same participant (intersession variability, IS): Test–Retest Variability (TRV = Difference/Mean × 100); Coefficient of Variation (CV = Standard deviation/Mean × 100); Bland–Altman analysis (systematic bias = mean of the differences; upper/lower limits of agreement = bias ± 1.96 × SD); Intraclass Correlation Coefficient ICC(3,1) with absolute agreement, as well as its 95% confidence interval. Results: Mean TRV for IR was 2.6% for ES and 4.2% for MF. Mean TRV for IS was 3.5% (ES) and 5.1% (MF). Mean CV for IR was 1.9% (ES) and 3.0% (MF). Mean CV for IS was 2.5% (ES) and 3.6% (MF). Systematic biases of 1.3 ms (ES) and 2.1 ms (MF) were detected for IR, and of 0.4 ms (ES) and 0.07 ms (MF) for IS. ICC for IR was 0.94 (ES) and 0.87 (MF). ICC for IS was 0.88 (ES) and 0.82 (MF). Conclusion: The reliable assessment of paraspinal muscle T2 times justifies their use for scientific purposes. The applied technique can be recommended for future studies that aim to assess changes in T2 times, e.g. after an intense bout of eccentric exercise.
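
A hedged SciPy sketch of how a T2 time could be derived from the five listed echo times follows: a mono-exponential decay S(TE) = S0 * exp(-TE/T2) is fitted to the mean ROI signal at each echo. The authors' exact fitting procedure may differ, and the signal values below are synthetic.

```python
# Mono-exponential T2 fit over the five echo times listed in the methods.
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(te, s0, t2):
    return s0 * np.exp(-te / t2)

echo_times = np.array([20.0, 40.0, 60.0, 80.0, 100.0])      # ms
roi_signal = np.array([820.0, 540.0, 360.0, 240.0, 160.0])  # synthetic ROI means

(s0, t2), _ = curve_fit(mono_exp, echo_times, roi_signal, p0=(1000.0, 50.0))
print(f"T2 = {t2:.1f} ms")  # roughly 49 ms for this synthetic decay
```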


2020 ◽  
Vol 12 (2) ◽  
pp. 37-45
Author(s):  
João Marcos Garcia Fagundes ◽  
Allan Rodrigues Rebelo ◽  
Luciano Antonio Digiampietri ◽  
Helton Hideraldo Bíscaro

Bee preservation is important because approximately 70% of all pollination of food crops is performed by bees, a service valued at more than $65 billion annually. To aid this preservation, identification of bee species is necessary, and since this is a costly and time-consuming process, techniques that automate and facilitate this identification become relevant. Images of bees' wings, in conjunction with computer vision and artificial intelligence techniques, can be used to automate this process. This paper presents an approach to the segmentation of bees' wing images and feature extraction. Our approach was evaluated using the modified Hausdorff distance and the F-measure. The results were at least 24% more precise than those of related approaches, and the proposed approach was able to deal with noisy images.
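
The evaluation relies on the modified Hausdorff distance, which averages point-to-set distances rather than taking their maximum. A minimal NumPy/SciPy sketch for two 2D point sets follows; the step of extracting the points (e.g. wing landmarks or contour pixels) from the segmented images is assumed.

```python
# Modified Hausdorff distance between two 2D point sets.
import numpy as np
from scipy.spatial.distance import cdist

def modified_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    d = cdist(a, b)                      # pairwise distances, shape (|A|, |B|)
    return max(d.min(axis=1).mean(),     # mean nearest-neighbour A -> B
               d.min(axis=0).mean())     # mean nearest-neighbour B -> A

detected  = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0]])
reference = np.array([[0.1, 0.0], [1.0, 0.1], [2.0, 0.9]])
print(round(modified_hausdorff(detected, reference), 3))  # 0.1
```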


Author(s):  
James P. Howard ◽  
Sameer Zaman ◽  
Aaraby Ragavan ◽  
Kerry Hall ◽  
Greg Leonard ◽  
...  

Abstract The large number of available MRI sequences means patients cannot realistically undergo them all, so the range of sequences to be acquired during a scan is protocolled based on clinical details. Adapting this to unexpected findings identified early in the scan requires experience and vigilance. We investigated whether deep learning on the images acquired in the first few minutes of a scan could provide an automated early alert of abnormal features. Anatomy sequences from 375 CMR scans were used as a training set. From these, we annotated 1500 individual slices and used them to train a convolutional neural network to perform automatic segmentation of the cardiac chambers, great vessels and any pleural effusions. 200 scans were used as a testing set. The system then assembled a 3D model of the thorax, from which it made clinical measurements to identify important abnormalities. The system was successful in segmenting the anatomy slices (Dice 0.910) and identified multiple features which may guide further image acquisition. Diagnostic accuracy was 90.5% and 85.5% for left and right ventricular dilatation, 85% for left ventricular hypertrophy and 94.4% for ascending aorta dilatation. The area under the ROC curve for diagnosing pleural effusions was 0.91. We present proof of concept that a neural network can segment and derive accurate clinical measurements from a 3D model of the thorax made from transaxial anatomy images acquired in the first few minutes of a scan. This early information could lead to dynamic adaptive scanning protocols and, by focusing scanner time appropriately and prioritizing cases for supervision and early reporting, improve patient experience and efficiency.
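
The pleural-effusion result is reported as an area under the ROC curve. The scikit-learn sketch below shows how such a figure could be computed from a per-scan measurement and the reference labels; the effusion-volume values are synthetic and purely illustrative.

```python
# ROC AUC from a continuous per-scan measurement and binary reference labels.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=200)                    # 0 = no effusion
# Hypothetical measured effusion volumes (ml): larger when effusion present
volumes = labels * rng.uniform(50, 400, 200) + rng.uniform(0, 60, 200)

print(round(roc_auc_score(labels, volumes), 2))  # close to 1.0 for this
                                                 # well-separated synthetic case
```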


2021 ◽  
Vol 17 (4) ◽  
pp. 101-118
Author(s):  
Nandhini Abirami ◽  
Durai Raj Vincent ◽  
Seifedine Kadry

Early and automatic segmentation of lung infections from computed tomography images of COVID-19 patients is crucial for timely quarantine and effective treatment. However, automating the segmentation of lung infection from CT slices is challenging due to the lack of contrast between normal and infected tissues. A CNN- and GAN-based framework is presented to classify and then segment lung infections automatically from COVID-19 lung CT slices. In this work, the authors propose a novel method named P2P-COVID-SEG to automatically classify COVID-19 and normal CT images and then segment COVID-19 lung infections from CT images using a GAN. The proposed model outperformed existing classification models with an accuracy of 98.10%. The segmentation results also outperformed existing methods, producing infection masks with accurate boundaries; the Dice coefficient achieved using GAN segmentation is 81.11%. Overall, the results demonstrate that the proposed model outperforms existing models and achieves state-of-the-art performance.
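
A hedged PyTorch sketch of the classify-then-segment idea follows: a small CNN flags COVID-19 slices and only those are passed to a generator that predicts the infection mask. The tiny networks, and the pix2pix-style reading of "P2P", are assumptions rather than the authors' published architecture.

```python
# Classify-then-segment sketch: classifier gates which slices reach the
# mask-generating network. Both networks are illustrative stand-ins.
import torch
import torch.nn as nn

classifier = nn.Sequential(                 # COVID vs. normal slice
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 2),
)
generator = nn.Sequential(                  # infection mask for positive slices
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1), nn.Sigmoid(),
)

ct_batch = torch.randn(4, 1, 128, 128)
is_covid = classifier(ct_batch).argmax(dim=1).bool()      # per-slice decision
if is_covid.any():
    masks = generator(ct_batch[is_covid])                 # only positives segmented
    print(masks.shape)                                    # (n_positive, 1, 128, 128)
```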


2021 ◽  
Vol 11 (2) ◽  
pp. 487-496
Author(s):  
Li Liu ◽  
Chi Hua ◽  
Zixuan Cheng ◽  
Yunfeng Ji

Advances in medical imaging techniques have extended the influence of medical imaging in neuroscience, and advanced imaging technology is essential to the medical industry. Magnetic resonance imaging (MRI) plays a central role in medical imaging and in the treatment of various human diseases. Doctors analyse brain size, shape, and location in brain MR images to assess brain disease and develop a treatment plan. The manual delineation of brain tissue by experts is laborious and subjective, so the study of automatic segmentation of brain MR images has practical significance. Because brain MRI images have low contrast and high noise, which seriously affect segmentation accuracy, current image segmentation methods have limitations in this application. In this paper, multiple self-organizing feature map neural networks (SOM-NN) are used to construct a parallel self-organizing feature map neural network (PSOM-NN), which converts the segmentation of brain images into a classification problem for the PSOM-NN. The experiments show that the SOM has strong self-learning ability during training, and that the parallelism of the PSOM-NN model greatly reduces segmentation time, improves the real-time performance of the model, and helps to realize fully automatic or semi-automatic segmentation of the lesion area. The PSOM can also improve segmentation accuracy and facilitate intelligent diagnosis.
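
A minimal NumPy sketch of the idea behind SOM-based segmentation follows: a small self-organizing map is trained on pixel intensities, and each pixel is then labelled by its winning node so that nodes act as tissue classes. The map size, learning schedule, intensity-only features and the omission of the parallelisation across multiple maps are all simplifying assumptions.

```python
# Tiny 1D self-organizing map over pixel intensities, used as a classifier
# that assigns each pixel to its nearest learned node (tissue class).
import numpy as np

def train_som(samples, n_nodes=4, epochs=20, lr=0.5):
    rng = np.random.default_rng(0)
    weights = rng.uniform(samples.min(), samples.max(), n_nodes)
    for epoch in range(epochs):
        rate = lr * (1.0 - epoch / epochs)       # decaying learning rate
        for s in rng.permutation(samples):
            winner = np.argmin(np.abs(weights - s))
            weights[winner] += rate * (s - weights[winner])
    return np.sort(weights)

def segment(image, weights):
    # assign each pixel to its nearest SOM node
    return np.argmin(np.abs(image[..., None] - weights), axis=-1)

# Toy "brain slice": three intensity populations plus noise
rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(m, 5, 1000) for m in (30, 90, 150)])
w = train_som(rng.choice(img, 500), n_nodes=3)
labels = segment(img.reshape(60, 50), w)
print(w.round(1), np.bincount(labels.ravel()))
```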

