Renal parenchyma segmentation in abdominal MR images based on cascaded deep convolutional neural network with signal intensity correction

Author(s): Hyeonjin Kim, Helen Hong, Dae Chul Jung, Kidon Chang, Koon Ho Rha

Author(s): Hong Lu, Xiaofei Zou, Longlong Liao, Kenli Li, Jie Liu

Compressive Sensing for Magnetic Resonance Imaging (CS-MRI) aims to reconstruct Magnetic Resonance (MR) images from under-sampled raw data. Improving CS-MRI methods poses two challenges: designing an under-sampling algorithm that achieves optimal sampling, and designing fast, compact deep neural networks that reconstruct MR images with superior quality. To improve the reconstruction quality of MR images, we propose a novel deep convolutional neural network architecture for CS-MRI named MRCSNet. MRCSNet consists of three sub-networks: a compressive sensing sampling sub-network, an initial reconstruction sub-network, and a refined reconstruction sub-network. Experimental results demonstrate that MRCSNet generates high-quality reconstructed MR images at various under-sampling ratios and meets the requirements of real-time CS-MRI applications. Compared to state-of-the-art CS-MRI approaches, MRCSNet significantly improves reconstruction accuracy as measured by Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), and it reduces the reconstruction error as measured by the Normalized Root-Mean-Square Error (NRMSE). The source code is available at https://github.com/TaihuLight/MRCSNet .
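The abstract describes the cascade at sub-network level only. The PyTorch sketch below is a minimal illustration of that three-stage idea (learned sampling, initial reconstruction, refined reconstruction); all module names, layer sizes, and the strided-convolution formulation of the sampling operator are assumptions, not the authors' implementation.

```python
# Minimal sketch of a cascaded CS-MRI network. Layer choices are
# illustrative assumptions; the paper's MRCSNet may differ.
import torch
import torch.nn as nn

class SamplingNet(nn.Module):
    """Learned compressive sampling: a strided convolution stands in
    for the under-sampling operator (assumed formulation)."""
    def __init__(self, ratio_channels=8):
        super().__init__()
        self.sample = nn.Conv2d(1, ratio_channels, kernel_size=4, stride=4)

    def forward(self, x):
        return self.sample(x)

class InitialReconNet(nn.Module):
    """Initial reconstruction: expand measurements back to image size."""
    def __init__(self, ratio_channels=8):
        super().__init__()
        self.expand = nn.ConvTranspose2d(ratio_channels, 1, kernel_size=4, stride=4)

    def forward(self, m):
        return self.expand(m)

class RefinedReconNet(nn.Module):
    """Refinement: a small residual CNN that sharpens the initial estimate."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x0):
        return x0 + self.body(x0)  # residual connection

class MRCSNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.sampling = SamplingNet()
        self.initial = InitialReconNet()
        self.refine = RefinedReconNet()

    def forward(self, x):
        m = self.sampling(x)    # under-sampled measurements
        x0 = self.initial(m)    # coarse reconstruction
        return self.refine(x0)  # refined reconstruction

# Smoke test on a 256x256 single-channel "MR image"
y = MRCSNetSketch()(torch.randn(1, 1, 256, 256))
print(y.shape)  # torch.Size([1, 1, 256, 256])
```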


2021, Vol. 68 (2), pp. 2413-2429
Author(s): Tapan Kumar Das, Pradeep Kumar Roy, Mohy Uddin, Kathiravan Srinivasan, Chuan-Yu Chang, ...

2018, Vol. 95, pp. 43-54
Author(s): Odelin Charron, Alex Lallement, Delphine Jarnet, Vincent Noblet, Jean-Baptiste Clavier, ...

Sensors, 2021, Vol. 21 (20), pp. 6714
Author(s): Artur Klepaczko, Eli Eikefjord, Arvid Lundervold

Quantification of renal perfusion based on dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) requires determination of signal intensity time courses in the region of renal parenchyma. Selection of the voxels representing the kidney must therefore be accomplished with special care and constitutes one of the major technical limitations hampering wider adoption of this technique in standard clinical routine. Manual segmentation of renal compartments, even when performed by experts, is a common source of decreased repeatability and reproducibility. In this paper, we present a processing framework for automatic kidney segmentation in DCE-MR images. The framework consists of two stages. First, kidney masks are generated using a convolutional neural network. Then, mask voxels are classified into one of three regions (cortex, medulla, and pelvis) based on their DCE-MRI signal intensity time courses. The proposed approach was evaluated on a cohort of 10 healthy volunteers who underwent DCE-MRI examination; scanning was repeated at two time points within a 10-day interval. For the semantic segmentation task we employed a classic U-Net architecture, whereas the voxel classification experiments were performed using three alternative algorithms (support vector machines, logistic regression, and extreme gradient boosting trees), among which the SVM produced the most accurate results. Both the segmentation and classification steps were accomplished by a series of models, each trained for a given subject using data from the other participants only. The mean whole-kidney segmentation accuracy was 94% in terms of the IoU coefficient, and the cortex, medulla, and pelvis were segmented with IoU values ranging from 90% to 93% depending on the tissue and body side. The results were also validated by comparing image-derived perfusion parameters with ground-truth measurements of glomerular filtration rate (GFR). The repeatability of the GFR calculation, as assessed by the coefficient of variation, was 14.5% and 17.5% for the left and right kidney, respectively, and improved relative to manual segmentation. Reproducibility, in turn, was evaluated by measuring the agreement between image-derived and iohexol-based GFR values: with the proposed automated segmentation method, the estimated absolute mean differences were 9.4 and 12.9 mL/min/1.73 m2 for scanning sessions 1 and 2, respectively. The result for session 2 was comparable with manual segmentation, whereas for session 1 reproducibility in the automatic pipeline was weaker.
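The second stage (classifying mask voxels into cortex, medulla, or pelvis from their time courses) with per-subject models trained on the remaining participants amounts to a leave-one-subject-out scheme. The sketch below illustrates it with scikit-learn's SVC on synthetic stand-in data; the feature layout, preprocessing, and hyperparameters are assumptions, as the abstract does not specify them.

```python
# Leave-one-subject-out voxel classification by DCE-MRI time course.
# All data here are synthetic stand-ins for illustration.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_voxels, n_timepoints = 10, 500, 40

# One time course per kidney-mask voxel, labeled
# 0=cortex, 1=medulla, 2=pelvis.
X = rng.normal(size=(n_subjects, n_voxels, n_timepoints))
y = rng.integers(0, 3, size=(n_subjects, n_voxels))

accuracies = []
for test_subject in range(n_subjects):
    # Train on every subject except the held-out one.
    train_idx = [s for s in range(n_subjects) if s != test_subject]
    X_train = X[train_idx].reshape(-1, n_timepoints)
    y_train = y[train_idx].reshape(-1)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    clf.fit(X_train, y_train)
    accuracies.append(clf.score(X[test_subject], y[test_subject]))

print(f"mean leave-one-subject-out accuracy: {np.mean(accuracies):.3f}")
```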


2020, Vol. 15 (2), pp. 94-108
Author(s): R. Kala, P. Deepa

Background: Accurate detection of a brain tumor and its severity is a challenging task in the medical field, so there is a need to develop brain tumor detection algorithms, an emerging area for diagnosis, treatment planning, and outcome evaluation.

Materials and Methods: A brain tumor segmentation method using deep-learning classification and multi-modal composition has been developed based on deep convolutional neural networks. The different MRI modalities, T1, FLAIR, T1C, and T2, are given as input to the proposed method and are used in proportion to their information content. The weights of the modalities are calculated blockwise, with the standard deviation of each block taken as a proxy for its information content. Each of the T1, FLAIR, T1C, and T2 images is then convolved with its corresponding modality weight, and the weighted modalities are summed to obtain a new composite image, which is given as the input to the deep convolutional neural network. The network performs segmentation through its successive layers, applying different filter operations in each layer to obtain enhanced classification and spatially consistent segmentation results. Analysis of the proposed method shows that the discriminatory information from the different modalities is effectively combined to increase the overall segmentation accuracy.

Results: The proposed deep convolutional neural network for brain tumor segmentation was analyzed using the Brain Tumor Segmentation Challenge 2013 database (BRATS 2013). The complete, core, and enhancing regions were validated with the Dice Similarity Coefficient and the Jaccard similarity index on the Challenge, Leaderboard, and Synthetic data sets. To evaluate the classification rates, metrics such as accuracy, precision, sensitivity, specificity, under-segmentation, incorrect segmentation, and over-segmentation were also evaluated and compared with existing methods. Experimental results exhibit a higher degree of precision in the segmentation compared to existing methods.

Conclusion: In this work, a deep convolutional neural network with different MR image modalities is used to detect brain tumors. The new input image is created by combining the modality images with their weights, which are determined using the blockwise standard deviation. Segmentation accuracy is high, with efficient appearance and spatial consistency, and the segmented images are evaluated using well-established metrics. In future work, the proposed method will be evaluated on other databases, and the segmentation accuracy will be analyzed in the presence of different kinds of noise.
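The blockwise composition step is concrete enough to sketch: each modality is weighted block by block using the block's standard deviation as an information-content proxy, and the weighted modalities are summed into a single composite image. In the NumPy sketch below, the block size and the normalization of weights across modalities are assumptions not stated in the abstract.

```python
# Blockwise std-weighted multi-modal composition (illustrative sketch).
import numpy as np

def blockwise_std_weights(image, block=8):
    """Per-block standard deviation, expanded back to full image size."""
    h, w = image.shape
    weights = np.zeros_like(image, dtype=float)
    for i in range(0, h, block):
        for j in range(0, w, block):
            weights[i:i+block, j:j+block] = image[i:i+block, j:j+block].std()
    return weights

def composite(modalities, block=8):
    """Weighted sum of modality images using blockwise std weights."""
    weights = np.stack([blockwise_std_weights(m, block) for m in modalities])
    weights /= weights.sum(axis=0) + 1e-8  # normalize across modalities
    return sum(w * m for w, m in zip(weights, modalities))

# Synthetic stand-ins for co-registered T1, FLAIR, T1C, and T2 slices.
rng = np.random.default_rng(0)
t1, flair, t1c, t2 = (rng.normal(size=(64, 64)) for _ in range(4))
comp = composite([t1, flair, t1c, t2])
print(comp.shape)  # (64, 64) composite fed to the segmentation CNN
```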


2020, Vol. 2020 (4), pp. 4-14
Author(s): Vladimir Budak, Ekaterina Ilyina

The article proposes a classification of lenses with different symmetric beam angles and offers a scale serving as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of the axial luminous intensity on the beam angle was obtained. Using this collection, a new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was trained by transfer learning. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This work makes it possible to classify arbitrary spotlights with an accuracy of about 80%. Thus, a lighting designer can use this new CNN-based model to determine the class of a spotlight and the corresponding type of lens together with its technical parameters.
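Transfer learning from a pre-trained GoogLeNet typically means freezing the backbone and replacing the final classification layer. The sketch below shows this with torchvision; the number of beam-angle classes (here 5), the frozen backbone, and the training settings are assumptions, since the article does not specify them.

```python
# GoogLeNet transfer-learning sketch for spotlight classification.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 5  # hypothetical number of beam-angle classes

# Load GoogLeNet pre-trained on ImageNet and freeze the backbone.
net = models.googlenet(weights=models.GoogLeNet_Weights.DEFAULT)
for p in net.parameters():
    p.requires_grad = False

# Replace the final fully connected layer for the new task.
net.fc = nn.Linear(net.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of 224x224 RGB images.
net.train()
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
outputs = net(images)
# In train mode GoogLeNet may return a namedtuple with auxiliary logits.
logits = outputs.logits if hasattr(outputs, "logits") else outputs
loss = criterion(logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
print(f"loss: {loss.item():.4f}")
```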

