Sparse Representation Label Fusion Method Combining Pixel Grayscale Weight for Brain MR Segmentation

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Pengcheng Li ◽  
Monan Wang

Abstract: Multi-atlas-based segmentation (MAS) methods have demonstrated superior performance in the field of automatic image segmentation, and label fusion is an important part of MAS methods. In this paper, we propose a label fusion method that incorporates pixel greyscale probability information. The proposed method combines the advantages of label fusion based on sparse representation (SRLF) and of weighted voting with patch similarity weights (PSWV), and introduces pixel greyscale probability information to improve segmentation accuracy. We apply the proposed method to the segmentation of deep brain tissues, including the thalamus, hippocampus, caudate, putamen, pallidum and amygdala, in challenging 3D brain MR images from the publicly available IBSR datasets. The experimental results show that the proposed method is more accurate and robust than the related methods. Compared with state-of-the-art methods, it obtains the best results for the putamen, pallidum and amygdala, and results for the hippocampus and caudate that are comparable to those of the comparison methods.
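The abstract describes combining sparse-representation weights, patch-similarity weights and a pixel greyscale probability term in a weighted vote. Below is a minimal, illustrative sketch of that kind of voxel-wise label fusion; the function name `fuse_label`, the Gaussian similarity kernel, the Lasso-based sparse coding and the particular way the three weightings are combined are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of similarity-weighted label fusion with a sparse-coding step.
# Names and the weight-combination rule are illustrative, not from the paper.
import numpy as np
from sklearn.linear_model import Lasso

def fuse_label(target_patch, atlas_patches, atlas_labels,
               greyscale_prob, alpha=0.01, sigma=0.5):
    """Fuse candidate labels for one target voxel.

    target_patch   : (d,) intensity patch centred on the target voxel
    atlas_patches  : (n, d) corresponding patches from n registered atlases
    atlas_labels   : (n,) label of the centre voxel in each atlas patch
    greyscale_prob : (n,) probability that the target intensity belongs to
                     each candidate label's intensity distribution (assumed given)
    """
    # Sparse representation: code the target patch over the atlas-patch dictionary
    lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
    lasso.fit(atlas_patches.T, target_patch)   # columns of X are atlas patches
    sparse_w = lasso.coef_                     # (n,) sparse weights

    # Patch-similarity weights (Gaussian of the patch distance)
    dists = np.linalg.norm(atlas_patches - target_patch, axis=1)
    sim_w = np.exp(-dists**2 / (2 * sigma**2))

    # Combine both weightings with the greyscale probability term
    w = (sparse_w + sim_w) * greyscale_prob

    # Weighted vote over the candidate labels
    labels = np.unique(atlas_labels)
    votes = np.array([w[atlas_labels == l].sum() for l in labels])
    return labels[np.argmax(votes)]
```

In practice such a routine would be applied at every voxel of the target image, with the atlas patches taken from the registered atlas images at the same location.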


Author(s):  
Liu Xian-Hong ◽  
Chen Zhi-Bin

Background: A multi-scale, multidirectional image fusion method is proposed that introduces the Nonsubsampled Directional Filter Bank (NSDFB) into a multi-scale edge-preserving decomposition based on the fast guided filter. Methods: The proposed method preserves edges and extracts directional information simultaneously. To obtain better fused sub-band coefficients, a Convolutional Sparse Representation (CSR)-based fusion rule is introduced for the approximation sub-bands, and a Pulse Coupled Neural Network (PCNN)-based fusion strategy, with the New Sum of Modified Laplacian (NSML) as the external input, is presented for the detail sub-bands. Results: Experimental results demonstrate the superiority of the proposed method over conventional methods in terms of both visual effects and objective evaluations. Conclusion: Combining the fast guided filter and the nonsubsampled directional filter bank, this paper proposes a multi-scale, directional, edge-preserving image fusion method that preserves edges while extracting directional information.
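As a rough illustration of the edge-preserving decomposition step, the sketch below performs a two-scale fusion with a basic guided filter. The paper's NSDFB decomposition, CSR rule for approximation sub-bands and PCNN/NSML rule for detail sub-bands are replaced here by plain averaging and max-absolute selection, so this is only a simplified stand-in under those assumptions.

```python
# Simplified two-scale edge-preserving fusion sketch using a basic guided filter.
# The NSDFB, CSR and PCNN-NSML rules of the paper are deliberately omitted.
import numpy as np
from scipy.ndimage import uniform_filter

def guided_filter(I, p, r=8, eps=1e-3):
    """Basic guided filter; I is the guide image, p the input, r the radius."""
    size = 2 * r + 1
    mean_I, mean_p = uniform_filter(I, size), uniform_filter(p, size)
    corr_Ip, corr_II = uniform_filter(I * p, size), uniform_filter(I * I, size)
    var_I  = corr_II - mean_I**2
    cov_Ip = corr_Ip - mean_I * mean_p
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * I + uniform_filter(b, size)

def fuse_two_scale(img_a, img_b):
    """Fuse two registered greyscale images (float arrays in [0, 1])."""
    # Edge-preserving base layers via self-guided filtering
    base_a, base_b = guided_filter(img_a, img_a), guided_filter(img_b, img_b)
    detail_a, detail_b = img_a - base_a, img_b - base_b

    fused_base   = 0.5 * (base_a + base_b)                      # approximation rule
    fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b),
                            detail_a, detail_b)                 # detail rule
    return fused_base + fused_detail
```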


2021 ◽  
pp. 102535
Author(s):  
Rongge Zhao ◽  
Yi Liu ◽  
Zhe Zhao ◽  
Xia Zhao ◽  
Pengcheng Zhang ◽  
...  

2019 ◽  
Vol 13 (2) ◽  
pp. 240-248 ◽  
Author(s):  
Guiqing He ◽  
Siyuan Xing ◽  
Xingjian He ◽  
Jun Wang ◽  
Jianping Fan

2019 ◽  
Vol 90 ◽  
pp. 103806 ◽  
Author(s):  
Changda Xing ◽  
Zhisheng Wang ◽  
Quan Ouyang ◽  
Chong Dong ◽  
Chaowei Duan

2014 ◽  
Vol 67 ◽  
pp. 477-489 ◽  
Author(s):  
Jun Wang ◽  
Jinye Peng ◽  
Xiaoyi Feng ◽  
Guiqing He ◽  
Jianping Fan

2019 ◽  
Vol 9 (17) ◽  
pp. 3612
Author(s):  
Liao ◽  
Chen ◽  
Mo

As the focal length of an optical lens in a conventional camera is limited, it is usually arduous to obtain an image in which each object is focused. This problem can be solved by multi-focus image fusion. In this paper, we propose an entirely new multi-focus image fusion method based on decision map and sparse representation (DMSR). First, we obtained a decision map by analyzing low-scale images with sparse representation, measuring the effective clarity level, and using spatial frequency methods to process uncertain areas. Subsequently, the transitional area around the focus boundary was determined by the decision map, and we implemented the transitional area fusion based on sparse representation. The experimental results show that the proposed method is superior to the other five fusion methods, both in terms of visual effect and quantitative evaluation.
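To make the decision-map idea concrete, here is a minimal sketch that builds a per-pixel decision map from a local spatial-frequency focus measure and blends the two sources. The sparse-representation coding of low-scale images and the transitional-area refinement described in the abstract are omitted, and the function names are illustrative rather than taken from the paper.

```python
# Minimal decision-map sketch for multi-focus fusion: a local spatial-frequency
# focus measure selects the sharper source at each pixel. The sparse coding and
# transitional-area handling of DMSR are not reproduced here.
import numpy as np
from scipy.ndimage import uniform_filter

def spatial_frequency(img, size=9):
    """Local spatial frequency: RMS of horizontal and vertical differences."""
    dx = np.zeros_like(img); dx[:, 1:] = np.diff(img, axis=1)
    dy = np.zeros_like(img); dy[1:, :] = np.diff(img, axis=0)
    return np.sqrt(uniform_filter(dx**2, size) + uniform_filter(dy**2, size))

def fuse_multifocus(img_a, img_b, size=9):
    """Fuse two registered greyscale images taken with different focus settings."""
    decision = spatial_frequency(img_a, size) >= spatial_frequency(img_b, size)
    # Soften the binary map so the focus boundary blends smoothly
    weight = uniform_filter(decision.astype(float), size)
    return weight * img_a + (1.0 - weight) * img_b
```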

