Label fusion method combining pixel greyscale probability for brain MR segmentation

2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Monan Wang ◽  
Pengcheng Li

Abstract Multi-atlas-based segmentation (MAS) methods have demonstrated superior performance in the field of automatic image segmentation, and label fusion is an important part of MAS methods. In this paper, we propose a label fusion method that incorporates pixel greyscale probability information. The proposed method combines the advantages of label fusion based on sparse representation (SRLF) and weighted voting using patch similarity weights (PSWV), and introduces pixel greyscale probability information to improve segmentation accuracy. We apply the proposed method to the segmentation of deep brain tissues in challenging 3D brain MR images from the publicly available IBSR datasets, including the thalamus, hippocampus, caudate, putamen, pallidum and amygdala. The experimental results show that the proposed method has higher segmentation accuracy and robustness than the related methods. Compared with the state-of-the-art methods, the proposed method obtains the best putamen, pallidum and amygdala segmentation results, and hippocampus and caudate segmentation results similar to those of the comparison methods.
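The weighted-voting side of the method can be illustrated with a minimal sketch: each atlas patch votes for its label with a weight derived from its greyscale similarity to the target patch, and the label with the largest total weight wins. The function name, the Gaussian weighting and the `sigma` parameter are illustrative assumptions, not the authors' implementation (which additionally combines sparse representation and pixel greyscale probability).

```python
import numpy as np

def patch_weighted_vote(target_patch, atlas_patches, atlas_labels, sigma=1.0):
    """PSWV-style label fusion: each atlas patch votes for its label with a
    weight that decays with its squared greyscale distance to the target patch;
    the label accumulating the largest total weight is returned."""
    votes = {}
    for patch, label in zip(atlas_patches, atlas_labels):
        d2 = float(np.sum((target_patch - patch) ** 2))
        weight = np.exp(-d2 / (2.0 * sigma ** 2))
        votes[label] = votes.get(label, 0.0) + weight
    return max(votes, key=votes.get)
```

In a full MAS pipeline this vote would run per voxel over patches extracted from every registered atlas.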

Author(s):  
A. Gommlich ◽  
F. Raschke ◽  
J. Petr ◽  
A. Seidlitz ◽  
C. Jentsch ◽  
...  

Abstract Objective Brain atrophy has the potential to become a biomarker for the severity of radiation-induced side effects. Brain tumour patients in particular can show large MRI signal changes over time caused by, e.g., oedema, tumour progression or necrosis. The goal of this study was to investigate whether such changes affect the segmentation accuracy of normal-appearing brain and thus influence longitudinal volumetric measurements. Materials and methods T1-weighted MR images of 52 glioblastoma patients with unilateral tumours, acquired before and three months after the end of radio(chemo)therapy, were analysed. GM and WM volumes in the contralateral hemisphere were compared between segmenting the whole brain (full) and the contralateral hemisphere only (cl) with SPM and FSL. Relative GM and WM volumes were compared using paired t-tests and correlated with the corresponding mean dose in GM and WM, respectively. Results Mean GM atrophy was significantly higher for full segmentation than for cl segmentation when using SPM (mean ± std: ΔVGM,full = − 3.1% ± 3.7%, ΔVGM,cl = − 1.6% ± 2.7%; p < 0.001, d = 0.62). GM atrophy was significantly correlated with the mean GM dose for the SPM cl segmentation (r = − 0.4, p = 0.004), FSL full segmentation (r = − 0.4, p = 0.004) and FSL cl segmentation (r = − 0.35, p = 0.012), but not for the SPM full segmentation (r = − 0.23, p = 0.1). Conclusions For accurate normal-tissue volume measurements in brain tumour patients using SPM, abnormal tissue needs to be masked prior to segmentation; this is not necessary, however, when using FSL.
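The statistical comparison described above (paired t-tests on relative volume changes, Pearson correlation with mean dose) can be sketched in plain NumPy; the toy numbers in the usage below are invented for illustration and are not study data.

```python
import numpy as np

def paired_t(a, b):
    """t statistic of a paired t-test between matched samples a and b."""
    d = np.asarray(a, float) - np.asarray(b, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(d.size))

def pearson_r(x, y):
    """Pearson correlation coefficient between x and y."""
    return float(np.corrcoef(x, y)[0, 1])
```

A negative t on (full − cl) volume changes indicates stronger measured atrophy under full segmentation; a negative r between mean dose and volume change indicates dose-dependent atrophy.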


2007 ◽  
Vol 107 (5) ◽  
pp. 989-997 ◽  
Author(s):  
Yasushi Miyagi ◽  
Fumio Shima ◽  
Tomio Sasaki

Object The goal of this study was to characterize the tendency of brain shift during stereotactic neurosurgery and the shift's impact on the unilateral and bilateral implantation of electrodes for deep brain stimulation (DBS). Methods Eight unilateral and 10 bilateral DBS electrodes targeting 10 nuclei ventrales intermedii and 18 subthalamic nuclei were implanted in patients at Kaizuka Hospital with the aid of magnetic resonance (MR) imaging–guided and microelectrode-guided methods. Brain shift was assessed as changes in the 3D coordinates of the anterior and posterior commissures (AC and PC) on MR images obtained before and immediately after the implantation surgery. The positions of the implanted electrodes, referenced to the midcommissural point and the AC–PC line, were measured both on x-ray films (virtual position) during surgery and on the postoperative MR images (actual position) obtained on the 7th day postoperatively. Results Contralateral shift of the AC and PC characterized the unilateral procedures, and posterior shift characterized the bilateral procedures. The authors suggest the following. 1) The first unilateral procedure elicits a unilateral air invasion, resulting in a contralateral brain shift. 2) During the second procedure in bilateral surgery, the contralateral shift is reset to the midline and, at the same time, the anteroposterior support by the contralateral hemisphere against gravity is lost owing to a bilateral air invasion, resulting in a significant posterior (caudal) shift. Conclusions Awareness of the tendency of the brain to shift is very important for accurate implantation of a DBS electrode or high-frequency thermocoagulation, as well as for the prediction of the therapeutic and adverse effects of stereotactic surgery.
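The shift assessment reduces to vector arithmetic on the AC and PC coordinates between the pre- and post-implantation images. The sketch below assumes a convention in which y increases anteriorly, so a negative dy is a posterior (caudal) shift; this convention and the function names are illustrative, not from the paper.

```python
import numpy as np

def commissure_shift(pre, post):
    """Displacement (dx, dy, dz) of a landmark (AC or PC) between pre- and
    post-implantation MR images; with y increasing anteriorly, dy < 0
    indicates a posterior (caudal) shift."""
    return np.asarray(post, float) - np.asarray(pre, float)

def midcommissural_point(ac, pc):
    """Midpoint of the AC-PC line, the reference origin for electrode
    coordinates in stereotactic targeting."""
    return (np.asarray(ac, float) + np.asarray(pc, float)) / 2.0
```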


Rank level fusion is one of the post-matching fusion methods used in multibiometric systems. The problem of rank information aggregation has arisen previously in various fields. This chapter discusses the rank level fusion methodology in depth, starting with the literature of the last decade across different application scenarios. Several existing biometric rank level fusion approaches, such as the plurality voting method, highest rank method, Borda count method, logistic regression method, and quality-based rank fusion method, are discussed along with their advantages and disadvantages in the context of the current state of the art in the discipline.
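Of the methods listed, the Borda count is the simplest to sketch: each matcher contributes a ranked candidate list, a candidate at rank r in a list of n candidates earns n − r points, and candidates are re-ranked by total score. This is a generic illustration of the method, not code from the chapter.

```python
def borda_count(rank_lists):
    """Fuse ranked candidate lists from several biometric matchers via the
    Borda count: a candidate at 0-based rank r in a list of length n earns
    n - r points; candidates are returned ordered by descending total score,
    with ties broken alphabetically for determinism."""
    scores = {}
    for ranking in rank_lists:
        n = len(ranking)
        for r, candidate in enumerate(ranking):
            scores[candidate] = scores.get(candidate, 0) + (n - r)
    return sorted(scores, key=lambda c: (-scores[c], c))
```

For example, three matchers ranking identities A, B, C as (A, B, C), (B, A, C) and (A, C, B) give A 8 points, B 6 and C 4, so the fused ranking is A, B, C.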


2020 ◽  
Vol 12 (23) ◽  
pp. 3979 ◽  
Author(s):  
Shuwei Hou ◽  
Wenfang Sun ◽  
Baolong Guo ◽  
Cheng Li ◽  
Xiaobo Li ◽  
...  

Many spatiotemporal image fusion methods in remote sensing have been developed to blend images of high spatial resolution with images of high temporal resolution, addressing the trade-off between spatial and temporal resolution in a single sensor. Yet none of the existing spatiotemporal fusion methods considers how the varying temporal changes between different pixels affect the performance of the fusion results; to develop an improved fusion method, these temporal changes need to be integrated into one framework. Adaptive-SFSDAF extends Flexible Spatiotemporal DAta Fusion with sub-pixel class fraction change information (SFSDAF) by performing spectral unmixing adaptively rather than for every pixel, greatly improving the efficiency of the algorithm. Accordingly, the main contributions of the proposed adaptive-SFSDAF method are twofold. One is the detection of temporal-change outliers in the image during the period between the origin and prediction dates, as these pixels are the most difficult to estimate and strongly affect the performance of spatiotemporal fusion methods. The other is an adaptive unmixing strategy based on a guided mask map, which effectively eliminates a great number of insignificant unmixed pixels. The proposed method is compared with the state-of-the-art Flexible Spatiotemporal DAta Fusion (FSDAF), SFSDAF, FIT-FC, and Unmixing-Based Data Fusion (UBDF) methods, and the fusion accuracy is evaluated both quantitatively and visually. The experimental results show that adaptive-SFSDAF achieves outstanding performance in balancing computational efficiency and fusion accuracy.
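The adaptive unmixing idea can be sketched as a mask over pixels whose temporal change between the two coarse-resolution images exceeds a threshold; only masked pixels would receive the expensive spectral unmixing step. The simple magnitude threshold below is a stand-in for the paper's guided mask map, chosen only to make the selection mechanism concrete.

```python
import numpy as np

def guided_unmix_mask(coarse_t0, coarse_t1, threshold):
    """True where the coarse-resolution temporal change magnitude exceeds the
    threshold, i.e. where spectral unmixing is worth its cost; pixels with
    insignificant change (False) can skip unmixing entirely."""
    change = np.abs(coarse_t1.astype(float) - coarse_t0.astype(float))
    return change > threshold
```

On scenes where most pixels change little between the origin and prediction dates, such a mask leaves only a small fraction of pixels to unmix, which is the source of the efficiency gain.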


2020 ◽  
Vol 32 (5) ◽  
pp. 829-864 ◽  
Author(s):  
Jing Gao ◽  
Peng Li ◽  
Zhikui Chen ◽  
Jianing Zhang

With the wide deployment of heterogeneous networks, huge amounts of data characterized by high volume, high variety, high velocity, and high veracity are generated. These data, referred to as multimodal big data, contain abundant intermodality and cross-modality information and pose vast challenges to traditional data fusion methods. In this review, we present some pioneering deep learning models for fusing such multimodal big data. With the increasing exploration of multimodal big data, some challenges remain to be addressed. Thus, this review surveys deep learning for multimodal data fusion to provide readers, regardless of their original community, with the fundamentals of multimodal deep learning fusion methods and to motivate new multimodal data fusion techniques based on deep learning. Specifically, widely used representative architectures are summarized as fundamental to the understanding of multimodal deep learning. Then the current pioneering multimodal data fusion deep learning models are summarized. Finally, some challenges and future topics of multimodal data fusion deep learning models are described.
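The simplest fusion baseline underlying many of the surveyed architectures is a joint representation built by concatenating per-modality feature vectors and projecting them linearly; deep models replace the projection with learned nonlinear layers. The sketch below is a generic illustration of this baseline, not a model from the review.

```python
import numpy as np

def concat_fuse(feat_a, feat_b, w):
    """Joint multimodal representation: concatenate the two modality feature
    vectors, then apply a linear projection w of shape
    (dim_a + dim_b, dim_out). Deep fusion models stack learned nonlinear
    layers in place of this single projection."""
    joint = np.concatenate([feat_a, feat_b], axis=-1)
    return joint @ w
```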


Sensors ◽  
2019 ◽  
Vol 19 (20) ◽  
pp. 4556 ◽  
Author(s):  
Yaochen Liu ◽  
Lili Dong ◽  
Yuanyuan Ji ◽  
Wenhai Xu

In many practical applications, it is essential that the fused image contain high-quality details to achieve a comprehensive representation of the real scene. However, existing image fusion methods suffer from loss of detail because errors accumulate across sequential tasks. This paper proposes a novel fusion method that preserves the details of infrared and visible images by combining a new decomposition, feature extraction, and fusion scheme. For decomposition, unlike most guided-filter-based decomposition methods, the guidance image contains only the strong edges of the source image and no other interfering information, so that rich fine details can be decomposed into the detail part. Then, according to the different characteristics of the infrared and visible detail parts, a rough convolutional neural network (CNN) and a sophisticated CNN are designed so that diverse features can be fully extracted. To integrate the extracted features, we also present a multi-layer feature fusion strategy based on the discrete cosine transform (DCT), which not only highlights significant features but also enhances details. Moreover, the base parts are fused by a weighting method. Finally, the fused image is obtained by adding the fused detail and base parts. Unlike general image fusion methods, our method not only retains the target region of the source image but also enhances the background in the fused image. In addition, compared with state-of-the-art fusion methods, our proposed method has many advantages, including (i) better visual quality of the fused images in subjective evaluation and (ii) better objective assessment scores for those images.
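The DCT-domain fusion step can be illustrated with a minimal 1D sketch: transform each feature vector and keep the larger-magnitude coefficient at each frequency (a common max-rule), which tends to preserve the stronger detail from either modality. Both the naive DCT-II and the max-rule here are generic illustrations under stated assumptions, not the paper's exact multi-layer scheme.

```python
import numpy as np

def dct2(x):
    """Naive unnormalized 1D DCT-II, computed directly from its definition
    (sufficient for comparing coefficient magnitudes)."""
    n = x.size
    m = np.arange(n)
    return np.array([np.sum(x * np.cos(np.pi * (2 * m + 1) * k / (2 * n)))
                     for k in range(n)])

def fuse_features(f1, f2):
    """Fuse two feature vectors in the DCT domain by the max-magnitude rule:
    at each frequency, keep the coefficient with the larger absolute value."""
    c1, c2 = dct2(f1), dct2(f2)
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)
```

In the full pipeline the fused coefficients would be inverse-transformed and added to the fused base part to form the output image.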

