Segmentation of Magnetic Resonance Brain Images Based on Improved Gaussian Mixture Model with Spatial Information

2015 ◽  
Vol 5 (8) ◽  
pp. 1989-1992
Author(s):  
Z. J. Bian ◽  
W. J. Tan ◽  
J. Z. Yang ◽  
Z. X. Gong ◽  
M. J. Xu ◽  
...  
2016 ◽  
Vol 2016 ◽  
pp. 1-10
Author(s):  
Yunjie Chen ◽  
Tianming Zhan ◽  
Ji Zhang ◽  
Hongyuan Wang

We propose a novel segmentation method based on regional and nonlocal information to overcome the impact of image intensity inhomogeneities and noise in human brain magnetic resonance images. Because it accounts for the spatial distribution of different tissues in brain images, our method does not need pre-estimation or pre-correction procedures for intensity inhomogeneities and noise. A nonlocal-information-based Gaussian mixture model (NGMM) is proposed to reduce the effect of noise. To reduce the effect of intensity inhomogeneity, the multigrid nonlocal Gaussian mixture model (MNGMM) is proposed to segment brain MR images within each nonoverlapping grid produced by a new multigrid generation method. The proposed model can therefore simultaneously overcome the impact of noise and intensity inhomogeneity and automatically classify 2D and 3D MR data into white matter, gray matter, and cerebrospinal fluid. To maintain the statistical reliability and spatial continuity of the segmentation, a fusion strategy is adopted to integrate the clustering results from different grids. Experiments on synthetic and clinical brain MR images demonstrate the superior performance of the proposed model compared with several state-of-the-art algorithms.
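The sketch below is not the authors' NGMM/MNGMM; it is a minimal, plain three-class Gaussian mixture fit on brain-voxel intensities that illustrates the underlying tissue-classification step (labelling CSF/GM/WM by sorted component means on a T1-weighted volume). The function name and the mask-based interface are assumptions.

```python
# Minimal sketch (not the paper's NGMM/MNGMM): plain 3-component GMM on
# masked voxel intensities, with components relabelled by increasing mean.
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_tissue_labels(volume, brain_mask):
    """Classify masked voxels into 3 intensity classes (CSF < GM < WM on T1)."""
    intensities = volume[brain_mask].reshape(-1, 1)
    gmm = GaussianMixture(n_components=3, covariance_type="full",
                          random_state=0).fit(intensities)
    raw = gmm.predict(intensities)
    # Reorder component indices so labels follow increasing mean intensity.
    order = np.argsort(gmm.means_.ravel())
    relabel = np.zeros_like(raw)
    for new_label, comp in enumerate(order):
        relabel[raw == comp] = new_label
    labels = np.zeros(volume.shape, dtype=np.int8)
    labels[brain_mask] = relabel + 1          # 1=CSF, 2=GM, 3=WM; 0=background
    return labels
```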


2019 ◽  
Vol 13 (01) ◽  
pp. 1950020
Author(s):  
Jinghong Wu ◽  
Sijie Niu ◽  
Qiang Chen ◽  
Wen Fan ◽  
Songtao Yuan ◽  
...  

In this paper we introduce a method based on Gaussian mixture model (GMM) clustering and level sets to automatically detect intraretinal fluid in diabetic retinopathy (DR) from spectral-domain optical coherence tomography (SD-OCT) images. First, each B-scan is segmented using GMM clustering, and the initial clustering results are refined using location and thickness information. Then, the spatial information among every five consecutive B-scans is used to search for potential fluid. Finally, an improved level-set method is used to obtain accurate boundaries. The high sensitivity and accuracy demonstrated here show the method's potential for fluid detection.
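A rough sketch of only the first stage (per-B-scan GMM clustering) follows; the paper's location/thickness refinement, inter-B-scan search, and improved level-set step are not reproduced, and the morphological opening used here to suppress speckle is a stand-in, not the authors' refinement. The number of classes and the function name are assumptions.

```python
# Per-B-scan GMM clustering sketch; fluid candidates are taken as the darkest
# (hypo-reflective) class. This is only the first stage of the described method.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.ndimage import binary_opening

def candidate_fluid_mask(bscan, n_classes=4):
    x = bscan.astype(float).reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_classes, random_state=0).fit(x)
    labels = gmm.predict(x).reshape(bscan.shape)
    darkest = int(np.argmin(gmm.means_.ravel()))
    mask = labels == darkest
    # Small opening removes speckle-sized false positives before refinement.
    return binary_opening(mask, iterations=2)
```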


2013 ◽  
Vol 380-384 ◽  
pp. 3702-3705
Author(s):  
Xiao Na Zhang ◽  
Ming Yao ◽  
Feng Zhu ◽  
Jie Ni

Applying the classical Gaussian mixture model to image segmentation has high computational complexity and takes no spatial information into account beyond intensity values. An image segmentation method based on a Gaussian mixture model with sampling and spatial information is proposed to address this problem. First, a spatial information function is defined as the neighbourhood-weighted class probabilities of every pixel. Second, a sampling theorem is given, and the minimum sample size is derived from the smallest cluster and the number of clusters. Finally, image pixels are sampled according to this minimum sample size to estimate the model parameters, and all pixels are then classified into clusters according to Bayes' rule. The experimental results show the effectiveness of the algorithm.
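The following sketch illustrates only the sample-then-classify idea: estimate the mixture parameters on a random pixel subsample, then assign every pixel by the Bayes (maximum a posteriori) rule. The paper's neighbourhood-weighted spatial term and its derived minimum sample size are not reproduced; `sample_fraction` is an assumed placeholder.

```python
# Fit the GMM on a pixel subsample, then MAP-classify all pixels.
import numpy as np
from sklearn.mixture import GaussianMixture

def sampled_gmm_segmentation(image, n_clusters=3, sample_fraction=0.1, seed=0):
    pixels = image.astype(float).reshape(-1, 1)
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(pixels), size=int(sample_fraction * len(pixels)),
                     replace=False)
    gmm = GaussianMixture(n_components=n_clusters, random_state=seed)
    gmm.fit(pixels[idx])                    # parameters estimated on the sample only
    return gmm.predict(pixels).reshape(image.shape)   # MAP labels for every pixel
```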


2019 ◽  
Vol 1 (2) ◽  
pp. 145-153
Author(s):  
Jin-jun Tang ◽  
Jin Hu ◽  
Yi-wei Wang ◽  
He-lai Huang ◽  
Yin-hai Wang

The global positioning system (GPS) traces collected from taxi vehicles provide abundant temporal-spatial information, as well as information on driver activity. Using taxi vehicles as mobile sensors in road networks to collect traffic information is an important emerging approach in efforts to relieve congestion. In this paper, we present a hybrid model for estimating driving paths using a density-based spatial clustering of applications with noise (DBSCAN) algorithm and a Gaussian mixture model (GMM). The first step in our approach is to extract locations from the pick-up and drop-off records (PDR) in taxi GPS equipment. Second, the locations are classified into different clusters using DBSCAN; its two parameters (density threshold and radius) are optimized using real trace data recorded from 1100 drivers. A GMM is also utilized to estimate a significant number of locations, with its parameters optimized using the expectation-maximization (EM) algorithm. Finally, applications are used to test the effectiveness of the proposed model: locations distributed in two regions (a residential district and a railway station) are clustered and estimated automatically.
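A hedged sketch of the two clustering stages on pick-up/drop-off coordinates follows. The `eps` (radius), `min_samples` (density threshold), and `n_hotspots` values are placeholders, not the parameters optimised from the 1100-driver data set, and the function name is an assumption.

```python
# DBSCAN to find dense pick-up/drop-off regions, then a GMM (fit by EM) to
# model the spatial distribution of the retained points.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.mixture import GaussianMixture

def cluster_pdr_points(points, eps=0.001, min_samples=20, n_hotspots=5):
    """points: (N, 2) array of pick-up/drop-off coordinates."""
    db_labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    core = points[db_labels != -1]          # discard DBSCAN noise points
    gmm = GaussianMixture(n_components=n_hotspots, covariance_type="full",
                          random_state=0).fit(core)
    return db_labels, gmm.means_, gmm.covariances_
```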


2013 ◽  
Author(s):  
Meena prakash R ◽  
Shantha Selva Kumari R

An automated method for MR brain image segmentation is presented. A block-based expectation-maximization (EM) method is used for tissue classification of MR brain images. The standard Gaussian mixture model (GMM) is the most widely used model for MR brain image segmentation, and the EM algorithm is used to estimate its parameters. The GMM treats each pixel as independent and does not take into account the spatial correlation between neighbouring pixels; hence the segmentation obtained with the standard GMM is highly sensitive to intensity non-uniformity and noise. The image is divided into blocks before applying EM, since the GMM assumption holds better within local image blocks. In addition, the nonsubsampled contourlet transform is employed to incorporate spatial correlation among neighbouring pixels. The method is applied to the 12 MR brain volumes of the MRBrainS13 test data, and the white matter, gray matter, and CSF structures are segmented.
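A minimal sketch of the block-wise idea only: fit a separate mixture in each non-overlapping block and sort components by mean so labels are comparable across blocks. The nonsubsampled contourlet step for spatial correlation is omitted, and the block size is an assumed parameter.

```python
# Block-wise GMM/EM on a 2D slice; one mixture per non-overlapping block.
import numpy as np
from sklearn.mixture import GaussianMixture

def blockwise_gmm_labels(slice_2d, block=64, n_tissues=3):
    labels = np.zeros(slice_2d.shape, dtype=np.int8)
    for r in range(0, slice_2d.shape[0], block):
        for c in range(0, slice_2d.shape[1], block):
            patch = slice_2d[r:r+block, c:c+block]
            x = patch.astype(float).reshape(-1, 1)
            gmm = GaussianMixture(n_components=n_tissues, random_state=0).fit(x)
            # Rank components by mean intensity so label ids agree across blocks.
            ranks = np.argsort(np.argsort(gmm.means_.ravel()))
            labels[r:r+block, c:c+block] = ranks[gmm.predict(x)].reshape(patch.shape)
    return labels
```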


Author(s):  
Emily Esmeralda Carvajal-Camelo ◽  
Jose Bernal ◽  
Arnau Oliver ◽  
Xavier Lladó ◽  
Maria Trujillo

Atrophy quantification is fundamental for understanding brain development and for diagnosing and monitoring brain diseases. FSL-SIENA is a well-known fully automated method that has been widely used in brain magnetic resonance imaging studies. However, intensity variations arising during image acquisition may compromise evaluation, analysis, and even diagnosis. In this work, we study whether intensity standardisation can improve longitudinal atrophy quantification. We considered seven methods: z-score, fuzzy c-means, Gaussian mixture model, kernel density estimation, histogram matching, white stripe, and removal of artificial voxel effects by linear regression (RAVEL). We used a total of 330 scans from two publicly available datasets, ADNI and OASIS. In scan-rescan assessments, which measure robustness to subtle imaging variations, intensity standardisation did not significantly compromise the robustness of FSL-SIENA (p>0.1). In power analysis assessments, which measure the ability to discern between two groups of subjects, three methods led to consistent improvements over the original in both datasets: fuzzy c-means, Gaussian mixture model, and kernel density estimation. The reduction in required sample size with these three methods ranged from 17% to 95%. The performance of the other four methods was affected by spatial normalisation, skull-stripping errors, the presence of periventricular white matter hyperintensities, or tissue proportion variations over time. Our work evinces the relevance of appropriate intensity standardisation in longitudinal cerebral atrophy assessments using FSL-SIENA.
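For illustration, the sketch below shows simplified versions of two of the seven standardisation schemes compared (z-score, and a GMM-based variant that rescales by the brightest-component mean, taken as white matter on T1). These are illustrative stand-ins, not the exact pipelines evaluated with FSL-SIENA; function names and the mask-based interface are assumptions.

```python
# Two simple intensity-standardisation sketches for a brain-masked volume.
import numpy as np
from sklearn.mixture import GaussianMixture

def zscore_standardise(volume, brain_mask):
    vals = volume[brain_mask]
    return (volume - vals.mean()) / vals.std()

def gmm_wm_standardise(volume, brain_mask, n_tissues=3):
    vals = volume[brain_mask].reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_tissues, random_state=0).fit(vals)
    wm_mean = gmm.means_.ravel().max()        # brightest component ~ WM on T1
    return volume / wm_mean                   # rescale so WM intensity ~ 1
```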

