Classifyber, a robust streamline-based linear classifier for white matter bundle segmentation

2020 ◽  
Author(s):  
Giulia Bertò ◽  
Daniel Bullock ◽  
Pietro Astolfi ◽  
Soichi Hayashi ◽  
Luca Zigiotto ◽  
...  

Virtual delineation of white matter bundles in the human brain is of paramount importance for multiple applications, such as pre-surgical planning and connectomics. A substantial body of literature is related to methods that automatically segment bundles from diffusion Magnetic Resonance Imaging (dMRI) data indirectly, by exploiting either the idea of connectivity between regions or the geometry of fiber paths obtained with tractography techniques, or, directly, through the information in volumetric data. Despite the remarkable improvement in automatic segmentation methods over the years, their segmentation quality is not yet satisfactory, especially when dealing with datasets with very diverse characteristics, such as different tracking methods, bundle sizes or data quality. In this work, we propose a novel, supervised streamline-based segmentation method, called Classifyber, which combines information from atlases, connectivity patterns, and the geometry of fiber paths into a simple linear model. With a wide range of experiments on multiple datasets that span from research to clinical domains, we show that Classifyber substantially improves the quality of segmentation as compared to other state-of-the-art methods and, more importantly, that it is robust across very diverse settings. We provide an implementation of the proposed method as open source code, as well as a web service.
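The abstract describes combining streamline features into a simple linear model. As a rough, hypothetical sketch of that general idea (not the authors' Classifyber implementation), each streamline can be represented by its distances to a few prototype streamlines (a dissimilarity representation, here using the mean direct-flip distance) and fed to a plain logistic-regression classifier; all names and data below are illustrative assumptions:

```python
import numpy as np

def mdf(s1, s2):
    """Mean direct-flip distance between two streamlines with the same
    number of points (invariant to streamline orientation)."""
    direct = np.mean(np.linalg.norm(s1 - s2, axis=1))
    flipped = np.mean(np.linalg.norm(s1 - s2[::-1], axis=1))
    return min(direct, flipped)

def featurize(streamlines, prototypes):
    """Dissimilarity representation: each streamline becomes a vector of
    its MDF distances to a fixed set of prototype streamlines."""
    return np.array([[mdf(s, p) for p in prototypes] for s in streamlines])

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain batch-gradient-descent logistic regression (the 'simple
    linear model' stand-in)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        g = p - y
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b

# Toy demo: two synthetic "bundles" (straight vs. offset streamlines).
rng = np.random.default_rng(0)
pts = np.linspace(0.0, 1.0, 10)
base0 = np.stack([pts, np.zeros(10), np.zeros(10)], axis=1)
base1 = np.stack([pts, np.full(10, 5.0), np.zeros(10)], axis=1)
streamlines = [base0 + 0.05 * rng.standard_normal((10, 3)) for _ in range(20)]
streamlines += [base1 + 0.05 * rng.standard_normal((10, 3)) for _ in range(20)]
labels = np.array([0] * 20 + [1] * 20)
X = featurize(streamlines, [streamlines[0], streamlines[20]])
w, b = train_logreg(X, labels)
pred = (X @ w + b > 0).astype(int)
```

The real method additionally encodes atlas and connectivity information in the feature vector; the sketch only shows why a linear model over such features can separate bundle from non-bundle streamlines.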

2020 ◽  
Vol 961 (7) ◽  
pp. 47-55
Author(s):  
A.G. Yunusov ◽  
A.J. Jdeed ◽  
N.S. Begliarov ◽  
M.A. Elshewy

Laser scanning is considered one of the most useful and fast technologies for modelling. On the other hand, scan results can vary in size from hundreds to several million points. The large volume of the obtained clouds complicates processing of the results and increases time costs. One way to reduce the volume of a point cloud is segmentation, which reduces the amount of data from several million points to a limited number of segments. In this article, we evaluated the effect of density changes on the performance and accuracy of various segmentation methods, and on the geometric accuracy of the obtained models, taking processing time into account. The results of our experiment were compared with reference data in the form of a comparative analysis. In conclusion, some recommendations for choosing the best segmentation method are proposed.


2020 ◽  
Vol 2020 ◽  
pp. 1-27
Author(s):  
Jinghua Zhang ◽  
Chen Li ◽  
Frank Kulwa ◽  
Xin Zhao ◽  
Changhao Sun ◽  
...  

To assist researchers to identify Environmental Microorganisms (EMs) effectively, a Multiscale CNN-CRF (MSCC) framework for the EM image segmentation is proposed in this paper. There are two parts in this framework: The first is a novel pixel-level segmentation approach, using a newly introduced Convolutional Neural Network (CNN), namely, “mU-Net-B3”, with a dense Conditional Random Field (CRF) postprocessing. The second is a VGG-16 based patch-level segmentation method with a novel “buffer” strategy, which further improves the segmentation quality of the details of the EMs. In the experiment, compared with the state-of-the-art methods on 420 EM images, the proposed MSCC method reduces the memory requirement from 355 MB to 103 MB, improves the overall evaluation indexes (Dice, Jaccard, Recall, Accuracy) from 85.24%, 77.42%, 82.27%, and 96.76% to 87.13%, 79.74%, 87.12%, and 96.91%, respectively, and reduces the volume overlap error from 22.58% to 20.26%. Therefore, the MSCC method shows great potential in the EM segmentation field.
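The Dice, Jaccard, and volume-overlap-error figures quoted above are standard overlap measures; note that the reported volume overlap errors are exactly 1 − Jaccard (100 − 79.74 = 20.26). A minimal sketch of these metrics for binary masks (illustrative, not the paper's evaluation code):

```python
import numpy as np

def dice(a, b):
    """Sørensen–Dice coefficient of two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index (intersection over union); the volume overlap
    error is then simply 1 - jaccard."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()
```

The two scores are interchangeable via J = D / (2 − D), which is why papers often report both from the same confusion counts.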


2018 ◽  
Vol 7 (2.5) ◽  
pp. 77
Author(s):  
Anis Farihan Mat Raffei ◽  
Rohayanti Hassan ◽  
Shahreen Kasim ◽  
Hishamudin Asmuni ◽  
Asraful Syifaa’ Ahmad ◽  
...  

The quality of eye image data becomes degraded particularly when the image is taken in a non-cooperative acquisition environment, such as under visible wavelength illumination. Consequently, this environmental condition may lead to noisy eye images and incorrect localization of limbic and pupillary boundaries, and eventually degrade the performance of an iris recognition system. Hence, this study compared several segmentation methods to address the abovementioned issues. The results show that the Circular Hough transform is the best segmentation method, with the best overall accuracy, error rate, and decidability index, and is more tolerant to 'noise' such as reflections.
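The Circular Hough transform named above lets every edge pixel vote for all circle centres that could explain it; the accumulator peak gives the limbic or pupillary boundary. A minimal single-radius NumPy sketch (illustrative only; practical systems search a radius range, e.g. with OpenCV's `HoughCircles`):

```python
import numpy as np

def hough_circle_center(edges, radius):
    """Vote for circle centres at a fixed radius and return the
    (row, col) of the accumulator peak."""
    h, w = edges.shape
    acc = np.zeros((h, w))
    thetas = np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)
    for y, x in zip(ys, xs):
        # Candidate centres lie on a circle of the same radius
        # around each edge point.
        cy = np.round(y - radius * np.sin(thetas)).astype(int)
        cx = np.round(x - radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(acc, (cy[ok], cx[ok]), 1)  # unbuffered accumulation
    return np.unravel_index(acc.argmax(), acc.shape)
```

The voting makes the method robust to partial occlusion and specular reflections on the iris, since missing edge points only lower the peak rather than move it.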


2019 ◽  
Vol 9 (12) ◽  
pp. 335 ◽  
Author(s):  
Gašper Zupan ◽  
Dušan Šuput ◽  
Zvezdan Pirtošek ◽  
Andrej Vovk

In Parkinson’s disease (PD), there is a reduction of neuromelanin (NM) in the substantia nigra (SN). Manual quantification of the NM volume in the SN is impractical and time-consuming; therefore, we aimed to quantify NM in the SN with a novel semi-automatic segmentation method. Twenty patients with PD and twelve healthy subjects (HC) were included in this study. T1-weighted spectral pre-saturation with inversion recovery (SPIR) images were acquired on a 3T scanner. Manual and semi-automatic atlas-free local statistics signature-based segmentations measured the surface and volume of the SN, respectively. Midbrain volume (MV) was calculated to normalize the data. Receiver operating characteristic (ROC) analysis was performed to determine the sensitivity and specificity of both methods. PD patients had significantly lower SN mean surface (37.7 ± 8.0 vs. 56.9 ± 6.6 mm²) and volume (235.1 ± 45.4 vs. 382.9 ± 100.5 mm³) than HC. After normalization with MV, the difference remained significant. For surface, sensitivity and specificity were 91.7 and 95 percent, respectively. For volume, sensitivity and specificity were 91.7 and 90 percent, respectively. Manual and semi-automatic segmentation methods of the SN reliably distinguished between PD patients and HC. ROC analysis shows the high sensitivity and specificity of both methods.
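The sensitivity and specificity reported here follow from thresholding the measured surface or volume: PD is predicted when the value falls below a cut-off, since patients show a smaller SN. A hedged sketch of that computation (the numbers in the demo are invented, not the study's data):

```python
import numpy as np

def sensitivity_specificity(values, is_patient, threshold):
    """Flag a subject as a patient when the measurement is below the
    threshold, then derive sensitivity and specificity."""
    pred = values < threshold
    tp = np.sum(pred & is_patient)     # patients correctly flagged
    fn = np.sum(~pred & is_patient)    # patients missed
    tn = np.sum(~pred & ~is_patient)   # controls correctly cleared
    fp = np.sum(pred & ~is_patient)    # controls wrongly flagged
    return tp / (tp + fn), tn / (tn + fp)
```

An ROC curve is obtained by sweeping the threshold over the observed range and plotting sensitivity against 1 − specificity at each cut-off.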


Author(s):  
I-SHENG KUO ◽  
LING-HWEI CHEN

The sprite generator introduced in MPEG-4 blends frames by averaging, which makes regions that are always occupied by moving objects look blurred. Thus, providing segmented masks for moving objects has been suggested. Several researchers have employed automatic segmentation methods to produce moving object masks. Based on these masks, they used a reliability-based blending strategy to generate sprites. Since perfect segmentation is impossible, some ghost-like shadows will appear in the generated sprite. To address this problem, this paper proposes an intelligent blending strategy that needs no segmentation masks. It is based on the fact that, for each point in the generated sprite, the corresponding pixels in most frames belong to the background and only a few belong to moving objects. A counting schema is provided so that only background points participate in average blending. The experimental results show that the visual quality of the sprite generated by the proposed blending strategy is close to that obtained using manually segmented masks and is better than that generated by the Lu-Gao-Wu method. No ghost-like shadows are produced. Furthermore, a uniform feature point extraction method is proposed to increase the precision of global motion estimation; its effectiveness is demonstrated by comparison with another existing method.
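The counting schema described can be sketched as a per-pixel vote over intensity bins: the most-voted bin is presumed to be background, and only its members are averaged. A minimal grayscale illustration (the bin width and interface are assumptions, not the paper's implementation):

```python
import numpy as np

def counting_blend(stack, bin_width=16):
    """Blend a temporal stack of aligned frames (shape (T, H, W), integer
    gray levels 0..255): per pixel, keep only the values in the
    most-voted intensity bin and average them, so that transient
    moving-object pixels never enter the blend."""
    bins = (stack // bin_width).astype(int)
    nbins = 256 // bin_width
    T, H, W = stack.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            counts = np.bincount(bins[:, i, j], minlength=nbins)
            k = counts.argmax()                      # dominant (background) bin
            sel = stack[:, i, j][bins[:, i, j] == k]
            out[i, j] = sel.mean()
    return out
```

Because a moving object covers any given sprite point in only a minority of frames, its intensity bin loses the vote and the blend stays ghost-free, whereas a plain temporal mean would mix object and background values.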


Author(s):  
Г.В. Худов ◽  
І.А. Хижняк

The article discusses methods of swarm intelligence, namely an improved method based on ant colony optimization and the artificial bee colony method. The goal of the work is a comparative assessment of the quality of optical-electronic image segmentation by ant colony optimization and by the artificial bee colony. Segmentation of tonal optical-electronic images was carried out using the proposed swarm intelligence methods. The results of segmenting optical-electronic images obtained from a spacecraft are presented, and the quality of the segmentation results was assessed visually. The classical errors of the first and second kind (false positives and false negatives) are calculated for the proposed swarm intelligence methods and for known segmentation methods. Finally, the features of each proposed method are determined, along with the tasks for which each is better suited.


2021 ◽  
Vol 45 (1) ◽  
pp. 122-129
Author(s):  
Dang N.H. Thanh ◽  
Nguyen Hoang Hai ◽  
Le Minh Hieu ◽  
Prayag Tiwari ◽  
V.B. Surya Prasath

Melanoma is one of the most dangerous forms of skin cancer because it grows fast and causes most skin cancer deaths. Hence, early detection is a very important task in treating melanoma. In this article, we propose a skin lesion segmentation method for dermoscopic images based on the U-Net architecture with a VGG-16 encoder and semantic segmentation. Based on the segmented skin lesion, diagnostic imaging systems can evaluate skin lesion features to classify them. The proposed method requires fewer resources for training and is suitable for computing systems without powerful GPUs, yet the training accuracy is still high (above 95 %). In the experiments, we train the model on the ISIC dataset, a common dermoscopic image dataset. To assess the performance of the proposed skin lesion segmentation method, we evaluate the Sorensen-Dice and Jaccard scores and compare them to other deep learning-based skin lesion segmentation methods. Experimental results showed that the skin lesion segmentation quality of the proposed method is better than that of the compared methods.


2012 ◽  
Vol 220-223 ◽  
pp. 1292-1297
Author(s):  
Xing Ma ◽  
Jun Li Han ◽  
Chang Shun Liu

In recent years, gray-scale thresholding has emerged as a primary tool for image segmentation. However, the application of segmentation algorithms to an image is often disappointing. Based on an analysis of the characteristics of infrared images, this paper develops several gray-scale thresholding methods capable of automatically segmenting pedestrian regions in infrared images. The gray-scale thresholding approaches are described, and an experimental system for pedestrian image detection is established using an infrared CCD device. The segmentation results generated in the experiment demonstrate that the Otsu thresholding method achieves automatic detection and segmentation of regions of interest in infrared images.
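Otsu's method, which the experiment singles out, picks the gray-level threshold that maximises the between-class variance of the histogram. A minimal NumPy sketch (illustrative; in practice OpenCV's `cv2.threshold` with the `THRESH_OTSU` flag provides the same result):

```python
import numpy as np

def otsu_threshold(img):
    """Return the threshold maximising between-class variance of an
    8-bit grayscale image's histogram."""
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()                      # gray-level probabilities
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()      # class weights
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0 = (np.arange(t) * p[:t]).sum() / w0           # class means
        m1 = (np.arange(t, 256) * p[t:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2         # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t
```

For pedestrian infrared imagery this works well because warm bodies form a distinct bright mode in the histogram, so the optimal threshold falls in the valley between the pedestrian and background modes.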


ACTA IMEKO ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 191
Author(s):  
Lorenzo Scalise ◽  
Rachele Napolitano ◽  
Lorenzo Verdenelli ◽  
Susanna Spinsante ◽  
Giorgio Rappelli

Masticatory efficiency in older adults is an important parameter for the assessment of their oral health and quality of life. This study presents a measurement method based on the automatic segmentation of two-coloured chewing gum using a K-means clustering algorithm. The solution proposed aims to quantify the mixed areas of colour in order to evaluate masticatory performance in different dental conditions. The samples were provided by ‘two-colour mixing’ tests, currently the most used technique for the evaluation of masticatory efficacy because of its simplicity, low acquisition times and reduced cost. The image analysis results demonstrated a high discriminative power, providing results in an automatic manner and reducing errors caused by manual segmentation. This approach thus provides a feasible and robust solution for the segmentation of chewed samples. Validation was carried out by means of a reference software, demonstrating a good correlation (R² = 0.64) and the higher sensitivity of the proposed method (+75 %). Tests on patients with different oral conditions demonstrated that the K-means segmentation method enabled the automatic classification of patients with different masticatory conditions, providing results in a shorter time period (20 chewing cycles instead of 50).
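The K-means step can be illustrated by clustering pixel colours into two groups, after which the cluster proportions or boundary pixels quantify how well the two gum colours are mixed. A deterministic two-cluster sketch (the farthest-point initialisation is an assumption for reproducibility; the study's exact pipeline is not reproduced here):

```python
import numpy as np

def kmeans_two_colours(pixels, iters=20):
    """Lloyd's algorithm with k = 2 on an (N, 3) array of RGB pixels.
    Centres are initialised at the first pixel and the pixel farthest
    from it, which is deterministic and suits two-coloured samples."""
    p = pixels.astype(float)
    far = np.argmax(np.linalg.norm(p - p[0], axis=1))
    centers = np.stack([p[0], p[far]])
    labels = np.zeros(len(p), dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest centre, then re-estimate.
        d = np.linalg.norm(p[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in (0, 1):
            if np.any(labels == c):
                centers[c] = p[labels == c].mean(axis=0)
    return labels, centers
```

A simple mixing index follows from the labels, e.g. the fraction of pixels whose 4-neighbours carry the other label: a well-chewed sample has many such interface pixels, an unchewed one almost none.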


2015 ◽  
Vol 15 (04) ◽  
pp. 1550018 ◽  
Author(s):  
L. E. Carvalho ◽  
S. L. Mantelli Neto ◽  
A. C. Sobieranski ◽  
E. Comunello ◽  
A. von Wangenheim

We present a new segmentation method called weighted Felzenszwalb and Huttenlocher (WFH), an improved version of the well-known graph-based segmentation method of Felzenszwalb and Huttenlocher (FH). Our algorithm uses a nonlinear discrimination function based on the polynomial Mahalanobis distance (PMD) as the color similarity metric. Two empirical validation experiments were performed, using ground truths (GTs) from a publicly available source, the Berkeley dataset, as a gold standard, and an objective segmentation quality measure, the Rand dissimilarity index. In the first experiment, the results were compared against the original FH method. In the second, WFH was compared against several well-known segmentation methods. In both cases, WFH presented significantly better similarity to the gold standard, and its segmentations showed a reduction in over-segmented regions.
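The PMD used by WFH generalises the classical Mahalanobis distance, which already adapts a colour-similarity metric to the covariance of a sample region: directions of high colour variance count as "near", low-variance directions as "far". A sketch of the classical (non-polynomial) version for intuition, with the polynomial extension itself not shown:

```python
import numpy as np

def mahalanobis(x, samples):
    """Mahalanobis distance of a colour x from the distribution of a
    sample colour region (rows of `samples`); a small ridge term keeps
    the covariance invertible for near-degenerate regions."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    d = x - mu
    return float(np.sqrt(d @ inv @ d))
```

In a graph-based segmenter such as FH, replacing the Euclidean edge weight with this distance makes edges within a textured but statistically coherent region cheap, which is what reduces over-segmentation.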

