Unsupervised Segmentation
Recently Published Documents

TOTAL DOCUMENTS: 459 (FIVE YEARS: 92)
H-INDEX: 29 (FIVE YEARS: 4)

2021 ◽ Vol 12 (1) ◽ pp. 162
Author(s): Carmelo Militello, Andrea Ranieri, Leonardo Rundo, Ildebrando D’Angelo, Franco Marinozzi, ...

Unsupervised segmentation techniques, which do not require labeled data for training and can be more easily integrated into the clinical routine, represent a valid solution, especially from a clinical feasibility perspective. Indeed, large-scale annotated datasets are not always available, which hinders the immediate deployment of supervised methods in the clinic. Breast cancer is the most common cause of cancer death in women worldwide. In this study, breast lesion delineation in Dynamic Contrast Enhanced MRI (DCE-MRI) series was addressed by means of four popular unsupervised segmentation approaches: Split-and-Merge combined with Region Growing (SMRG), k-means, Fuzzy C-Means (FCM), and spatial FCM (sFCM). These are well-established pattern recognition techniques that are still widely used in clinical research. Starting from the basic versions of these approaches, we identified the shortcomings of each, proposed improved versions, and developed ad hoc pre- and post-processing steps. The experimental results, in terms of area-based metrics (Dice Index (DI), Jaccard Index (JI), Sensitivity, Specificity, False Positive Ratio (FPR), and False Negative Ratio (FNR)) and distance-based metrics (Mean Absolute Distance (MAD), Maximum Distance (MaxD), and Hausdorff Distance (HD)), encourage the use of unsupervised machine learning techniques in medical image segmentation. In particular, the fuzzy clustering approaches (FCM and sFCM) achieved the best performance. For area-based metrics, they obtained DI = 78.23% ± 6.50 (sFCM), JI = 65.90% ± 8.14 (sFCM), sensitivity = 77.84% ± 8.72 (FCM), specificity = 87.10% ± 8.24 (sFCM), FPR = 0.14 ± 0.12 (sFCM), and FNR = 0.22 ± 0.09 (sFCM). For distance-based metrics, they obtained MAD = 1.37 ± 0.90 (sFCM), MaxD = 4.04 ± 2.87 (sFCM), and HD = 2.21 ± 0.43 (FCM). These findings suggest that further research into advanced fuzzy logic techniques specifically tailored to medical image segmentation would be worthwhile.
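
To make the clustering at the core of FCM and sFCM concrete, here is a minimal NumPy sketch of the standard Fuzzy C-Means iteration. It is not the authors' full pipeline (which adds improved variants and ad hoc pre- and post-processing); the two-cluster setting and the brightest-cluster lesion rule in the usage note are assumptions for illustration.

```python
import numpy as np

def fcm(intensities, n_clusters=2, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Standard Fuzzy C-Means on 1-D voxel intensities (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = intensities.reshape(-1, 1).astype(float)
    u = rng.random((x.shape[0], n_clusters))       # random fuzzy memberships
    u /= u.sum(axis=1, keepdims=True)
    centers = None
    for _ in range(n_iter):
        um = u ** m                                # fuzzified memberships
        centers = (um.T @ x) / um.sum(axis=0).reshape(-1, 1)
        dist = np.abs(x - centers.T) + 1e-10       # voxel-to-center distances
        u_new = 1.0 / dist ** (2.0 / (m - 1))      # standard FCM membership update
        u_new /= u_new.sum(axis=1, keepdims=True)
        if np.abs(u_new - u).max() < tol:
            u = u_new
            break
        u = u_new
    return u, centers

# Hypothetical usage on a DCE-MRI slice `img` (NumPy array), assuming the
# contrast-enhancing lesion corresponds to the brightest cluster:
# u, c = fcm(img.ravel())
# lesion_mask = (u.argmax(axis=1) == c.argmax()).reshape(img.shape)
```

sFCM extends this update by mixing each voxel's membership with those of its spatial neighbors, which suppresses isolated misclassified voxels.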


Author(s): Arif Ahmed Sekh, Ida S. Opstad, Gustav Godtliebsen, Åsa Birna Birgisdottir, Balpreet Singh Ahluwalia, ...

Segmenting subcellular structures in living cells from fluorescence microscope images is a ground truth (GT)-deficient problem. The microscope's three-dimensional blurring function, the finite optical resolution due to light diffraction, the finite pixel resolution, and the complex morphological manifestations of the structures all contribute to GT-hardness. Unsupervised segmentation approaches are quite inaccurate. Therefore, manual segmentation relying on heuristics and experience remains the preferred approach. However, this process is tedious given the countless structures present inside a single cell, and it severely limits both the generation of analytics across large populations of cells and advanced artificial intelligence tasks such as tracking. Here we bring modelling and deep learning to a nexus for solving this GT-hard problem, improving both the accuracy and speed of subcellular segmentation. We introduce a simulation-supervision approach empowered by physics-based GT, which presents two advantages. First, the physics-based GT resolves the GT-hardness. Second, computational modelling of all the relevant physical aspects assists the deep learning models in learning to compensate, to a great extent, for the limitations of physics and the instrument. We show extensive results on the segmentation of small vesicles and mitochondria in diverse and independent living- and fixed-cell datasets. We demonstrate the adaptability of the approach across diverse microscopes through transfer learning, and illustrate biologically relevant applications of automated analytics and motion analysis.
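
The gist of simulation-supervision can be illustrated with a toy forward model: since the structures are drawn synthetically, their masks are exact by construction, and a network can be trained on unlimited (image, mask) pairs. The Gaussian PSF, the additive noise, and the circular-vesicle geometry below are simplifying assumptions, not the paper's actual physics-based model.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def simulate_training_pair(shape=(256, 256), n_vesicles=30,
                           psf_sigma=2.0, noise_std=0.05, seed=None):
    """Generate one (blurred image, exact mask) pair for simulation-supervision.

    The mask is known exactly because it is drawn, and the image is the mask
    passed through a simplified imaging model (Gaussian PSF + sensor noise).
    """
    rng = np.random.default_rng(seed)
    mask = np.zeros(shape, dtype=np.float32)
    yy, xx = np.mgrid[:shape[0], :shape[1]]
    for _ in range(n_vesicles):
        cy, cx = rng.integers(0, shape[0]), rng.integers(0, shape[1])
        r = rng.uniform(2, 6)                      # vesicle radius in pixels
        mask[(yy - cy) ** 2 + (xx - cx) ** 2 < r ** 2] = 1.0
    # Gaussian blur as a crude proxy for the microscope's diffraction PSF.
    image = gaussian_filter(mask, sigma=psf_sigma)
    image += rng.normal(0.0, noise_std, shape)     # additive sensor noise
    return image.astype(np.float32), mask

# A segmentation network can then be trained on arbitrarily many such pairs,
# sidestepping manual annotation of real microscope images.
```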


Author(s): Yulong Cai, Siheng Mi, Jiahao Yan, Hong Peng, Xiaohui Luo, ...

2021
Author(s): Banafshe Felfeliyan, Abhilash Hareendranathan, Gregor Kuntze, Jacob Jaremko, Janet Ronsky

2021
Author(s): Yao Hu, Wei Zhou, Guohua Geng, Kang Li, Xingxing Hao, ...

2021
Author(s): Du Lianyu, Hu Liwei, Zhang Xiaoyun, Zhong Yumin, Zhang Ya, ...

2021
Author(s): Jinbiao Yang, Antal van den Bosch, Stefan L. Frank

Words typically form the basis of psycholinguistic and computational linguistic studies of sentence processing. However, recent evidence shows that the basic units during reading, i.e., the items in the mental lexicon, are not always words but can also be sub-word and supra-word units. To recognize these units, human readers require a cognitive mechanism to learn and detect them. In this paper, we assume that eye fixations during reading reveal the locations of the cognitive units, and that the cognitive units are analogous to the text units discovered by unsupervised segmentation models. We predict eye fixations by model-segmented units on both English and Dutch text. The results show that the model-segmented units predict eye fixations better than word units. This finding suggests that the predictive performance of model-segmented units indicates their plausibility as cognitive units. The Less-is-Better (LiB) model, which finds the units that minimize both long-term and working memory load, offers advantages over alternative models in both prediction score and efficiency. Our results also suggest that modeling the least-effort principle in the management of long-term and working memory can lead to inferring cognitive units. Overall, the study supports the theory that the mental lexicon stores not only words but also smaller and larger units, suggests that fixation locations during reading depend on these units, and shows that unsupervised segmentation models can discover these units.
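
As a schematic of the evaluation idea only (the study itself uses probabilistic analyses of eye-tracking corpora), one can score a candidate segmentation by how often fixations land near the centers of its units; the hit-rate criterion and the central-half window below are illustrative simplifications.

```python
from typing import List

def fixation_hit_rate(units: List[str], fixations: List[int]) -> float:
    """Toy score: fraction of fixation positions (character offsets into the
    concatenated text) that land inside the central half of some unit,
    echoing the preferred-viewing-location idea from reading research."""
    spans, pos = [], 0
    for u in units:
        spans.append((pos, pos + len(u)))
        pos += len(u)
    hits = 0
    for f in fixations:
        for start, end in spans:
            quarter = (end - start) / 4
            if start + quarter <= f < end - quarter:
                hits += 1
                break
    return hits / len(fixations) if fixations else 0.0

# Comparing candidate segmentations of the same text: units from an
# unsupervised segmenter (e.g., LiB) may merge frequent multi-word chunks or
# split rare words into sub-word units; under this toy criterion, the
# segmentation with the higher hit rate better predicts fixation locations.
```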


Sensors ◽ 2021 ◽ Vol 21 (16) ◽ pp. 5577
Author(s): Feng Mei, Qian Hu, Changxuan Yang, Lingfeng Liu

With the development of human motion capture (MoCap) equipment and motion analysis technologies, MoCap systems have been widely applied in many fields, including biomedicine, computer vision, and virtual reality. With the rapid increase in MoCap data collection across different scenarios and applications, effective segmentation of MoCap data has become a crucial issue for further analysis of human posture and behavior, demanding both robustness and computational efficiency in the algorithm design. In this paper, we propose an unsupervised segmentation algorithm based on a limb-bone partition angle representation of body structure and autoregressive moving average (ARMA) model fitting. The collected MoCap data are converted into the angle sequences formed by the human limb-bone segments and the central spine segment. The limb angle sequences are fitted by the ARMA model, and the segmentation points of each sequence are identified by analyzing the goodness of fit of the ARMA model. A median filtering algorithm is proposed to ensemble the segmentation results from the individual limb motion sequences. A set of MoCap measurements, covering typical body motions collected from subjects of different heights and labeled by manual segmentation, was also conducted to evaluate the algorithm. The proposed algorithm is compared with principal component analysis (PCA), k-means clustering, and back propagation (BP) neural-network-based segmentation algorithms, and shows higher segmentation accuracy thanks to the more semantic description of human motions provided by the limb-bone partition angles. The results highlight the efficiency and performance of the proposed algorithm, and reveal its potential for distinguishing inter- and intra-motion sequences.
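
A rough sketch of the core mechanism, using statsmodels and hypothetical window sizes, ARMA orders, and voting thresholds rather than the paper's tuned values: fit an ARMA model in sliding windows of each limb-angle sequence, flag poorly fitting windows as candidate boundaries, and ensemble the candidates across limbs.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def fit_badness(angle_seq, window=60, step=10, order=(2, 0, 1)):
    """Slide a window over one limb-angle sequence, fit an ARMA(p, q) model
    in each window, and return per-window AIC as a goodness-of-fit trace.
    Windows where one ARMA model fits poorly are candidate motion boundaries."""
    scores, positions = [], []
    for start in range(0, len(angle_seq) - window, step):
        seg = angle_seq[start:start + window]
        try:
            res = ARIMA(seg, order=order).fit()
            scores.append(res.aic)
        except Exception:                          # non-converging window
            scores.append(np.inf)
        positions.append(start + window // 2)
    return np.array(positions), np.array(scores)

def ensemble_boundaries(per_limb_boundaries, tolerance=15):
    """Crude cross-limb vote: keep a candidate boundary frame if at least half
    of the limb sequences flag a boundary within `tolerance` frames of it
    (a stand-in for the paper's median-filter ensembling)."""
    all_b = sorted(b for limb in per_limb_boundaries for b in limb)
    kept = []
    for b in all_b:
        votes = sum(any(abs(b - x) <= tolerance for x in limb)
                    for limb in per_limb_boundaries)
        if votes * 2 >= len(per_limb_boundaries) and \
           (not kept or b - kept[-1] > tolerance):
            kept.append(b)
    return kept

# Candidate boundaries for one limb (hypothetical 90th-percentile threshold):
# pos, aic = fit_badness(limb_angles)
# candidates = pos[aic > np.percentile(aic[np.isfinite(aic)], 90)]
```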


2021
Author(s): Soumydip Sarkar, Tamesh Halder, Vivek Poddar, Rintu Kumar Gayen, Arundhati Mishra Ray, ...
