A General System for Automatic Biomedical Image Segmentation Using Intensity Neighborhoods

2011 ◽  
Vol 2011 ◽  
pp. 1-12 ◽  
Author(s):  
Cheng Chen ◽  
John A. Ozolek ◽  
Wei Wang ◽  
Gustavo K. Rohde

Image segmentation is important, with applications to several problems in biology and medicine. Although segmentation has been extensively researched, current methods generally perform adequately in the applications for which they were designed but often require extensive modification or calibration before they can be used in a different application. We describe an approach that, with few modifications, can be used in a variety of image segmentation problems. The approach is based on a supervised learning strategy that uses intensity neighborhoods to assign each pixel in a test image its correct class based on training data. We describe methods for modeling rotations and variations in scale, as well as a subset-selection procedure for training the classifiers. We show that the performance of our approach in tissue segmentation tasks on magnetic resonance and histopathology microscopy images, as well as nuclei segmentation from fluorescence microscopy images, is similar to or better than that of several algorithms specifically designed for each of these applications.
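
A minimal sketch of the intensity-neighborhood idea described above, assuming square pixel neighborhoods, an integer label image in which -1 marks unlabeled pixels, and an off-the-shelf random-forest classifier; the paper's rotation/scale modeling and its actual subset-selection scheme are not reproduced here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_neighborhoods(image, coords, radius=3):
    """Collect flattened (2r+1)x(2r+1) intensity neighborhoods around given pixel coordinates."""
    padded = np.pad(image, radius, mode="reflect")
    return np.asarray([
        padded[r:r + 2 * radius + 1, c:c + 2 * radius + 1].ravel()
        for r, c in coords
    ])

def train_pixel_classifier(train_image, train_labels, radius=3, n_samples=5000):
    """Train on a random subset of labeled pixels (a simplified stand-in for subset selection)."""
    rng = np.random.default_rng(0)
    rows, cols = np.nonzero(train_labels >= 0)          # assumption: -1 marks unlabeled pixels
    idx = rng.choice(len(rows), size=min(n_samples, len(rows)), replace=False)
    coords = list(zip(rows[idx], cols[idx]))
    X = extract_neighborhoods(train_image, coords, radius)
    y = train_labels[rows[idx], cols[idx]]
    return RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def segment(test_image, clf, radius=3):
    """Assign every pixel of a test image the class predicted from its intensity neighborhood."""
    coords = [(r, c) for r in range(test_image.shape[0]) for c in range(test_image.shape[1])]
    X = extract_neighborhoods(test_image, coords, radius)
    return clf.predict(X).reshape(test_image.shape)
```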

Symmetry ◽  
2020 ◽  
Vol 12 (8) ◽  
pp. 1230
Author(s):  
Xiaofei Qin ◽  
Chengzi Wu ◽  
Hang Chang ◽  
Hao Lu ◽  
Xuedian Zhang

Medical image segmentation is a fundamental task in medical image analysis. A dynamic receptive field is very helpful for accurate medical image segmentation, but it needs to be further studied and exploited. In this paper, we propose Match Feature U-Net, a novel, symmetric encoder–decoder architecture with a dynamic receptive field for medical image segmentation. We modify the Selective Kernel convolution (a module proposed in Selective Kernel Networks) by inserting a newly proposed Match operation, which places similar features from different convolution branches at corresponding positions, and we then replace U-Net's convolutions with the redesigned Selective Kernel convolution. The network is thus a combination of U-Net and the improved Selective Kernel convolution: it inherits U-Net's simple structure and low parameter complexity and makes the dynamic receptive field of the Selective Kernel convolution more efficient, making it an ideal model for medical image segmentation tasks, which often have small training sets and large variations in target size. Compared with state-of-the-art segmentation methods, the number of parameters in Match Feature U-Net (2.65 M) is 34% of that of U-Net (7.76 M), 29% of UNet++ (9.04 M), and 9.1% of CE-Net (29 M). We evaluated the proposed architecture on four medical image segmentation tasks: nuclei segmentation in microscopy images, breast cancer cell segmentation, gland segmentation in colon histology images, and disc/cup segmentation. Our experimental results show that Match Feature U-Net achieves an average Mean Intersection over Union (MIoU) gain of 1.8, 1.45, and 2.82 points over U-Net, UNet++, and CE-Net, respectively.
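
A rough PyTorch sketch of a two-branch Selective Kernel style block fused by learned channel attention, i.e., the kind of dynamic-receptive-field module the paper builds on; the proposed Match operation is only indicated by a comment, since its exact definition is not given here, and the branch design below is an illustrative assumption:

```python
import torch
import torch.nn as nn

class SelectiveKernelBlock(nn.Module):
    """Two parallel convolution branches with different receptive fields,
    fused by softmax channel attention (a dynamic receptive field)."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, 3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, 3, padding=2, dilation=2)  # 5x5-equivalent field
        hidden = max(channels // reduction, 8)
        self.fc = nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(inplace=True))
        self.attn = nn.Linear(hidden, channels * 2)

    def forward(self, x):
        u3, u5 = self.branch3(x), self.branch5(x)
        # A Match-style step would align the branches so that similar features
        # sit at corresponding channel positions before they are fused.
        s = (u3 + u5).mean(dim=(2, 3))                                   # global descriptor
        a = self.attn(self.fc(s)).view(x.size(0), 2, -1).softmax(dim=1)  # per-branch channel weights
        w3 = a[:, 0].unsqueeze(-1).unsqueeze(-1)
        w5 = a[:, 1].unsqueeze(-1).unsqueeze(-1)
        return w3 * u3 + w5 * u5
```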


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Changyong Li ◽  
Yongxian Fan ◽  
Xiaodong Cai

Background: With the development of deep learning (DL), more and more DL-based methods have been proposed and achieve state-of-the-art performance in biomedical image segmentation. However, these methods are usually complex and require powerful computing resources, and it is impractical to rely on such resources in clinical settings. It is therefore important to develop accurate DL-based biomedical image segmentation methods that can run under resource-constrained computing. Results: A lightweight, multiscale network called PyConvU-Net is proposed to work with low-resource computing. In strictly controlled experiments, PyConvU-Net performs well on three biomedical image segmentation tasks while using the fewest parameters. Conclusions: Our experimental results preliminarily demonstrate the potential of the proposed PyConvU-Net for biomedical image segmentation under resource-constrained computing.
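
A hedged sketch of a pyramidal-convolution block of the kind PyConvU-Net could plug into a U-Net in place of a standard convolution; the kernel sizes, group counts, and channel split below are illustrative assumptions rather than the paper's actual configuration:

```python
import torch
import torch.nn as nn

class PyConvBlock(nn.Module):
    """Pyramidal convolution: parallel grouped convolutions with growing kernel
    sizes; larger kernels use more groups to keep the parameter count low."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        assert in_ch % 8 == 0 and out_ch % 32 == 0, "channel counts must suit the grouped branches"
        branch_ch = out_ch // 4
        self.branches = nn.ModuleList([
            nn.Conv2d(in_ch, branch_ch, k, padding=k // 2, groups=g)
            for k, g in [(3, 1), (5, 2), (7, 4), (9, 8)]
        ])

    def forward(self, x):
        # Each branch sees a different receptive field; outputs are concatenated channel-wise.
        return torch.cat([b(x) for b in self.branches], dim=1)

# usage (hypothetical channel sizes): block = PyConvBlock(64, 64)
```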


Author(s):  
Réka Hollandi ◽  
Ákos Diósdi ◽  
Gábor Hollandi ◽  
Nikita Moshkov ◽  
Péter Horváth

AnnotatorJ combines single-cell identification with deep learning and manual annotation. The quality of cellular analysis depends on accurate and reliable detection and segmentation of cells, so that subsequent analysis steps, e.g. expression measurements, can be carried out precisely and without bias. Deep learning has recently become a popular way of segmenting cells, performing far better than conventional methods. However, such deep learning applications must be trained on a large amount of annotated data to meet the highest expectations. High-quality annotations are unfortunately expensive, as they require field experts to create them, and they often cannot be shared outside the lab due to medical regulations. We propose AnnotatorJ, an ImageJ plugin for the semi-automatic annotation of cells (or, more generally, objects of interest) in 2D images, not only from microscopy, that helps find the true contour of individual objects by applying U-Net-based pre-segmentation. The manual labour of hand-annotating cells can be significantly accelerated with our tool. It thus enables users to create datasets that could potentially increase the accuracy of state-of-the-art solutions, deep learning or otherwise, when used as training data.
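
A schematic sketch of the contour-suggestion step (in Python for brevity; the actual plugin targets ImageJ): given a pre-segmentation probability map and a user-drawn bounding box, the largest detected object inside the box is returned as an editable contour. The function name, the fixed threshold, and the box format are assumptions for illustration:

```python
import numpy as np
from skimage import measure

def suggest_contour(prob_map, bbox, threshold=0.5):
    """Return the largest object contour inside a user-drawn box as a suggested annotation."""
    r0, c0, r1, c1 = bbox
    mask = prob_map[r0:r1, c0:c1] > threshold
    labels = measure.label(mask)
    if labels.max() == 0:
        return None                                          # nothing found; fall back to manual drawing
    largest = max(measure.regionprops(labels), key=lambda p: p.area)
    contours = measure.find_contours((labels == largest.label).astype(float), 0.5)
    contour = max(contours, key=len) + np.array([r0, c0])    # map back to full-image coordinates
    return contour                                            # (N, 2) row/col polyline the user can edit
```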


Author(s):  
Xiang He ◽  
Sibei Yang ◽  
Guanbin Li ◽  
Haofeng Li ◽  
Huiyou Chang ◽  
...  

Recent progress in biomedical image segmentation based on deep convolutional neural networks (CNNs) has drawn much attention. However, their vulnerability to adversarial samples cannot be overlooked. This paper is the first to show that state-of-the-art CNN-based biomedical image segmentation models are all sensitive to adversarial perturbations, which limits the deployment of these methods in safety-critical biomedical fields. We further find that the global spatial dependencies and global contextual information in a biomedical image can be exploited to defend against adversarial attacks. To this end, a non-local context encoder (NLCE) is proposed to model short- and long-range spatial dependencies and to encode global contexts, strengthening feature activations through channel-wise attention. The NLCE modules enhance the robustness and accuracy of the non-local context encoding network (NLCEN), which learns robust, enhanced pyramid feature representations with NLCE modules and then integrates the information across different levels. Experiments on lung and skin lesion segmentation datasets demonstrate that NLCEN outperforms other state-of-the-art biomedical image segmentation methods under adversarial attacks. In addition, NLCE modules can be applied to improve the robustness of other CNN-based biomedical image segmentation methods.
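
A hedged PyTorch sketch of a non-local block followed by channel-wise attention, illustrating the two ingredients an NLCE-style module combines (global spatial dependencies and channel re-weighting); the layer layout below is an assumption, not the paper's exact module:

```python
import torch
import torch.nn as nn

class NonLocalContextEncoder(nn.Module):
    """Non-local attention over all spatial positions, then squeeze-and-excitation
    style channel attention that re-weights (strengthens) feature activations."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        inter = channels // 2
        self.theta = nn.Conv2d(channels, inter, 1)
        self.phi = nn.Conv2d(channels, inter, 1)
        self.g = nn.Conv2d(channels, inter, 1)
        self.out = nn.Conv2d(inter, channels, 1)
        self.se = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)          # (B, HW, C/2)
        k = self.phi(x).flatten(2)                            # (B, C/2, HW)
        v = self.g(x).flatten(2).transpose(1, 2)              # (B, HW, C/2)
        attn = torch.softmax(q @ k / (q.size(-1) ** 0.5), dim=-1)
        ctx = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        y = x + self.out(ctx)                                 # global context added residually
        s = self.se(y.mean(dim=(2, 3))).view(b, c, 1, 1)      # channel-wise attention weights
        return y * s
```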


2013 ◽  
Vol 860-863 ◽  
pp. 2888-2891
Author(s):  
Yu Bing Dong ◽  
Ming Jing Li ◽  
Ying Sun

Thresholding is one of the critical steps in pattern recognition and has a significant effect on subsequent stages of image processing. Its main objectives are to separate objects from the background and to reduce the amount of data, thereby increasing processing speed. Various threshold segmentation methods are studied and compared using MATLAB 7.0, and the quality of the resulting segmentations is discussed. The results show that the iterative threshold segmentation method performs better than the others.
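
A minimal sketch of the iterative (ISODATA-style) threshold selection that the comparison favours, assuming a grayscale image stored as a NumPy array:

```python
import numpy as np

def iterative_threshold(image, tol=0.5):
    """Iterative threshold selection: repeatedly move the threshold to the midpoint
    of the foreground and background means until it stabilises."""
    t = float(image.mean())
    while True:
        low, high = image[image <= t], image[image > t]
        if low.size == 0 or high.size == 0:
            return t
        new_t = 0.5 * (low.mean() + high.mean())
        if abs(new_t - t) < tol:
            return new_t
        t = new_t

# usage: binary = image > iterative_threshold(image)   # objects separated from background
```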


Author(s):  
H Khastavaneh ◽  
H Ebrahimpour-komleh

Nowadays, medical imaging modalities are available almost everywhere. These modalities form the basis for diagnosing various diseases that affect specific tissue types, and physicians usually look for abnormalities in them during diagnostic procedures. The count and volume of abnormalities are very important for the optimal treatment of patients. Segmentation is a preliminary step for these measurements and for further analysis. Manual segmentation of abnormalities is cumbersome, error-prone, and subjective; as a result, automated segmentation of abnormal tissue is needed. In this study, representative techniques for the segmentation of abnormal tissues are reviewed, with the main focus on the segmentation of multiple sclerosis lesions, breast cancer masses, lung nodules, and skin lesions. As the experimental results demonstrate, methods based on deep learning perform better than methods based on hand-crafted feature engineering. Finally, the most common measures used to evaluate automated abnormal tissue segmentation methods are reported.


2021 ◽  
Author(s):  
Kai Zhang ◽  
Yang Shi ◽  
Chengquan Hu ◽  
Hang Yu

To address the rough edges and low accuracy of existing methods for cell nucleus image segmentation, a segmentation technique based on a generative adversarial network (GAN) and a fully convolutional network (FCN) is proposed. First, the FCN model performs a preliminary segmentation of the nucleus image, using convolutionalized fully connected layers and skip connections to improve segmentation accuracy. Then, the GAN is improved by introducing segmentation branches into the discriminator, combining the GAN and the segmentation network into one; at the same time, a pixel-wise loss is introduced in the generator so that the generated nucleus image is visually closer to the real image. Finally, the segmented image output by the FCN model is used as the input of the GAN to achieve high-precision segmentation of the nucleus image. The proposed method is demonstrated experimentally on the 2018 Data Science Bowl dataset. The results show that it converges rapidly and achieves a mean intersection over union (MIoU) of 85.34%, which is better than the other methods compared.
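
A schematic PyTorch training step for an FCN-plus-GAN pipeline of the kind described above, combining an adversarial loss with a pixel-wise loss in the generator; the discriminator's segmentation branches are not modeled here, and the network definitions, loss weighting, and input concatenation are illustrative assumptions:

```python
import torch
import torch.nn as nn

def train_step(fcn, generator, discriminator, g_opt, d_opt, image, true_mask, lam=10.0):
    """One schematic step: the FCN's coarse segmentation is refined by the generator,
    trained with an adversarial loss plus a pixel-wise (L1) loss against the true mask."""
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
    with torch.no_grad():
        coarse = torch.sigmoid(fcn(image))                        # preliminary FCN segmentation

    # Discriminator: distinguish real masks from refined (fake) masks, conditioned on the image.
    fake = torch.sigmoid(generator(torch.cat([image, coarse], dim=1)))
    d_real = discriminator(torch.cat([image, true_mask], dim=1))
    d_fake = discriminator(torch.cat([image, fake.detach()], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator: fool the discriminator while staying close to the true mask (pixel loss).
    d_fake = discriminator(torch.cat([image, fake], dim=1))
    g_loss = bce(d_fake, torch.ones_like(d_fake)) + lam * l1(fake, true_mask)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```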


2020 ◽  
Vol 10 (22) ◽  
pp. 7982
Author(s):  
Lorenzo Putzu ◽  
Giorgio Fumera

Cell nuclei segmentation is a challenging task, especially in real applications where the target images differ significantly from one another. The task is also challenging for methods based on convolutional neural networks (CNNs), which have recently boosted the performance of cell nuclei segmentation systems: when training data are scarce or not representative of the deployment scenario, CNNs may overfit to varying degrees and may generalise poorly to images that differ from those used for training. In this work, we focus on real-world, challenging application scenarios in which no annotated images from a given dataset are available, or in which only a few images (possibly unlabelled) from the same domain are available for domain adaptation. To simulate this scenario, we performed extensive cross-dataset experiments on several state-of-the-art CNN-based cell nuclei segmentation methods. Our results show that some of the existing CNN-based approaches can generalise to target images that resemble the ones used for training, whereas their effectiveness degrades considerably when target and source differ significantly in colour and scale.


2013 ◽  
Vol 303-306 ◽  
pp. 1105-1108
Author(s):  
Yin Hui Zhang ◽  
Zi Fen He ◽  
Sen Wang ◽  
Zhong Hai Shi

Image segmentation methods that exploit multiscale information about the image to be estimated have been extensively studied, typically within the Hidden Markov Tree (HMT) framework. We incorporate the wavelet-coefficient information of the original image, in the form of a Hidden Markov Tree model prior, for object segmentation. In this paper, we derive a generalized closed-form inference scheme that exactly determines the posterior likelihood at each iteration within a definite number of iteration steps. Extensive experiments show that this method performs better than many competitive multiscale image segmentation methods.
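
A small sketch of the multiscale wavelet decomposition on which an HMT prior operates, using PyWavelets; the closed-form posterior inference itself is not reproduced, and the wavelet choice and number of levels are assumptions:

```python
import numpy as np
import pywt

def wavelet_quadtree(image, wavelet="haar", levels=3):
    """Multiscale wavelet coefficients of the image; an HMT prior attaches a hidden state
    to each coefficient and links parent/child coefficients across scales."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    # coeffs[0] is the coarsest approximation; coeffs[i] = (cH, cV, cD) detail bands per scale.
    return coeffs

def detail_energy_per_scale(coeffs):
    """Per-scale detail energy, a simple summary of the multiscale information
    that such a segmentation prior draws on."""
    return [sum(float(np.sum(band ** 2)) for band in detail) for detail in coeffs[1:]]
```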

