Power Mean Based Image Segmentation in the Presence of Noise

Author(s):  
Afzal Rahman ◽  
Haider Ali ◽  
Noor Badshah ◽  
Muhammad Zakarya ◽  
Hameed Hussain ◽  
...  

Abstract In image segmentation, and in image processing more generally, noise and outliers distort the information an image contains, posing a great challenge to accurate segmentation. To ensure correct segmentation in the presence of noise and outliers, the outliers must either be identified and isolated in a denoising pre-processing step, or suitable constraints must be imposed within the segmentation framework. In this paper, we impose outlier-removing constraints, supported by a well-designed theory, in a variational framework for accurate image segmentation. We investigate a novel approach based on the power mean function, equipped with a well-established theoretical base. The power mean function has the capability to distinguish between true image pixels and outliers and is therefore robust against outliers. To deploy the novel image data term and to guarantee unique segmentation results, a fuzzy membership function is employed in the proposed energy functional. Extensive qualitative and quantitative analysis on various standard data sets shows that the proposed model works well on images containing multiple objects with high noise and on images with intensity inhomogeneity, in comparison with the latest state-of-the-art models.
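The robustness attributed to the power mean can be seen directly from its definition, M_p(x) = ((1/n) Σ x_i^p)^(1/p): for exponents p < 1 the mean is pulled far less toward large outliers than the arithmetic mean (p = 1). A minimal numerical sketch (the exponent value here is illustrative, not the paper's choice):

```python
import numpy as np

def power_mean(x, p):
    """Power (generalized) mean of positive values x with exponent p.
    p = 1 gives the arithmetic mean; smaller p damps large outliers."""
    x = np.asarray(x, dtype=float)
    return np.mean(x ** p) ** (1.0 / p)

# A patch of true pixels near intensity 100 plus one bright outlier.
patch = np.array([98.0, 101.0, 99.0, 100.0, 255.0])
arithmetic = power_mean(patch, 1.0)   # pulled strongly toward 255
robust = power_mean(patch, -2.0)      # negative exponent: outlier damped
```

Here `arithmetic` is 130.6 while `robust` stays near 109, much closer to the true patch intensity; this is the property that lets a power-mean-based data term separate true pixels from outliers.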

2018 ◽  
Vol 2018 ◽  
pp. 1-14 ◽  
Author(s):  
Ying Li ◽  
Shuliang Wang ◽  
Caoyuan Li ◽  
Zhenkuan Pan ◽  
Weizhong Zhang

Color image segmentation is fundamental in image processing and computer vision. A novel approach, GDF-Ncut, is proposed to segment color images by integrating a generalized data field (GDF) and improved normalized cuts (Ncut). To start with, a hierarchy-grid structure is constructed in the color feature space of an image to reduce time complexity while preserving the quality of segmentation. Then fast hierarchy-grid clustering is performed under GDF potential estimation, so that image pixels are merged into disjoint, oversegmented but meaningful initial regions. Finally, these regions are represented as a weighted undirected graph, on which the Ncut algorithm merges homogeneous initial regions to achieve the final segmentation. The fast clustering improves the effectiveness of Ncut because a region-based graph is constructed instead of a pixel-based one. Meanwhile, during the Ncut matrix computation, oversegmented regions are grouped into homogeneous parts, which greatly ameliorates the intermediate results from GDF and accordingly decreases the sensitivity to noise. Experimental results on a variety of color images demonstrate that the proposed method significantly reduces time complexity while partitioning images into meaningful and physically connected regions. The method can potentially serve object extraction and pattern recognition.
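The final Ncut step over initial regions can be illustrated on a small region-level affinity matrix. The sketch below is a generic two-way normalized cut via the symmetric normalized Laplacian, not the paper's exact implementation: it thresholds the second-smallest eigenvector to bipartition the region graph.

```python
import numpy as np

def ncut_bipartition(W):
    """Two-way normalized cut on a region-level affinity matrix W.
    Solves the relaxed problem via the symmetric normalized Laplacian
    and thresholds the second-smallest eigenvector at zero."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    lap = np.diag(d) - W                      # unnormalized graph Laplacian
    lap_sym = d_inv_sqrt @ lap @ d_inv_sqrt   # symmetric normalization
    _, vecs = np.linalg.eigh(lap_sym)
    y = d_inv_sqrt @ vecs[:, 1]               # second-smallest eigenvector
    return y >= 0

# Four regions: {0, 1} and {2, 3} are internally similar, weakly linked.
W = np.array([[1.00, 1.00, 0.01, 0.01],
              [1.00, 1.00, 0.01, 0.01],
              [0.01, 0.01, 1.00, 1.00],
              [0.01, 0.01, 1.00, 1.00]])
labels = ncut_bipartition(W)
```

Working on a handful of regions instead of every pixel keeps the eigen-decomposition cheap, which is exactly the efficiency argument the abstract makes for the region-based graph.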


2018 ◽  
Vol 8 (12) ◽  
pp. 2393 ◽  
Author(s):  
Lin Sun ◽  
Xinchao Meng ◽  
Jiucheng Xu ◽  
Shiguang Zhang

When the level set algorithm is used to segment an image, the level set function must be reinitialized periodically to ensure that it remains a signed distance function (SDF). To avoid this defect, an image segmentation approach based on an improved regularized level set method is presented. First, a new potential function is defined and used to construct a new distance regularization term that removes the need to periodically reinitialize the level set function. Second, by combining the distance regularization term with internal and external energy terms, a new energy functional is developed. Then, the evolution of the new energy functional is derived using the calculus of variations and the steepest descent approach, and a partial differential equation is designed. Finally, an improved regularized level set-based image segmentation (IRLS-IS) method is proposed. Numerical experiments demonstrate that the IRLS-IS method is not only effective and robust in segmenting noisy and intensity-inhomogeneous images but can also analyze complex medical images well.
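For orientation, the classic distance regularization this line of work builds on (Li et al.'s DRLSE) uses a double-well potential with minima at s = 0 and s = 1, so gradient descent drives |∇φ| toward 1 (a signed distance profile) near the contour without periodic re-initialization. The paper above defines a new potential; the sketch shows only the standard one:

```python
import numpy as np

def double_well_potential(s):
    """Classic DRLSE double-well potential with minima at s = 0 and s = 1.
    Applied to s = |grad(phi)|, the regularization term it induces keeps
    |grad(phi)| near 1 close to the zero level set, so the level set
    function stays a signed distance function without re-initialization."""
    s = np.asarray(s, dtype=float)
    near = (1.0 / (2.0 * np.pi) ** 2) * (1.0 - np.cos(2.0 * np.pi * s))
    far = 0.5 * (s - 1.0) ** 2
    return np.where(s <= 1.0, near, far)
```

The two minima matter: gradients far from the contour can relax to |∇φ| = 0 instead of being forced to 1 everywhere, which avoids the side effects of naive |∇φ| = 1 penalties.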


Author(s):  
Hao Zheng ◽  
Lin Yang ◽  
Jianxu Chen ◽  
Jun Han ◽  
Yizhe Zhang ◽  
...  

Deep learning has been applied successfully to many biomedical image segmentation tasks. However, due to the diversity and complexity of biomedical image data, manual annotation for training common deep learning models is very time-consuming and labor-intensive, especially because normally only biomedical experts can annotate image data well. Human experts are often involved in a long and iterative process of annotation, as in active-learning-type annotation schemes. In this paper, we propose representative annotation (RA), a new deep learning framework for reducing annotation effort in biomedical image segmentation. RA uses unsupervised networks for feature extraction and selects representative image patches for annotation in the latent space of learned feature descriptors, which implicitly characterizes the underlying data while minimizing redundancy. A fully convolutional network (FCN) is then trained on the annotated selected image patches for image segmentation. Our RA scheme offers three compelling advantages: (1) it leverages the ability of deep neural networks to learn better representations of image data; (2) it performs one-shot selection for manual annotation and frees annotators from the iterative process of common active-learning-based annotation schemes; (3) it can be deployed to 3D images with simple extensions. We evaluate our RA approach on three datasets (two 2D and one 3D) and show that our framework yields competitive segmentation results compared with state-of-the-art methods.
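The one-shot selection step can be sketched as clustering in the learned latent space and annotating the item nearest each cluster center. The code below is a simplified stand-in (plain k-means with farthest-point initialization; the paper's exact selection criterion may differ):

```python
import numpy as np

def select_representatives(features, k, iters=20):
    """Pick k representative items: cluster latent descriptors with a
    small k-means (farthest-point init, deterministic) and return the
    index of the item nearest each centroid."""
    features = np.asarray(features, dtype=float)
    # Farthest-point initialization keeps the seeds spread apart.
    centroids = [features[0]]
    for _ in range(1, k):
        dists = np.min([np.linalg.norm(features - c, axis=1)
                        for c in centroids], axis=0)
        centroids.append(features[int(dists.argmax())])
    centroids = np.array(centroids)
    for _ in range(iters):  # standard Lloyd iterations
        d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for c in range(k):
            members = features[labels == c]
            if len(members):
                centroids[c] = members.mean(axis=0)
    d = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    labels = d.argmin(axis=1)
    return sorted(int(np.argmin(np.where(labels == c, d[:, c], np.inf)))
                  for c in range(k))

# Two well-separated groups of latent vectors; one representative each.
feats = np.vstack([np.random.default_rng(0).normal(0.0, 0.1, (5, 2)),
                   np.random.default_rng(1).normal(10.0, 0.1, (5, 2))])
reps = select_representatives(feats, k=2)
```

Returning the nearest real item per centroid, rather than the centroid itself, is what makes the selection directly annotatable: each representative is an actual patch a human can label.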


Author(s):  
Lars J. Isaksson ◽  
Paul Summers ◽  
Sara Raimondi ◽  
Sara Gandini ◽  
Abhir Bhalerao ◽  
...  

Abstract Researchers address the generalization problem of deep image processing networks mainly through extensive use of data augmentation techniques such as random flips, rotations, and deformations. A data augmentation technique called mixup, which constructs virtual training samples from convex combinations of inputs, was recently proposed for deep classification networks. The algorithm contributed to increased classification performance across a variety of datasets, but so far has not been evaluated for image segmentation tasks. In this paper, we tested whether the mixup algorithm can improve the generalization performance of deep segmentation networks on medical image data. We trained a standard U-net architecture to segment the prostate in 100 T2-weighted 3D magnetic resonance images from prostate cancer patients, and compared the results with and without mixup in terms of the Dice similarity coefficient and mean surface distance from a reference segmentation made by an experienced radiologist. Our results suggest that mixup offers a statistically significant boost in performance compared to non-mixup training, leading to up to a 1.9% increase in Dice and a 10.9% decrease in surface distance. The mixup algorithm may thus offer an important aid for medical image segmentation applications, which are typically limited by severe data scarcity.
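Mixup itself is a few lines: draw λ from a Beta(α, α) distribution and form convex combinations of both inputs and labels, which for segmentation means mixing the label maps. A minimal sketch of the recipe evaluated above (α = 0.4 is a common choice from the classification literature, not necessarily the paper's setting):

```python
import numpy as np

def mixup_pair(x1, y1, x2, y2, alpha=0.4, rng=None):
    """One virtual training sample: a convex combination of two
    image/label pairs with lambda ~ Beta(alpha, alpha)."""
    rng = np.random.default_rng(0) if rng is None else rng
    lam = rng.beta(alpha, alpha)
    x = lam * x1 + (1.0 - lam) * x2
    y = lam * y1 + (1.0 - lam) * y2   # soft label map for segmentation
    return x, y, lam

# Two toy "images" with binary segmentation masks.
img_a, mask_a = np.full((4, 4), 0.2), np.zeros((4, 4))
img_b, mask_b = np.full((4, 4), 0.8), np.ones((4, 4))
mixed_img, mixed_mask, lam = mixup_pair(img_a, mask_a, img_b, mask_b)
```

Because the resulting masks are soft, the segmentation loss must accept non-binary targets (e.g. a soft Dice or cross-entropy with soft labels).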


Author(s):  
J. Choi ◽  
L. Zhu ◽  
H. Kurosu

In the current study, we developed a methodology for detecting cracks in the surface of a paved road using a 3D digital surface model of the road, created automatically by measurement with a three-dimensional laser scanner that works on the basis of the light-section method. To detect cracks in the model's imagery data, the background subtraction method (the Rolling Ball Background Subtraction Algorithm) was applied to filter out the background noise originating from undulation and gradual slope, and also to filter out the ruts caused by wear, aging, excessive use of the road, and other factors. We confirmed that the influence of the height (depth) differences caused by the foregoing factors can be reduced significantly at this stage. Various ball radii were applied to check how the results of this process vary with the parameter, and it became clear that there are no important differences as long as the radius lies within a certain range. Then, image segmentation was performed by multi-resolution segmentation based on the object-based image analysis technique, using scale, pixel value (height/depth), and object compactness as parameters. For the classification of cracks in the database, height, length, and other geometric properties are used, and we confirmed that the method is useful for detecting cracks in a paved road surface.


In the domain of image signal processing, image compression is a significant technique, mainly intended to reduce the redundancy of image data so that image pixels can be transmitted at high quality and resolution. Standard image compression techniques, lossless and lossy, generate high compression ratios with efficient storage and transmission requirements, respectively. Many image compression techniques are available, for example JPEG and DWT- and DCT-based compression algorithms, which provide effective results in terms of high compression ratios with clear image quality. However, they carry considerable computational complexity in terms of processing, encoding, energy consumption, and hardware design. Bringing out these challenges, this paper considers the most prominent research papers and discusses FPGA architecture design and future scope in the state of the art of image compression. The primary aim is to investigate the research challenges in VLSI design and image compression. The core of the study comprises three parts: standard architecture designs, related work, and open research challenges in the domain of image compression.
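As a concrete reference point for the DCT-based family discussed above, the sketch below compresses a single 8×8 block by transforming, keeping only the largest-magnitude coefficients (a crude stand-in for JPEG's quantization tables), and reconstructing. This illustrates the principle only, not any surveyed hardware architecture:

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep):
    """Toy transform coding of one 8x8 block: 2-D DCT, zero all but
    the `keep` largest-magnitude coefficients, inverse DCT."""
    coeffs = dctn(block, norm="ortho")
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    kept = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    return idctn(kept, norm="ortho")

# A smooth horizontal ramp: its energy concentrates in few coefficients.
ramp = np.tile(np.arange(8, dtype=float), (8, 1))
exact = compress_block(ramp, keep=8)    # enough coeffs: exact recovery
dc_only = compress_block(ramp, keep=1)  # keeps just the DC term
```

Keeping only the DC coefficient collapses the block to its mean value; smooth content survives aggressive coefficient pruning, which is why transform coding compresses natural images so well.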

