Threshold Estimation Based on Local Minima for Nucleus and Cytoplasm Segmentation

Author(s):  
Simeon Mayala ◽  
Jonas Bull Haugsøen

Abstract
Background: Image segmentation is the process of partitioning an input image into its separate objects or regions. It is an essential step in image processing for segmenting the regions of interest (ROI) for further processing. We propose a method for segmenting the nucleus and cytoplasm of white blood cells (WBCs).
Methods: Initially, the method computes an initial value based on the minimum and maximum values of the input image. A histogram of the input image is then computed and approximated to obtain function values. The method searches the approximated function values for the first local maximum and first local minimum, and approximates the required threshold from the first local minimum and the computed initial value according to defined conditions. The threshold is applied to binarize the input image, and post-processing then yields the final segmented nucleus. Depending on the complexity of the objects in the image, we segment the whole WBC before segmenting the cytoplasm. For WBCs that are well separated from the red blood cells (RBCs), n thresholds are generated to produce n thresholded images, and the standard Otsu method is used to binarize the average of the produced images. Morphological operations are applied to the binarized image, and a single-pixel point from the segmented nucleus is then used to segment the WBC. For images in which RBCs touch the WBCs, we segment the whole WBC using the SLIC and watershed methods. The cytoplasm is obtained by subtracting the segmented nucleus from the segmented WBC.
Results: The method is tested on two different public data sets. Performance analysis shows that the proposed method segments the nucleus and cytoplasm well.
Conclusion: We propose a method for nucleus and cytoplasm segmentation based on the local minima of the function values approximated from the image histogram. The method has demonstrated its utility in segmenting nuclei, WBCs and cytoplasm, and the results are reasonable.
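As a rough illustration of the histogram-based idea described above (not the authors' exact algorithm; function and parameter names are our own), the following Python sketch smooths an image histogram and takes the first local minimum after the first local maximum as the threshold:

```python
import numpy as np

def local_minimum_threshold(image, bins=256, smooth=9):
    """Sketch: threshold from the first local minimum of a smoothed
    image histogram (illustrative, not the paper's exact method)."""
    hist, edges = np.histogram(image.ravel(), bins=bins)
    # Smooth the histogram with a simple moving average to suppress noise.
    kernel = np.ones(smooth) / smooth
    f = np.convolve(hist, kernel, mode="same")
    # Find the first local maximum...
    i = 1
    while i < len(f) - 1 and not (f[i] >= f[i - 1] and f[i] >= f[i + 1]):
        i += 1
    # ...then the first local minimum after it.
    j = i + 1
    while j < len(f) - 1 and not (f[j] <= f[j - 1] and f[j] <= f[j + 1]):
        j += 1
    # Map the bin index back to an intensity value.
    return 0.5 * (edges[j] + edges[j + 1])

# Usage: binary = image > local_minimum_threshold(image)
```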


2011 ◽  
Vol 1 (1) ◽  
pp. 1-16 ◽  
Author(s):  
Roland Winkler ◽  
Frank Klawonn ◽  
Rudolf Kruse

High dimensions have a devastating effect on the FCM algorithm and similar algorithms. One effect is that the prototypes run into the centre of gravity of the entire data set. The objective function must have a local minimum at the centre of gravity that causes this behaviour of FCM. In this paper, the authors examine this problem and answer the following questions: How many dimensions are necessary to cause ill behaviour of FCM? How does the number of prototypes influence this behaviour? Why does the objective function have a local minimum at the centre of gravity? How must FCM be initialised to avoid the local minimum at the centre of gravity? To understand the behaviour of the FCM algorithm and answer the above questions, the authors examine the values of the objective function and develop three test environments consisting of artificially generated data sets to provide a controlled environment. The paper concludes that FCM can only be applied successfully in high dimensions if the prototypes are initialised very close to the cluster centres.
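To make the centre-of-gravity effect concrete, here is a minimal sketch (our own, assuming the standard FCM objective with fuzzifier m) that compares the objective value when all prototypes sit at the data's centre of gravity with the value for prototypes placed near the true cluster centres:

```python
import numpy as np

def fcm_objective(X, V, m=2.0, eps=1e-12):
    """Standard FCM objective J = sum_ik u_ik^m ||x_k - v_i||^2, using the
    memberships u_ik that minimise J for fixed prototypes V."""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + eps  # (n, c)
    u = d2 ** (-1.0 / (m - 1.0))
    u /= u.sum(axis=1, keepdims=True)
    return float(((u ** m) * d2).sum())

rng = np.random.default_rng(0)
dim, centres = 50, np.array([-5.0, 5.0])
# Two well-separated Gaussian clusters in a high-dimensional space.
X = np.concatenate([rng.normal(c, 1.0, size=(100, dim)) for c in centres])

cog = X.mean(axis=0, keepdims=True)
V_cog = np.repeat(cog, 2, axis=0)                      # both prototypes at the centre of gravity
V_good = np.stack([np.full(dim, c) for c in centres])  # prototypes near the cluster centres

print(fcm_objective(X, V_cog), fcm_objective(X, V_good))
```

With prototypes near the true centres the objective is far lower, consistent with the paper's conclusion that only a very good initialisation rescues FCM in high dimensions.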


Author(s):  
Liming Li ◽  
Xiaodong Chai ◽  
Shuguang Zhao ◽  
Shubin Zheng ◽  
Shengchao Su

This paper proposes an effective method to elevate the performance of saliency detection via iterative bootstrap learning, which consists of two tasks: saliency optimization and saliency integration. Specifically, first, multiscale segmentation and feature extraction are performed on the input image successively. Second, prior saliency maps are generated using existing saliency models and are used to generate the initial saliency map. Third, the prior maps are fed into the saliency regressor together, where training samples are collected from the prior maps at multiple scales and a random forest regressor is learned from the training data. An integration of the initial saliency map and the output of the saliency regressor is deployed to generate the coarse saliency map. Finally, to further improve the quality of the saliency map, both the initial and coarse saliency maps are fed into the saliency regressor together, and the output of the saliency regressor, the initial saliency map and the coarse saliency map are integrated into the final saliency map. Experimental results on three public data sets demonstrate that the proposed method consistently achieves the best performance, and that significant improvement can be obtained when applying our method to existing saliency models.
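A highly simplified sketch of the regression step (our own illustration, not the authors' implementation): pixel-wise values of several prior saliency maps serve as features, the initial map serves as the pseudo ground truth, and a random forest regressor refines it.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def refine_saliency(prior_maps, initial_map, n_trees=50):
    """Sketch: learn a per-pixel regressor from prior maps to the initial
    (bootstrap) saliency map, then predict a refined map."""
    # Stack each prior map as one feature column; one row per pixel.
    X = np.stack([p.ravel() for p in prior_maps], axis=1)
    y = initial_map.ravel()
    reg = RandomForestRegressor(n_estimators=n_trees).fit(X, y)
    refined = reg.predict(X).reshape(initial_map.shape)
    # Integrate: simple average of the initial and regressed maps.
    return 0.5 * (initial_map + refined)
```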


Author(s):  
Manpreet Kaur ◽  
Jasdev Bhatti ◽  
Mohit Kumar Kakkar ◽  
Arun Upmanyu

Introduction: Face detection is used in many different streams such as video conferencing, human-computer interfaces and image database management. The aim of our paper is therefore to apply the Red Green Blue (RGB) colour space, together with the HSV, YCbCr and TSL colour models, to face detection.
Methods: Morphological operations are performed in the face region, with the number of pixels as the proposed parameter, to check whether an input image contains a face region or not. Canny edge detection is used to show the boundaries of a candidate face region; finally, the detected face is shown using a bounding box around the face.
Results: A reliability model has also been proposed for detecting faces in single and multiple images. The experimental results reflect that the proposed algorithm performs very well in each model for detecting faces in single and multiple images, and the reliability model provides the best fit through analysis of precision and accuracy.
Discussion: The calculated results show that the HSV model works best for single-faced images, whereas the YCbCr and TSL models work best for multiple-faced images. The results evaluated in this paper also provide better testing strategies that help develop new techniques, leading to an increase in research effectiveness.
Conclusion: The calculated values of all parameters show that the proposed algorithm performs very well in each model for detecting the face, using a bounding box around the face, in single as well as multiple images. The precision and accuracy of all three models are analysed through the reliability model; the comparison reflects that the HSV model works best for single-faced images whereas the YCbCr and TSL models work best for multiple-faced images.
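For illustration only (the paper's exact thresholds and pipeline are not given here), a minimal OpenCV sketch of this kind of pipeline: skin segmentation in HSV, morphological clean-up, and a bounding box around the largest candidate region. The skin-colour range is a common heuristic, not taken from the paper.

```python
import cv2
import numpy as np

def detect_face_candidate(bgr):
    """Sketch: HSV skin segmentation + morphology + bounding box."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Heuristic skin-colour range in HSV (assumption, not from the paper).
    mask = cv2.inRange(hsv, np.array([0, 40, 60]), np.array([25, 180, 255]))
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    cv2.rectangle(bgr, (x, y), (x + w, y + h), (0, 255, 0), 2)
    edges = cv2.Canny(mask, 50, 150)  # boundaries of the candidate region
    return (x, y, w, h), edges
```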


2021 ◽  
Vol 16 (1) ◽  
pp. 1-24
Author(s):  
Yaojin Lin ◽  
Qinghua Hu ◽  
Jinghua Liu ◽  
Xingquan Zhu ◽  
Xindong Wu

In multi-label learning, label correlations commonly exist in the data. Such correlation not only provides useful information but also imposes significant challenges for multi-label learning. Recently, label-specific feature embedding has been proposed to explore label-specific features from the training data and to use features highly customized to the multi-label set for learning. While such feature embedding methods have demonstrated good performance, the creation of the feature embedding space is based on a single label only, without considering label correlations in the data. In this article, we propose to combine multiple label-specific feature spaces, using label correlation, for multi-label learning. The proposed algorithm, multi-label-specific feature space ensemble (MULFE), takes into consideration label-specific features, label correlation, and the weighted ensemble principle to form a learning framework. By conducting clustering analysis on each label's negative and positive instances, MULFE first creates features customized to each label. After that, MULFE utilizes the label correlation to optimize the margin distribution of the base classifiers induced by the related label-specific feature spaces. By combining multiple label-specific features, label-correlation-based weighting, and ensemble learning, MULFE achieves the maximum-margin multi-label classification goal through the underlying optimization framework. Empirical studies on 10 public data sets manifest the effectiveness of MULFE.
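As a rough sketch of the label-specific feature construction step (modelled on LIFT-style approaches; names and details are our own, not MULFE's actual optimization framework): for each label, cluster its positive and negative instances and re-represent every instance by its distances to the cluster centres.

```python
import numpy as np
from sklearn.cluster import KMeans

def label_specific_features(X, y, k=3):
    """Sketch: for one label, build features from distances to cluster
    centres of the positive and negative instances (LIFT-style)."""
    pos, neg = X[y == 1], X[y == 0]
    centres = np.vstack([
        KMeans(n_clusters=min(k, len(pos)), n_init=10).fit(pos).cluster_centers_,
        KMeans(n_clusters=min(k, len(neg)), n_init=10).fit(neg).cluster_centers_,
    ])
    # Each instance is re-represented by its distances to all centres.
    return np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
```

An ensemble method in the spirit of MULFE would train one base classifier per such feature space and weight them using label correlations.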


2021 ◽  
pp. 016555152199863
Author(s):  
Ismael Vázquez ◽  
María Novo-Lourés ◽  
Reyes Pavón ◽  
Rosalía Laza ◽  
José Ramón Méndez ◽  
...  

Current research has evolved in such a way that scientists must not only adequately describe the algorithms they introduce and the results of their application, but also ensure the possibility of reproducing the results and comparing them with those obtained through other approaches. In this context, public data sets (sometimes shared through repositories) are one of the most important elements for the development of experimental protocols and test benches. This study has analysed a significant number of CS/ML (Computer Science/Machine Learning) research data repositories and data sets and detected some limitations that hamper their utility. In particular, we identify and discuss the following in-demand functionalities for repositories: (1) building customised data sets for specific research tasks, (2) facilitating the comparison of different techniques using dissimilar pre-processing methods, (3) ensuring the availability of software applications to reproduce the pre-processing steps without using the repository functionalities and (4) providing protection mechanisms for licensing issues and user rights. To demonstrate the introduced functionality, we created the STRep (Spam Text Repository) web application, which implements our recommendations adapted to the field of spam text repositories. In addition, we launched an instance of STRep at the URL https://rdata.4spam.group to facilitate understanding of this study.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Jiawei Lian ◽  
Junhong He ◽  
Yun Niu ◽  
Tianze Wang

Purpose: The current popular image processing technologies based on convolutional neural networks have the characteristics of heavy computation, high storage cost and low accuracy for tiny defect detection, which is contrary to the high real-time performance and accuracy, and the limited computing resources and storage, required by industrial applications. Therefore, an improved YOLOv4 named YOLOv4-Defect is proposed to solve the above problems.
Design/methodology/approach: On the one hand, this study performs multi-dimensional compression processing on the feature extraction network of YOLOv4 to simplify the model, and improves the feature extraction ability of the model through knowledge distillation. On the other hand, a prediction scale with a more detailed receptive field is added to optimize the model structure, which can improve the detection performance for tiny defects.
Findings: The effectiveness of the method is verified on the public data sets NEU-CLS and DAGM 2007, and on a steel ingot data set collected in an actual industrial setting. The experimental results demonstrate that the proposed YOLOv4-Defect method can greatly improve recognition efficiency and accuracy, and reduce the size and computational cost of the model.
Originality/value: This paper proposes an improved YOLOv4 named YOLOv4-Defect for surface defect detection, which is conducive to application in various industrial scenarios with limited storage and computing resources, and meets the requirements of high real-time performance and precision.
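A generic sketch of the knowledge-distillation idea mentioned in the design (standard soft-target distillation in PyTorch; not the authors' exact training setup, and all names are ours):

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      T=4.0, alpha=0.7):
    """Sketch: standard soft-target distillation. Combines a KL term
    against the teacher's softened outputs with the usual hard-label
    cross-entropy; T is the temperature, alpha the mixing weight."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale gradients to the hard-label magnitude
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard
```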


2003 ◽  
Vol 10 (04) ◽  
pp. 649-660
Author(s):  
D. K. Mak

It has always been stated in electronics, semiconductor and solid state device textbooks that the hole drift and electron drift currents in the depletion region of a p–n junction are constant and independent of applied voltage (biasing). However, the explanations given are qualitative and unclear. We extrapolate the existing analytic theory of a p–n junction to give a quantitative explanation of why the currents are constant. We have also shown that the carrier concentrations in the depletion region, as depicted in some of the textbooks, are incorrect and need to be revised. Our calculations further demonstrate that in reverse biasing, the hole and electron carrier concentrations each exhibit a local maximum and a local minimum, indicating that their diffusion currents change direction twice within the depletion region.
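For reference, the standard one-dimensional drift-diffusion current relations (textbook formulas, not reproduced from this paper) make the link between concentration extrema and current direction explicit:

```latex
J_p = q\,\mu_p\,p\,E \;-\; q\,D_p\,\frac{dp}{dx}, \qquad
J_n = q\,\mu_n\,n\,E \;+\; q\,D_n\,\frac{dn}{dx}
```

At a local maximum or minimum of a carrier concentration the spatial derivative vanishes and changes sign, so the diffusion component of the current reverses there; two extrema in p(x) thus imply that the hole diffusion current changes direction twice, matching the abstract's observation.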


The mortality rate is increasing among the growing population, and lung cancer is one of the leading causes. Early diagnosis is required to decrease the number of deaths and increase the survival rate of lung cancer patients. With the advancements in the medical field and its technologies, CAD systems have played a significant role in detecting early symptoms in patients, which cannot be done manually without error. CAD is a detection system that combines machine learning algorithms with image processing using computer vision. In this research, a novel approach to a CAD system is presented to detect lung cancer using image processing techniques and to classify the detected nodules with a CNN approach. The proposed method takes a CT scan image as input, and different image processing techniques such as histogram equalization, segmentation, morphological operations and feature extraction are performed on it. A CNN-based classifier is trained to classify the nodules as cancerous or non-cancerous. The performance of the system is evaluated in terms of sensitivity, specificity and accuracy.
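A minimal sketch of the preprocessing stage described above (our own illustration; the paper's parameters and CNN architecture are not specified here): histogram equalization, Otsu segmentation and morphological clean-up of a CT slice before nodule classification.

```python
import cv2

def preprocess_ct_slice(gray):
    """Sketch: enhance, segment and clean an 8-bit CT slice to expose
    nodule candidates for a downstream CNN classifier."""
    eq = cv2.equalizeHist(gray)  # histogram equalization
    _, seg = cv2.threshold(eq, 0, 255,
                           cv2.THRESH_BINARY + cv2.THRESH_OTSU)  # segmentation
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    seg = cv2.morphologyEx(seg, cv2.MORPH_OPEN, kernel)   # remove speckle
    seg = cv2.morphologyEx(seg, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return seg

# Candidate regions from the returned mask would then be cropped and fed
# to a CNN classifier (cancerous vs. non-cancerous).
```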


2011 ◽  
pp. 24-32 ◽  
Author(s):  
Nicoleta Rogovschi ◽  
Mustapha Lebbah ◽  
Younès Bennani

Most traditional clustering algorithms are limited to handling data sets that contain either continuous or categorical variables. However, data sets with mixed types of variables are commonly used in the data mining field. In this paper we introduce a weighted self-organizing map for clustering, analysis and visualization of mixed data (continuous/binary). The weights and prototypes are learned simultaneously, assuring an optimized data clustering. The higher a variable's weight, the more the clustering algorithm takes into account the information transmitted by that variable. The learning of these topological maps is combined with a weighting process of the different variables, computing weights that influence the quality of the clustering. We illustrate the power of this method on data sets taken from a public data set repository: a handwritten digit data set, the Zoo data set and three other mixed data sets. The results show a good quality of topological ordering and homogeneous clustering.
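A minimal sketch of the weighted-distance idea (our own, not the authors' learning rules): the best-matching unit of the map is chosen under a per-variable weighted distance, so high-weight variables dominate the clustering.

```python
import numpy as np

def best_matching_unit(x, prototypes, w):
    """Sketch: best-matching unit under a per-variable weighted squared
    distance; high-weight variables contribute more to the match."""
    d = (w * (prototypes - x) ** 2).sum(axis=1)
    return int(np.argmin(d))

# Toy usage with hypothetical shapes: 10 prototypes, 5 variables,
# where the first two variables are weighted most heavily.
rng = np.random.default_rng(1)
P, w = rng.normal(size=(10, 5)), np.array([2.0, 2.0, 0.5, 0.5, 0.5])
print(best_matching_unit(rng.normal(size=5), P, w))
```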

