Automatic Detection and Extraction of Lungs Cancer Nodules Using Connected Components Labeling and Distance Measure Based Classification

2021
Author(s):  
Mamdouh Monif ◽  
Kinan Mansour ◽  
Waad Ammar ◽  
Maan Ammar

In this paper, we introduce a method for reliable automatic extraction of the lung area from CT chest images covering a wide variety of lung shapes, using the Connected Components Labeling (CCL) technique together with morphological operations. The paper also introduces a method that uses the CCL technique with distance-measure-based classification for efficient detection of lung nodules within the extracted lung area. We further tested the complete detection and extraction approach with a performance consistency check, applying it to lung CT images of healthy persons (containing no nodules). The experimental results show that the method performs well at every stage.
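As a rough illustration of the extraction pipeline this abstract describes (thresholding, connected-components labeling, and morphological smoothing), the following Python sketch uses SciPy's `ndimage` module. The HU threshold, the rule of keeping the two largest interior components, and the structuring element are illustrative assumptions, not the authors' exact parameters.

```python
import numpy as np
from scipy import ndimage

def extract_lung_mask(ct_slice, air_threshold=-400):
    """Sketch of CCL-based lung extraction: threshold dark (air-filled)
    voxels, discard components touching the image border, keep the two
    largest remaining components (assumed to be the lungs), and smooth
    the result with a morphological closing."""
    binary = ct_slice < air_threshold          # air/lung voxels are dark in HU
    labels, _ = ndimage.label(binary)
    # components touching the border are background air, not lung
    border = set(labels[0, :]) | set(labels[-1, :]) | set(labels[:, 0]) | set(labels[:, -1])
    for lb in border:
        binary[labels == lb] = False
    labels, n = ndimage.label(binary)          # relabel interior components
    if n == 0:
        return np.zeros_like(binary)
    sizes = ndimage.sum(binary, labels, index=range(1, n + 1))
    keep = np.argsort(sizes)[-2:] + 1          # two largest components
    mask = np.isin(labels, keep)
    return ndimage.binary_closing(mask, structure=np.ones((3, 3)))
```

On a synthetic slice with two dark blobs on a soft-tissue background, the function returns a mask containing exactly those blobs.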

2021
Vol 12 (3)
pp. 25-43
Author(s):  
Maan Ammar ◽  
Muhammad Shamdeen ◽  
Mazen Kasedeh ◽  
Kinan Mansour ◽  
Waad Ammar

In this paper, we introduce a reliable method for automatic extraction of lung nodules from CT chest images and detail the use of the Weighted Euclidean Distance (WED) for classifying lung connected components as nodule or non-nodule. We also explain how Connected Component Labeling (CCL) supports an effective and flexible method for extracting the lung area from chest CT images across a wide variety of shapes and sizes; this extraction method uses morphological operations in addition to CCL. Our tests show that the introduced method performs well. Finally, to verify that the method behaves correctly on both healthy and patient CT images, we tested it on images of healthy persons and found its overall performance satisfactory.
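The nodule/non-nodule decision described above can be sketched as a nearest-prototype classifier under a weighted Euclidean distance. The feature vectors, prototypes, and weights below are placeholders, not the paper's trained values.

```python
import numpy as np

def weighted_euclidean_distance(x, prototype, weights):
    """WED between a candidate component's feature vector and a class
    prototype; `weights` scales each feature's contribution."""
    x, prototype, weights = map(np.asarray, (x, prototype, weights))
    return float(np.sqrt(np.sum(weights * (x - prototype) ** 2)))

def classify_component(features, nodule_proto, non_nodule_proto, weights):
    """Assign the connected component to the nearer prototype."""
    d_nod = weighted_euclidean_distance(features, nodule_proto, weights)
    d_non = weighted_euclidean_distance(features, non_nodule_proto, weights)
    return "nodule" if d_nod < d_non else "not-nodule"
```

For example, a component whose features lie near the nodule prototype is labeled "nodule", one near the other prototype "not-nodule".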


2019
Vol 2019
pp. 1-12
Author(s):  
Roszaharah Yaacob ◽  
Chok Dong Ooi ◽  
Haidi Ibrahim ◽  
Nik Fakhuruddin Nik Hassan ◽  
Puwira Jaya Othman ◽  
...  

The palmprint has become one of the biometric modalities usable for personal identification. This modality contains critical identification features such as minutiae, ridges, wrinkles, and creases. In this research, we focus on crease features, a salient characteristic of the palmprint; it is worth noting that creases-based identification is still uncommon. We propose a method to extract crease features from two regions of interest (ROIs): one in the hypothenar region and one in the interdigital region. To speed up the extraction, most processing is performed on an image downsampled by a factor of 10. The method involves segmentation through thresholding, morphological operations, and the Hough line transform. Experimental results on 101 palmprint input images show that the proposed method successfully extracts the ROIs from both regions, achieving an average sensitivity, specificity, and accuracy of 0.8159, 0.9975, and 0.9951, respectively.
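Two of the ingredients named above, 10x downsampling and the Hough line transform, can be sketched in a few lines of NumPy. This is a generic textbook formulation (decimation plus a (rho, theta) accumulator), not the authors' implementation.

```python
import numpy as np

def downsample(img, factor=10):
    """Decimate by taking every `factor`-th pixel: a cheap stand-in for
    the paper's factor-10 downsampling step."""
    return img[::factor, ::factor]

def hough_peak_angle(binary, n_theta=180):
    """Minimal Hough line transform on a binary image: vote in a
    (rho, theta) accumulator and return the angle (in degrees) of the
    strongest line through the foreground pixels."""
    ys, xs = np.nonzero(binary)
    thetas = np.deg2rad(np.arange(n_theta))
    diag = int(np.hypot(*binary.shape)) + 1
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    for theta_idx, t in enumerate(thetas):
        rho = np.round(xs * np.cos(t) + ys * np.sin(t)).astype(int) + diag
        np.add.at(acc[:, theta_idx], rho, 1)   # unbuffered accumulation
    return int(np.unravel_index(acc.argmax(), acc.shape)[1])
```

A horizontal row of crease pixels peaks at 90 degrees, a vertical one at 0 degrees, matching the usual normal-form parameterization rho = x cos(theta) + y sin(theta).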


2019
Vol 5 (6)
pp. 57
Author(s):  
Gang Wang ◽  
Bernard De Baets

Superpixel segmentation can benefit from an appropriate method for measuring edge strength. In this paper, we present such a method based on the first derivative of anisotropic Gaussian kernels. The kernels can capture the position, direction, prominence, and scale of the edge to be detected. We incorporate the anisotropic edge strength into the distance measure between neighboring superpixels, thereby improving the performance of an existing graph-based superpixel segmentation method. Experimental results show that our method outperforms competing methods in generating superpixels, and we illustrate that the proposed superpixel segmentation method can facilitate subsequent saliency detection.
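A minimal sketch of the edge-strength idea, assuming a standard formulation: build the first derivative of an anisotropic Gaussian at several orientations and take the maximum absolute filter response. The scales, orientation count, and kernel radius are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.signal import fftconvolve

def aniso_gauss_deriv_kernel(sigma_u=3.0, sigma_v=1.0, theta=0.0, radius=8):
    """First derivative (along the u-axis) of an anisotropic Gaussian
    rotated by `theta`; sigma_u sets the across-edge scale and sigma_v
    the along-edge scale."""
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    u = x * np.cos(theta) + y * np.sin(theta)    # across-edge axis
    v = -x * np.sin(theta) + y * np.cos(theta)   # along-edge axis
    g = np.exp(-(u ** 2 / (2 * sigma_u ** 2) + v ** 2 / (2 * sigma_v ** 2)))
    return (-u / sigma_u ** 2) * g               # d/du of the Gaussian

def edge_strength(img, n_orientations=8):
    """Anisotropic edge strength: maximum absolute response over an
    orientation bank."""
    thetas = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
    responses = [np.abs(fftconvolve(img, aniso_gauss_deriv_kernel(theta=t), mode="same"))
                 for t in thetas]
    return np.max(responses, axis=0)
```

On a vertical step image, the response concentrates at the step, so the edge strength there dominates the flat regions.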


Electronics
2021
Vol 10 (3)
pp. 325
Author(s):  
Zhihao Wu ◽  
Baopeng Zhang ◽  
Tianchen Zhou ◽  
Yan Li ◽  
Jianping Fan

In this paper, we develop a practical approach for automatic detection of discrimination actions from social images. First, an image set is established in which various discrimination actions and relations are manually labeled; to the best of our knowledge, this is the first dataset for discrimination action recognition and relationship identification. Second, a practical approach is developed for automatic detection and identification of discrimination actions and relationships from social images. Third, the task of relationship identification is seamlessly integrated with discrimination action recognition in a single network, the Co-operative Visual Translation Embedding++ network (CVTransE++). Experimental comparisons against numerous state-of-the-art methods demonstrate that our approach significantly outperforms them.


1980
Vol 12 (02)
pp. 319-349
Author(s):  
Bengt Von Bahr ◽  
Anders Martin-Löf

The Reed–Frost model for the spread of an infection is considered, and limit theorems for the total size, T, of the epidemic are proved in the limit when n, the initial number of healthy persons, is large and the probability p of an encounter between a healthy and an infected person per time unit is λ/n. It is shown that λ = 1 is a critical threshold in the following sense, when the initial number of infected persons, m, is finite: if λ ≤ 1, T remains finite and has a limit distribution which can be described. If λ > 1 this is still true with a probability σ_m < 1, and with probability 1 – σ_m, T is close to n(1 – σ) and has an approximately Gaussian distribution around this value. When m → ∞ as well, only the Gaussian part of the limit distribution is obtained. A randomized version of the Reed–Frost model is also considered, which allows the same result to be proved for the Kermack–McKendrick model. It is also shown that the limit theorem can be used to study the number of connected components in a random graph, which can be considered a crude description of a polymerization process. In this case polymerization takes place when λ > 1 and not when λ ≤ 1.
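The threshold behavior described above is easy to observe in simulation. The following chain-binomial sketch of the Reed–Frost model (a standard formulation: each susceptible escapes each of the i current infectives independently with probability 1 − p) returns the total epidemic size T; parameter choices are illustrative.

```python
import random

def reed_frost(n, m, p, seed=0):
    """Chain-binomial Reed-Frost epidemic: n initial susceptibles,
    m initial infectives, per-pair infection probability p per time
    unit. Returns the total size T of the epidemic."""
    rng = random.Random(seed)
    s, i, total = n, m, 0
    while i > 0:
        q = (1 - p) ** i   # prob. a susceptible escapes all i infectives
        new_i = sum(1 for _ in range(s) if rng.random() > q)
        s -= new_i
        total += new_i
        i = new_i
    return total
```

With p = λ/n, runs at λ < 1 stay small, while at λ > 1 a fraction of runs (probability about 1 − σ_m) produce a large outbreak near n(1 − σ), consistent with the limit theorem.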


Author(s):  
Fangrui Wu ◽  
Menglong Yang

Recent end-to-end CNN-based stereo matching algorithms obtain disparities by regression from a cost volume formed by concatenating the features of stereo pairs. Downsampling steps are often embedded in the construction of the cost volume for global information aggregation and computational efficiency. However, many edge details are hard to recover due to the crude upsampling process and ambiguous boundary predictions. To tackle this problem without training a separate edge prediction sub-network, we developed a novel tightly coupled edge refinement pipeline composed of two modules. The first module implements a gentle upsampling process via a cascaded cost-volume filtering method, aggregating global information without losing much detail. On this basis, the second module generates a disparity residual map for boundary pixels through a sub-pixel disparity consistency check to further recover edge details. Experimental results on public datasets demonstrate the effectiveness of the proposed method.
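The disparity consistency check mentioned above is commonly formulated as a left-right check; the sketch below shows that standard formulation (not necessarily the authors' exact sub-pixel variant): pixel (x, y) is kept if the right-view disparity at its matched location agrees with the left-view disparity within a tolerance.

```python
import numpy as np

def lr_consistency_mask(disp_left, disp_right, tau=1.0):
    """Left-right disparity consistency check: pixel (x, y) is
    consistent if |d_L(x, y) - d_R(x - d_L(x, y), y)| <= tau.
    Matched coordinates are rounded and clipped to the image width."""
    h, w = disp_left.shape
    ys, xs = np.mgrid[0:h, 0:w]
    xr = np.clip(np.round(xs - disp_left).astype(int), 0, w - 1)
    return np.abs(disp_left - disp_right[ys, xr]) <= tau
```

Inconsistent pixels (typically occlusions or boundary mismatches) are the ones a refinement module would revisit with a residual prediction.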


Complexity
2020
Vol 2020
pp. 1-7
Author(s):  
Huimin Xiao ◽  
Meiqi Wang ◽  
Xiaoning Xi

This paper proposes a consistency check method for hesitant fuzzy sets with confidence levels based on a distance measure. First, we analyze the difference between each hesitant fuzzy element and its corresponding comprehensive attribute decision value, obtaining a comprehensive distance measure for each attribute. Then, taking the relative credibility as the weight, we assess the consistency of the hesitant fuzzy sets. Finally, numerical examples verify the effectiveness and reliability of the proposed method.
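A building block for such a check is a distance between hesitant fuzzy elements. The sketch below uses the standard normalized Hamming distance with length extension (the paper's exact measure, which also incorporates confidence levels, may differ).

```python
def hfe_distance(h1, h2, extend="max"):
    """Normalized Hamming distance between two hesitant fuzzy elements,
    given as lists of membership degrees in [0, 1]. The shorter element
    is extended with its maximum (optimistic) or minimum (pessimistic)
    value so both have equal length."""
    a, b = sorted(h1), sorted(h2)
    l = max(len(a), len(b))
    pad = max if extend == "max" else min
    a = a + [pad(a)] * (l - len(a))
    b = b + [pad(b)] * (l - len(b))
    return sum(abs(x - y) for x, y in zip(a, b)) / l
```

For example, d({0.2, 0.4}, {0.6}) = (|0.2 − 0.6| + |0.4 − 0.6|) / 2 = 0.3 under the optimistic extension.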


2020 ◽  
Vol 40 (3) ◽  
pp. 1155-1173
Author(s):  
A. Touil ◽  
K. Kalti ◽  
P.-H. Conze ◽  
B. Solaiman ◽  
M.A. Mahjoub

2016 ◽  
Vol 64 (1) ◽  
pp. 103-113
Author(s):  
S. Skoneczny

This paper presents a novel approach to morphological contrast sharpening of images using the multilevel toggle operator. The concept presented here generalizes the toggle-based contrast operator for gray-level images; the multilevel toggle operator enhances the contrast of multivalued images. To perform the necessary morphological operations, a modified pairwise ordering (MPO) algorithm is proposed, which gives a total order on color pixels. Four other ordering methods are used for comparison. The main advantage of the proposed sharpener is its significant contrast-enhancing ability when using MPO. Theoretical considerations as well as practical results are presented, and experiments show the method's applicability to low-contrast color images.
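The gray-level toggle contrast operator that the paper generalizes can be sketched directly: each pixel snaps to its local dilation or erosion, whichever is closer, and stays unchanged on a tie. This shows only the classical scalar case, not the multilevel color version built on the MPO ordering.

```python
import numpy as np
from scipy import ndimage

def toggle_contrast(img, size=3):
    """Classical gray-level toggle contrast operator: replace each pixel
    by the local dilation if it is closer to the dilation than to the
    erosion, by the erosion if the reverse holds, and keep the original
    value on a tie."""
    dil = ndimage.grey_dilation(img, size=(size, size))
    ero = ndimage.grey_erosion(img, size=(size, size))
    out = np.where(dil - img < img - ero, dil, ero)
    return np.where(dil - img == img - ero, img, out)
```

Applied to a blurred step such as [0, 0, 0.3, 0.7, 1, 1], the operator sharpens it to an ideal step [0, 0, 0, 1, 1, 1].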


2018
Vol 6 (2)
pp. SD29-SD40
Author(s):  
Aina J. Bugge ◽  
Stuart R. Clark ◽  
Jan E. Lie ◽  
Jan I. Faleide

Recently, there has been a growing interest in automatic and semiautomatic seismic interpretation, and we have developed methods for extraction of 3D unconformities and faults from seismic data as alternatives to conventional and time-consuming manual interpretation. Our methods can be used separately or together, and they are time efficient and based on easily available 2D and 3D image-processing algorithms, such as morphological operations and image region property operations. The method for extraction of unconformities defines seismic sequences, based on their stratigraphic stacking patterns and seismic amplitudes, and extracts the boundaries between these sequences. The fault-extraction method extracts connected components from a coherence-based fault-likelihood cube where interfering objects are addressed prior to the extraction. We have used industry-based data acquired in a complex geological area and implemented our methods with a case study on the Polhem Subplatform, located in the southwestern Barents Sea north of Norway. For this case study, our methods result in the extraction of two unconformities and twenty-five faults. The unconformities are assumed to be the Base Pleistocene, which separates preglacial and postglacial Cenozoic sediments, and the Base Cretaceous, which separates the severely faulted Mesozoic strata from prograding Paleocene deposits. The faults are assumed to be mainly Jurassic normal faults, and they follow the trends of the eastern and southwestern boundaries of the Polhem Subplatform; the north–south-trending Jason Fault complex; and the northwest–southeast-trending Ringvassøy-Loppa Fault complex.

