Active Contour Models
Recently Published Documents


TOTAL DOCUMENTS: 211 (five years: 26)

H-INDEX: 24 (five years: 3)

2021
Author(s): Du Lianyu, Hu Liwei, Zhang Xiaoyun, Zhong Yumin, Zhang Ya, ...

2021, Vol 11 (17), pp. 8039
Author(s): Younes Akbari, Hanadi Hassen, Somaya Al-Maadeed, Susu M. Zughaier

Pneumonia is a lung infection that threatens all age groups. In this paper, we use CT scans to investigate the effectiveness of active contour models (ACMs), one of the successful families of image segmentation methods, for segmenting pneumonia caused by coronavirus disease (COVID-19). The performances of state-of-the-art methods are compared on a database of lung CT scan images. This review helps the reader identify starting points for research on active contour models for COVID-19, a high priority for researchers and practitioners. Finally, the experimental results indicate that active contour methods achieve promising results when there are not enough images to train deep learning-based methods, which are among the most powerful tools for image segmentation.
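To illustrate the kind of region-based active contour the abstract above refers to, here is a minimal sketch of a two-phase Chan-Vese-style iteration. It is a hypothetical illustration, not any surveyed paper's exact method: dropping the curvature/length penalty reduces the model to pure region competition, where we alternately estimate the mean intensity inside and outside the contour and reassign each pixel to the closer mean.

```python
# Simplified two-phase Chan-Vese iteration (region competition only; the
# usual curvature/length regularization term is omitted for clarity).

def chan_vese_region_competition(image, mask, iterations=20):
    """image, mask: 2D lists of equal shape; mask holds 0/1 region labels."""
    h, w = len(image), len(image[0])
    for _ in range(iterations):
        inside = [image[i][j] for i in range(h) for j in range(w) if mask[i][j]]
        outside = [image[i][j] for i in range(h) for j in range(w) if not mask[i][j]]
        if not inside or not outside:      # one region vanished; stop
            break
        c1 = sum(inside) / len(inside)     # mean intensity inside the contour
        c2 = sum(outside) / len(outside)   # mean intensity outside
        new_mask = [[1 if (image[i][j] - c1) ** 2 < (image[i][j] - c2) ** 2 else 0
                     for j in range(w)] for i in range(h)]
        if new_mask == mask:               # labels stable: converged
            break
        mask = new_mask
    return mask

# Toy example: a bright 2x2 "lesion" on a dark background, seeded with a
# deliberately incomplete initial mask (only one lesion pixel marked).
img = [[0.1, 0.1, 0.1, 0.1],
       [0.1, 0.9, 0.8, 0.1],
       [0.1, 0.9, 0.9, 0.1],
       [0.1, 0.1, 0.1, 0.1]]
init = [[0, 0, 0, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
seg = chan_vese_region_competition(img, init)
```

Even from the poor seed, the competition step recovers the full bright square, which is the behavior that makes such models attractive when training data for deep networks is scarce.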


Author(s): Vamisdhar Entireddy, Babu K Rajesh, R Sampathkumar, Jyothirmai Gandeti, Syed Shameem, ...

2021, Vol 21 (03), pp. 2150031
Author(s): Yang Zheng, Zhongping Chen, Jiake Wang, Shu Jiang, Yu Liu

Segmentation of the left ventricle in ultrasound images viewed along different axes is a critical task. This paper proposes novel active contour models with a shape constraint to segment the left ventricle in three different axis views of ultrasound images. The shapes observed across the axis views of the left ventricle are not similar. During the cardiac cycle, the valve opening in the end-diastolic phase disturbs left ventricle segmentation; hence, a shape constraint is embedded in the active contour model to preserve the ventricle's shape, especially in the apical long-axis and apical four-chamber views. Furthermore, a different active contour model is proposed for each axis view to fit its particular situation. The shape constraint in each energy functional enforces a view-specific shape during the iteration. To speed up the evolution, the previous result is used to initialize the current active contour. We evaluated the proposed method on 57 patients across three views: apical long-axis, apical four-chamber, and short-axis at the papillary muscle level, obtaining Dice similarity coefficients of [Formula: see text], [Formula: see text] and [Formula: see text] and Hausdorff distances of [Formula: see text], [Formula: see text] and [Formula: see text], respectively. The qualitative and quantitative evaluations show the advantage of our method in segmentation accuracy.
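One common way to embed a shape constraint in a level-set active contour, and not necessarily the exact formulation used in the paper above, is to add a shape-dissimilarity penalty to a region-based energy:

```latex
E(\phi) \;=\; E_{\mathrm{region}}(\phi)
\;+\; \lambda \int_{\Omega} \bigl( H(\phi(\mathbf{x})) - H(\phi_0(\mathbf{x})) \bigr)^{2} \, d\mathbf{x}
```

Here \(\phi\) is the evolving level-set function, \(\phi_0\) encodes the reference ventricle shape for the current view, \(H\) is the Heaviside function, and \(\lambda\) trades off data fidelity against shape fidelity; a larger \(\lambda\) keeps the contour closer to the prior where the valve opening weakens the image evidence.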


Sensors, 2021, Vol 21 (5), pp. 1908
Author(s): Marko Savic, Yanhe Ma, Giovanni Ramponi, Weiwei Du, Yahui Peng

When dealing with computed tomography volume data, accurate segmentation of lung nodules is of great importance to lung cancer analysis and diagnosis, being a vital part of computer-aided diagnosis systems. However, due to the variety of lung nodules and the visual similarity between nodules and their surroundings, robust segmentation of nodules is a challenging problem. A segmentation algorithm based on the fast marching method is proposed: it separates the image into regions with similar features, which are then merged by combining region growing with k-means clustering. An evaluation was performed with two distinct methods (objective and subjective) applied to two different datasets, containing simulation data generated for this study and real patient data, respectively. The objective experimental results show that the proposed technique can accurately segment nodules, especially solid ones, with mean Dice scores of 0.933 and 0.901 for round and irregular nodules. For non-solid and cavitary nodules the performance dropped, to mean Dice scores of 0.799 and 0.614, respectively. The proposed method was compared to active contour models and to two modern deep learning networks. It reached better overall accuracy than the active contour models, with results comparable to DBResNet but lower accuracy than 3D-UNet. The results show promise for the proposed method in computer-aided diagnosis applications.
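The front-propagation idea behind the fast marching method in the abstract above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: for simplicity it uses a first-order Dijkstra-like update on a 4-connected grid (cost per step = 1/speed) rather than the full upwind Eikonal update, so it only approximates true fast marching.

```python
# Dijkstra-style front propagation in the spirit of the fast marching method.
import heapq

def march(speed, seed):
    """speed: 2D list of positive speeds; seed: (row, col) start point.
    Returns a 2D list of approximate arrival times of the front."""
    h, w = len(speed), len(speed[0])
    INF = float("inf")
    time = [[INF] * w for _ in range(h)]
    time[seed[0]][seed[1]] = 0.0
    heap = [(0.0, seed)]                      # narrow-band priority queue
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if t > time[i][j]:                    # stale entry: pixel already frozen
            continue
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                nt = t + 1.0 / speed[ni][nj]  # travel time across one pixel
                if nt < time[ni][nj]:
                    time[ni][nj] = nt
                    heapq.heappush(heap, (nt, (ni, nj)))
    return time

# A nodule-like region of high speed surrounded by low speed: the front
# reaches nodule pixels much sooner, so thresholding the arrival time
# yields a candidate region for the subsequent merging stage.
speed = [[1.0, 1.0, 0.1],
         [1.0, 1.0, 0.1],
         [0.1, 0.1, 0.1]]
T = march(speed, (0, 0))
region = [[1 if T[i][j] < 2.5 else 0 for j in range(3)] for i in range(3)]
```

In a pipeline like the one described, such arrival-time regions would then be merged by region growing and k-means on region-level features.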


Electronics, 2021, Vol 10 (2), pp. 192
Author(s): Umer Sadiq Khan, Xingjun Zhang, Yuanqi Su

The active contour model is a widely studied technique for salient object detection. Most active contour models for saliency detection are developed in the context of natural scenes, and their behavior on synthetic and medical images is not well investigated. Existing active contour models handle many kinds of complexity efficiently, but they face challenges on synthetic and medical images, such as producing a precise, automatically fitted contour within limited time and the high computational cost of initialization. Our aim is to detect the object boundary automatically, without re-initialization, and to let the subsequent evolution extract the salient object. To this end, we propose a simple novel numerical solution scheme that applies the fast Fourier transform (FFT) to the active contour (snake) differential equations. It has two major advantages: it completely avoids the expensive finite-difference approximation of spatial derivatives, and the regularization scheme can be extended more generally. Moreover, the FFT-based solution is significantly faster than the traditional solution in the spatial domain. Finally, the model uses a Fourier force function to fit curves naturally and extract salient objects from the background. Compared with state-of-the-art methods, the proposed method achieves at least a 3% increase in accuracy on three diverse sets of images. It also runs very fast: the average running time of the proposed method is about one twelfth of the baseline's.
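The Fourier-domain idea in the abstract above can be sketched for the classical semi-implicit snake update. This is a hedged sketch under standard snake assumptions, with illustrative parameter names not taken from the paper: the update (I + tau*A) x_new = x_old + tau*F uses a circulant stiffness matrix A built from the membrane (alpha) and thin-plate (beta) terms of a closed snake, and circulant systems diagonalize under the DFT, so the solve becomes elementwise division in the Fourier domain instead of a banded finite-difference solve.

```python
# Semi-implicit snake step solved in the Fourier domain via a naive DFT
# (stdlib only; a real implementation would use an FFT library).
import cmath, math

def dft(v):
    n = len(v)
    return [sum(v[j] * cmath.exp(-2j * math.pi * j * k / n) for j in range(n))
            for k in range(n)]

def idft(V):
    n = len(V)
    return [sum(V[k] * cmath.exp(2j * math.pi * j * k / n) for k in range(n)) / n
            for j in range(n)]

def snake_step(x, force, alpha=0.2, beta=0.1, tau=0.5):
    """One semi-implicit update of one coordinate array of a closed snake."""
    n = len(x)
    # First column of M = I + tau*A, where A discretizes -alpha*x'' + beta*x''''
    # on a closed (periodic) contour, making M circulant.
    m = [0.0] * n
    m[0] = 1.0 + tau * (2 * alpha + 6 * beta)
    m[1] += tau * (-alpha - 4 * beta); m[-1] += tau * (-alpha - 4 * beta)
    m[2] += tau * beta;                m[-2] += tau * beta
    b = [xi + tau * fi for xi, fi in zip(x, force)]       # right-hand side
    B, M = dft(b), dft(m)
    # Circulant solve: elementwise division of spectra, then inverse DFT.
    return [c.real for c in idft([Bk / Mk for Bk, Mk in zip(B, M)])]

# Usage: smooth one coordinate of a closed contour with a spike and no
# external force; the implicit internal energy pulls the spike down.
n = 8
x = [math.cos(2 * math.pi * k / n) for k in range(n)]
x[0] += 0.5
y = snake_step(x, [0.0] * n)
```

The design point is that the per-step cost becomes that of the transforms plus an elementwise division, which is where the reported speedup over the spatial-domain solution comes from.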

