AN EFFECTIVE FRAMEWORK FOR AUTOMATIC SEGMENTATION OF HARD EXUDATES IN FUNDUS IMAGES

2013 ◽  
Vol 22 (01) ◽  
pp. 1250075 ◽  
Author(s):  
NAN YANG ◽  
HU-CHUAN LU ◽  
GUO-LIANG FANG ◽  
GANG YANG

In this paper, we propose an effective framework to automatically segment hard exudates (HEs) in fundus images. Our framework is based on a coarse-to-fine strategy: we first obtain a coarse result that may contain some negative samples, and then eliminate these negative samples step by step. In our framework, we make the most of the multi-channel information by employing a boosted soft segmentation algorithm. Additionally, we develop a multi-scale background subtraction method to obtain the coarse segmentation result. After subtracting the optic disc (OD) region from the coarse result, the HEs are extracted by an SVM classifier. The main contributions of this paper are: (1) an efficient and robust framework for automatic HE segmentation; (2) a boosted soft segmentation algorithm that combines multi-channel information; (3) a double ring filter to segment and adjust the OD region. We perform our experiments on the public DIARETDB1 dataset, which consists of 89 fundus images. The performance of our algorithm is assessed using both lesion-based and image-based criteria. Our experimental results show that the proposed algorithm is very effective and robust.
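The abstract does not detail the multi-scale background subtraction step; a minimal NumPy sketch of the general idea — estimating the background by block averaging at several scales and keeping the largest residual, which highlights small bright lesions such as HEs — might look like this (the block sizes are illustrative, not the paper's):

```python
import numpy as np

def estimate_background(img, block):
    """Coarse background estimate: mean over non-overlapping blocks,
    then nearest-neighbour upsampling back to the image size."""
    h, w = img.shape
    H, W = h // block * block, w // block * block
    blocks = img[:H, :W].reshape(H // block, block, W // block, block)
    means = blocks.mean(axis=(1, 3))
    bg = np.repeat(np.repeat(means, block, axis=0), block, axis=1)
    # pad back to the original size by edge replication
    out = np.zeros_like(img, dtype=float)
    out[:H, :W] = bg
    out[H:, :] = out[H - 1:H, :]
    out[:, W:] = out[:, W - 1:W]
    return out

def multiscale_background_subtraction(img, blocks=(8, 16, 32)):
    """Subtract background estimates at several scales and keep the
    maximum residual, so bright lesions stand out at some scale."""
    img = img.astype(float)
    residuals = [img - estimate_background(img, b) for b in blocks]
    return np.maximum.reduce(residuals)
```

A bright spot on a smooth background yields a large residual at the spot and a near-zero residual elsewhere, giving the coarse candidate map that the later SVM stage then refines.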

2018 ◽  
Vol 7 (2.15) ◽  
pp. 154 ◽  
Author(s):  
Fanji Ari Mukti ◽  
C Eswaran ◽  
Noramiza Hashim ◽  
Ho Chiung Ching ◽  
Mohamed Uvaze Ahamed Ayoobkhan

In this paper, an automated system for grading the severity level of Diabetic Retinopathy (DR) disease based on fundus images is presented. Features are extracted using the fast discrete curvelet transform. These features are fed to a hierarchical support vector machine (SVM) classifier to obtain four grading levels, namely normal, mild, moderate and severe. The grading levels are determined by the number of anomalies, such as microaneurysms, hard exudates and haemorrhages, present in the fundus image. The performance of the proposed system is evaluated using fundus images from the Messidor database. Experimental results show that the proposed system achieves an accuracy rate of 86.23%.
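Since the grades are stated to depend on anomaly counts, the mapping can be sketched as a simple rule. The thresholds below are hypothetical placeholders for illustration only — the paper derives the grade with a hierarchical SVM, not fixed cut-offs:

```python
def grade_dr(microaneurysms, hard_exudates, haemorrhages):
    """Map anomaly counts to a DR severity grade.
    Thresholds are illustrative, not taken from the paper."""
    total = microaneurysms + hard_exudates + haemorrhages
    if total == 0:
        return "normal"
    if haemorrhages == 0 and total <= 5:
        return "mild"
    if total <= 15:
        return "moderate"
    return "severe"
```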


Author(s):  
Guangmin Sun ◽  
Zhongxiang Zhang ◽  
Junjie Zhang ◽  
Meilong Zhu ◽  
Xiao-rong Zhu ◽  
...  

Automatic segmentation of the optic disc (OD) and optic cup (OC) is an essential task in analysing colour fundus images. In clinical practice, accurate OD and OC segmentation assists ophthalmologists in diagnosing glaucoma. In this paper, we propose a unified convolutional neural network, named ResFPN-Net, which learns the boundary features and the inner relation between the OD and OC for automatic segmentation. The proposed ResFPN-Net is mainly composed of a multi-scale feature extractor, a multi-scale segmentation transition and an attention pyramid architecture. The multi-scale feature extractor performs feature encoding of fundus images and captures boundary representations. The multi-scale segmentation transition is employed to retain features at different scales. Moreover, an attention pyramid architecture is proposed to learn rich representations and the mutual connection between the OD and OC. To verify the effectiveness of the proposed method, we conducted extensive experiments on two public datasets. On the Drishti-GS database, we achieved Dice coefficients of 97.59% and 89.87%, accuracies of 99.21% and 98.77%, and Averaged Hausdorff distances of 0.099 and 0.882 for OD and OC segmentation, respectively. On the RIM-ONE database, we achieved Dice coefficients of 96.41% and 83.91%, accuracies of 99.30% and 99.24%, and Averaged Hausdorff distances of 0.166 and 1.210 for OD and OC segmentation, respectively. Comprehensive results show that the proposed method outperforms other competitive OD and OC segmentation methods and is more adaptable in cross-dataset scenarios. The introduced multi-scale loss function achieved significantly lower training loss and higher accuracy than other loss functions. Furthermore, the proposed method was validated on the OC-to-OD ratio calculation task, achieving the best MAE of 0.0499 and 0.0630 on the Drishti-GS and RIM-ONE datasets, respectively. Finally, we evaluated glaucoma screening on the Drishti-GS and RIM-ONE datasets, achieving AUCs of 0.8947 and 0.7964, respectively. These results prove that the proposed ResFPN-Net is effective in analysing fundus images for glaucoma screening and can be applied to other related biomedical image segmentation tasks.
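The two evaluation metrics used above follow standard definitions; a NumPy sketch, assuming binary masks for the Dice coefficient and boundary point sets for the Averaged Hausdorff distance:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A|+|B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def averaged_hausdorff(a_pts, b_pts):
    """Averaged Hausdorff distance between point sets of shape (N, 2)
    and (M, 2): mean of the two directed average nearest-neighbour
    distances."""
    d = np.linalg.norm(a_pts[:, None, :] - b_pts[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())
```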


Author(s):  
Haoyang Tang ◽  
Cong Song ◽  
Meng Qian

As breast cells are diverse in shape and often adhere to one another, fast and accurate breast cell segmentation remains a challenging task. In this paper, an automatic segmentation algorithm for breast cell images is proposed, which focuses on the segmentation of adherent cells. First, the breast cell image is enhanced by staining regularization. Then, the cells and background are separated by a multi-scale Convolutional Neural Network (CNN) to obtain the initial segmentation result. Finally, Curvature Scale Space (CSS) corner detection is used to segment the adherent cells. Experimental results show that the proposed algorithm achieves 93.01% accuracy, 93.93% sensitivity and 95.69% specificity. Compared with other breast cell segmentation algorithms, the proposed algorithm not only resolves the difficulty of segmenting adherent cells but also improves their segmentation accuracy.
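Full CSS corner detection smooths the contour at multiple scales before tracking curvature maxima; a minimal single-scale sketch — discrete curvature from central differences, thresholded to flag candidate corners where adherent cells meet — might look like this (the threshold is illustrative):

```python
import numpy as np

def discrete_curvature(contour):
    """Curvature kappa = (x'y'' - y'x'') / (x'^2 + y'^2)^1.5, estimated
    with central differences on a contour of shape (N, 2)."""
    x = contour[:, 0].astype(float)
    y = contour[:, 1].astype(float)
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    denom = np.maximum((dx ** 2 + dy ** 2) ** 1.5, 1e-12)
    return (dx * ddy - dy * ddx) / denom

def corner_candidates(contour, thresh=0.5):
    """Indices where |kappa| exceeds the threshold: candidate corner
    points separating adherent cells."""
    return np.flatnonzero(np.abs(discrete_curvature(contour)) > thresh)
```

On a square contour, the four corners produce large curvature magnitudes while edge points produce near-zero values, which is the cue CSS exploits at coarser smoothing scales.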


2020 ◽  
Vol 392 ◽  
pp. 314-324 ◽  
Author(s):  
Song Guo ◽  
Kai Wang ◽  
Hong Kang ◽  
Teng Liu ◽  
Yingqi Gao ◽  
...  

2021 ◽  
pp. 016173462110425
Author(s):  
Jianing Xi ◽  
Jiangang Chen ◽  
Zhao Wang ◽  
Dean Ta ◽  
Bing Lu ◽  
...  

Large-scale early scanning of fetuses via ultrasound imaging is widely used to reduce the morbidity and mortality caused by congenital anomalies of the fetal heart and lungs. To reduce the intensive cost of manually recognising organ regions, many automatic segmentation methods have been proposed. However, existing methods still face a multi-scale problem arising from the wide range of receptive-field sizes that organs occupy in images, limited resolution of the segmentation mask, and interference from task-irrelevant features, all of which hinder accurate segmentation. To achieve semantic segmentation that (1) extracts multi-scale features from images, (2) compensates for high-resolution information, and (3) eliminates task-irrelevant features, we propose a multi-scale model that integrates a skip-connection framework with an attention mechanism. Multi-scale feature extraction modules are combined with additive attention gate units for irrelevant-feature elimination, within a U-Net framework whose skip connections compensate for lost information. Performance on fetal heart and lung segmentation indicates the superiority of our method over existing deep learning based approaches. Our method also shows stable, competitive performance across semantic segmentation tasks, promising a contribution to ultrasound-based prognosis of congenital anomalies in early intervention and to alleviating their negative effects.
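An additive attention gate of the kind described (following the general Attention U-Net formulation; the weight matrices and shapes below are illustrative, not the paper's) can be sketched in NumPy:

```python
import numpy as np

def additive_attention_gate(x, g, W_x, W_g, psi):
    """Additive attention gate: the skip-connection feature map
    x (H, W, C) is reweighted by an attention map computed from x and
    the coarser gating signal g (H, W, Cg):
        alpha = sigmoid(psi . relu(x W_x + g W_g)),  output = alpha * x
    so task-irrelevant regions (alpha near 0) are suppressed."""
    q = np.maximum(x @ W_x + g @ W_g, 0.0)      # (H, W, Ci) joint features
    alpha = 1.0 / (1.0 + np.exp(-(q @ psi)))    # (H, W, 1) gate in (0, 1)
    return alpha * x                            # gated skip features
```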


2018 ◽  
Vol 7 (2.16) ◽  
pp. 29
Author(s):  
Gaurav Makwana ◽  
Lalita Gupta

Breast cancer is the most common disease among women of all ages. To identify and confirm the state of a tumour during breast cancer diagnosis, patients undergo biopsies a number of times to establish malignancy. Early detection of cancer can save the patient. In this paper, a novel approach for the automatic segmentation and classification of breast calcification is proposed. Existing diagnostic tests for assessing the breast condition are costly and require human expertise, whereas the proposed method helps identify the disease automatically by comparing the data with a standard database. In the proposed method, a database is created to define the various stages of breast calcification, and test images are pre-processed: resized, enhanced and filtered to remove background noise. Clustering is performed using the k-means algorithm. The grey-level co-occurrence matrix (GLCM) is used to extract statistical features such as area, mean, variance, standard deviation, homogeneity and skewness to classify the state of the tumour. An SVM classifier performs the classification using the extracted features.
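A minimal NumPy sketch of a GLCM and one derived texture feature (homogeneity), assuming a single horizontal pixel offset and a small number of grey levels — a simplification of the multi-offset GLCMs typically used in practice:

```python
import numpy as np

def glcm_horizontal(img, levels=8):
    """Normalised grey-level co-occurrence matrix for offset (1, 0):
    P[i, j] is the probability that a pixel of level i has a right-hand
    neighbour of level j."""
    q = np.rint(img.astype(float) / (img.max() + 1e-12) * (levels - 1)).astype(int)
    P = np.zeros((levels, levels))
    np.add.at(P, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1.0)
    return P / P.sum()

def homogeneity(P):
    """Inverse-difference-moment texture feature of a GLCM: close to 1
    for smooth regions, smaller for high-contrast textures."""
    i, j = np.indices(P.shape)
    return float((P / (1.0 + np.abs(i - j))).sum())
```

A flat image concentrates all co-occurrence mass on the diagonal (homogeneity 1), while a checkerboard puts it far off the diagonal, which is the kind of contrast the classifier exploits.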


Author(s):  
Zhao Sun ◽  
Yifu Wang ◽  
Lei Pan ◽  
Yunhong Xie ◽  
Bo Zhang ◽  
...  

Pine wilt disease (PWD) is currently one of the main causes of large-scale forest destruction. To control the spread of PWD, it is essential to detect affected pine trees quickly. This study investigated the feasibility of using an object-oriented multi-scale segmentation algorithm to identify trees discolored by PWD. We used an unmanned aerial vehicle (UAV) platform equipped with an RGB digital camera to obtain high-spatial-resolution images, applied multi-scale segmentation to delineate tree crowns, and coupled it with object-oriented classification to classify trees discolored by PWD. The optimal segmentation scale was determined using the estimation of scale parameter (ESP2) plug-in. The feature space of the segmentation results was optimized, and appropriate features were selected for classification. The results showed that the optimal scale, shape and compactness values of the tree crown segmentation algorithm were 56, 0.5 and 0.8, respectively. The producer's accuracy (PA), user's accuracy (UA) and F1 score were 0.722, 0.605 and 0.658, respectively. There were no significant classification errors in the final results, and the low accuracy was attributed to the small number of objects resulting from incorrect segmentation. The multi-scale segmentation and object-oriented classification method could accurately identify trees discolored by PWD through straightforward and rapid processing. This study provides a technical method for monitoring the occurrence of PWD and identifying discolored trees from UAV-based high-resolution images.
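The reported F1 score is consistent with the standard definition as the harmonic mean of the producer's accuracy (recall) and user's accuracy (precision); a quick check reproduces the paper's value:

```python
def f1_from_pa_ua(pa, ua):
    """F1 score as the harmonic mean of producer's accuracy (recall)
    and user's accuracy (precision)."""
    return 2.0 * pa * ua / (pa + ua)

# The values reported in the study: PA = 0.722, UA = 0.605
print(round(f1_from_pa_ua(0.722, 0.605), 3))  # matches the reported F1 of 0.658
```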

