Retinal blood vessel segmentation from retinal image using B-COSFIRE and adaptive thresholding

Author(s):  
Aziah Ali ◽  
Wan Mimi Diyana Wan Zaki ◽  
Aini Hussain

Segmentation of blood vessels (BVs) from retinal images is an important step in developing a computer-assisted retinal diagnosis system and has been widely researched, especially for implementing automatic BV segmentation methods. This paper proposes an improvement to an existing retinal BV (RBV) segmentation method by combining the trainable B-COSFIRE filter with adaptive thresholding methods. The B-COSFIRE filter automatically configures its selectivity given a prototype pattern to be detected, and its segmentation performance is comparable to many published methods, with the advantage of robustness against noise in the retinal background. Instead of using a grid search to find one optimal threshold value for a whole dataset, adaptive thresholding (AT) is used to determine the threshold for each retinal image. The two AT methods investigated in this study were ISODATA and Otsu's method. The proposed method was validated using 40 images from two benchmark datasets for retinal BV segmentation, namely DRIVE and STARE. The validation results indicate that the segmentation performance of the proposed unsupervised method is comparable to the original B-COSFIRE method and other published methods, without requiring ground truth data for a new dataset. The Sensitivity and Specificity values achieved were 0.7818 and 0.9688 for DRIVE, and 0.7957 and 0.9648 for STARE, respectively.
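The per-image adaptive thresholding step (Otsu's method) can be sketched in a few lines of NumPy. This is an illustrative version only: the `otsu_threshold` name and the synthetic filter responses are ours, and the B-COSFIRE filtering stage itself is not shown.

```python
import numpy as np

def otsu_threshold(values, nbins=256):
    """Otsu's method: choose the threshold that maximises the
    between-class variance of the histogram."""
    counts, edges = np.histogram(values, bins=nbins)
    centers = (edges[:-1] + edges[1:]) / 2
    p = counts / counts.sum()
    omega = np.cumsum(p)                  # cumulative class probability
    mu = np.cumsum(p * centers)           # cumulative class mean
    mu_t = mu[-1]                         # global mean
    # Between-class variance; guard the division at the histogram ends
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b = np.nan_to_num(sigma_b)
    return centers[np.argmax(sigma_b)]

# Synthetic bimodal filter response: many dim background pixels,
# few bright vessel pixels
rng = np.random.default_rng(0)
responses = np.concatenate([rng.normal(0.2, 0.05, 5000),
                            rng.normal(0.8, 0.05, 500)])
t = otsu_threshold(responses)
vessel_mask = responses > t        # binary vessel map for this image
```

Because the threshold is recomputed per image, no grid search over a labeled dataset is needed, which is what makes the method usable without ground truth on a new dataset.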

2020 ◽  
Vol 34 (04) ◽  
pp. 4469-4476 ◽  
Author(s):  
Shumin Kong ◽  
Tianyu Guo ◽  
Shan You ◽  
Chang Xu

Recently, the teacher-student learning paradigm has drawn much attention for compressing neural networks onto low-end edge devices, such as mobile phones and wearable watches. Current algorithms mainly assume that the complete dataset used to train the teacher network is also available for training the student network. In real-world scenarios, however, users may only have access to part of the training examples due to commercial interests or data privacy, and severe over-fitting can result. In this paper, we tackle the challenge of learning student networks with few data by investigating the ground-truth data-generating distribution underlying these few data. Taking the Wasserstein distance as the measurement, we assume this ideal data distribution lies in a neighborhood of the discrete empirical distribution induced by the training examples. Thus we propose to safely optimize the worst-case cost within this neighborhood to boost generalization. Furthermore, through theoretical analysis, we derive a novel and easy-to-implement loss for training the student network in an end-to-end fashion. Experimental results on benchmark datasets validate the effectiveness of the proposed method.
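The Wasserstein-neighborhood idea can be illustrated on one-dimensional empirical distributions, where the W1 distance between two equal-size sample sets reduces to comparing sorted samples. This sketch is ours and is not the paper's actual robust loss; it only shows what "a distribution within a W1 ball around the empirical distribution" means.

```python
import numpy as np

def wasserstein_1d(a, b):
    """W1 distance between two equal-size 1-D empirical distributions:
    the mean absolute difference of the sorted samples."""
    return np.mean(np.abs(np.sort(a) - np.sort(b)))

rng = np.random.default_rng(1)
samples = rng.normal(0.0, 1.0, 1000)   # the few observed examples
shifted = samples + 0.3                # every sample perturbed by 0.3
d = wasserstein_1d(samples, shifted)   # a pure translation moves W1 by exactly the shift
# Any dataset whose samples each move by at most eps stays inside the
# W1 ball of radius eps around the empirical distribution; the paper's
# loss optimizes the worst case over such nearby distributions.
```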


2013 ◽  
Vol 29 (4) ◽  
pp. 1521-1535 ◽  
Author(s):  
Pralhad Uprety ◽  
Fumio Yamazaki ◽  
Fabio Dell'Acqua

Satellite remote sensing is used to monitor disaster-affected areas for post-disaster reconnaissance and recovery. A special feature of Synthetic Aperture Radar (SAR) is that it operates day and night and penetrates cloud cover, which is why it is widely used in emergency situations. Building damage detection for the 6 April 2009 L'Aquila, Italy, earthquake was conducted using high-resolution TerraSAR-X images obtained before and after the event. The correlation coefficient and the difference of the backscatter coefficients between the pre- and post-event images were calculated, following the approach of Matsuoka and Yamazaki (2004). A threshold value for the correlation coefficient was proposed and used to detect building damage. The results were compared with ground truth data and a post-event optical image. Based on the study, building damage could be observed in the urban setting of L'Aquila with an overall accuracy of 89.8% and a Kappa coefficient of 0.45.
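The pre/post-event comparison can be sketched as a sliding-window correlation coefficient and backscatter difference. This is a toy NumPy illustration on synthetic data, not the study's processing chain: the window size, speckle handling, and the actual threshold value are our assumptions.

```python
import numpy as np

def local_corr_and_diff(pre, post, win=5):
    """Sliding-window correlation coefficient and mean backscatter
    difference between co-registered pre- and post-event SAR images."""
    h, w = pre.shape
    out_h, out_w = h - win + 1, w - win + 1
    r = np.zeros((out_h, out_w))
    d = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            x = pre[i:i + win, j:j + win].ravel()
            y = post[i:i + win, j:j + win].ravel()
            r[i, j] = np.corrcoef(x, y)[0, 1]
            d[i, j] = y.mean() - x.mean()
    return r, d

rng = np.random.default_rng(2)
pre = rng.normal(-8.0, 2.0, (40, 40))            # synthetic backscatter (dB)
post = pre + rng.normal(0.0, 0.3, (40, 40))      # intact areas stay correlated
post[10:25, 10:25] = rng.normal(-8.0, 2.0, (15, 15))  # "collapsed" block decorrelates
r, d = local_corr_and_diff(pre, post)
damage = r < 0.5    # low correlation flags candidate damage (toy threshold)
```

Intact areas keep a high pre/post correlation, while collapsed structures scatter the radar signal differently and the local correlation drops, which is the signal the thresholding exploits.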


2019 ◽  
Vol 6 (1) ◽  
pp. 32-37
Author(s):  
Ricky Ramadhan ◽  
Jayanti Yusmah Sari ◽  
Ika Purwanti Ningrum

The circulation of counterfeit money often troubles the public. The measure recommended by the government for guarding against counterfeit money is the 3D check (look, feel, and hold up to the light). However, this step alone cannot reliably distinguish real money from fake money, so a system is needed to help detect the authenticity of banknotes. In this study, a system was therefore designed that can detect the authenticity of rupiah banknotes and their nominal value. For data acquisition, the system uses a detection box, ultraviolet lights, and a smartphone camera. For feature extraction, the system uses segmentation: a threshold-based segmentation method is used to obtain the invisible-ink pattern that characterizes genuine money, along with the nominal value of the note. These features are then used in the authenticity-detection stage, which applies the FKNN (Fuzzy K-Nearest Neighbor) method. On 24 test data, an average accuracy of 96% was obtained, showing that the system can detect the authenticity and nominal value of the rupiah well.
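The FKNN step can be sketched as below. The feature values, function name, and toy dataset are hypothetical; the sketch only illustrates how fuzzy k-NN forms inverse-distance-weighted class memberships rather than a hard vote.

```python
import numpy as np

def fknn_predict(X_train, y_train, x, k=3, m=2.0):
    """Fuzzy k-NN: class memberships are weighted by inverse distance
    to the k nearest neighbours; the fuzzifier m controls the weighting."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))
    classes = np.unique(y_train)
    u = np.array([w[y_train[nn] == c].sum() for c in classes])
    u /= u.sum()                           # normalised memberships
    return classes[np.argmax(u)], u

# Hypothetical features, e.g. (ink-pattern area, mean UV intensity) in [0, 1]
X = np.array([[0.9, 0.8], [0.85, 0.9], [0.95, 0.85],   # genuine notes
              [0.2, 0.1], [0.15, 0.2], [0.1, 0.15]])   # counterfeit notes
y = np.array([1, 1, 1, 0, 0, 0])
label, memberships = fknn_predict(X, y, np.array([0.8, 0.75]))
```

Unlike crisp k-NN, the membership vector also conveys confidence, which is useful when a note's UV pattern is only partially visible.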


2021 ◽  
Vol 13 (13) ◽  
pp. 2619
Author(s):  
Joao Fonseca ◽  
Georgios Douzas ◽  
Fernando Bacao

In remote sensing, Active Learning (AL) has become an important technique to collect informative ground truth data "on-demand" for supervised classification tasks. Despite its effectiveness, it is still significantly reliant on user interaction, which makes it both expensive and time-consuming to implement. Most of the current literature focuses on the optimization of AL by modifying the selection criteria and the classifiers used. Although improvements in these areas will result in more effective data collection, the use of artificial data sources to reduce human-computer interaction remains unexplored. In this paper, we introduce a new component to the typical AL framework, the data generator, a source of artificial data to reduce the amount of user-labeled data required in AL. The implementation of the proposed AL framework is done using Geometric SMOTE as the data generator. We compare the new AL framework to the original one using similar acquisition functions and classifiers over three AL-specific performance metrics in seven benchmark datasets. We show that this modification of the AL framework significantly reduces cost and time requirements for a successful AL implementation in all of the datasets used in the experiment.
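A minimal sketch of the data-generator idea, assuming a plain SMOTE-style interpolation rather than Geometric SMOTE's actual sampling geometry (which draws points inside a geometric region around each seed); the function name and parameters are ours.

```python
import numpy as np

def smote_like_generate(X, n_new, k=3, rng=None):
    """Generate synthetic samples by interpolating each seed point
    toward one of its k nearest neighbours (plain SMOTE; Geometric
    SMOTE samples differently, but its role as a data generator
    inside the AL loop is the same)."""
    rng = rng or np.random.default_rng()
    new = np.empty((n_new, X.shape[1]))
    for t in range(n_new):
        i = rng.integers(len(X))
        d = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]      # skip the seed itself
        j = rng.choice(nbrs)
        lam = rng.random()
        new[t] = X[i] + lam * (X[j] - X[i])
    return new

rng = np.random.default_rng(4)
X = rng.normal(0.0, 1.0, (30, 5))          # a few user-labelled samples
synthetic = smote_like_generate(X, 100, rng=rng)
```

Each synthetic point inherits the label of its seed's class, so the classifier sees more training data per expensive user annotation.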


2021 ◽  
Vol 11 (19) ◽  
pp. 8817
Author(s):  
Ángela Almela

In the last decade, fields such as psychology and natural language processing have devoted considerable attention to automating deception detection, developing and employing a wide array of automated and computer-assisted methods for this purpose. Similarly, another emerging research area focuses on computer-assisted deception detection using linguistics, with promising results. Accordingly, the present article first provides an overall review of the state of the art in corpus-based research exploring linguistic cues to deception, together with an overview of several approaches to the study of deception and of previous research into its linguistic detection. To promote corpus-based research in this context, this study explores linguistic cues to deception in written Spanish with the aid of an automatic text classification tool, by means of an ad hoc corpus containing ground truth data. Interestingly, the key findings reveal that, although a set of linguistic cues contributes to the global statistical classification model, there are discursive differences across the subcorpora, with better classification results on the subcorpus containing emotionally loaded language.
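As a toy illustration of corpus-based linguistic cues, the sketch below computes a small cue vector for a text. The cue sets and English example are ours and far simpler than the study's Spanish feature set; they only show the kind of surface features a deception classifier might consume.

```python
import numpy as np

FIRST_PERSON = {"i", "my", "me", "we", "our"}   # toy cue lexicons, not
NEGATIONS = {"not", "no", "never"}              # the study's feature set

def cue_features(text):
    """Toy linguistic-cue vector: first-person pronoun rate,
    negation rate, and mean word length."""
    words = text.lower().split()
    n = max(len(words), 1)
    return np.array([
        sum(w in FIRST_PERSON for w in words) / n,
        sum(w in NEGATIONS for w in words) / n,
        sum(len(w) for w in words) / n,
    ])

f = cue_features("I did not do it")
```

Vectors like these, extracted per document, are what a statistical classifier is trained on against the corpus's ground-truth deceptive/truthful labels.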


Author(s):  
T. Wu ◽  
B. Vallet ◽  
M. Pierrot-Deseilligny ◽  
E. Rupnik

Abstract. Stereo dense matching is a fundamental task for 3D scene reconstruction. Recently, deep learning based methods have proven effective on some benchmark datasets, such as Middlebury and KITTI stereo. However, it is not easy to find a training dataset for aerial photogrammetry, because generating ground truth data for real scenes is challenging. In the photogrammetry community, many evaluation methods use digital surface models (DSMs) to generate the ground truth disparity for stereo pairs, but in this case interpolation may introduce errors into the estimated disparity. In this paper, we publish a stereo dense matching dataset based on the ISPRS Vaihingen dataset and use it to evaluate some traditional and deep learning based methods. The evaluation shows that learning-based methods significantly outperform traditional methods when fine-tuning is done on a similar landscape. The benchmark also investigates the impact of the base-to-height ratio on the performance of the evaluated methods. The dataset can be found at https://github.com/whuwuteng/benchmark_ISPRS2021.
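Evaluation against such ground-truth disparity is typically reported with the end-point error and a bad-pixel rate. A minimal sketch follows; the metric names track common stereo-benchmark usage rather than this paper's exact protocol, and the maps are synthetic.

```python
import numpy as np

def disparity_errors(pred, gt, tau=3.0):
    """End-point error (mean absolute disparity error) and bad-pixel
    rate (fraction of pixels with error above tau pixels), computed
    over pixels where the ground truth is finite."""
    valid = np.isfinite(gt)
    err = np.abs(pred - gt)[valid]
    return err.mean(), (err > tau).mean()

gt = np.full((4, 4), 10.0)     # synthetic ground-truth disparity map
pred = gt.copy()
pred[0, 0] = 15.0              # one pixel off by 5 px
epe, bad = disparity_errors(pred, gt)
```

Masking on finite ground truth matters here, because DSM-derived disparity is undefined or interpolated in occluded and unsurveyed areas.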


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Laura K. Young ◽  
Hannah E. Smithson

Abstract. High-resolution retinal imaging systems, such as adaptive optics scanning laser ophthalmoscopes (AOSLO), are increasingly being used for clinical research and fundamental studies in neuroscience. These systems offer unprecedented spatial and temporal resolution of retinal structures in vivo. However, a major challenge is the development of robust and automated methods for processing and analysing these images. We present ERICA (Emulated Retinal Image CApture), a simulation tool that generates realistic synthetic images of the human cone mosaic, mimicking images that would be captured by an AOSLO, with specified image quality and with corresponding ground-truth data. The simulation includes a self-organising mosaic of photoreceptors, the eye movements an observer might make during image capture, and data capture through a real system incorporating diffraction, residual optical aberrations and noise. The retinal photoreceptor mosaics generated by ERICA have a similar packing geometry to human retina, as determined by expert labelling of AOSLO images of real eyes. In the current implementation ERICA outputs convincingly realistic en face images of the cone photoreceptor mosaic, but extensions to other imaging modalities and structures are also discussed. These images and associated ground-truth data can be used to develop, test and validate image processing and analysis algorithms or to train and validate machine learning approaches. The use of synthetic images has the advantage that neither access to an imaging system nor to human participants is necessary for development.
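A crude stand-in for a synthetic photoreceptor mosaic can be sketched as a jittered hexagonal lattice; ERICA's actual self-organising model is more sophisticated, so the function below (our own) is purely illustrative of the "lattice-like packing with positional noise" idea.

```python
import numpy as np

def jittered_hex_mosaic(rows, cols, spacing=1.0, jitter=0.05, rng=None):
    """Synthetic cone-like mosaic: a hexagonal lattice with Gaussian
    positional jitter (a crude stand-in for a self-organising mosaic)."""
    rng = rng or np.random.default_rng()
    pts = np.array([
        (c * spacing + (0.5 * spacing if r % 2 else 0.0),
         r * spacing * np.sqrt(3) / 2)
        for r in range(rows) for c in range(cols)
    ], dtype=float)
    return pts + rng.normal(0.0, jitter * spacing, pts.shape)

pts = jittered_hex_mosaic(10, 10, rng=np.random.default_rng(3))
# Mean nearest-neighbour distance stays close to the lattice spacing,
# a basic property of real cone mosaics that such simulations preserve
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
np.fill_diagonal(d, np.inf)
nn_mean = d.min(axis=1).mean()
```

Because the generator knows every cone position exactly, the point list doubles as ground truth for validating cone-detection algorithms run on the rendered images.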


2018 ◽  
Vol 42 (2) ◽  
pp. 312-319 ◽  
Author(s):  
F. R. Zakani ◽  
M. Bouksim ◽  
K. Arhid ◽  
M. Aboulfatah ◽  
T. Gadi

3D mesh segmentation has become an essential step in many 3D shape analysis applications. In this paper, a new segmentation method is proposed, based on a learning approach that combines an artificial neural network classifier with spectral clustering. First, an artificial neural network is trained on existing ground-truth segmentations (produced by human operators) from the benchmark proposed by Chen et al., to extract candidate boundaries of a given 3D model based on a set of geometric criteria. The resulting knowledge is then used to construct a new connectivity of the mesh, and the spectral clustering method segments the 3D mesh into significant parts. The approach was evaluated using different evaluation metrics. The experiments confirm that the proposed method yields good results and outperforms several competitive segmentation methods in the literature.
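The spectral-clustering step can be illustrated on a toy graph: a bipartition via the Fiedler vector of the graph Laplacian. This is a simplified stand-in for the paper's pipeline, where the weighted connectivity comes from the learned boundary probabilities rather than the hand-built weights below.

```python
import numpy as np

def spectral_bipartition(W):
    """Bipartition a weighted graph using the Fiedler vector, i.e. the
    eigenvector of the second-smallest eigenvalue of L = D - W."""
    L = np.diag(W.sum(axis=1)) - W
    _, vecs = np.linalg.eigh(L)
    return (vecs[:, 1] > 0).astype(int)

# Toy "mesh graph": two 4-node cliques joined by one weak edge,
# mimicking two mesh parts separated by a likely boundary
W = np.zeros((8, 8))
for a in range(4):
    for b in range(4):
        if a != b:
            W[a, b] = W[a + 4, b + 4] = 1.0
W[3, 4] = W[4, 3] = 0.1      # weak bridge where a boundary is predicted
labels = spectral_bipartition(W)
```

Down-weighting edges that cross predicted boundaries is exactly what makes the spectral cut land on the semantically meaningful part borders.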


2021 ◽  
Author(s):  
Laura K Young ◽  
Hannah E Smithson

Abstract. High-resolution retinal imaging systems, such as adaptive optics scanning laser ophthalmoscopes (AOSLO), are increasingly being used for clinical and fundamental studies in neuroscience. These systems offer unprecedented spatial and temporal resolution of retinal structures in vivo. However, a major challenge is the development of robust and automated methods for processing and analysing these images. We present ERICA (Emulated Retinal Image CApture), a simulation tool that generates realistic synthetic images of the human cone mosaic, mimicking images that would be captured by an AOSLO, with specified image quality and with corresponding ground truth data. The simulation includes a self-organising mosaic of photoreceptors, the eye movements an observer might make during image capture, and data capture through a real system incorporating diffraction, residual optical aberrations and noise. The retinal photoreceptor mosaics generated by ERICA have a similar packing geometry to human retina, as determined by expert labelling of AOSLO images of real eyes. In the current implementation ERICA outputs convincingly realistic en face images of the cone photoreceptor mosaic, but extensions to other imaging modalities and structures are also discussed. These images and associated ground-truth data can be used to develop, test and validate image processing and analysis algorithms or to train and validate machine learning approaches. The use of synthetic images has the advantage that neither access to an imaging system nor to human participants is necessary for development.


2012 ◽  
Vol 3 (2) ◽  
pp. 253-255
Author(s):  
Raman Brar

Image segmentation plays a vital role in several medical imaging programs by assisting the delineation of physiological structures and other regions of interest. The objective of this research work is to segment human lung MRI (Magnetic Resonance Imaging) images for early detection of cancer. The watershed transform is implemented as the segmentation method in this work. Comparative experiments, using both the directly applied watershed algorithm and a version with foreground marking and background computation, show improved lung segmentation accuracy in some image cases.
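Marker-controlled watershed can be sketched as priority flooding from the markers. This toy implementation (our own, not the paper's code) grows regions in order of increasing intensity, so region boundaries settle on intensity ridges, which is why marking foreground and background before flooding reduces over-segmentation.

```python
import heapq
import numpy as np

def marker_watershed(image, markers):
    """Marker-controlled watershed by priority flooding: pixels are
    claimed from the markers outward in order of increasing intensity,
    so region boundaries settle on intensity ridges."""
    labels = markers.copy()
    h, w = image.shape
    heap = [(image[i, j], i, j) for i in range(h) for j in range(w)
            if markers[i, j] != 0]
    heapq.heapify(heap)
    while heap:
        _, i, j = heapq.heappop(heap)
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and labels[ni, nj] == 0:
                labels[ni, nj] = labels[i, j]
                heapq.heappush(heap, (image[ni, nj], ni, nj))
    return labels

# Two flat basins separated by a bright ridge at column 4
image = np.zeros((5, 9))
image[:, 4] = 10.0
markers = np.zeros((5, 9), dtype=int)
markers[2, 1], markers[2, 7] = 1, 2        # one seed per lung-like region
labels = marker_watershed(image, markers)
```

With one marker per lung, each flood claims its own basin and the boundary falls on the ridge between them, rather than on every local minimum as in the unmarked transform.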

