A Deep Learning Approach for Retinal Image Feature Extraction

2021 · Vol 29 (4)
Author(s): Mohammed Enamul Hoque, Kuryati Kipli, Tengku Mohd Afendi Zulcaffle, Abdulrazak Yahya Saleh Al-Hababi, Dayang Azra Awang Mat, et al.

Retinal image analysis is crucially important for detecting various life-threatening cardiovascular and ophthalmic diseases, as the human retinal microvasculature exhibits remarkable abnormalities in response to these disorders. The high dimensionality and random accumulation of retinal images enlarge the data size, creating complexity in managing and understanding the retinal image data. Deep Learning (DL) has been introduced to deal with this big-data challenge by developing intelligent tools. The Convolutional Neural Network (CNN), a DL approach, has been designed to extract hierarchical image features at higher levels of abstraction. To assist ophthalmologists in eye screening and ophthalmic disease diagnosis, CNNs are being explored to create automatic systems for microvascular pattern analysis, feature extraction, and quantification of retinal images. Extraction of the true vessels of the retinal microvasculature is significant for further analysis, such as quantification of vessel diameter and bifurcation angle. This study proposes an approach for extracting a key retinal image feature, true vessel segments, by exploiting Faster R-CNN. Fundamental image processing principles have been employed to pre-process the retinal image data. A combined database assembling image data from different publicly available databases has been used to train, test, and evaluate the proposed method, which obtained 92.81% sensitivity and a 63.34% positive predictive value in extracting true vessel segments from the top first tier of colour retinal images. With further evaluation and validation of its performance, this method is expected to be integrated into ophthalmic diagnostic tools.
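The reported metrics can be reproduced from raw detection counts. A minimal sketch (the count values below are illustrative, not taken from the paper):

```python
# Sensitivity (recall) and positive predictive value (precision)
# computed from true-positive, false-positive, and false-negative
# vessel-segment detection counts. Counts here are hypothetical.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of true vessel segments that were detected."""
    return tp / (tp + fn)

def positive_predictive_value(tp: int, fp: int) -> float:
    """Fraction of detected segments that are true vessels."""
    return tp / (tp + fp)

if __name__ == "__main__":
    tp, fp, fn = 929, 538, 72   # hypothetical detection counts
    print(f"sensitivity = {sensitivity(tp, fn):.2%}")
    print(f"PPV         = {positive_predictive_value(tp, fp):.2%}")
```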

2021 · Vol 17 (14) · pp. 103-118
Author(s): Mohammed Enamul Hoque, Kuryati Kipli

Image recognition and understanding is considered a remarkable subfield of Artificial Intelligence (AI). In practice, retinal image data have high dimensionality, leading to enormously sized data. As morphological retinal image datasets can be analyzed in an expansive and non-invasive way, AI, and more precisely Deep Learning (DL), methods are facilitating the development of intelligent retinal image analysis tools. The Convolutional Neural Network (CNN), a recently developed DL technique, has shown remarkable efficiency in identifying, localizing, and quantifying the complex and hierarchical image features responsible for severe cardiovascular diseases. Different deep-layered CNN architectures such as LeNet, AlexNet, and ResNet have been developed by exploiting CNN morphology. This wide variety of CNN structures can iteratively learn the complex data structures of different datasets through supervised or unsupervised learning and independently perform exquisite analysis for feature recognition to diagnose threatening cardiovascular diseases. In modern ophthalmic practice, DL-based automated methods are being used for retinopathy screening, grading, and identifying and quantifying pathological features to inform further therapeutic approaches, offering wide potential to reduce the complexity of ophthalmic care. In this review, recent advances of DL technologies in retinal image segmentation and feature extraction are extensively discussed. To accomplish this study, pertinent materials were extracted from different publicly available databases and online sources using relevant keywords that include retinal imaging, artificial intelligence, deep learning, and retinal database. The reference lists of selected articles were further investigated for associated publications.


2022 · Vol 2022 · pp. 1-12
Author(s): Chuanbao Niu, Mingzhu Zhang

This paper presents an in-depth study and analysis of an image feature extraction technique for ancient ceramic identification using a partial differential equation algorithm. The image features of ancient ceramics are closely related to specific raw material selection and process technology, and complete acquisition of these image features is a prerequisite for achieving image-feature-based identification of ancient ceramics; the quality of the traditional region-growing extraction method is closely tied to the background pixels and does not generalize. In this paper, we propose a deep learning-based extraction method, using Eased as a deep learning support platform, to extract and validate 5834 images of 272 types of ancient ceramics from kilns, celadon, and Yue kilns after manual labelling and training, and the results show that the average complete extraction rate is higher than 99%. The implementation of the deep learning method is summarized and compared with the traditional region-growing extraction method, and the results show that the method is robust as the amount of training data increases and that it generalizes, providing a new way to effectively achieve complete image feature extraction for ancient ceramics. The core idea of the finite difference method is to approximate the partial derivative of a function with respect to a variable by the ratio of the difference between the function values at two adjacent points to the distance between those points, turning a differentiation problem into a difference problem.
Recognition of ancient ceramic image features was realized based on the extraction of the overall image features of the ceramics, the extraction and recognition of vessel-type features, the quantitative recognition of multidimensional feature-fusion ornamentation image features, and a deep learning image feature classification and recognition method based on inscription model recognition. A three-layer B/S-architecture web application system with a cross-platform system language serves as the architectural support, built on database services, deep learning encapsulation, and digital image processing. The specific implementation is based on database services, deep learning encapsulation, digital image processing, and third-party invocation; a service-layer fusion and relearning mechanism is proposed to realize a preliminary intelligent recognition system for ancient ceramic vessel-type and ornamentation image features. The validation test results meet expectations and verify the effectiveness of the ancient ceramic vessel-type and ornamentation image feature recognition system.
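The finite-difference idea described above, approximating a partial derivative by the ratio of a function-value difference to the spacing between two adjacent points, can be sketched as:

```python
# Forward-difference approximation of a partial derivative:
# df/dx ≈ (f(x + h, y) - f(x, y)) / h, turning the differentiation
# problem into a difference between two adjacent sample points.

def partial_x(f, x: float, y: float, h: float = 1e-5) -> float:
    """Approximate ∂f/∂x at (x, y) with a forward difference."""
    return (f(x + h, y) - f(x, y)) / h

def partial_y(f, x: float, y: float, h: float = 1e-5) -> float:
    """Approximate ∂f/∂y at (x, y) with a forward difference."""
    return (f(x, y + h) - f(x, y)) / h

if __name__ == "__main__":
    f = lambda x, y: x * x + 3 * y      # ∂f/∂x = 2x, ∂f/∂y = 3
    print(partial_x(f, 2.0, 1.0))       # ≈ 4.0
    print(partial_y(f, 2.0, 1.0))       # ≈ 3.0
```

Smaller step sizes `h` reduce the truncation error of this approximation, at the cost of amplifying floating-point round-off.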


Sensors · 2021 · Vol 21 (16) · pp. 5312
Author(s): Yanni Zhang, Yiming Liu, Qiang Li, Jianzhong Wang, Miao Qi, et al.

Recently, deep learning-based image deblurring and deraining have been well developed. However, most of these methods fail to distill the useful features. Moreover, exploiting detailed image features in a deep learning framework usually requires a large number of parameters, which inevitably imposes a high computational burden on the network. We propose a lightweight fusion distillation network (LFDN) for image deblurring and deraining to solve the above problems. The proposed LFDN is designed as an encoder–decoder architecture. In the encoding stage, the image feature is reduced to various small-scale spaces for multi-scale information extraction and fusion without much information loss. A feature distillation normalization block is then placed at the beginning of the decoding stage, enabling the network to continuously distill and screen valuable channel information from the feature maps. In addition, an information fusion strategy between distillation modules and feature channels is carried out via an attention mechanism. By fusing different information in the proposed approach, our network can achieve state-of-the-art image deblurring and deraining results with fewer parameters, outperforming existing methods in model complexity.
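The channel-screening idea, weighting feature-map channels by a learned attention gate so that valuable channels are preserved, can be sketched in numpy. This is a generic squeeze-and-excitation-style sketch, not the LFDN's actual block, and the weights are random stand-ins for learned parameters:

```python
import numpy as np

# Channel attention sketch: global-average-pool each channel,
# map the pooled vector through a small bottleneck, and rescale
# the feature map channel-wise with a sigmoid gate in (0, 1).

def channel_attention(fmap: np.ndarray, w1: np.ndarray, w2: np.ndarray) -> np.ndarray:
    """fmap: (C, H, W) feature map; returns the channel-rescaled map."""
    pooled = fmap.mean(axis=(1, 2))              # squeeze: (C,)
    hidden = np.maximum(w1 @ pooled, 0.0)        # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))  # sigmoid gate: (C,)
    return fmap * gate[:, None, None]            # excite: rescale channels

rng = np.random.default_rng(0)
C, H, W = 8, 4, 4
fmap = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // 2, C))            # stand-in weights
w2 = rng.standard_normal((C, C // 2))
out = channel_attention(fmap, w1, w2)
```

Because the gate lies strictly in (0, 1), low-scoring channels are attenuated rather than hard-dropped, which is the "screening" behaviour described above.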


2021 · Vol 2083 (4) · pp. 042007
Author(s): Xiaowen Liu, Juncheng Lei

Abstract: Image recognition technology mainly comprises image feature extraction and classification recognition. Feature extraction is the key link, which determines whether recognition performance is good or bad. Deep learning builds a model with a hierarchical structure like the human brain, extracting features from the data layer by layer. Applying deep learning to image recognition can further improve its accuracy. Based on the idea of clustering, this article establishes a multi-mixture Gaussian model for engineering image information in RGB color space through offline learning and the expectation-maximization algorithm, obtaining a multi-mixture cluster representation of the engineering image information. A sparse Gaussian machine learning model in the YCrCb color space is then used to quickly learn the distribution of engineering images online, and an engineering image recognizer based on multi-color-space information is designed.
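The offline clustering step, a Gaussian mixture fitted with expectation-maximization, can be sketched for a single color channel. The data and component count below are illustrative; a real system would fit the mixture over 3-D RGB pixel vectors:

```python
import numpy as np

# Minimal EM for a 1-D two-component Gaussian mixture, standing in
# for the multi-mixture model fitted to pixel values. E-step assigns
# soft responsibilities; M-step re-estimates weights/means/variances.

def em_gmm_1d(x: np.ndarray, iters: int = 50):
    mu = np.array([x.min(), x.max()], dtype=float)   # spread-out init
    var = np.full(2, x.var())
    pi = np.full(2, 0.5)
    for _ in range(iters):
        # E-step: responsibility of each component for each pixel
        dens = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixture parameters
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Two synthetic intensity clusters around 50 and 200
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(50, 5, 500), rng.normal(200, 5, 500)])
pi, mu, var = em_gmm_1d(x)
```

The recovered means should land near the two cluster centers, which is exactly the "multi-mixture cluster representation" the abstract refers to.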


Sensors · 2021 · Vol 21 (22) · pp. 7467
Author(s): Shih-Lin Lin

Rolling bearings are important in rotating machinery and equipment. This research proposes variational mode decomposition (VMD)-DenseNet to diagnose faults in bearings. The approach analyzes the Hilbert spectrum obtained through VMD, whereby the vibration signal is converted into an image. Healthy bearings and various faults show different characteristics in the image, so there is no need to select features manually. This is coupled with the lightweight DenseNet network for image classification and prediction. DenseNet is used to build a motor fault diagnosis model; its structure is simple, and its calculation speed is fast. Using DenseNet for image feature learning allows feature extraction on each image block, giving full play to the advantages of deep learning to obtain accurate results. The method is verified on data from the time-varying bearing experimental device at the University of Ottawa. Through the four stages of signal acquisition, feature extraction, fault identification, and prediction, a mechanical intelligent fault diagnosis system establishes the state of the bearing. The experimental results show that the method can accurately identify four common motor faults, with a VMD-DenseNet prediction accuracy of 92%. It provides a more effective method for bearing fault diagnosis and has a wide range of application prospects in fault diagnosis engineering. In the future, online and timely diagnosis can be achieved for intelligent fault diagnosis.
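The Hilbert-spectrum step, extracting amplitude and instantaneous frequency from a vibration signal, rests on the analytic signal, which can be sketched with an FFT. The test signal below is synthetic, not the Ottawa bearing data:

```python
import numpy as np

# Analytic signal via FFT (the Hilbert-transform step used to build
# a Hilbert spectrum from each VMD mode): zero the negative
# frequencies, double the positive ones, then invert the FFT.

def analytic_signal(x: np.ndarray) -> np.ndarray:
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 60 * t)          # synthetic 60 Hz "vibration"
z = analytic_signal(x)
env = np.abs(z)                          # amplitude envelope
inst_phase = np.unwrap(np.angle(z))
inst_freq = np.diff(inst_phase) * fs / (2 * np.pi)   # instantaneous frequency
```

Plotting amplitude against instantaneous frequency and time yields the Hilbert spectrum image that DenseNet then classifies.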


2019 · Vol 2019 · pp. 1-12
Author(s): Tsun-Kuo Lin

This paper developed a principal component analysis (PCA)-integrated algorithm for feature identification in manufacturing; the algorithm is based on an adaptive PCA-based scheme for identifying image features in vision-based inspection. PCA is a commonly used statistical method for pattern recognition tasks, but an effective PCA-based approach for identifying suitable image features in manufacturing has yet to be developed. Unsuitable image features tend to yield poor results when used in conventional visual inspections. Furthermore, research has revealed that the use of unsuitable or redundant features can degrade the performance of object detection. To address these problems, the adaptive PCA-based algorithm developed in this study identifies suitable image features using a support vector machine (SVM) model for inspecting various object images; this approach can solve the inherent detection problem that occurs when the extraction contains challenging image features in manufacturing processes. Experimental results indicated that the proposed algorithm can successfully select appropriate image features adaptively. The algorithm combines image feature extraction with PCA/SVM classification to detect patterns in manufacturing and was determined to achieve high-performance detection, outperforming existing methods.
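The PCA stage, projecting image feature vectors onto the directions of greatest variance before a classifier such as an SVM is trained, can be sketched in numpy with synthetic data:

```python
import numpy as np

# PCA via eigen-decomposition of the covariance matrix: center the
# feature vectors, then project onto the top-k eigenvectors. A
# downstream SVM would be trained on the reduced features.

def pca_fit_transform(X: np.ndarray, k: int):
    """X: (n_samples, n_features). Returns (projection, components)."""
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (len(X) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalues
    components = eigvecs[:, ::-1][:, :k]     # top-k variance directions
    return Xc @ components, components

rng = np.random.default_rng(0)
# 200 samples, 10 features, with variance concentrated in 2 directions
X = rng.standard_normal((200, 2)) @ rng.standard_normal((2, 10))
X += 0.01 * rng.standard_normal((200, 10))
Z, comps = pca_fit_transform(X, k=2)
```

The "adaptive" part of the paper's scheme, choosing which projected features to keep, would sit between this projection and the SVM; the sketch covers only the PCA projection itself.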


2014 · Vol 513-517 · pp. 2268-2272
Author(s): Fu Chao Cheng, Fang Miao, Wen Hui Yang

In existing distributed edge extraction methods based on MapReduce, inappropriate dataset-split algorithms lead to the loss of image features in the result. We present a distributed computing platform called Split Process Cluster (SPC) to resolve this problem. In SPC, images are partitioned with the resilient image pyramid model (RIP), a multi-layer and redundant data structure we presented earlier, to ensure the integrity of the original image features. SPC packages the image data as key-value pairs, which can be processed through Hadoop, and reduces the results with the density-based spatial clustering of applications with noise (DBSCAN) algorithm. Compared with the traditional method, the image feature extraction rate using SPC is improved, which indicates that SPC is an efficient way to improve the extraction rate of distributed edge extraction.
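The reduce step's clustering of edge points with DBSCAN can be sketched with a minimal single-machine implementation; the point set and parameters below are illustrative:

```python
import numpy as np

# Minimal DBSCAN: points with at least min_pts neighbours (including
# themselves) within radius eps are core points; clusters grow by
# expanding from core points. Noise points keep the label -1.

def dbscan(points: np.ndarray, eps: float, min_pts: int) -> np.ndarray:
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    neighbours = [np.flatnonzero(dist[i] <= eps) for i in range(n)]
    labels = np.full(n, -1)
    cluster = 0
    for i in range(n):
        if labels[i] != -1 or len(neighbours[i]) < min_pts:
            continue                       # already labelled, or not core
        labels[i] = cluster
        stack = list(neighbours[i])
        while stack:                       # expand the cluster
            j = stack.pop()
            if labels[j] == -1:
                labels[j] = cluster
                if len(neighbours[j]) >= min_pts:
                    stack.extend(neighbours[j])
        cluster += 1
    return labels

# Two dense blobs of edge points plus one isolated noise point
pts = np.array([[0, 0], [0.1, 0], [0, 0.1],
                [5, 5], [5.1, 5], [5, 5.1],
                [20, 20]], dtype=float)
labels = dbscan(pts, eps=0.5, min_pts=3)
```

In SPC the mappers would emit edge points as key-value pairs and a reducer would run this clustering over each partition's points; the pairwise-distance matrix here is an O(n²) simplification for small inputs.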


2021 · Vol 2021 · pp. 1-9
Author(s): Chao Zhang, Haojin Hu, Yonghang Tai, Lijun Yun, Jun Zhang

When fusing infrared and visible images in wireless applications, the secure extraction and transmission of characteristic information is an important task. The fused image quality depends on the effectiveness of feature extraction and on the transmission of the image pair's characteristics. However, most fusion approaches based on deep learning do not make effective use of the features for image fusion, which results in missing semantic content in the fused image. In this paper, a novel trustworthy image fusion method is proposed to address these issues; it applies convolutional neural networks for feature extraction and blockchain technology to protect sensitive information. The new method effectively reduces the loss of feature information by feeding the output of each convolutional layer of the feature extraction network to the next layer together with the output of the previous layer, and, to ensure similarity between the fused image and the original image, the original input image's feature map is used as the input of the reconstruction network. Compared with other methods, the experimental results show that our proposed method achieves better quality and better satisfies human perception.
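The layer-wiring idea, each layer receiving earlier layers' outputs alongside its own input so feature information is not lost, can be sketched as dense concatenation in numpy. The "layers" here are 1×1 channel-mixing operations with random stand-in weights, not the paper's trained network:

```python
import numpy as np

# Dense connectivity sketch: each layer consumes the concatenation of
# all earlier feature maps, so the channel count grows by `growth`
# per layer and no earlier feature information is discarded.

def dense_block(fmap: np.ndarray, n_layers: int, growth: int, rng) -> np.ndarray:
    features = fmap                                  # (C, H, W)
    for _ in range(n_layers):
        c_in = features.shape[0]
        w = rng.standard_normal((growth, c_in)) * 0.1  # stand-in weights
        # 1x1 "convolution" (channel mix) followed by ReLU
        new = np.maximum(np.einsum('oc,chw->ohw', w, features), 0.0)
        features = np.concatenate([features, new], axis=0)
    return features

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))       # 4-channel input feature map
out = dense_block(x, n_layers=3, growth=2, rng=rng)
```

Because the input channels survive every concatenation untouched, the original feature map remains available to the reconstruction stage, which is the property the method relies on.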


Complexity · 2021 · Vol 2021 · pp. 1-10
Author(s): Tianming Song, Xiaoyang Yu, Shuang Yu, Zhe Ren, Yawei Qu

Medical image technology is becoming more and more important in the medical field. It not only provides important information about the body's internal organs for clinical analysis and medical treatment but also assists doctors in diagnosing and treating various diseases. However, medical image feature extraction suffers from problems such as inconspicuous features and low feature preparation rates. Combined with the learning ideas of convolutional neural networks, the image's multifeature vectors are quantized at a deeper level, making the image features more abstract; this not only makes up for the one-sidedness of a single feature description but also improves the robustness of the feature descriptors. This paper presents a medical image processing method based on multifeature fusion, which extracts features effectively from medical images of the chest, lungs, brain, and liver and can better express the feature relationships of medical images. Experimental results show that the accuracy of the proposed method is more than 5% higher than that of other methods, demonstrating its better performance.
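The multifeature fusion idea, combining several per-image descriptors into a single vector so no single feature description dominates, can be sketched as normalized concatenation. The descriptors below (intensity histogram and gradient statistics) are simple stand-ins, not the paper's actual features:

```python
import numpy as np

# Fuse complementary descriptors of one image into a single vector:
# L2-normalize each descriptor so no single feature dominates by
# scale, then concatenate. Descriptors here are illustrative.

def l2_normalize(v: np.ndarray) -> np.ndarray:
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def fuse_features(image: np.ndarray) -> np.ndarray:
    # Descriptor 1: 16-bin intensity histogram
    hist, _ = np.histogram(image, bins=16, range=(0.0, 1.0))
    # Descriptor 2: gradient magnitude statistics (edge information)
    gy, gx = np.gradient(image.astype(float))
    grad_stats = np.array([np.abs(gx).mean(), np.abs(gy).mean(),
                           gx.std(), gy.std()])
    return np.concatenate([l2_normalize(hist.astype(float)),
                           l2_normalize(grad_stats)])

rng = np.random.default_rng(0)
img = rng.random((32, 32))               # synthetic grayscale "scan"
vec = fuse_features(img)
```

Per-descriptor normalization before concatenation is the design choice that prevents a large-magnitude feature (such as raw histogram counts) from swamping a small one (such as gradient statistics) in any downstream distance or classifier.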

