automatic feature extraction
Recently Published Documents

TOTAL DOCUMENTS: 245 (five years: 98)
H-INDEX: 18 (five years: 5)

2022 ◽  
pp. 102358
Author(s):  
Miguel Molina-Moreno ◽  
Iván González-Díaz ◽  
Jon Sicilia ◽  
Georgiana Crainiciuc ◽  
Miguel Palomino-Segura ◽  
...  

2021 ◽  
pp. 1-10
Author(s):  
K. Seethalakshmi ◽  
S. Valli

Deep learning combined with fuzzy logic is highly modular and more accurate. An Adaptive Fuzzy Anisotropic Diffusion Filter (FADF) is used to remove noise from the image while preserving edges and lines and improving smoothing. Edge and noise information are detected through pre-edge detection using fuzzy contrast enhancement, post-edge detection using a fuzzy morphological gradient filter, and a noise detection technique. A Convolutional Neural Network (CNN) with the ResNet-164 architecture is used for automatic feature extraction, and the resulting feature vectors are classified with ANFIS deep learning. The top-1 error rate is reduced from 21.43% to 18.8%, and the top-5 error rate is reduced to 2.68%. The proposed work achieves a high accuracy rate at low computational cost, with a recognition rate of 99.18% and an accuracy of 98.24% on standard datasets. Compared with existing techniques, the proposed work performs better in all aspects, and the experimental results surpass existing methods on the FACES 94, FERET, Yale-B, CMU-PIE, JAFFE, and other state-of-the-art datasets.
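For context, a minimal sketch of the classical Perona-Malik anisotropic diffusion step that an adaptive fuzzy diffusion filter builds on. The fuzzy adaptation described in the abstract (fuzzy contrast enhancement, morphological gradient filter, noise detection) is not reproduced here, and the iteration count and conductance parameter `kappa` are illustrative assumptions.

```python
import numpy as np

def anisotropic_diffusion(img, n_iter=15, kappa=30.0, gamma=0.2):
    """Classical Perona-Malik diffusion: smooths homogeneous regions
    while preserving edges (large gradients diffuse less)."""
    out = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # Finite differences to the four neighbours (np.roll gives
        # periodic boundaries, acceptable for a sketch).
        dn = np.roll(out, 1, axis=0) - out
        ds = np.roll(out, -1, axis=0) - out
        de = np.roll(out, -1, axis=1) - out
        dw = np.roll(out, 1, axis=1) - out
        # Edge-stopping conductance: small where the gradient is large,
        # so edges and lines are preserved while flat areas are smoothed.
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        out += gamma * (cn * dn + cs * ds + ce * de + cw * dw)
    return out

# Example: denoise a synthetic noisy image.
noisy = np.clip(np.eye(64) * 255 + np.random.normal(0, 20, (64, 64)), 0, 255)
denoised = anisotropic_diffusion(noisy)
```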


2021 ◽  
Vol 15 ◽  
Author(s):  
Chao He ◽  
Jialu Liu ◽  
Yuesheng Zhu ◽  
Wencai Du

Classification of electroencephalogram (EEG) signals is a key approach to measuring the rhythmic oscillations of neural activity and is one of the core technologies of brain-computer interface (BCI) systems. However, extracting features from non-linear and non-stationary EEG signals remains a challenging task for current algorithms. With the development of artificial intelligence, various advanced algorithms have been proposed for signal classification in recent years. Among them, deep neural networks (DNNs) have become the most attractive type of method due to their end-to-end structure and powerful capacity for automatic feature extraction. However, it is difficult to collect large-scale datasets in practical BCI applications, which may lead to overfitting or weak generalizability of the classifier. To address these issues, data augmentation (DA) has been proposed as a promising technique for improving the performance of the decoding model. In this article, we review recent studies and developments of various DA strategies for EEG classification based on DNNs. The review consists of three parts: which EEG-based BCI paradigms are used, which types of DA methods are adopted to improve the DNN models, and what accuracy can be obtained. Our survey summarizes the current practices and performance outcomes with the aim of promoting and guiding the deployment of DA for EEG classification in future research and development.
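As an illustration, a minimal sketch of two widely used, label-preserving EEG augmentation transforms (Gaussian noise injection and random temporal shift). The array shapes and parameter values are assumptions for the example, not taken from the reviewed papers.

```python
import numpy as np

def add_gaussian_noise(epoch, noise_scale=0.05, rng=None):
    """Noise injection: add zero-mean Gaussian noise scaled to the
    per-channel standard deviation of a trial (channels x samples)."""
    if rng is None:
        rng = np.random.default_rng()
    sigma = noise_scale * epoch.std(axis=1, keepdims=True)
    return epoch + rng.normal(0.0, 1.0, epoch.shape) * sigma

def random_time_shift(epoch, max_shift=50, rng=None):
    """Temporal shift: circularly shift the trial by a random number of
    samples, a mild perturbation for oscillatory rhythms."""
    if rng is None:
        rng = np.random.default_rng()
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(epoch, shift, axis=1)

# Example: augment a batch of trials shaped (n_trials, n_channels, n_samples).
rng = np.random.default_rng(0)
batch = rng.standard_normal((8, 22, 1000))
augmented = np.stack(
    [random_time_shift(add_gaussian_noise(x, rng=rng), rng=rng) for x in batch]
)
```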


Author(s):  
T. Senthil ◽  
C. Rajan ◽  
J. Deepika

The prediction of characters, text, and digits from handwritten images has drawn the research community's attention to recognition, and the breadth of applications together with the inherent ambiguity of handwriting has made such prediction feasible with Deep Learning (DL) approaches. There are four main steps in handwriting prediction. The first is selecting a dataset appropriate for DL validation in an efficient manner; here, Special Database 1 and Special Database 2, combined and modified by the National Institute of Standards and Technology (NIST), are used. The next is pre-processing of the input handwritten digit data through data normalization and the extraction of efficient features, which yields better prediction accuracy. The proposed idea uses pixel values as features, with an analysis of hyper-parameters, to approach near-human performance. With an SVM, linear and non-linear models are built to extract the appropriate features for further processing; the features are separated and placed into a Bag of Features (BoF), which is used by the next processing stage. Finally, a novel Convolutional Neural Network is built by modifying the network structure with Orthogonal Learning Particle Swarm Optimization (CNN-OLPSO); this modification evolutionarily optimizes the hyper-parameters. The proposed optimizer selects optimal values from the fitness computation and shows better efficiency than various conventional approaches. The novelty of adopting a CNN is to offer a suitable path towards digitalization, preserve the handwritten structure, and support automatic feature extraction with better computational accuracy, while the optimization approach helps to avoid over-fitting and under-fitting. Metrics including accuracy, elapsed time, recall, precision, and F-measure are evaluated. CNN-OLPSO gives better accuracy, a lower error rate, and a shorter execution time (s) than other existing methods. Thus, the proposed model shows a better trade-off in the recognition rate of handwritten digits.
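A minimal sketch of a CNN for 28x28 grayscale digit images of the kind used here. The layer widths are illustrative assumptions, and the OLPSO hyper-parameter search and the SVM/BoF stage described in the abstract are not reproduced.

```python
import torch
import torch.nn as nn

class DigitCNN(nn.Module):
    """Small CNN for 28x28 grayscale digits; the layer sizes are
    illustrative, not the OLPSO-optimized configuration from the paper."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 28 -> 14
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # 14 -> 7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128), nn.ReLU(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# Example forward pass on a dummy batch of digit images.
model = DigitCNN()
logits = model(torch.randn(16, 1, 28, 28))   # -> shape (16, 10)
```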


Author(s):  
Zh. A. Buribayev ◽  
◽  
Zh. Amirgaliyeva ◽  
S. K. Joldasbayev ◽  
M. S. Zhassuzak ◽  
...  

The implementation of robotic systems and digitalization in agriculture are important tasks today. This paper considers the use of pattern recognition and machine learning methods as a computer model of an agricultural harvesting robot. Tomato fruits can be graded by ripeness according to their life cycle, which can be identified by color: green in the growing stage, yellow in the pre-ripening stage, and red when ripe. Conventional skill-based methods cannot meet the precise selection criteria of modern production management in the agricultural sector, as they are time-consuming and of low accuracy, whereas automatic feature extraction with machine learning is highly effective for image classification and recognition tasks. The article therefore presents the results of a study on the recognition of ripe tomato fruits by a robotic system, carried out within the framework of grant project AP08857573 of the Ministry of Education and Science of the Republic of Kazakhstan. Classical algorithms based on the HSV color model and color segmentation with the k-means algorithm are implemented as comparative baselines, and a universal intelligent tomato classification system based on machine learning with YOLOv5 is proposed for practical use. The study aims to provide an inexpensive solution with the best performance and accuracy for assessing tomato ripeness. The results are reported in terms of accuracy, loss curves, and the confusion matrix, and show that the proposed model outperforms other machine learning (ML) methods used for tomato classification problems, providing 99% accuracy.
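For illustration, a minimal OpenCV sketch of an HSV-based baseline of the kind mentioned above, using the red-pixel ratio of a cropped fruit image as a ripeness cue. The hue/saturation thresholds and the ratio cutoff are illustrative assumptions, and the YOLOv5 system itself is not reproduced.

```python
import cv2
import numpy as np

def ripeness_from_hsv(bgr_image, red_ratio_thresh=0.3):
    """Baseline HSV heuristic: classify a cropped tomato image by the
    fraction of red pixels (hue near 0/180 in OpenCV's 0-179 hue range)."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    lower_red = cv2.inRange(hsv, (0, 80, 60), (10, 255, 255))
    upper_red = cv2.inRange(hsv, (170, 80, 60), (179, 255, 255))
    red_mask = cv2.bitwise_or(lower_red, upper_red)
    red_ratio = float(np.count_nonzero(red_mask)) / red_mask.size
    return ("ripe" if red_ratio >= red_ratio_thresh else "unripe"), red_ratio

# Example on a synthetic pure-red patch (BGR ordering).
img = np.zeros((64, 64, 3), dtype=np.uint8)
img[:, :, 2] = 255
label, ratio = ripeness_from_hsv(img)   # -> ("ripe", 1.0)
```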


Author(s):  
Nguyen Chi Thanh

Colonoscopy image classification is an image classification task that predicts whether colonoscopy images contain polyps, and it is an important input for an automatic polyp detection system. Recently, deep neural networks have been widely used for colonoscopy image classification because of their automatic feature extraction and high accuracy. However, training these networks requires a large amount of manually annotated data, which is expensive to acquire and limited by the available resources of endoscopy specialists. We propose a novel method for training colonoscopy image classification networks that uses self-supervised visual feature learning to overcome this challenge. We adapt image denoising as a pretext task for self-supervised visual feature learning from an unlabeled colonoscopy image dataset: noise is added to the image to form the input, and the original image serves as the label. An unlabeled colonoscopy image dataset containing 8,500 images collected from the PACS system of Hospital 103 is used to train the pretext network. The feature extractor of the pretext network, trained in a self-supervised way, is then used for colonoscopy image classification, and a small labeled dataset from the public colonoscopy image dataset Kvasir is used to fine-tune the classifier. Our experiments demonstrate that the proposed self-supervised learning method achieves higher colonoscopy image classification accuracy than a classifier trained from scratch, especially with small training datasets. When a dataset of only 200 annotated images is used to train the classifiers, the proposed method improves accuracy from 72.16% to 93.15% compared with the baseline classifier.
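A minimal sketch of the denoising pretext idea described above: a noisy image goes in, the clean image is the regression target, and the trained encoder is later reused as the feature extractor. The encoder-decoder architecture, noise model, and tensor sizes here are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class DenoisingPretext(nn.Module):
    """Tiny encoder-decoder for a denoising pretext task; the encoder is
    later reused as the feature extractor for polyp/no-polyp classification."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = DenoisingPretext()
criterion = nn.MSELoss()
clean = torch.rand(4, 3, 128, 128)             # stand-in for unlabeled frames
noisy = clean + 0.1 * torch.randn_like(clean)  # additive Gaussian noise
loss = criterion(model(noisy), clean)          # reconstruct the clean image
loss.backward()
# After pretext training, model.encoder is fine-tuned on the small labeled set.
```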


2021 ◽  
Vol 19 (11) ◽  
pp. 126-140
Author(s):  
Zahraa S. Aaraji ◽  
Hawraa H. Abbas

Neuroimaging data analysis has attracted a great deal of attention with respect to the accurate diagnosis of Alzheimer's disease (AD). Magnetic Resonance Imaging (MRI) scanners have thus been commonly used to study AD-related brain structural variations, providing images that demonstrate both morphometric and anatomical changes in the human brain. Deep learning algorithms have already been effectively exploited in other medical image processing applications to identify features and recognise patterns for many diseases that affect the brain and other organs; this paper builds on that work to describe a novel computer-aided software pipeline for the classification and early diagnosis of AD. The proposed method uses two types of three-dimensional Convolutional Neural Networks (3D CNNs) for brain MRI data analysis and automatic feature extraction and classification, with pre-processing and post-processing used to normalise the MRI data and facilitate pattern recognition. The experimental results show that the proposed approach achieves 97.5%, 82.5%, and 83.75% accuracy for the binary classifications AD vs. cognitively normal (CN), CN vs. mild cognitive impairment (MCI), and MCI vs. AD, respectively, as well as 85% accuracy for multi-class classification, based on publicly available data sets from the Alzheimer's Disease Neuroimaging Initiative (ADNI).
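For context, a minimal sketch of a 3D CNN that takes a normalised MRI volume and outputs class scores. The layer configuration, input size, and binary output are illustrative assumptions and not the paper's two-network pipeline.

```python
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Small 3D CNN for volumetric MRI classification (e.g. AD vs. CN);
    depth, widths, and input size here are illustrative assumptions."""
    def __init__(self, num_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(1),          # global pooling -> (32, 1, 1, 1)
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):                      # x: (batch, 1, D, H, W)
        return self.classifier(torch.flatten(self.features(x), 1))

# Example forward pass on a dummy normalised MRI volume.
model = Simple3DCNN()
logits = model(torch.randn(2, 1, 96, 112, 96))   # -> shape (2, 2)
```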


Author(s):  
He Wang ◽  
Xinshan Zhu ◽  
Pinyin Chen ◽  
Yuxuan Yang ◽  
Chao Ma ◽  
...  

The electroencephalogram (EEG) signal, as a data carrier that can contain a large amount of information about the human brain in different states, is one of the most widely used metrics for assessing human psychophysiological states. Among a variety of analysis methods, deep learning, especially the convolutional neural network (CNN), has achieved remarkable results in recent years as a method for effectively extracting features from EEG signals. Although deep learning offers automatic feature extraction and effective classification, it also faces difficulties in network structure design and requires a great deal of prior knowledge. Automating the design of these hyperparameters can therefore save experts' time and manpower, and neural architecture search (NAS) techniques have emerged for this purpose. In this paper, we build on an existing gradient-based NAS algorithm, PC-DARTS, with targeted improvements and optimizations for the characteristics of EEG signals. Specifically, we establish the model architecture step by step on the basis of manually designed deep learning models for EEG discrimination, retaining the framework of the search algorithm while performing targeted optimization of the model search space. Corresponding features are extracted separately according to the frequency-domain and time-domain characteristics of the EEG signal and the spatial positions of the EEG electrodes. The architecture was applied to EEG-based emotion recognition and driver drowsiness assessment tasks. The results show that, compared with existing methods, the model architecture obtained in this paper achieves competitive overall accuracy and a lower standard deviation in both tasks. This approach is therefore an effective migration of NAS technology into the field of EEG analysis and has great potential to provide high-performance results for other types of classification and prediction tasks, effectively reducing the time cost for researchers and facilitating the application of CNNs in more areas.
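A minimal sketch of the core idea behind gradient-based NAS methods such as DARTS/PC-DARTS: each edge of the search cell computes a softmax-weighted mixture of candidate operations, and the mixture weights (architecture parameters) are learned by gradient descent alongside the network weights. The candidate operations and tensor shapes are illustrative assumptions; PC-DARTS's partial-channel sampling and the EEG-specific search space are not reproduced.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One edge of a DARTS-style search cell: the output is a softmax-weighted
    sum of candidate operations, and alpha holds the architecture parameters
    optimised jointly with the network weights."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                            # skip connection
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),  # 3x3 conv
            nn.Conv2d(channels, channels, kernel_size=5, padding=2),  # 5x5 conv
            nn.AvgPool2d(kernel_size=3, stride=1, padding=1),         # average pooling
        ])
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        return sum(w * op(x) for w, op in zip(weights, self.ops))

# Example: a feature map shaped like a (channels x electrodes x time) EEG tensor.
edge = MixedOp(channels=16)
out = edge(torch.randn(4, 16, 8, 64))
# After the search, the operation with the largest alpha is kept on this edge.
```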


2021 ◽  
Vol 11 (12) ◽  
pp. 3103-3109
Author(s):  
G. Prema Arokia Mary ◽  
N. Suganthi ◽  
M. S. Hema

The early diagnosis of Parkinson's Disease (PD) is a challenging task for doctors. Currently, there are no dedicated diagnostic tests to predict the onset of PD; instead, PD is identified through repeated clinical trials and tests, and early prediction on that basis can be tedious. Computer-aided prediction can help medical professionals predict PD accurately at its onset stages and improve PD patients' quality of life, so early prediction of PD is essential. In this article, a Convolutional Neural Network (CNN) is proposed to classify PD patients and healthy individuals. Brain MRI images are given as input to the proposed methodology: the CNN first extracts features from the images and then classifies PD patients and healthy individuals from the extracted features. The automatic feature extraction improves the accuracy of the classifier and reduces human error. The brain MRI images are taken from the PPMI dataset for experimentation. Sensitivity, specificity, and accuracy are calculated to assess the performance of the proposed methodology, and the loss is also calculated to verify the performance of the classifier. It is observed that the CNN classifier produces an accuracy of more than 98% in classifying PD patients and healthy individuals, higher than a multi-layer perceptron deep learning model.
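For reference, a minimal sketch of how the reported metrics (sensitivity, specificity, accuracy) are derived from a binary confusion matrix. The example labels and predictions are made up for illustration, not PPMI results.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from a 2x2 confusion matrix
    (label 1 = PD patient, label 0 = healthy individual)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)          # true positive rate (PD detected)
    specificity = tn / (tn + fp)          # true negative rate (controls kept)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy

# Example with made-up predictions.
y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 0])
print(binary_metrics(y_true, y_pred))     # -> (0.75, 0.75, 0.75)
```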

