A Multi-Feature Fusion Based on Transfer Learning for Chicken Embryo Eggs Classification

Symmetry ◽  
2019 ◽  
Vol 11 (5) ◽  
pp. 606 ◽  
Author(s):  
Lvwen Huang ◽  
Along He ◽  
Mengqun Zhai ◽  
Yuxi Wang ◽  
Ruige Bai ◽  
...  

The fertility detection of Specific Pathogen Free (SPF) chicken embryo eggs in vaccine preparation is a challenging task due to the high similarity among six kinds of hatching embryos (weak, hemolytic, crack, infected, infertile, and fertile). This paper first analyzes the two classification difficulties posed by feature similarity with subtle variations among the six kinds of five- to seven-day embryos, and proposes a novel multi-feature fusion method based on a Deep Convolutional Neural Network (DCNN) architecture for a small dataset. To avoid overfitting, data augmentation is employed to generate enough training images after the Region of Interest (ROI) is cropped from each original image. All the augmented ROI images are then fed into pretrained AlexNet and GoogLeNet models to learn discriminative deep features by transfer learning. After the local Speeded Up Robust Features (SURF) and Histogram of Oriented Gradients (HOG) features are extracted, multi-feature fusion of the deep and local features is performed. Finally, a Support Vector Machine (SVM) is trained on the fused features. Experiments show that the proposed method achieves an average classification accuracy of 98.4%, and that the proposed transfer learning approach has superior generalization and better classification performance for small-scale agricultural image samples.
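The fuse-then-classify step described above can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: the random arrays stand in for AlexNet/GoogLeNet activations and pooled SURF/HOG descriptors, and the labels are synthetic.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-ins for the per-image feature vectors described in the abstract:
# deep features from two pretrained CNNs plus local SURF/HOG descriptors.
n = 120
deep_alex = rng.normal(size=(n, 64))    # stand-in for AlexNet activations
deep_goog = rng.normal(size=(n, 64))    # stand-in for GoogLeNet activations
local_feats = rng.normal(size=(n, 32))  # stand-in for pooled SURF + HOG

# Synthetic two-class labels that depend on the deep features, so the
# fused classifier has something learnable.
y = (deep_alex[:, 0] + deep_goog[:, 0] > 0).astype(int)

# Multi-feature fusion by concatenation, followed by scaling and an SVM.
fused = np.hstack([deep_alex, deep_goog, local_feats])
fused = StandardScaler().fit_transform(fused)

clf = SVC(kernel="linear").fit(fused[:100], y[:100])
acc = clf.score(fused[100:], y[100:])
```

In practice each block would come from a real feature extractor, and the kernel and regularization would be tuned by cross-validation.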

Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4365
Author(s):  
Kwangyong Jung ◽  
Jae-In Lee ◽  
Nammoon Kim ◽  
Sunjin Oh ◽  
Dong-Wook Seo

Radar target classification is an important task in missile defense systems. State-of-the-art studies using the micro-Doppler frequency have been conducted to classify space object targets. However, existing studies rely heavily on feature extraction methods; therefore, the generalization performance of the classifier is limited and there is room for improvement. Recently, popular approaches to improving classification performance build a convolutional neural network (CNN) architecture with the help of transfer learning and use a generative adversarial network (GAN) to enlarge the training dataset. However, these methods still have drawbacks. First, they use only one feature to train the network, so they cannot guarantee that the classifier learns robust target characteristics. Second, it is difficult to obtain large amounts of data that accurately mimic real-world target features through GAN-based augmentation rather than simulation. To mitigate these problems, we propose a transfer-learning-based parallel network that takes the spectrogram and the cadence velocity diagram (CVD) as its inputs. In addition, we build an electromagnetic (EM) simulation-based dataset: the radar-received signal is simulated for a variety of dynamics using the shooting-and-bouncing-rays concept with relative aspect angles, rather than the scattering center reconstruction method. Evaluated on this dataset, the proposed method achieves about 0.01 to 0.39% higher accuracy than pre-trained networks with a single input feature.
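The two input features can be illustrated with a toy signal. This sketch assumes a simulated micro-Doppler return (a sinusoidally modulated tone, not the paper's EM simulation); the CVD is computed in the standard way, as an FFT of the spectrogram along slow time.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
# Toy radar return: a tone with sinusoidal micro-Doppler modulation,
# standing in for a precessing target (not the paper's EM simulation).
sig = np.exp(1j * 2 * np.pi * (50 * t + 10 * np.sin(2 * np.pi * 4 * t)))

# Input feature 1: the micro-Doppler spectrogram (Doppler x slow time).
f, tt, Sxx = spectrogram(sig, fs=fs, nperseg=128, noverlap=96,
                         return_onesided=False)
mag = np.abs(Sxx)

# Input feature 2: the cadence velocity diagram (CVD), obtained by taking
# an FFT of each Doppler bin across slow time.
cvd = np.abs(np.fft.fft(mag, axis=1))
```

In the paper these two arrays feed the two branches of the parallel network.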


Author(s):  
Marina Milosevic ◽  
Dragan Jankovic ◽  
Aleksandar Peulic

In this paper, we present a system based on feature extraction techniques for detecting abnormal patterns in digital mammograms and thermograms. A comparative study of texture-analysis methods is performed for three image groups: mammograms from the Mammographic Image Analysis Society database; digital mammograms from a local database; and thermography images of the breast. We also present a procedure for automatically separating the breast region from the mammograms. Features computed from gray-level co-occurrence matrices are used to evaluate the effectiveness of the textural information possessed by mass regions. A total of 20 texture features are extracted from the region of interest. The ability of the feature set to differentiate abnormal from normal tissue is investigated using a support vector machine classifier, a Naive Bayes classifier, and a K-Nearest Neighbor classifier. To evaluate the classification performance, five-fold cross-validation and receiver operating characteristic analysis were performed.
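A gray-level co-occurrence matrix and a few Haralick-style features of the kind used here can be computed directly in NumPy. This is a minimal sketch on a tiny pre-quantized patch; the paper extracts 20 such features from the region of interest.

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def texture_features(p):
    """Three of the classic features computed from a GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    energy = np.sum(p ** 2)
    return contrast, homogeneity, energy

# A toy 4x4 patch already quantized to 4 gray levels.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
feats = texture_features(p)
```

Real mammogram ROIs would first be quantized to a small number of gray levels, and several offsets (dx, dy) would be combined.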


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6793
Author(s):  
Inzamam Mashood Nasir ◽  
Muhammad Attique Khan ◽  
Mussarat Yasmin ◽  
Jamal Hussain Shah ◽  
Marcin Gabryel ◽  
...  

Documents are stored in digital form across many organizations. Printing this amount of data and filing it in folders instead of storing it digitally is impractical, uneconomical, and ecologically unsound, and an efficient way of retrieving data from digitally stored documents is also required. This article presents a real-time supervised learning technique for document classification based on a deep convolutional neural network (DCNN), which aims to reduce the impact of adverse document image issues such as signatures, marks, logos, and handwritten notes. The major steps of the proposed technique are data augmentation, feature extraction using pre-trained neural network models, feature fusion, and feature selection. We propose a novel data augmentation technique that normalizes the imbalanced dataset using the secondary dataset RVL-CDIP. The DCNN features are extracted using the VGG19 and AlexNet networks. The extracted features are fused, and the fused feature vector is optimized by a Pearson correlation coefficient-based technique that selects the informative features while removing the redundant ones. The proposed technique is tested on the Tobacco3482 dataset, where it gives a classification accuracy of 93.1% using a cubic support vector machine classifier, demonstrating its validity.
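The Pearson-correlation-based selection step can be sketched as a greedy filter that keeps a feature only if it is not strongly correlated with any feature already kept. The threshold and the greedy order here are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def select_by_correlation(features, threshold=0.9):
    """Greedy redundancy removal: drop any column whose absolute Pearson
    correlation with an already-kept column exceeds the threshold."""
    corr = np.abs(np.corrcoef(features, rowvar=False))
    keep = []
    for j in range(features.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return features[:, keep], keep

rng = np.random.default_rng(1)
a = rng.normal(size=(200, 1))
b = rng.normal(size=(200, 1))
# Four columns: two independent signals and two near-duplicates of them.
X = np.hstack([a, b, a * 2.0 + 1e-3 * rng.normal(size=(200, 1)), b * -1.0])
reduced, kept = select_by_correlation(X, threshold=0.9)
```

On the toy matrix above, the two redundant columns are removed and only the two independent signals survive.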


2020 ◽  
Vol 8 (6) ◽  
pp. 3823-3832

This work proposes an optimal mapping from the feature space to an inherited subspace using a kernel locality-preserving Fisher discriminant analysis approach that retains the non-zero eigenvalues. The approach is designed by cascading analytical and non-inherited face texture features. The Gabor magnitude feature vector (GMFV) and Gabor phase feature vector (GPFV) are accessed independently. Feature fusion is carried out by cascading the geometrical distance feature vector (GDFV) with the Gabor magnitude and phase vectors. The fused feature space is converted into a low-dimensional inherited space by the kernel locality-preserving Fisher discriminant analysis method, and the projected space is normalized by a suitable normalization technique to prevent dissimilarity between scores. The final scores of the projected domains are fused using the maximum fusion rule. Expressions are classified using Euclidean distance matching and a support vector machine classifier with a radial basis function kernel. Experimental results show that the proposed approach is efficient for dimension reduction and competent for recognition and classification. Its performance is compared with related subspace approaches, and the best average recognition rate achieved is 97.61% for the JAFFE database and 81.48% for the YALE database.
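The score-normalization and maximum-fusion steps can be sketched with hypothetical per-class scores. Min-max normalization is one plausible "suitable normalization technique"; the abstract does not pin down which one is used.

```python
import numpy as np

def min_max_normalize(scores):
    """Rescale a matching-score vector to [0, 1] so that scores from
    different projected domains are comparable before fusion."""
    s = np.asarray(scores, dtype=float)
    return (s - s.min()) / (s.max() - s.min())

# Hypothetical per-class similarity scores from two projected feature
# domains (e.g. Gabor-magnitude and Gabor-phase subspaces).
scores_a = min_max_normalize([12.0, 30.0, 18.0])
scores_b = min_max_normalize([0.3, 0.4, 0.2])

# Maximum fusion rule: keep the larger normalized score for each class.
fused = np.maximum(scores_a, scores_b)
predicted_class = int(np.argmax(fused))
```

Both domains agree on class 1 here, so the fused decision picks it; the point of normalizing first is that the raw score ranges (tens vs. fractions) never dominate the fusion.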


2020 ◽  
Vol 9 (4) ◽  
pp. 238 ◽  
Author(s):  
Zhiqiang Xu ◽  
Yumin Chen ◽  
Fan Yang ◽  
Tianyou Chu ◽  
Hongyan Zhou

The recognition of postearthquake scenes plays an important role in postearthquake rescue and reconstruction. To overcome the over-reliance on expert visual interpretation and the poor recognition performance of traditional machine learning in postearthquake scene recognition, this paper proposes a postearthquake multiple scene recognition (PEMSR) model based on the classical deep learning Single Shot MultiBox Detector (SSD) method. A labeled postearthquake scene dataset is constructed by segmenting acquired remote sensing images, which are classified into six categories: landslide, houses, ruins, trees, clogged, and ponding. Because the original dataset is insufficient and imbalanced, transfer learning and a data augmentation and balancing strategy are utilized in the PEMSR model. The model is evaluated using the metrics of precision, recall, and F1 score. Multiple experiments demonstrate that the PEMSR model performs strongly in postearthquake scene recognition, improving the detection accuracy for each scene over SSD through transfer learning and the data augmentation strategy. In addition, the PEMSR model's average detection time is only 0.4565 s, far less than the 8.3472 s of the traditional Histogram of Oriented Gradients + Support Vector Machine (HOG+SVM) method.
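The balancing side of the augmentation strategy can be sketched as oversampling minority classes with randomly transformed copies. The flip/rotate transforms and the toy patches are illustrative assumptions, not the paper's exact augmentations.

```python
import numpy as np

rng = np.random.default_rng(42)

def augment(img):
    """One random augmentation: horizontal flip or a 90-degree rotation
    (a simplified stand-in for a real augmentation pipeline)."""
    return np.fliplr(img) if rng.random() < 0.5 else np.rot90(img)

def balance(dataset):
    """Oversample minority classes with augmented copies until every
    class has as many samples as the largest one."""
    target = max(len(imgs) for imgs in dataset.values())
    for label, imgs in dataset.items():
        while len(imgs) < target:
            imgs.append(augment(imgs[rng.integers(len(imgs))]))
    return dataset

# Toy imbalanced dataset: class name -> list of 8x8 "image patches".
data = {"landslide": [rng.random((8, 8)) for _ in range(10)],
        "ponding": [rng.random((8, 8)) for _ in range(3)]}
data = balance(data)
```

After balancing, every class contributes equally many samples to training, which is the point of the strategy for the imbalanced postearthquake dataset.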


2020 ◽  
Vol 10 (14) ◽  
pp. 4966 ◽  
Author(s):  
Maryam Nisa ◽  
Jamal Hussain Shah ◽  
Shansa Kanwal ◽  
Mudassar Raza ◽  
Muhammad Attique Khan ◽  
...  

As the number of internet users increases, so does the number of malicious attacks using malware. The detection of malicious code is becoming critical, and existing approaches need to be improved. Here, we propose a feature fusion method that combines the features extracted from the pre-trained AlexNet and Inception-V3 deep neural networks with features obtained by segmentation-based fractal texture analysis (SFTA) of images representing the malware code. We use two distinctive pre-trained models (AlexNet and Inception-V3) for feature extraction; extracting deep convolutional neural network (CNN) features from two models improves malware classification accuracy because the models extract complementary features. This technique produces a fused, multimodal representation of malicious code that is used to classify the grayscale images into 25 malware classes. The features extracted from the malware images are then classified using different variants of support vector machine (SVM), k-nearest neighbor (KNN), decision tree (DT), and other classifiers. To improve the classification results, we also adopt data augmentation based on affine image transforms. The presented method is evaluated on the Malimg malware image dataset, achieving an accuracy of 99.3%, the best among the competing approaches.


2020 ◽  
Vol 32 (2) ◽  
pp. 67-92 ◽  
Author(s):  
Muhammad Sharif ◽  
Muhammad Attique ◽  
Muhammad Zeeshan Tahir ◽  
Mussarat Yasmin ◽  
Tanzila Saba ◽  
...  

Gait is a vital biometric for human identification in the domain of machine learning. In this article, a new method for human gait recognition is implemented based on accurate segmentation and multi-level feature extraction. Four major steps are performed: a) enhancement of the motion region in each frame by a linear transformation in the HSI color space; b) Region of Interest (ROI) detection based on the parallel implementation of optical flow and background subtraction; c) shape and geometric feature extraction and parallel fusion; d) recognition with a multi-class support vector machine (MSVM). The presented approach reduces the error rate and increases the correct classification rate (CCR). Extensive experiments are performed on three datasets, CASIA-A, CASIA-B, and CASIA-C, which present different variations in clothing and carrying conditions. The proposed method achieves maximum recognition rates of 98.6% on CASIA-A, 93.5% on CASIA-B, and 97.3% on CASIA-C.
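Step b) pairs optical flow with background subtraction; the background-subtraction half can be sketched as simple frame differencing. The threshold and the synthetic frames below are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def motion_roi(prev_frame, frame, threshold=25):
    """Background subtraction by frame differencing: return the bounding
    box (y0, y1, x0, x1) of pixels that changed more than the threshold,
    or None if nothing moved."""
    diff = np.abs(frame.astype(int) - prev_frame.astype(int))
    ys, xs = np.nonzero(diff > threshold)
    if ys.size == 0:
        return None
    return ys.min(), ys.max() + 1, xs.min(), xs.max() + 1

# Toy frames: a bright "walker" block moves two columns to the right.
prev_frame = np.zeros((32, 32), dtype=np.uint8)
frame = np.zeros((32, 32), dtype=np.uint8)
prev_frame[10:20, 5:9] = 200
frame[10:20, 7:11] = 200

roi = motion_roi(prev_frame, frame)
```

The box covers the union of where the walker was and where it is; the overlap region (unchanged pixels) contributes nothing to the difference image.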


2021 ◽  
Vol 11 (3) ◽  
pp. 997
Author(s):  
Jiaping Li ◽  
Wai Lun Lo ◽  
Hong Fu ◽  
Henry Shu Hung Chung

Meteorological visibility is an important observational indicator of atmospheric transparency, which matters for transport safety, and estimating visibility accurately from image characteristics is a challenging problem. This paper proposes a transfer learning method for meteorological visibility estimation based on image feature fusion. Unlike existing methods, the proposed method estimates visibility from data processing and feature extraction in selected subregions of the whole image, and therefore has a lower computational load and higher efficiency. All database images are first gray-averaged for the selection of effective subregions and for feature extraction. Effective subregions are extracted around static landmark objects, which provide useful information for visibility estimation. Four feature extraction networks (DenseNet, ResNet50, VGG16, and VGG19) are used to extract features from the subregions. The features extracted by each network are then fed into the proposed support vector regression (SVR) model, which produces visibility estimates for the subregions. Finally, an overall visibility for the whole image is estimated by weighted fusion of the subregion estimates. Experimental results show a visibility estimation accuracy of more than 90%, with high robustness and effectiveness.
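The per-subregion SVR models and the weighted fusion of their estimates can be sketched as follows. The random features stand in for CNN activations, and the fusion weights are hand-picked for illustration; in the paper they come from its weight-fusion scheme.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(7)

# Hypothetical training data: for each of three image subregions, a small
# feature vector (standing in for CNN features) that scales with the true
# visibility value.
n, d = 80, 5
vis = rng.uniform(1.0, 10.0, size=n)        # visibility, e.g. in km
feats = [np.outer(vis, rng.normal(size=d))
         + 0.1 * rng.normal(size=(n, d)) for _ in range(3)]

# One SVR per subregion, mirroring the per-subregion estimation step.
models = [SVR(kernel="linear", C=10.0).fit(f, vis) for f in feats]

# Weighted fusion of the subregion estimates into one overall visibility.
weights = np.array([0.5, 0.3, 0.2])
preds = np.stack([m.predict(f) for m, f in zip(models, feats)])
fused = weights @ preds
```

Because each subregion model sees features that track the same underlying visibility, the fused estimate correlates strongly with the ground truth on this toy data.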


Author(s):  
Balaji Sreenivasulu ◽  
Anjaneyulu Pasala ◽  
Gaikwad Vasanth ◽  
...  

In computer vision, domain adaptation or transfer learning plays an important role because it learns a target classifier from labeled data drawn from a different distribution. Existing research has mostly focused on minimizing the time complexity of neural networks and works effectively on low-level features, but it has failed to address data augmentation time and the cost of labeled data. Moreover, machine learning techniques find it difficult to obtain large amounts of distributed labeled data. In this study, a pre-trained Inception network is fine-tuned with augmented data. The study has two phases: the first investigates the effectiveness of data augmentation for pre-trained Inception networks; the second uses transfer learning to enhance the first phase's results, with a Support Vector Machine (SVM) learning from the features extracted from the Inception layers. Experiments are conducted on a publicly available dataset to estimate the effectiveness of the proposed method. The proposed method achieved 95.23% accuracy, whereas the existing techniques, namely a deep neural network and a traditional convolutional network, achieved 87.32% and 91.32% accuracy, respectively. These results show that the developed method improves accuracy by roughly 4-8% over the existing techniques.


2021 ◽  
Author(s):  
Nur Amirah Abd Hamid ◽  
Mohd Ibrahim Shapiai ◽  
Uzma Batool ◽  
Ranjit Singh Sarban Singh ◽  
Muhamad Kamal Mohammed Amin ◽  
...  

Alzheimer’s disease (AD) is a progressive and irreversible neurodegenerative disease that requires attentive medical evaluation. Diagnosing AD accurately is therefore crucial to providing patients with appropriate treatment that slows the progression of the disease and to facilitating treatment interventions. To date, deep learning by means of convolutional neural networks (CNNs) has been widely used in diagnosing AD. Several well-established CNN architectures have been used for image classification of magnetic resonance imaging (MRI) data, such as LeNet-5, Inception-V4, VGG-16, and Residual Networks. However, these existing deep learning methods lack spatial invariance to the input data because they overlook salient local features of the region of interest (ROI), i.e., the hippocampus. In medical image analysis, local features of MRI images are hard to exploit because the ROI occupies only a small number of pixels. Moreover, CNNs require large datasets to perform well, but only a limited number of MRI images are available for training, which leads to overfitting. We therefore propose a novel deep learning model, without pre-processing techniques, that incorporates an attention mechanism and a global average pooling (GAP) layer into the VGG-16 architecture to capture the salient features of the MRI image for subtle discrimination between AD and normal control (NC) subjects. We also utilize transfer learning to overcome the overfitting issue. Experiments are performed on data collected from the Open Access Series of Imaging Studies (OASIS) database. The binary classification (AD vs. NC) accuracy of the proposed method significantly outperforms the existing methods, exceeding a 12-layer CNN trained from scratch by 1.93% and Inception-V4 with transfer learning by 3.43%. In conclusion, the Attention-GAP model is capable of achieving notable classification accuracy in diagnosing AD.
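The GAP layer and a simplified channel-attention reweighting can be written in a few lines of NumPy. This is an illustrative simplification (a softmax over pooled channel responses), not the paper's exact attention module.

```python
import numpy as np

def global_average_pool(fmap):
    """Global average pooling: collapse each channel's spatial map to a
    single value (channels x height x width -> channels)."""
    return fmap.mean(axis=(1, 2))

def channel_attention(fmap):
    """Minimal squeeze-style attention: reweight channels by a softmax
    over their globally pooled responses."""
    z = global_average_pool(fmap)
    w = np.exp(z - z.max())   # numerically stable softmax
    w /= w.sum()
    return fmap * w[:, None, None]

# Toy feature map with one strongly responding channel.
fmap = np.zeros((4, 7, 7))
fmap[2] = 1.0
out = channel_attention(fmap)
pooled = global_average_pool(out)
```

The attention step amplifies the informative channel relative to the silent ones, and the GAP output is what a final classifier layer would consume in place of a flattened feature map.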

