A Geometry-Based Deep Learning Feature Extraction Scheme for Airfoils

Author(s):  
Yu Xiang ◽  
Liwei Hu ◽  
Jun Zhang ◽  
Wenyong Wang

Abstract The perception of geometric features of airfoils is the basis for performance prediction, parameterization, aircraft inverse design, and other tasks in aerodynamics. There are three approaches to perceiving the geometric shape of an airfoil: manual design of airfoil geometry parameters, polynomial definition, and deep learning. The first two methods can directly extract geometric features of airfoils or polynomial equations of airfoil curves, but the number of features extracted is limited. Deep learning algorithms can extract a large number of potential features (called latent features), but these features lack explicit geometric meaning. Motivated by the advantages of polynomial definition and deep learning, we propose a geometry-based deep learning feature extraction scheme (named Bézier-based feature extraction, BFE) for airfoils, which consists of two parts: manifold metric feature extraction and a geometry-feature fusion encoder (GF encoder). Manifold metric feature extraction, with the help of the Bézier curve, captures features from the tangent space of airfoil curves, and the GF encoder combines airfoil coordinate data and manifold metrics to form a novel feature representation. The public UIUC airfoil dataset is used to verify the proposed BFE. Compared with a classic Auto-Encoder, the mean square error (MSE) of BFE is reduced by 17.97%~29.14%.

2022 ◽  
Author(s):  
Yu Xiang ◽  
Liwei Hu ◽  
Jun Zhang ◽  
Wenyong Wang

Abstract The perception of geometric features of airfoils is the basis for performance prediction, parameterization, aircraft inverse design, and other tasks in aerodynamics. There are three approaches to perceiving the geometric shape of airfoils: manual design of airfoil geometry parameters, polynomial definition, and deep learning. The first two methods directly define geometric features or polynomials of airfoil curves, but the number of extracted features is limited. Deep learning algorithms can extract a large number of potential features (called latent features); however, the features extracted by deep learning lack explicit geometric meaning. Motivated by the advantages of polynomial definition and deep learning, we propose a geometric-feature extraction method (named Bézier-based feature extraction, BFE) for airfoils, which consists of two parts: manifold metric feature extraction and a geometric-feature fusion encoder (GF encoder). Manifold metric feature extraction, with the help of the Bézier curve, captures manifold metrics (a sort of geometric feature) from the tangent space of airfoil curves, and the GF encoder combines airfoil coordinate data and manifold metrics to form novel fused geometric features. To validate the feasibility of the fused geometric features, two experiments based on the public UIUC airfoil dataset are conducted. Experiment I extracts manifold metrics of airfoils and exports the fused geometric features. Experiment II, based on multi-task learning (MTL), fuses the discrepant data (i.e., the fused geometric features and the flight conditions) to predict the aerodynamic performance of airfoils. The results show that BFE can generate smoother and more realistic airfoils than an Auto-Encoder, and the fused geometric features extracted by BFE reduce the prediction errors of C_L and C_D.
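The tangent-space idea behind manifold metric extraction can be illustrated with a minimal Bézier sketch: a degree-n Bézier curve is evaluated with de Casteljau's algorithm, and its tangent is itself a degree-(n-1) Bézier over scaled control-point differences. The control points below are illustrative only, not taken from the paper's parameterization.

```python
import numpy as np

def bezier_point(ctrl, t):
    """Evaluate a Bézier curve at parameter t via de Casteljau's algorithm."""
    pts = np.asarray(ctrl, dtype=float)
    while len(pts) > 1:
        pts = (1 - t) * pts[:-1] + t * pts[1:]
    return pts[0]

def bezier_tangent(ctrl, t):
    """Tangent vector: the derivative of a degree-n Bézier is a
    degree-(n-1) Bézier over the scaled control-point differences."""
    pts = np.asarray(ctrl, dtype=float)
    n = len(pts) - 1
    deriv_ctrl = n * (pts[1:] - pts[:-1])
    return bezier_point(deriv_ctrl, t)

# Illustrative control points loosely shaped like an upper airfoil surface.
ctrl = [(0.0, 0.0), (0.0, 0.1), (0.5, 0.15), (1.0, 0.0)]
mid = bezier_point(ctrl, 0.5)
tan = bezier_tangent(ctrl, 0.5)
```

Sampling `bezier_tangent` along the curve would give the kind of tangent-space quantity the abstract calls a manifold metric.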


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6762
Author(s):  
Jung Hyuk Lee ◽  
Geon Woo Lee ◽  
Guiyoung Bong ◽  
Hee Jeong Yoo ◽  
Hong Kook Kim

Autism spectrum disorder (ASD) is a developmental disorder with lifelong disability. Diagnostic instruments have been developed and qualified based on how accurately they discriminate children with ASD from typically developing (TD) children, but the stability of such procedures can be undermined by time expenses and the subjectivity of clinicians. Consequently, automated diagnostic methods have been developed to acquire objective measures of autism. Vocal characteristics have not only been reported by clinicians as distinctive, but have also shown promising performance in several studies that use deep learning models for the automated discrimination of children with ASD from children with TD. However, difficulties remain in terms of the characteristics of the data, the complexity of the analysis, and the lack of curated data caused by low accessibility for diagnosis and the need to secure anonymity. To address these issues, we introduce a pre-trained feature extraction auto-encoder model and a joint optimization scheme, which achieve robustness to widely distributed and unrefined data in a deep-learning-based method for the detection of autism that utilizes various models. By adopting this auto-encoder-based feature extraction and joint optimization on the extended Geneva minimalistic acoustic parameter set (eGeMAPS) speech feature data set, we achieve improved performance in the detection of ASD in infants compared to the raw data set.
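Joint optimization of this kind is commonly expressed as a weighted sum of a reconstruction objective (for the auto-encoder) and a classification objective (for the detector). The sketch below is a generic illustration under that assumption; the weighting `alpha` and the exact loss forms are not from the paper.

```python
import numpy as np

def joint_loss(x, x_rec, y_true, y_prob, alpha=0.5):
    """Illustrative joint objective: reconstruction MSE plus binary
    cross-entropy, blended by a hypothetical weight alpha."""
    rec = np.mean((x - x_rec) ** 2)
    eps = 1e-12  # guard against log(0)
    ce = -np.mean(y_true * np.log(y_prob + eps)
                  + (1 - y_true) * np.log(1 - y_prob + eps))
    return alpha * rec + (1 - alpha) * ce
```

Minimizing such a combined loss lets the encoder's features stay faithful to the input while remaining discriminative for the downstream classifier.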


Author(s):  
Chunmian Lin ◽  
Lin Li ◽  
Zhixing Cai ◽  
Kelvin C. P. Wang ◽  
Danny Xiao ◽  
...  

Automated lane marking detection is essential for advanced driver assistance systems (ADAS) and pavement management work. However, prior research has mostly detected lane marking segments from front-view images, which easily suffer from occlusion or noise disturbance. In this paper, we aim at accurate and robust lane marking detection from a top-view perspective and propose a deep learning-based detector with an adaptive anchor scheme, referred to as A2-LMDet. On the one hand, it is an end-to-end framework that fuses feature extraction and object detection into a single deep convolutional neural network. On the other hand, the adaptive anchor scheme is designed around a bilinear interpolation algorithm and is used to guide specific anchor-box generation and informative feature extraction. To validate the proposed method, a newly built lane marking dataset containing 24,000 high-resolution laser imaging samples is developed for a case study. Quantitative and qualitative results demonstrate that A2-LMDet achieves highly accurate performance with 0.9927 precision, 0.9612 recall, and a 0.9767 F1 score, outperforming other advanced methods by a considerable margin. Moreover, ablation analysis illustrates the effectiveness of the adaptive anchor scheme for enhancing feature representation and improving performance. We expect our work to support the development of related research.
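The bilinear interpolation underlying the adaptive anchor scheme can be sketched in a few lines: a value at a continuous coordinate is a distance-weighted blend of the four surrounding grid cells. This is the standard algorithm only, not the paper's anchor-generation logic.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly interpolate a 2-D array at continuous (x, y),
    clamping at the image border."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1 = min(x0 + 1, img.shape[1] - 1)
    y1 = min(y0 + 1, img.shape[0] - 1)
    dx, dy = x - x0, y - y0
    top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
    bot = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
    return (1 - dy) * top + dy * bot
```

Sampling feature maps at sub-pixel anchor positions this way is what allows anchors to adapt smoothly rather than snapping to the grid.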


2017 ◽  
Vol 2017 ◽  
pp. 1-9 ◽  
Author(s):  
Xiaojun Lu ◽  
Xu Duan ◽  
Xiuping Mao ◽  
Yuanyuan Li ◽  
Xiangde Zhang

This paper proposes a method that uses feature fusion to better represent images for face detection after feature extraction by deep convolutional neural networks (DCNNs). First, we learn features from data with Clarifai net and VGG Net-D (16 layers), respectively; then we fuse the features extracted from the two nets. To obtain a more compact feature representation and mitigate computational complexity, we reduce the dimension of the fused features by PCA. Finally, we conduct face classification with a binary SVM classifier. In particular, we exploit offset max-pooling to extract features densely with a sliding window, which yields better matches between faces and detection windows and thus more accurate detection results. Experimental results show that our method can detect faces with severe occlusion and large variations in pose and scale. In particular, our method achieves an 89.24% recall rate on FDDB and 97.19% average precision on AFW.
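The fuse-then-reduce step can be sketched generically: concatenate the two networks' feature matrices and project onto the top principal components via SVD. The shapes and `k` below are illustrative; the paper's feature dimensions are not reproduced here.

```python
import numpy as np

def fuse_and_reduce(feat_a, feat_b, k):
    """Concatenate two per-sample feature sets (rows = samples) and
    project onto the top-k principal components (PCA via SVD on the
    centered fused matrix)."""
    fused = np.hstack([feat_a, feat_b])
    centered = fused - fused.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:k].T

rng = np.random.default_rng(0)
a = rng.normal(size=(10, 4))   # stand-in for Clarifai-net features
b = rng.normal(size=(10, 6))   # stand-in for VGG Net-D features
z = fuse_and_reduce(a, b, 3)
```

The reduced matrix `z` would then be fed to the SVM classifier.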


2020 ◽  
Vol 12 (2) ◽  
pp. 280 ◽  
Author(s):  
Liqin Liu ◽  
Zhenwei Shi ◽  
Bin Pan ◽  
Ning Zhang ◽  
Huanlin Luo ◽  
...  

In recent years, deep learning technology has been widely used in the field of hyperspectral image classification and has achieved good performance. However, deep networks need a large number of training samples, which conflicts with the limited labeled samples of hyperspectral images. Traditional deep networks usually treat each pixel as an independent sample, ignoring the integrity of the hyperspectral data, and methods based on feature extraction are likely to lose the edge information that plays a crucial role in pixel-level classification. To overcome the limited annotated samples, we propose a new three-channel image construction method (virtual RGB image) by which networks trained on natural images are used to extract spatial features. Through the trained network, the hyperspectral data are processed as a whole. Meanwhile, we propose a multiscale feature fusion method to combine both detailed and semantic characteristics, thus improving classification accuracy. Experiments show that the proposed method outperforms state-of-the-art methods. In addition, the virtual RGB image can be extended to other hyperspectral processing methods that require three-channel images.
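One simple way to build a pseudo-RGB image from a hyperspectral cube is to average disjoint groups of bands into three channels; the sketch below uses equal thirds of the spectrum, which is an assumption, since the paper's exact band-grouping rule may differ.

```python
import numpy as np

def virtual_rgb(cube):
    """Collapse a hyperspectral cube (H, W, B) into a three-channel
    image by averaging thirds of the spectral axis."""
    h, w, b = cube.shape
    thirds = np.array_split(np.arange(b), 3)
    chans = [cube[:, :, idx].mean(axis=2) for idx in thirds]
    return np.stack(chans, axis=-1)
```

The resulting (H, W, 3) array has the shape a natural-image-pretrained network expects, which is the point of the virtual RGB construction.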


Diagnostics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 27 ◽  
Author(s):  
Omneya Attallah ◽  
Maha A. Sharkas ◽  
Heba Gadelkarim

The increasing rates of neurodevelopmental disorders (NDs) are threatening pregnant women, parents, and clinicians caring for healthy infants and children. NDs can initially start through embryonic development due to several reasons. Up to three in 1000 pregnant women have embryos with brain defects; hence, the primitive detection of embryonic neurodevelopmental disorders (ENDs) is necessary. Related work done for embryonic ND classification is very limited and is based on conventional machine learning (ML) methods for feature extraction and classification processes. Feature extraction of these methods is handcrafted and has several drawbacks. Deep learning methods have the ability to deduce an optimum demonstration from the raw images without image enhancement, segmentation, and feature extraction processes, leading to an effective classification process. This article proposes a new framework based on deep learning methods for the detection of END. To the best of our knowledge, this is the first study that uses deep learning techniques for detecting END. The framework consists of four stages which are transfer learning, deep feature extraction, feature reduction, and classification. The framework depends on feature fusion. The results showed that the proposed framework was capable of identifying END from embryonic MRI images of various gestational ages. To verify the efficiency of the proposed framework, the results were compared with related work that used embryonic images. The performance of the proposed framework was competitive. This means that the proposed framework can be successively used for detecting END.


2021 ◽  
Vol 13 (10) ◽  
pp. 1912
Author(s):  
Zhili Zhang ◽  
Meng Lu ◽  
Shunping Ji ◽  
Huafen Yu ◽  
Chenhui Nie

Accurately extracting water bodies from very high resolution (VHR) remote sensing imagery is a great challenge. The boundaries of a water body are commonly hard to identify due to complex spectral mixtures caused by aquatic vegetation, distinct lake/river colors, silt near the bank, shadows from surrounding tall plants, and so on. The diversity and semantic information of features need to be increased for better extraction of water bodies from VHR remote sensing images. In this paper, we address these problems by designing a novel multi-feature extraction and combination module. This module consists of three feature extraction sub-modules based on spatial and channel correlations in feature maps at each scale, which extract complete target information from the local space, the larger space, and the between-channel relationship to achieve a rich feature representation. Simultaneously, to better predict the fine contours of water bodies, we adopt a multi-scale prediction fusion module. Besides, to resolve the semantic inconsistency of feature fusion between the encoding and decoding stages, we apply an encoder-decoder semantic feature fusion module to improve fusion effects. We carry out extensive experiments on VHR aerial and satellite imagery. The results show that our method achieves state-of-the-art segmentation performance, surpassing classic and recent methods, and our proposed method is robust in challenging water-body extraction scenarios.
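A channel-correlation sub-module can be sketched in squeeze-and-excite style: pool each channel globally, normalize the pooled vector, and rescale the channels. This is a generic stand-in for the paper's channel sub-module, not its actual architecture.

```python
import numpy as np

def channel_attention(fmap):
    """Toy channel reweighting for a feature map of shape (H, W, C):
    global-average-pool per channel, softmax the pooled vector, and
    rescale each channel by its weight."""
    pooled = fmap.mean(axis=(0, 1))        # (C,)
    w = np.exp(pooled - pooled.max())      # numerically stable softmax
    w = w / w.sum()
    return fmap * w                        # broadcast over H, W
```

In a real network the pooled vector would pass through learned layers before rescaling; the softmax here just makes the reweighting concrete.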


Entropy ◽  
2020 ◽  
Vol 22 (4) ◽  
pp. 484 ◽  
Author(s):  
Jose-Agustin Almaraz-Damian ◽  
Volodymyr Ponomaryov ◽  
Sergiy Sadovnychiy ◽  
Heydy Castillejos-Fernandez

In this paper, a new Computer-Aided Detection (CAD) system for the detection and classification of dangerous skin lesions (melanoma type) is presented, based on a fusion of handcrafted features related to the medical ABCD rule (Asymmetry, Borders, Colors, Dermatoscopic structures) and deep learning features, employing Mutual Information (MI) measurements. The steps of the CAD system can be summarized as preprocessing, feature extraction, feature fusion, and classification. During preprocessing, a lesion image is enhanced, filtered, and segmented to obtain the Region of Interest (ROI); in the next step, feature extraction is performed. Handcrafted features such as shape, color, and texture represent the ABCD rule, and deep learning features are extracted using a Convolutional Neural Network (CNN) architecture pre-trained on ImageNet (the ILSVRC task). MI measurement is used as the fusion rule, gathering the most important information from both types of features. Finally, in the classification step, several methods are employed, such as Linear Regression (LR), Support Vector Machines (SVMs), and Relevance Vector Machines (RVMs). The designed framework was tested on the public ISIC 2018 dataset. The proposed framework demonstrates improved performance in comparison with other state-of-the-art methods in terms of the accuracy, specificity, and sensitivity obtained in the training and test stages. Additionally, we propose and justify a novel procedure for adjusting evaluation metrics on imbalanced datasets, which are common for different kinds of skin lesions.
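Mutual information between two feature vectors can be estimated from a joint histogram, which is one common way such an MI-based selection rule is implemented; the binning choice below is an assumption, not the paper's procedure.

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram-based MI estimate (in nats) between two 1-D feature
    vectors: I(X;Y) = sum p(x,y) log( p(x,y) / (p(x) p(y)) )."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)    # marginal over y
    py = pxy.sum(axis=0, keepdims=True)    # marginal over x
    nz = pxy > 0                           # skip empty cells
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

Features whose MI with the target (or with each other) is high or low can then be kept or discarded according to the fusion rule.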


2019 ◽  
Vol 11 (24) ◽  
pp. 3006 ◽  
Author(s):  
Yafei Lv ◽  
Xiaohan Zhang ◽  
Wei Xiong ◽  
Yaqi Cui ◽  
Mi Cai

Remote sensing image scene classification (RSISC) is an active task in the remote sensing community and has attracted great attention due to its wide applications. Recently, deep convolutional neural network (CNN)-based methods have achieved a remarkable breakthrough in RSISC performance. However, feature representations are still not discriminative enough, mainly because of inter-class similarity and intra-class diversity. In this paper, we propose an efficient end-to-end local-global-fusion feature extraction (LGFFE) network for a more discriminative feature representation. Specifically, global and local features are extracted from the channel and spatial dimensions, respectively, based on a high-level feature map from deep CNNs. For the local features, a novel recurrent neural network (RNN)-based attention module is first proposed to capture spatial layout and context information across different regions. Gated recurrent units (GRUs) are then exploited to generate an importance weight for each region by taking a sequence of features from image patches as input. A reweighted regional feature representation is obtained by focusing on the key regions. The final feature representation is then acquired by fusing the local and global features. The whole process of feature extraction and feature fusion can be trained end to end. Finally, extensive experiments on four public and widely used datasets show that LGFFE outperforms baseline methods and achieves state-of-the-art results.
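The regional reweighting step can be sketched independently of the GRU: given per-region feature vectors and importance scores (which the paper's GRU would produce), softmax-normalize the scores and take the weighted sum. The scores here are plain inputs for illustration.

```python
import numpy as np

def reweight_regions(region_feats, scores):
    """Combine per-region feature vectors (R, D) into one descriptor
    using softmax-normalized importance scores (R,)."""
    w = np.exp(scores - scores.max())   # numerically stable softmax
    w = w / w.sum()
    return (region_feats * w[:, None]).sum(axis=0)
```

With uniform scores this reduces to averaging the regions; higher scores shift the descriptor toward the key regions, which is the module's intent.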

