Research on Feature Extracted Method for Flutter Test Based on EMD and CNN

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Hua Zheng ◽  
Zhenglong Wu ◽  
Shiqiang Duan ◽  
Jiangtao Zhou

Due to the inevitable deviations between the results of theoretical calculations and physical experiments, flutter tests and flutter signal analysis often play significant roles in designing the aeroelasticity of a new aircraft. The measured structural response from aeroelastic models in both wind tunnel tests and real flight flutter tests contains an abundance of structural information, but traditional methods tend to have limited ability to extract features of concern. Inspired by deep learning concepts, a novel feature extraction method for flutter signal analysis was established in this study by combining the convolutional neural network (CNN) with empirical mode decomposition (EMD). It is widely hypothesized that when flutter occurs, the measured structural signals are harmonic or divergent in the time domain, and that the flutter mode (1) is singular and (2) its energy increases significantly in the frequency domain. A measured-signal feature extraction and flutter criterion framework was constructed accordingly. The measured signals from a wind tunnel test were manually labeled “flutter” and “no-flutter” as the foundational dataset for the deep learning algorithm. After normalized preprocessing, the intrinsic mode functions (IMFs) of the flutter test signals are obtained by the EMD method. The IMFs are then reshaped to a suitable size for input to the CNN. The CNN parameters are optimized through the training dataset, and the trained model is validated through the test dataset (i.e., cross-validation). The accuracy rate of the proposed method reached 100% on the test dataset. The trained model appears to effectively distinguish whether or not the structural response signal contains flutter. The combination of EMD and CNN provides effective feature extraction of time series signals in flutter test data. This research explores the connection between structural response signals and flutter from the perspective of artificial intelligence.
The method allows for real-time, online prediction with low computational complexity.
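The preprocessing pipeline the abstract describes (normalization, EMD into IMFs, reshaping for the CNN) can be sketched as below. This is a minimal numpy sketch, not the authors' code: the "IMFs" are mocked with scaled copies of a divergent test signal rather than produced by a real EMD sifting routine, and the 32×32 target size is an illustrative assumption.

```python
import numpy as np

def normalize(signal):
    """Zero-mean, unit-variance preprocessing of a response signal."""
    return (signal - signal.mean()) / signal.std()

def reshape_imfs_for_cnn(imfs, height, width):
    """Stack IMFs and fold each into a 2-D channel the CNN can consume.

    `imfs` is an (n_imfs, n_samples) array; each IMF row is truncated
    to height*width samples and reshaped into one (height, width) channel.
    """
    channels = []
    for imf in imfs:
        imf = imf[: height * width]
        channels.append(imf.reshape(height, width))
    return np.stack(channels, axis=-1)  # (height, width, n_imfs)

# Hypothetical stand-in for EMD output: 3 "IMFs" of a divergent response.
t = np.linspace(0, 1, 1024)
signal = np.exp(2 * t) * np.sin(2 * np.pi * 25 * t)
imfs = np.stack([signal, 0.5 * signal, 0.1 * signal])
x = reshape_imfs_for_cnn(np.array([normalize(s) for s in imfs]), 32, 32)
print(x.shape)  # (32, 32, 3)
```

A real pipeline would replace the mocked IMF stack with the output of an EMD implementation and feed `x` to the CNN as a multi-channel image.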

2019 ◽  
Vol 2019 ◽  
pp. 1-8 ◽  
Author(s):  
Shiqiang Duan ◽  
Hua Zheng ◽  
Junhao Liu

Necessary simplifications in model calculation, uncertainty in the actual wind tunnel test, and data acquisition system error together produce discrepancies between actual experimental results and theoretical design results; wind tunnel flutter test data can be used to feed this error back. In this study, a signal processing method was established to classify flutter signals from the structural response of an aeroelastic model via a deep learning algorithm. This novel flutter signal processing and classification method combines a convolutional neural network (CNN) with time-frequency analysis. Flutter characteristics appear in both time and frequency domains: the signals are harmonic or divergent in the time series, and the flutter mode energy is singular and increases significantly in the frequency view, so the features of the time-frequency diagram can be extracted by a dataset-trained CNN model. As the foundation of the subsequent deep learning algorithm, the dataset is a collection of time-frequency diagrams calculated by short-time Fourier transform (STFT), each labeled with one of two artificial states, flutter or no flutter, depending on the source of the signal measured from a wind tunnel test on the aeroelastic model. After preprocessing, a cross-validation schedule is implemented to update (and optimize) CNN parameters through the training dataset. The trained models were compared against test datasets to validate their reliability and robustness. Our results indicate that the accuracy rate on the test datasets reaches 90%. The trained models can effectively and automatically distinguish whether or not there is flutter in the measured signals.
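The time-frequency diagrams fed to the CNN come from an STFT. A minimal windowed-FFT sketch follows; the frame length and hop size are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def stft_magnitude(signal, frame_len=128, hop=64):
    """Minimal short-time Fourier transform: Hann-windowed frames -> |rFFT|.

    Returns an (n_frames, frame_len//2 + 1) magnitude array, i.e. the
    time-frequency diagram that would be rendered and fed to the CNN.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

# A divergent (flutter-like) test tone: energy grows along the time axis.
t = np.linspace(0, 1, 1024)
spec = stft_magnitude(np.exp(3 * t) * np.sin(2 * np.pi * 50 * t))
print(spec.shape)  # (15, 65)
```

In the divergent case the later frames carry far more energy than the earlier ones, which is exactly the pattern the CNN learns to separate from the no-flutter class.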


Tomography ◽  
2021 ◽  
Vol 7 (4) ◽  
pp. 950-960
Author(s):  
Aymen Meddeb ◽  
Tabea Kossen ◽  
Keno K. Bressem ◽  
Bernd Hamm ◽  
Sebastian N. Nagel

The aim of this study was to develop a deep learning-based algorithm for fully automated spleen segmentation using CT images and to evaluate its performance in conditions directly or indirectly affecting the spleen (e.g., splenomegaly, ascites). For this, a 3D U-Net was trained on an in-house dataset (n = 61) including diseases with and without splenic involvement (in-house U-Net), and an open-source dataset from the Medical Segmentation Decathlon (open dataset, n = 61) without splenic abnormalities (open U-Net). Both datasets were split into a training (n = 32; 52%), a validation (n = 9; 15%) and a testing dataset (n = 20; 33%). The segmentation performances of the two models were measured using four established metrics, including the Dice Similarity Coefficient (DSC). On the open test dataset, the in-house and open U-Net achieved a mean DSC of 0.906 and 0.897, respectively (p = 0.526). On the in-house test dataset, the in-house U-Net achieved a mean DSC of 0.941, whereas the open U-Net obtained a mean DSC of 0.648 (p < 0.001), showing very poor segmentation results in patients with abnormalities in or surrounding the spleen. Thus, for reliable, fully automated spleen segmentation in clinical routine, the training dataset of a deep learning-based algorithm should include conditions that directly or indirectly affect the spleen.
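The Dice Similarity Coefficient used to score both U-Nets has a standard definition (twice the overlap divided by the total foreground). A minimal numpy sketch on toy masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice Similarity Coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1   # 16-pixel "spleen" label
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1   # shifted prediction
print(round(dice(b, a), 3))  # overlap 9 px -> 2*9/32 = 0.562
```

The same function extends directly to 3D volumes, since numpy's boolean operations are shape-agnostic.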


2021 ◽  
Vol 13 (9) ◽  
pp. 1779
Author(s):  
Xiaoyan Yin ◽  
Zhiqun Hu ◽  
Jiafeng Zheng ◽  
Boyong Li ◽  
Yuanyuan Zuo

Radar beam blockage is an important error source that affects the quality of weather radar data. An echo-filling network (EFnet) is proposed based on a deep learning algorithm to correct the echo intensity in the occlusion area of the Nanjing S-band new-generation weather radar (CINRAD/SA). The training dataset is constructed from labels, which are the echo intensities at the 0.5° elevation in the unblocked area, and from input features, which are the intensities in a cube spanning multiple elevations and gates corresponding to the location of the bottom labels. Two loss functions are applied to compile the network: one is the common mean square error (MSE), and the other is a self-defined loss function that increases the weight of strong echoes. Considering that the radar beam broadens with distance and height, the 0.5° elevation scan is divided into six range bands of 25 km each to train different models. The models are evaluated by three indicators: explained variance (EVar), mean absolute error (MAE), and correlation coefficient (CC). Two cases are demonstrated to compare the effect of the echo-filling model under the different loss functions. The results suggest that EFnet can effectively correct the echo reflectivity and improve the data quality in the occlusion area, and the self-defined loss function gives better results for strong echoes.
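The self-defined loss can be sketched as an MSE whose per-gate weight is raised for strong echoes. The 35 dBZ threshold and the 4x weight below are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def weighted_mse(y_true, y_pred, threshold=35.0, strong_weight=4.0):
    """MSE variant that up-weights gates whose label is a strong echo
    (>= threshold dBZ), so errors there dominate the gradient."""
    weights = np.where(y_true >= threshold, strong_weight, 1.0)
    return np.mean(weights * (y_true - y_pred) ** 2)

y_true = np.array([10.0, 20.0, 40.0, 50.0])   # dBZ labels
y_pred = np.array([12.0, 18.0, 35.0, 44.0])
print(weighted_mse(y_true, y_pred))  # 63.0: the two strong gates dominate
```

Under plain MSE the same errors average to 21.25, so the weighting roughly triples the penalty attributable to the strong-echo gates.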


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mu Sook Lee ◽  
Yong Soo Kim ◽  
Minki Kim ◽  
Muhammad Usman ◽  
Shi Sub Byon ◽  
...  

Abstract We examined the feasibility of explainable computer-aided detection of cardiomegaly in routine clinical practice using segmentation-based methods. Overall, 793 retrospectively acquired posterior–anterior (PA) chest X-ray images (CXRs) of 793 patients were used to train deep learning (DL) models for lung and heart segmentation. The training dataset included PA CXRs from two public datasets and in-house PA CXRs. Two fully automated segmentation-based methods using state-of-the-art DL models for lung and heart segmentation were developed. The diagnostic performance was assessed and the reliability of the automatic cardiothoracic ratio (CTR) calculation was determined using the mean absolute error and paired t-test. The effects of thoracic pathological conditions on performance were assessed using subgroup analysis. One thousand PA CXRs of 1000 patients (480 men, 520 women; mean age 63 ± 23 years) were included. The CTR values derived from the DL models and diagnostic performance exhibited excellent agreement with reference standards for the whole test dataset. Performance of segmentation-based methods differed based on thoracic conditions. When tested using CXRs with lesions obscuring heart borders, the performance was lower than that for other thoracic pathological findings. Thus, segmentation-based methods using DL could detect cardiomegaly; however, the feasibility of computer-aided detection of cardiomegaly without human intervention was limited.
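The automatic CTR calculation divides the widest heart diameter by the widest thoracic diameter, both read off the segmentation masks. A minimal sketch on hypothetical toy masks (not the paper's implementation):

```python
import numpy as np

def width(mask):
    """Maximal horizontal extent of a binary mask, in pixels."""
    cols = np.where(mask.any(axis=0))[0]
    return int(cols.max() - cols.min() + 1) if cols.size else 0

def cardiothoracic_ratio(heart_mask, lung_mask):
    """CTR = widest heart diameter / widest thoracic (lung) diameter."""
    return width(heart_mask) / width(lung_mask)

lungs = np.zeros((10, 10), dtype=bool); lungs[:, 1:9] = True    # thorax, 8 px
heart = np.zeros((10, 10), dtype=bool); heart[4:8, 3:8] = True  # heart, 5 px
ctr = cardiothoracic_ratio(heart, lungs)
print(round(ctr, 3))  # 0.625
```

A CTR above 0.5 on a PA radiograph is the conventional cut-off for cardiomegaly, which is why accurate borders matter: lesions obscuring the heart border corrupt `width(heart_mask)` directly.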


Animals ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1549
Author(s):  
Robert D. Chambers ◽  
Nathanael C. Yoder ◽  
Aletha B. Carson ◽  
Christian Junge ◽  
David E. Allen ◽  
...  

Collar-mounted canine activity monitors can use accelerometer data to estimate dog activity levels, step counts, and distance traveled. With recent advances in machine learning and embedded computing, much more nuanced and accurate behavior classification has become possible, giving these affordable consumer devices the potential to improve the efficiency and effectiveness of pet healthcare. Here, we describe a novel deep learning algorithm that classifies dog behavior at sub-second resolution using commercial pet activity monitors. We built machine learning training databases from more than 5000 videos of more than 2500 dogs and ran the algorithms in production on more than 11 million days of device data. We then surveyed project participants representing 10,550 dogs, who provided 163,110 event responses to validate real-world detection of eating and drinking behavior. The resultant algorithm displayed high sensitivity and specificity for detecting drinking behavior (0.949 and 0.999, respectively) and eating behavior (0.988, 0.983). We also demonstrated detection of licking (0.772, 0.990), petting (0.305, 0.991), rubbing (0.729, 0.996), scratching (0.870, 0.997), and sniffing (0.610, 0.968). We show that the devices’ position on the collar had no measurable impact on performance. In production, users reported a true positive rate of 95.3% for eating (among 1514 users), and of 94.9% for drinking (among 1491 users). The study demonstrates the accurate detection of important health-related canine behaviors using a collar-mounted accelerometer. We trained and validated our algorithms on a large and realistic training dataset, and we assessed and confirmed accuracy in production via user validation.
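The sensitivity/specificity pairs reported above follow the standard confusion-matrix definitions. A minimal sketch with toy labels (1 = behavior present):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy drinking-event labels vs. hypothetical model output.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens)  # 0.75 (3 of 4 true events detected)
```

For rare behaviors the base rate makes specificity near 1.0 essential; even a 1% false-positive rate would swamp the true events in 11 million days of data.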


2020 ◽  
Vol 22 (Supplement_2) ◽  
pp. ii148-ii148
Author(s):  
Yoshihiro Muragaki ◽  
Yutaka Matsui ◽  
Takashi Maruyama ◽  
Masayuki Nitta ◽  
Taiichi Saito ◽  
...  

Abstract INTRODUCTION It is useful to know the molecular subtype of lower-grade gliomas (LGG) when deciding on a treatment strategy. This study aims to diagnose this preoperatively. METHODS A deep learning model was developed to predict the 3-group molecular subtype using multimodal data including magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT). The performance was evaluated using leave-one-out cross validation with a dataset containing information from 217 LGG patients. RESULTS The model performed best when the dataset contained MRI, PET, and CT data. The model could predict the molecular subtype with an accuracy of 96.6% for the training dataset and 68.7% for the test dataset. The model achieved test accuracies of 58.5%, 60.4%, and 59.4% when the dataset contained only MRI, MRI and PET, and MRI and CT data, respectively. The conventional method used to predict mutations in the isocitrate dehydrogenase (IDH) gene and the codeletion of chromosome arms 1p and 19q (1p/19q) sequentially had an overall accuracy of 65.9%. This is 2.8 percentage points lower than the proposed method, which predicts the 3-group molecular subtype directly. CONCLUSIONS AND FUTURE PERSPECTIVE A deep learning model was developed to diagnose the molecular subtype preoperatively based on multi-modality data in order to predict the 3-group classification directly. Cross-validation showed that the proposed model had an overall accuracy of 68.7% for the test dataset. This is the first model to double the expected (chance-level) accuracy of a 3-group classification problem when predicting the LGG molecular subtype. We plan to apply heat map and/or segmentation techniques to increase prediction accuracy.
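Leave-one-out cross-validation, as used on the 217-patient dataset, trains on n-1 samples and tests on the single held-out one, n times over. A minimal sketch in which a nearest-centroid toy classifier on hypothetical 1-D features stands in for the deep model:

```python
def leave_one_out_accuracy(X, y, train_fn, predict_fn):
    """Leave-one-out CV: each sample is held out once; accuracy is the
    fraction of held-out samples predicted correctly."""
    correct = 0
    for i in range(len(X)):
        X_train = X[:i] + X[i + 1:]
        y_train = y[:i] + y[i + 1:]
        model = train_fn(X_train, y_train)
        correct += predict_fn(model, X[i]) == y[i]
    return correct / len(X)

def train_fn(X, y):
    """Per-class mean of the training features (nearest-centroid model)."""
    classes = sorted(set(y))
    return {c: sum(x for x, t in zip(X, y) if t == c) /
               sum(1 for t in y if t == c) for c in classes}

def predict_fn(centroids, x):
    return min(centroids, key=lambda c: abs(centroids[c] - x))

# Hypothetical, well-separated feature values for two subtype labels.
X = [1.0, 1.2, 0.9, 5.0, 5.3, 4.8]
y = ["IDHmut", "IDHmut", "IDHmut", "IDHwt", "IDHwt", "IDHwt"]
print(leave_one_out_accuracy(X, y, train_fn, predict_fn))  # 1.0
```

The same loop structure applies unchanged when `train_fn` fits a deep network per fold; only the cost per fold grows.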


Sensors ◽  
2020 ◽  
Vol 20 (8) ◽  
pp. 2424 ◽  
Author(s):  
Md Atiqur Rahman Ahad ◽  
Thanh Trung Ngo ◽  
Anindya Das Antar ◽  
Masud Ahmed ◽  
Tahera Hossain ◽  
...  

Wearable sensor-based systems and devices have expanded into different application domains, especially the healthcare arena. Automatic age and gender estimation has several important applications. Gait has been demonstrated as a profound motion cue for various applications. A gait-based age and gender estimation challenge was launched at the 12th IAPR International Conference on Biometrics (ICB), 2019. In this competition, 18 teams from 14 countries initially registered. The goal of this challenge was to find smart approaches to age and gender estimation from sensor-based gait data. For this purpose, we employed a large wearable sensor-based gait dataset with 745 subjects (357 females and 388 males), from 2 to 78 years old, in the training dataset, and 58 subjects (19 females and 39 males) in the test dataset. It covers several walking patterns. The gait data sequences were collected from three IMUZ sensors, which were placed on a waist belt or at the top of a backpack. Ten teams submitted 67 solutions for age and gender estimation. This paper extensively analyzes the methods and achieved results of the various approaches. Based on this analysis, we found that deep learning-based solutions led the competition compared with conventional handcrafted methods. The best result achieved a 24.23% prediction error for gender estimation and a 5.39 mean absolute error for age estimation by employing an angle-embedded gait dynamic image and a temporal convolution network.
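The two headline metrics follow their usual definitions: gender is scored by classification error rate, age by mean absolute error in years. A minimal sketch with hypothetical labels:

```python
def gender_error_rate(y_true, y_pred):
    """Fraction of mis-classified gender labels (the 24.23% metric)."""
    return sum(t != p for t, p in zip(y_true, y_pred)) / len(y_true)

def age_mae(y_true, y_pred):
    """Mean absolute error in years (the 5.39 metric)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

# Hypothetical subject labels vs. predictions.
genders_true = ["F", "M", "F", "M"]
genders_pred = ["F", "M", "M", "M"]
ages_true = [10, 25, 40, 60]
ages_pred = [12, 22, 45, 58]
print(gender_error_rate(genders_true, genders_pred))  # 0.25
print(age_mae(ages_true, ages_pred))  # (2+3+5+2)/4 = 3.0
```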


2021 ◽  
Vol 13 (18) ◽  
pp. 3691
Author(s):  
Fei Yang ◽  
Meng Wang

Heat waves may negatively impact the economy and human life under global warming. The use of air conditioners can reduce the vulnerability of humans to heat wave disasters. However, air conditioner usage has not been clear until now. Traditional registration investigation methods are cumbersome and require expensive labor and time. This study used the Labelme image tagging tool and an available street view image database first to establish a monographic dataset for detecting external air conditioner unit features, and then applied two deep learning algorithms, Mask-RCNN and YOLOv5, to automatically retrieve air conditioners. The training dataset used street view images of the 2nd Ring Road area of downtown Beijing. The evaluation mAP of Mask-RCNN and YOLOv5 reached 0.99 and 0.9428, respectively. In comparison, the performance of YOLOv5 was superior, which is attributed to the YOLOv5 model being better at detecting smaller target entities, being equipped with a lighter network structure and an enhanced feature extraction network. We demonstrated the feasibility of using street view images to retrieve air conditioners and showed their great potential for detecting air conditioners in the future.
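The mAP evaluation for both detectors rests on intersection-over-union between predicted and ground-truth boxes. A minimal sketch with corner-format boxes (the 0.5 threshold is the common convention, not a value the study states):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes.

    mAP evaluation counts a detection as a true positive when its IoU
    with a ground-truth box exceeds a threshold (commonly 0.5).
    """
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Hypothetical ground-truth AC unit vs. a slightly shifted detection.
print(round(iou((10, 10, 30, 30), (15, 15, 35, 35)), 3))  # 0.391
```

Small targets like AC units are punishing under this metric: a shift of a few pixels costs proportionally much more IoU than it would for a large object, which is consistent with the authors' point about YOLOv5's advantage on small entities.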


2021 ◽  
Author(s):  
Jingyuan Wang ◽  
Xiujuan Chen ◽  
Yueshuai Pan ◽  
Kai Chen ◽  
Yan Zhang ◽  
...  

Abstract Purpose: To develop and verify an early prediction model of gestational diabetes mellitus (GDM) using machine learning algorithms. Methods: The dataset was collected from a pregnant cohort study in eastern China, from 2017 to 2019. It was randomly divided into 75% as the training dataset and 25% as the test dataset using the train_test_split function. Based on Python, four classic machine learning algorithms and a New-Stacking algorithm were first trained on the training dataset and then verified on the test dataset. The four models were Logistic Regression (LR), Random Forest (RF), Artificial Neural Network (ANN) and Support Vector Machine (SVM). Sensitivity, specificity, accuracy, and the area under the Receiver Operating Characteristic curve (AUC) were used to analyse the performance of the models. Results: Valid information from a total of 2811 pregnant women was obtained. The accuracies of the models ranged from 80.09% to 86.91% (RF), sensitivities from 63.30% to 81.65% (SVM), specificities from 79.38% to 97.53% (RF), and AUCs from 0.80 to 0.82 (New-Stacking). Conclusion: The New-Stacking model performed best in specificity, accuracy and AUC; however, because the SVM model achieved the highest sensitivity, it is recommended as the prediction model for clinical use.
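The 75%/25% split and the stacking idea can be sketched without scikit-learn. `split_75_25` below mirrors `train_test_split(test_size=0.25)` with a fixed seed (an assumption for reproducibility), and `stack_features` shows the core of stacking: base models' predicted probabilities become the meta-learner's input columns.

```python
import numpy as np

def split_75_25(n, seed=0):
    """Random 75%/25% index split over n samples."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    cut = int(0.75 * n)
    return idx[:cut], idx[cut:]

def stack_features(base_probs):
    """Stacking: one column per base model's predicted probabilities,
    forming the meta-learner's training matrix."""
    return np.column_stack(base_probs)

train_idx, test_idx = split_75_25(2811)
print(len(train_idx), len(test_idx))  # 2108 703

# Hypothetical probabilities from two base models for two patients.
meta_X = stack_features([np.array([0.2, 0.9]), np.array([0.4, 0.8])])
print(meta_X.shape)  # (2, 2)
```

In a full pipeline the base-model probabilities in `stack_features` would come from out-of-fold predictions to avoid leaking training labels into the meta-learner.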

