Application of a Convolutional Neural Network for Wave Mode Identification in a Rotating Detonation Combustor Using High-Speed Imaging

2021
Author(s):
Kristyn B. Johnson
Donald H. Ferguson
Robert S. Tempke
Andrew C. Nix

Abstract Utilizing a neural network, individual down-axis images of combustion waves in a Rotating Detonation Engine (RDE) can be classified according to the number of detonation waves present and their directional behavior. While the ability to identify the number of waves present within individual images might be intuitive, the further classification of wave rotational direction is possible because the detonation wave's profile indicates its angular direction of movement. Deep learning is highly adaptive and can therefore be trained for a variety of image collection methods across RDE study platforms. In this study, a supervised approach is employed in which a series of manually classified images is provided to a neural network to optimize the network's classification performance. These images, referred to as the training set, are individually labeled as one of ten modes present in an experimental RDE. Possible classifications include deflagration; clockwise and counterclockwise variants of co-rotating detonation waves with quantities ranging from one to three waves; and single, double, and triple counter-rotating detonation waves. After training the network, a second set of manually classified images, referred to as the validation set, is used to evaluate the performance of the model. The ability to predict the detonation wave mode in a single image using a trained neural network substantially reduces computational complexity by circumventing the need to evaluate the temporal behavior of individual pixels. Results suggest that while image quality is critical, it is possible to accurately identify the modal behavior of the detonation wave from a single image rather than a sequence of images or signal processing. Successful identification of wave behavior using image classification serves as a stepping stone for further machine learning integration in RDE research and comprehensive real-time diagnostics.
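The ten-mode taxonomy described in the abstract (deflagration, plus one- to three-wave clockwise and counterclockwise co-rotating variants, plus one- to three-wave counter-rotating modes) can be enumerated as a label set. The sketch below is illustrative only; the label strings are assumptions, as the paper does not publish its naming scheme.

```python
# Illustrative enumeration of the ten RDE wave-mode classes named in the
# abstract. Label strings are hypothetical, not the authors' own.

def rde_wave_modes():
    """Return the ten classification labels: deflagration plus one-, two-,
    and three-wave variants of clockwise co-rotating, counterclockwise
    co-rotating, and counter-rotating detonation."""
    modes = ["deflagration"]
    for n in (1, 2, 3):
        modes.append(f"{n}-wave_clockwise")
        modes.append(f"{n}-wave_counterclockwise")
        modes.append(f"{n}-wave_counter-rotating")
    return modes

print(len(rde_wave_modes()))  # 10 classes in total
```

A supervised classifier over these labels would output one such class per down-axis image, matching the single-image prediction task described above.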




Author(s):
Young Hyun Kim
Eun-Gyu Ha
Kug Jin Jeon
Chena Lee
Sang-Sun Han

Objectives: This study aimed to develop a fully automated human identification method based on a convolutional neural network (CNN) with a large-scale dental panoramic radiograph (DPR) dataset. Methods: In total, 2,760 DPRs were collected from 746 subjects, each of whom had 2 to 17 DPRs with varying image characteristics due to dental treatments (tooth extraction, oral surgery, prosthetics, orthodontics) or tooth development. The test dataset comprised the latest DPR of each subject (746 images), and the remaining DPRs (2,014 images) were used for model training. A modified VGG16 model with two fully connected layers was applied for human identification. The proposed model was evaluated with rank-1, rank-3, and rank-5 accuracies, running time, and gradient-weighted class activation mapping (Grad-CAM) visualizations. Results: The model achieved rank-1, rank-3, and rank-5 accuracies of 82.84%, 89.14%, and 92.23%, respectively. Rank-1 accuracy remained above 80% regardless of changes in image characteristics. The average training time was 60.9 sec per epoch, and the prediction time for the 746 test DPRs was short (3.2 sec/image). The Grad-CAM technique verified that the model identified humans by focusing on identifiable dental information. Conclusion: The proposed model showed good performance in fully automatic human identification despite differing image characteristics of DPRs acquired from the same patients. The model is expected to assist experts in fast and accurate identification by comparing large numbers of images and proposing identification candidates at high speed.
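The rank-k accuracies reported above measure how often the true identity appears among the model's k highest-scoring candidates. A minimal sketch of that metric, assuming a hypothetical score matrix (`scores[i][j]` is the model's score for test image i against candidate identity j):

```python
# Minimal sketch of rank-k (top-k) accuracy, the metric reported in the
# abstract. The score matrix and identity indices below are hypothetical.

def rank_k_accuracy(scores, truth, k):
    """Fraction of test images whose true identity index appears among
    the k highest-scoring candidate identities."""
    hits = 0
    for row, true_id in zip(scores, truth):
        # Sort candidate indices by descending score and keep the top k.
        top_k = sorted(range(len(row)), key=lambda j: row[j], reverse=True)[:k]
        if true_id in top_k:
            hits += 1
    return hits / len(truth)

scores = [[0.9, 0.05, 0.05],   # true identity 0 ranked first: rank-1 hit
          [0.2, 0.5, 0.3],     # true identity 1 ranked first: rank-1 hit
          [0.3, 0.4, 0.3]]     # true identity 0 ranked second: rank-2 hit
truth = [0, 1, 0]
print(rank_k_accuracy(scores, truth, 1))  # 2/3 correct at rank-1
print(rank_k_accuracy(scores, truth, 2))  # all 3 correct at rank-2
```

In the study's setting, each row would hold the modified VGG16 model's scores for one held-out DPR against all 746 enrolled subjects.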

