Off-Design Performance Prediction of a S-CO2 Turbine Based on Field Reconstruction Using Deep-Learning Approach

2020 ◽ Vol 10 (14) ◽ pp. 4999
Author(s): Dongbo Shi, Lei Sun, Yonghui Xie

The reliable design of the supercritical carbon dioxide (S-CO2) turbine is central to advanced S-CO2 power generation technology. However, S-CO2 turbine design optimization usually relies on traditional computational fluid dynamics (CFD) solvers, which have high computational cost, large memory requirements, and long run times. In this research, a flexible end-to-end deep learning approach is presented for off-design performance prediction of the S-CO2 turbine based on physical-field reconstruction. The approach consists of three steps: first, an optimal design of a 60,000 rpm S-CO2 turbine is established. Second, five design variables for off-design analysis are selected, and a deconvolutional neural network reconstructs the temperature and pressure fields on the blade surface from them. Finally, a convolutional neural network predicts the power and efficiency of the turbine from the reconstructed fields. The results show that the prediction approach not only outperforms five classical machine learning models but also reflects the physical mechanisms of turbine design. In addition, once the deep model is well trained, GPU-accelerated inference can quickly predict the physical fields and performance. The approach requires little human intervention and has the advantages of being universal, flexible, and easy to implement.
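A minimal PyTorch sketch of the kind of two-stage pipeline this abstract describes: a deconvolutional network maps the five off-design variables to blade-surface fields, and a CNN maps those fields to power and efficiency. The field resolution, channel counts, and layer sizes are illustrative assumptions, not the authors' architecture.

```python
# Sketch: design variables -> reconstructed (T, p) fields -> (power, efficiency).
# All dimensions below are assumptions for illustration only.
import torch
import torch.nn as nn

class FieldReconstructor(nn.Module):
    """Deconvolutional network: 5 off-design variables -> 2-channel (T, p) blade-surface field."""
    def __init__(self, n_vars=5, field_channels=2):
        super().__init__()
        self.fc = nn.Linear(n_vars, 256 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 4 -> 8
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
            nn.ConvTranspose2d(32, field_channels, 4, stride=2, padding=1),   # 32 -> 64
        )

    def forward(self, x):
        return self.deconv(self.fc(x).view(-1, 256, 4, 4))

class PerformancePredictor(nn.Module):
    """CNN: reconstructed (T, p) fields -> turbine power and efficiency."""
    def __init__(self, field_channels=2, n_outputs=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(field_channels, 32, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),              # 32 -> 16
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, n_outputs)

    def forward(self, fields):
        return self.head(self.conv(fields).flatten(1))

# End-to-end use on a batch of hypothetical off-design conditions.
design_vars = torch.rand(8, 5)
fields = FieldReconstructor()(design_vars)        # (8, 2, 64, 64)
performance = PerformancePredictor()(fields)      # (8, 2): power, efficiency
```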

Geophysics ◽ 2019 ◽ Vol 84 (6) ◽ pp. V333-V350
Author(s): Siwei Yu, Jianwei Ma, Wenlong Wang

Traditional seismic noise attenuation algorithms depend on signal models and their corresponding prior assumptions. In contrast, a deep neural network for noise removal is trained on a large training set in which the inputs are the raw data and the corresponding outputs are the desired clean data. After training is complete, the deep-learning (DL) method achieves adaptive denoising with no requirement for (1) accurate modeling of the signal and noise or (2) optimal parameter tuning. We call this intelligent denoising. We use a convolutional neural network (CNN) as the basic tool for DL. For random and linear noise attenuation, the training set is generated by artificially adding noise; for multiple attenuation, the training set is generated with the acoustic wave equation. Stochastic gradient descent is used to solve for the optimal parameters of the CNN. The runtime of DL on a graphics processing unit for denoising is of the same order as that of the f-x deconvolution method. Synthetic and field results indicate the potential of DL for automatic attenuation of random noise (with unknown variance), linear noise, and multiples.
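A minimal PyTorch sketch of the training setup described above: a small CNN learns the mapping from noisy to clean data, with training pairs built by artificially adding noise to clean patches and parameters fit by stochastic gradient descent. The depth, channel count, residual formulation, and patch size are illustrative assumptions.

```python
# Sketch of CNN-based denoising trained on (noisy, clean) pairs with SGD.
import torch
import torch.nn as nn

class DenoisingCNN(nn.Module):
    def __init__(self, channels=32, depth=6):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, noisy):
        # Residual formulation: the network predicts the noise, which is subtracted.
        return noisy - self.net(noisy)

model = DenoisingCNN()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
loss_fn = nn.MSELoss()

# One toy training step: clean patch plus artificially added random noise.
clean = torch.randn(4, 1, 64, 64)
noisy = clean + 0.3 * torch.randn_like(clean)
optimizer.zero_grad()
loss = loss_fn(model(noisy), clean)
loss.backward()
optimizer.step()
```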


2020 ◽ Vol 10 (21) ◽ pp. 7817
Author(s): Ivana Marin, Ana Kuzmanic Skelin, Tamara Grujic

The main goal of any classification or regression task is to obtain a model that generalizes well to new, previously unseen data. Due to the recent rise of deep learning and the many state-of-the-art results obtained with deep models, deep learning architectures have become among the most widely used models today. To generalize well, a deep model needs to learn the training data without overfitting, which implies that deep-model optimization and regularization are closely tied to generalization performance. In this work, we explore the effect of the optimization algorithm and regularization techniques on the final generalization performance of models based on the convolutional neural network (CNN) architectures widely used in computer vision. We give a detailed overview of optimization and regularization techniques, together with a comparative analysis of their performance with three CNNs on the CIFAR-10 and Fashion-MNIST image datasets.
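A minimal PyTorch sketch of the kind of optimizer/regularization comparison described above: the same small CNN is trained under several optimizer and regularizer configurations and their validation accuracies compared. The architecture and the hyperparameter grid are illustrative assumptions, not the authors' experimental setup.

```python
# Sketch: compare optimizers (SGD, SGD + weight decay, Adam, AdamW) and dropout
# as regularization for one small CNN. Values are placeholders for illustration.
import torch
import torch.nn as nn

def make_cnn(dropout=0.0):
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Dropout(dropout),
        nn.Linear(64 * 8 * 8, 10),   # 32x32 input (e.g. CIFAR-10) pooled twice -> 8x8
    )

configs = {
    "sgd_momentum":       lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9),
    "sgd_weight_decay":   lambda p: torch.optim.SGD(p, lr=0.01, momentum=0.9, weight_decay=5e-4),
    "adam":               lambda p: torch.optim.Adam(p, lr=1e-3),
    "adamw_weight_decay": lambda p: torch.optim.AdamW(p, lr=1e-3, weight_decay=1e-2),
}

for name, make_opt in configs.items():
    model = make_cnn(dropout=0.5)              # dropout as an explicit regularizer
    optimizer = make_opt(model.parameters())
    # ... train on CIFAR-10 / Fashion-MNIST and record validation accuracy per config ...
    print(name, sum(p.numel() for p in model.parameters()), "parameters")
```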


2019 ◽ Vol 34 (11) ◽ pp. 4924-4931
Author(s): Daichi Kitaguchi, Nobuyoshi Takeshita, Hiroki Matsuzaki, Hiroaki Takano, Yohei Owada, ...

2019 ◽ Vol 1 (Supplement_1) ◽ pp. i20-i21
Author(s): Min Zhang, Geoffrey Young, Huai Chen, Lei Qin, Xinhua Cao, ...

Abstract BACKGROUND AND OBJECTIVE: Brain metastases account for about one-fourth of all cancer metastases seen in the clinic. Magnetic resonance imaging (MRI) is widely used to detect brain metastases. Accurate detection of brain metastases is critical for designing radiotherapy to treat the cancer and for monitoring progression, response to therapy, and prognosis. However, finding metastases on brain MRI is very challenging because many metastases are small and appear as low-contrast objects in the images. In this work, we present a deep learning approach integrated with a classification scheme to detect cancer metastases to the brain on MRI. MATERIALS AND METHODS: We retrospectively extracted 101 patients with metastases, comprising 1535 metastases on 10192 image slices in a total of 336 scans from our PACS, and manually marked the lesions on T1-weighted contrast-enhanced MRI as the ground truth. We then randomly separated the cases into training, validation, and test sets for developing and optimizing the deep learning neural network. We designed a two-step computer-aided detection (CAD) pipeline: first, a fast region-based convolutional neural network (R-CNN) method sequentially processes each slice of an axial brain MRI to find abnormal hyperintensities that may correspond to brain metastases; second, a random undersampling boosting (RUSBoost) classification method reduces the false-positive detections. RESULTS: The computational pipeline was tested on real brain images. The proposed method achieved a sensitivity of 97.28% and a false-positive rate of 36.25 per scan. CONCLUSION: Our results demonstrate that the deep learning-based method can detect metastases in very challenging cases and can serve as a CAD tool to help radiologists interpret brain MRIs in a time-constrained environment.
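A minimal sketch of a two-step detection pipeline of the kind described above. Stage 1 uses torchvision's Faster R-CNN purely as a stand-in slice-wise detector (the paper uses a fast R-CNN variant); stage 2 reduces false positives with RUSBoost from imbalanced-learn. The candidate features, thresholds, and placeholder training data are assumptions for illustration.

```python
# Stage 1: slice-wise candidate detection; Stage 2: RUSBoost false-positive reduction.
import numpy as np
import torch
import torchvision
from imblearn.ensemble import RUSBoostClassifier

# Stage 1: candidate lesion detection on an axial slice (stand-in detector, random weights).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
detector.eval()
slice_batch = [torch.rand(3, 256, 256)]           # one axial MRI slice replicated to 3 channels
with torch.no_grad():
    candidates = detector(slice_batch)[0]          # dict with "boxes" and "scores"

# Stage 2: summarize each candidate with simple features (score, box area, aspect ratio)
# and classify it as true metastasis vs. false positive.
def candidate_features(boxes, scores):
    w = boxes[:, 2] - boxes[:, 0]
    h = boxes[:, 3] - boxes[:, 1]
    return np.stack([scores, w * h, w / (h + 1e-6)], axis=1)

# Training the second stage requires labeled candidates from training scans (placeholders here).
X_train = np.random.rand(200, 3)
y_train = np.random.randint(0, 2, size=200)        # 1 = true metastasis, 0 = false positive
fp_filter = RUSBoostClassifier(random_state=0).fit(X_train, y_train)

if len(candidates["boxes"]) > 0:
    X_test = candidate_features(candidates["boxes"].numpy(), candidates["scores"].numpy())
    kept = fp_filter.predict(X_test) == 1          # candidates retained as metastases
```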


Sensors ◽ 2019 ◽ Vol 19 (5) ◽ pp. 982
Author(s): Hyo Lee, Ihsan Ullah, Weiguo Wan, Yongbin Gao, Zhijun Fang

Make and model recognition (MMR) of vehicles plays an important role in automatic vision-based systems. This paper proposes a novel deep learning approach for MMR based on the SqueezeNet architecture. Frontal views of vehicle images are first extracted and fed into a deep network for training and testing. The SqueezeNet architecture with bypass connections between the Fire modules, a variant of the vanilla SqueezeNet, is employed in this study, which makes our MMR system more efficient. Experimental results on our collected large-scale vehicle datasets indicate that the proposed model achieves a 96.3% rank-1 recognition rate with an economical inference time of 108.8 ms. The deployed deep model requires less than 5 MB of storage for inference and is therefore well suited to real-time applications.
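A minimal PyTorch sketch of a SqueezeNet Fire module with a simple bypass (residual) connection, the architectural variant named above. The channel sizes and the two-block stack are illustrative assumptions and not the authors' full MMR network.

```python
# Sketch: Fire module (squeeze 1x1 -> parallel 1x1/3x3 expand) with an additive bypass.
import torch
import torch.nn as nn

class Fire(nn.Module):
    def __init__(self, in_ch, squeeze_ch, expand_ch):
        super().__init__()
        self.squeeze = nn.Sequential(nn.Conv2d(in_ch, squeeze_ch, 1), nn.ReLU())
        self.expand1x1 = nn.Sequential(nn.Conv2d(squeeze_ch, expand_ch, 1), nn.ReLU())
        self.expand3x3 = nn.Sequential(nn.Conv2d(squeeze_ch, expand_ch, 3, padding=1), nn.ReLU())

    def forward(self, x):
        s = self.squeeze(x)
        return torch.cat([self.expand1x1(s), self.expand3x3(s)], dim=1)

class BypassFire(nn.Module):
    """Fire module whose output is added element-wise to its input (simple bypass);
    this requires matching input and output channel counts."""
    def __init__(self, channels, squeeze_ch):
        super().__init__()
        self.fire = Fire(channels, squeeze_ch, channels // 2)

    def forward(self, x):
        return self.fire(x) + x

# A frontal-view vehicle crop would pass through stacked Fire modules before a
# classifier head that outputs one of the make-and-model classes.
x = torch.rand(1, 128, 56, 56)
block = nn.Sequential(BypassFire(128, 16), BypassFire(128, 16))
print(block(x).shape)   # torch.Size([1, 128, 56, 56])
```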

