Aerodynamic Prediction on the Off-Design Performance of a S-CO2 Turbine Based on Deep Learning

2021
Author(s): Yuqi Wang, Tianyuan Liu, Di Zhang

Abstract: Research on the supercritical carbon dioxide (S-CO2) Brayton cycle has become a research hotspot in recent years. The off-design performance of the turbine is an important reference for analyzing the cycle under variable operating conditions. With the development of deep learning, surrogate models based on neural networks have received extensive attention. To overcome the inefficiency of traditional off-design analyses, this research establishes a data-driven off-design aerodynamic prediction model for an S-CO2 centrifugal turbine based on a deep convolutional neural network. The network rapidly and adaptively provides aerodynamic performance predictions for varying blade profiles and operating conditions, and its reconstructed flow fields illustrate the mechanisms behind the predicted performance. The training results show that the off-design aerodynamic prediction convolutional neural network (OAP-CNN) reduces the mean and maximum efficiency-prediction errors compared with traditional Gaussian Process Regression (GPR) and an Artificial Neural Network (ANN). For off-design conditions, pressure and temperature distributions with acceptable error can be obtained without a CFD calculation. Moreover, the influence of off-design parameters on efficiency and power can be conveniently acquired, providing a reference for an optimized operation strategy. A sensitivity analysis of the OAP-CNN with respect to training data set size shows that the prediction accuracy is acceptable when the proportion of training samples exceeds 50%. The minimum error appears at a training fraction of 0.8, where the mean and maximum errors are 1.46% and 6.42%, respectively. In summary, this research provides a precise and fast aerodynamic performance prediction model for off-design analyses of S-CO2 turbomachinery and the Brayton cycle.
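The error metric reported above (mean and maximum efficiency-prediction error against CFD) can be sketched as a simple relative-error computation; the sample efficiencies below are hypothetical, not from the paper:

```python
import numpy as np

def efficiency_errors(eta_pred, eta_cfd):
    """Mean and maximum relative error (%) between surrogate-predicted
    and CFD-reference efficiencies: the metric the abstract reports."""
    rel = np.abs(eta_pred - eta_cfd) / np.abs(eta_cfd) * 100.0
    return rel.mean(), rel.max()

# Hypothetical sample points (not from the paper)
eta_cfd = np.array([0.85, 0.82, 0.88, 0.80])   # CFD reference efficiencies
eta_pred = np.array([0.84, 0.83, 0.87, 0.78])  # surrogate predictions
mean_err, max_err = efficiency_errors(eta_pred, eta_cfd)
print(f"mean {mean_err:.2f}%, max {max_err:.2f}%")
```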

2019
Vol 2 (1)
Author(s): Jeffrey Micher

We present a method for building a morphological generator from the output of an existing analyzer for Inuktitut, in the absence of a two-way finite state transducer that would normally provide this functionality. We use a sequence-to-sequence neural network that "translates" underlying Inuktitut morpheme sequences into surface character sequences, using only the previous and following morphemes as context. We report a morpheme accuracy of approximately 86%, which we can increase slightly by passing deep morphemes directly to the output for unknown morphemes. We do not see significant improvement when increasing the training data set size, and postulate possible causes for this.
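The unknown-morpheme fallback described above can be sketched as a lookup with pass-through; the lookup table stands in for the trained network, and the morphemes and surface forms below are toy placeholders, not real Inuktitut:

```python
# Toy lookup standing in for the trained seq2seq model; the morphemes and
# surface forms are hypothetical placeholders, not real Inuktitut.
SURFACE = {"iglu": "iglu", "-mut": "mut", "-lu": "lu"}

def generate_surface(morphemes):
    """Generate a surface form, passing unknown deep morphemes straight
    through (minus the joining hyphen), as the fallback described above."""
    out = []
    for m in morphemes:
        out.append(SURFACE.get(m, m.lstrip("-")))
    return "".join(out)

print(generate_surface(["iglu", "-mut", "-xyz"]))  # -> iglumutxyz
```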


2019
Vol 1
pp. 1-1
Author(s): Tee-Ann Teo

<p><strong>Abstract.</strong> Deep learning is a kind of machine learning technology that utilizes deep neural networks to learn a model from a large training data set. Convolutional Neural Networks (CNNs) have been successfully applied to image segmentation and classification with highly accurate results. A CNN applies multiple kernels (also called filters) to extract image features via convolution, and can determine multiscale features through multiple layers of convolution and pooling. The variety of the training data plays an important role in obtaining a reliable CNN model. Benchmark training data for road mark extraction, such as the KITTI Vision Benchmark Suite, focus mainly on close-range imagery, because a close-range image is easier to obtain than an airborne image. This study aims to transfer road mark training data from a mobile lidar system to aerial orthoimages for use in Fully Convolutional Networks (FCNs). Transferring training data from a ground-based system to an airborne system may reduce the effort of producing a large training data set.</p><p>This study uses FCN technology and aerial orthoimages to localize road marks on road regions. The road regions are first extracted from a 2-D large-scale vector map. The input aerial orthoimage has a 10 cm spatial resolution, and non-road regions are masked out before road mark localization. The training data are road-mark polygons originally digitized from ground-based mobile lidar and prepared for road mark extraction with a mobile mapping system. This study reuses these training data for road mark extraction from aerial orthoimages. The digitized road marks are transformed to road polygons based on mapping coordinates. Because the detail of ground-based lidar is much better than that of the airborne system, a parking lot partially occluded in the aerial orthoimage can still be obtained from the ground-based system. The labels (also called annotations) for the FCN include road regions, non-road regions, and road marks. The size of a training batch is 500 pixels by 500 pixels (50 m by 50 m on the ground), and 75 batches are used for training. After the FCN training stage, an independent aerial orthoimage (Figure 1a) is used to predict road marks. The FCN results provide initial road-mark regions (Figure 1b). Because road marks usually show higher reflectance than road asphalt, this study uses that characteristic to refine the road marks (Figure 1c) by binary classification inside each initial road-mark region.</p><p>Comparing the automatically extracted road marks (Figure 1c) with manually digitized road marks (Figure 1d), most road marks can be extracted using the training set from the ground-based system. This study also selects an area of 600 m × 200 m for quantitative analysis. Of the 371 reference road marks, 332 were extracted by the proposed scheme, a completeness of 89%. The preliminary experiment demonstrates that most road marks can be successfully extracted by the proposed scheme; therefore, training data from a ground-based mapping system can be utilized for airborne orthoimages of similar spatial resolution.</p>
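The reflectance-based refinement step can be sketched as a binary classification inside each initial road-mark region; the thresholding rule below (brighter than the region mean) is an assumption, since the abstract does not name the exact classifier:

```python
import numpy as np

def refine_road_marks(intensity, initial_mask):
    """Keep only pixels brighter than the mean reflectance of the initial
    region: road marks reflect more than asphalt. The mean threshold is an
    assumption; the paper does not name its exact binary classifier."""
    threshold = intensity[initial_mask].mean()
    return initial_mask & (intensity > threshold)

# Toy patch: paint (200) on asphalt (50), with the whole patch as the
# initial FCN region
img = np.array([[50.0, 50.0, 200.0],
                [50.0, 200.0, 200.0]])
mask = np.ones_like(img, dtype=bool)
refined = refine_road_marks(img, mask)
print(int(refined.sum()))  # 3 bright pixels survive
```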


2020
Vol 2020
pp. 1-13
Author(s): Suxia Cui, Yu Zhou, Yonghui Wang, Lujun Zhai

Recently, human curiosity has expanded from the land to the sky and the sea. Besides sending people to explore the ocean and outer space, robots are designed for tasks dangerous to living creatures. Take ocean exploration as an example: many projects and competitions on the design of Autonomous Underwater Vehicles (AUVs) have attracted wide interest. The authors learned the necessity of a platform upgrade from a previous AUV design project and share here the experience of extending one task in the area of fish detection. Most embedded systems have been improved by fast-growing computing and sensing technologies, which makes it possible to incorporate increasingly complicated algorithms. In an AUV, one of the challenges is how to perceive and analyse the information acquired from the sensors for better judgement. The processing procedure can mimic human learning routines, and an advanced system with more computing power can support deep learning, which exploits neural network algorithms to simulate the human brain. In this paper, a convolutional neural network (CNN) based fish detection method is proposed. The training data set was collected from the Gulf of Mexico with a digital camera. To fit this unique need, three optimization approaches were applied to the CNN: data augmentation, network simplification, and training-process speedup. Data augmentation transformations provided more learning samples; the network structure was simplified; and the training process was accelerated to make it more time efficient. Experimental results show that the proposed model is promising and has the potential to be extended to other underwater objects.
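The data-augmentation step can be sketched with simple flips; the exact transformations used in the paper are not specified, so the ones below are illustrative:

```python
import numpy as np

def augment(image):
    """Produce extra training samples via horizontal/vertical flips; the
    paper's exact augmentation transformations are not specified, so these
    are illustrative."""
    return [image,
            np.fliplr(image),                 # mirror left-right
            np.flipud(image),                 # mirror top-bottom
            np.flipud(np.fliplr(image))]      # 180-degree rotation

sample = np.arange(6).reshape(2, 3)  # stand-in for a camera frame
augmented = augment(sample)
print(len(augmented))  # 4 samples from 1
```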


2019
Vol 8 (2)
pp. 5073-5081

Prediction of student performance is a significant part of processing educational data, and machine learning algorithms play a leading role in this process. Deep learning is one of the important branches of machine learning. In this paper, we applied deep learning techniques to predict the academic excellence of students using R programming. The Keras and TensorFlow libraries were utilized to build a neural network model on a Kaggle dataset. The data were separated into training and testing sets. The neural network model was plotted using the neuralnet method, and the deep learning model was created with two hidden layers using the ReLU activation function and one output layer using the softmax activation function. After fine-tuning until the changes stabilized, this model produced an accuracy of 85%.
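The described architecture (two hidden ReLU layers and a softmax output) can be sketched as a plain NumPy forward pass; the layer widths, feature count, and class count below are illustrative assumptions not stated in the abstract, and the weights are untrained:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))  # numerically stable
    return e / e.sum(axis=-1, keepdims=True)

# Untrained weights; widths (8 features, 16-16 hidden, 3 classes) are
# illustrative assumptions, not stated in the abstract.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 3)), np.zeros(3)

def predict(x):
    h1 = relu(x @ W1 + b1)          # first hidden layer, ReLU
    h2 = relu(h1 @ W2 + b2)         # second hidden layer, ReLU
    return softmax(h2 @ W3 + b3)    # output layer, softmax

probs = predict(rng.normal(size=(5, 8)))  # 5 students, 8 features each
```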


2017
Vol 14 (4)
pp. 172988141770357
Author(s): Lei Tai, Shaohua Li, Ming Liu

The exploration problem for mobile robots aims to allow a robot to explore an unknown environment. We describe an indoor exploration algorithm for mobile robots using a hierarchical structure that fuses several convolutional neural network layers with a decision-making process. The whole system is trained end to end, taking only visual (RGB-D) information as input and generating a sequence of main moving directions as output, so that the robot achieves autonomous exploration ability. The robot is a TurtleBot with a Kinect mounted on it, and the model is trained and tested in a real-world environment; the training data set is provided for download. The outputs on the test data are compared with human decisions. We use a Gaussian process latent variable model to visualize the feature map of the last convolutional layer, which demonstrates the effectiveness of this deep convolutional neural network model. We also present a novel and lightweight deep-learning library, libcnn, designed especially for deep-learning processing in robotics tasks.
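The final decision step, picking a main moving direction from the network's per-class scores, can be sketched as an argmax; the three direction classes below are an assumption, as the paper's exact discretization is not given here:

```python
import numpy as np

# Illustrative discretization of the "main moving direction"; the actual
# classes used in the paper are an assumption here.
DIRECTIONS = ["left", "straight", "right"]

def choose_direction(logits):
    """Pick the next main moving direction from per-class network scores,
    as the end-to-end pipeline does after its convolutional layers."""
    return DIRECTIONS[int(np.argmax(logits))]

print(choose_direction(np.array([0.1, 2.3, -0.5])))  # -> straight
```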


2016
Vol 12 (2)
Author(s): Urszula Smyczyńska, Joanna Smyczyńska, Ryszard Tadeusiewicz

Abstract: It is well known that the structure of a neural network and the amount of available training data influence the accuracy of the developed models; however, the exact character of this relation depends on the chosen problem. We therefore analyzed the impact of these parameters on the problem we work on: predicting the final height of children treated with growth hormone. We observed that multilayer perceptrons with a wide range of hidden-neuron counts (from 1 to 100) could solve the problem almost equally well; the task thus seems rather simple, not requiring complex models. Larger networks tended to produce less accurate results and did not generalize well to data not used in training. Repeating the experiment with the training data set reduced to 50% of its original size caused, as expected, a decrease in accuracy.


Author(s): G. Chaussonnet, S. Gepperth, S. Holz, R. Koch, H.-J. Bauer

Abstract: A fully connected Artificial Neural Network (ANN) is used to predict the spray characteristics of prefilming airblast atomization. The model is trained on the planar prefilmer experiment from the PhD thesis of Gepperth [Experimentelle Untersuchung des Primärzerfalls an generischen luftgestützten Zerstäubern unter Hochdruckbedingungen, Vol. 75, Logos Verlag Berlin GmbH], in which shadowgraphy images of the liquid breakup at the atomizing edge capture the characteristics of the primary droplets and ligaments. The quantities extracted from the images are the Sauter Mean Diameter (SMD), the mean droplet axial velocity, the mean ligament length, and the mean ligament deformation velocity; these are the prescribed outputs of the ANN model. In total, the training database contains 322 operating points covering different prefilmers, liquid types, ambient pressures, film loadings, and gas velocities. Two types of model inputs are investigated and compared. First, nine dimensional parameters related to the geometry, the operating conditions, and the liquid properties are used as model inputs. Second, nine non-dimensional groups commonly used in liquid atomization are derived from the first set. The architecture providing the best fit is determined after testing over 10,000 randomly drawn ANN architectures with up to 10 layers and up to 128 neurons per layer. The striking result is that for both types of inputs, the best architectures consist of a shallow net whose hidden layers form a diabolo: three layers with many neurons (≥ 64) in the first and last layers and very few (≈ 12) in the middle layer. This shape recalls an autoencoder, where the middle layer would be a feature space of reduced dimensionality.
The trend highlighted by our results, toward a limited number of layers, contrasts with recent observations in deep learning applied to computer vision and speech recognition. The model with dimensional input quantities always shows lower test and validation errors than the one with non-dimensional input quantities. The best architectures for both input types were tested against the experiments; both provide comparable accuracy, which is better than typical correlations for SMD and droplet velocity. Because the models take more input parameters into account than the correlations do, they predict the experimental data more accurately. Finally, the extrapolation capability of the models was assessed by training them on a confined domain of parameters and testing them outside this domain. The models can extrapolate to larger gas velocities, but with larger ambient pressure or smaller trailing-edge thickness the accuracy decreases drastically.
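The diabolo architecture described above can be sketched as a forward pass with illustrative widths (9 inputs, wide-narrow-wide hidden layers, 4 spray-quantity outputs); the weights are random, so this shows only the shape, not the trained model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Wide-narrow-wide ("diabolo") hidden layers around a small middle layer,
# mapping the 9 inputs to the 4 spray quantities. Widths follow the
# abstract (>= 64 outer, ~12 middle); the weights are random, not trained.
widths = (9, 64, 12, 64, 4)
net = [(rng.normal(size=(a, b)) * 0.1, np.zeros(b))
       for a, b in zip(widths[:-1], widths[1:])]

def forward(layers, x):
    for i, (W, b) in enumerate(layers):
        x = x @ W + b
        if i < len(layers) - 1:       # ReLU on hidden layers only
            x = np.maximum(0.0, x)
    return x

# One operating point in, four spray quantities out
y = forward(net, rng.normal(size=(1, 9)))
print(y.shape)  # (1, 4): SMD, droplet velocity, ligament length/velocity
```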


2020
Vol 8
Author(s): Huiying Ren, Z. Jason Hou, Bharat Vyakaranam, Heng Wang, Pavel Etingov

Detection and timely identification of power system disturbances are essential for situational awareness and reliable electricity grid operation. Because records of actual events in the system are limited, ensemble simulation-based events are needed to provide adequate data for building event-detection models through deep learning, e.g., a convolutional neural network (CNN). An ensemble numerical-simulation-based training data set has been generated through dynamic simulations performed on the Polish system with various types of faults in different locations; such data augmentation is proven able to provide adequate data for deep learning. The synchronous generators' frequency signals are encoded into images for developing and evaluating CNN models that classify fault types and locations. With a time-domain stacked image set as the benchmark, two further time-series encoding approaches, wavelet-decomposition-based frequency-domain stacking and polar-coordinate-based Gramian Angular Field (GAF) stacking, are adopted to evaluate and compare CNN model performance and applicability. The various encoding approaches suit different fault types and spatial zonation. With optimized settings of the developed CNN models, the classification and localization accuracies exceed 84% and 91%, respectively.
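The GAF encoding mentioned above can be sketched directly: rescale the frequency signal to [-1, 1], map each sample to an angle, and form the Gramian Angular Summation Field image; the toy frequency trace below is hypothetical:

```python
import numpy as np

def gramian_angular_field(series):
    """Gramian Angular Summation Field: rescale the series to [-1, 1],
    map each sample to an angle phi = arccos(x), then form the image
    GASF[i, j] = cos(phi_i + phi_j)."""
    x = np.asarray(series, dtype=float)
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0  # rescale to [-1, 1]
    phi = np.arccos(np.clip(x, -1.0, 1.0))
    return np.cos(phi[:, None] + phi[None, :])

# Toy generator-frequency trace (Hz); real inputs would be simulated signals
gaf = gramian_angular_field([59.9, 60.0, 60.1, 60.0])
print(gaf.shape)  # (4, 4) image for the CNN
```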

