Gradient Enhanced Surrogate Models Based on Adjoint CFD Methods for the Design of a Counter Rotating Turbofan

Author(s):  
Jan Backhaus ◽  
Marcel Aulich ◽  
Christian Frey ◽  
Timea Lengyel ◽  
Christian Voß

This paper studies the use of adjoint CFD solvers in combination with surrogate modelling to reduce the computational cost of optimizing complex 3D turbomachinery components. The method is applied to a previously optimized counter-rotating turbofan whose shape is parameterized by 104 CAD parameters. Through random perturbations of the reference design, a small number of design variations are created to serve as training samples for the surrogate models. A steady RANS solver and its discrete adjoint are then used to compute objective function values and their corresponding sensitivities. Kriging and neural networks are used to build surrogate models from the training data. To study the impact of the additional information provided by the adjoint solver, each model is trained with and without the sensitivity information. The accuracy of the different surrogate model predictions is assessed by comparison against CFD calculations. The results show a considerable improvement in the fitness function approximation when the sensitivity information is taken into account. Through a gradient-based optimization on one of the surrogate models, a design with higher isentropic efficiency at the aerodynamic design point is created. This application demonstrates that the improved surrogate models can be used for design and optimization.
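
The gradient-enhanced idea can be sketched compactly: each adjoint evaluation contributes one equation per design parameter in addition to the single function-value equation, so a handful of samples can constrain many surrogate coefficients. Below is a minimal Python sketch of gradient-enhanced least-squares fitting of a quadratic surrogate; it is not the paper's Kriging or neural-network implementation, and the toy function and all names are invented for illustration.

```python
import numpy as np

def basis(x):
    """Quadratic polynomial basis: [1, x_i, x_i*x_j (i<=j)]."""
    d = len(x)
    feats = [1.0] + list(x)
    for i in range(d):
        for j in range(i, d):
            feats.append(x[i] * x[j])
    return np.array(feats)

def basis_grad(x):
    """Gradient of each basis function w.r.t. x: shape (d, n_feats)."""
    d = len(x)
    n = 1 + d + d * (d + 1) // 2
    g = np.zeros((d, n))
    for k in range(d):
        g[k, 1 + k] = 1.0
    col = 1 + d
    for i in range(d):
        for j in range(i, d):
            g[i, col] += x[j]
            g[j, col] += x[i]
            col += 1
    return g

def fit_gradient_enhanced(X, y, dy):
    """Least-squares fit using both values y and adjoint gradients dy."""
    rows, rhs = [], []
    for x, yi, gi in zip(X, y, dy):
        rows.append(basis(x)); rhs.append(yi)   # one value equation per sample
        for k in range(len(x)):                 # one equation per gradient component
            rows.append(basis_grad(x)[k]); rhs.append(gi[k])
    w, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return w

# Toy check: recover f(x) = x0^2 + 3*x1 from 5 samples plus their gradients.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
y = X[:, 0] ** 2 + 3 * X[:, 1]
dy = np.stack([2 * X[:, 0], np.full(len(X), 3.0)], axis=1)
w = fit_gradient_enhanced(X, y, dy)
print(np.dot(w, basis(np.array([0.5, -1.0]))))  # close to 0.25 - 3.0 = -2.75
```

With 104 parameters, each adjoint solution adds 104 gradient equations for roughly the cost of one extra flow solution, which is why the sensitivity-enhanced models improve so markedly at small sample counts.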

Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3867 ◽  
Author(s):  
Jaehyun Yoo

Machine learning-based indoor localization has long been hampered by the cost of collecting, constructing, and maintaining labeled training databases for practical implementation. Semi-supervised learning methods have been developed as efficient indoor localization methods that reduce the need for labeled training data. To boost the efficiency and accuracy of indoor localization, this paper proposes a new time-series semi-supervised learning algorithm. The key aspect of the developed method, which distinguishes it from conventional semi-supervised algorithms, is its use of unlabeled data: the learning algorithm finds spatio-temporal relationships in the unlabeled data, and pseudo-labels are generated to compensate for the lack of labeled training data. In the next step, another balancing-optimization learning algorithm learns a positioning model. The proposed method is evaluated for estimating the location of a smartphone user from Wi-Fi received signal strength indicator (RSSI) measurements. The experimental results show that the developed learning algorithm outperforms some existing semi-supervised algorithms as the number of training samples and access points varies. The paper also discusses why the method performs better, through an analysis of the impact of the learning parameters. Moreover, an extended localization scheme in conjunction with a particle filter is executed to include additional information, such as a floor plan.
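
The pseudo-labeling step can be illustrated with a plain confidence-threshold self-training loop, as sketched below; the paper's method additionally exploits spatio-temporal structure in the unlabeled sequence, which this sketch omits. All data, thresholds, and parameter values are synthetic stand-ins.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-ins: RSSI fingerprints from 4 access points, 3 location zones.
rng = np.random.default_rng(1)
centers = rng.uniform(-80, -40, size=(3, 4))
X = np.vstack([c + rng.normal(0, 2, size=(40, 4)) for c in centers])
y = np.repeat(np.arange(3), 40)
labeled = rng.choice(len(X), size=12, replace=False)   # only 10% labeled
mask = np.zeros(len(X), dtype=bool); mask[labeled] = True

X_lab, y_lab = X[mask], y[mask]
X_unl = X[~mask]

for _ in range(5):                                     # pseudo-labeling rounds
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_lab, y_lab)
    proba = clf.predict_proba(X_unl)
    confident = proba.max(axis=1) > 0.9                # keep confident predictions only
    if not confident.any():
        break
    X_lab = np.vstack([X_lab, X_unl[confident]])       # absorb pseudo-labeled samples
    y_lab = np.concatenate([y_lab, clf.classes_[proba[confident].argmax(axis=1)]])
    X_unl = X_unl[~confident]

print("labeled training set grew to", len(X_lab), "samples")
```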


2013 ◽  
Vol 762 ◽  
pp. 307-312
Author(s):  
Guang Liang Zhang ◽  
Zhang Wei Wang ◽  
Shi Hong Zhang

A fast approach is demonstrated for design optimization of the multi-pass wire drawing process using a multi-objective genetic algorithm, with the aim of minimizing both power consumption and temperature by optimizing process parameters including pass number, pass schedule, die angle, bearing length, and loops on the capstan. A jump fitness function and a penalty fitness function are proposed to favor the survival of good designs and eliminate bad designs whose temperature, die wear factor, delta factor, or ratio of drawing stress to yield stress exceeds the limits during optimization. The numerical examples show that the optimizer with the penalty fitness function, when its exponent parameter n ranges from 1 to 2, performs best in finding the minimum power consumption under a temperature limit. Compared with a reference design, the optimization achieves a significant reduction in total power consumption of about 300 W, while keeping temperature, delta factor, and die life well under control. The penalty fitness function also outperforms the jump fitness function in reducing the number of iteration generations and the computational cost.
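
A minimal sketch of the two fitness formulations follows, assuming a relative-violation penalty raised to the exponent n; the paper's exact functional forms, weights, and limits may differ, and the numbers below are invented.

```python
def jump_fitness(power, constraints, limits, big=1e6):
    """Jump formulation: any violated limit maps the design to a constant bad fitness."""
    if any(constraints[k] > limits[k] for k in limits):
        return big
    return power

def penalty_fitness(power, constraints, limits, n=1.5, weight=1e3):
    """Penalty formulation: fitness degrades smoothly with (relative violation)**n."""
    penalty = sum(weight * (max(0.0, constraints[k] - limits[k]) / limits[k]) ** n
                  for k in limits)
    return power + penalty

# Hypothetical drawing design: 310 W, temperature slightly above a 160 degC limit.
design = {"temperature": 168.0, "delta_factor": 2.8}
limits = {"temperature": 160.0, "delta_factor": 3.0}
print(jump_fitness(310.0, design, limits))     # 1000000.0: design is simply killed
print(penalty_fitness(310.0, design, limits))  # ~321: graded penalty for 5% excess
```

The smooth penalty preserves ranking information near the constraint boundary, which is consistent with the reported faster convergence relative to the jump formulation.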


2018 ◽  
Author(s):  
Roman Zubatyuk ◽  
Justin S. Smith ◽  
Jerzy Leszczynski ◽  
Olexandr Isayev

Atomic and molecular properties can be evaluated from the fundamental Schrödinger equation and therefore represent different modalities of the same quantum phenomena. Here we present AIMNet, a modular and chemically inspired deep neural network potential. We used AIMNet with multitarget training to learn multiple modalities of the state of the atom in a molecular system. The resulting model shows state-of-the-art accuracy on several benchmark datasets, comparable to the results of DFT methods that are orders of magnitude more expensive. It can simultaneously predict several atomic and molecular properties without an increase in computational cost. With AIMNet we show a new dimension of transferability: the ability to learn new targets by utilizing multimodal information from previous training. The model can learn implicit solvation energy (as in SMD) from only a fraction of the original training data, achieving a mean absolute deviation of 1.1 kcal/mol against experimental solvation free energies in the MNSol database.
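
The multitarget training idea, learning several property heads from a shared representation so each task regularizes the others, can be sketched with a generic PyTorch model. This is purely illustrative: it is not the AIMNet architecture, and the descriptor size, head names, and data below are invented.

```python
import torch
import torch.nn as nn

class MultiTargetNet(nn.Module):
    """Shared trunk with one head per atomic property (illustrative only)."""
    def __init__(self, n_in=64, n_hidden=128, targets=("energy", "charge", "volume")):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_in, n_hidden), nn.SiLU(),
                                   nn.Linear(n_hidden, n_hidden), nn.SiLU())
        self.heads = nn.ModuleDict({t: nn.Linear(n_hidden, 1) for t in targets})

    def forward(self, x):
        h = self.trunk(x)
        return {t: head(h).squeeze(-1) for t, head in self.heads.items()}

model = MultiTargetNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 64)                               # stand-in atomic descriptors
targets = {t: torch.randn(32) for t in model.heads}   # stand-in property labels

# One training step: the summed per-target losses update the shared trunk jointly.
pred = model(x)
loss = sum(nn.functional.mse_loss(pred[t], targets[t]) for t in model.heads)
loss.backward(); opt.step()
print(float(loss))
```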


Author(s):  
Robert F Engle ◽  
Martin Klint Hansen ◽  
Ahmet K Karagozoglu ◽  
Asger Lunde

Motivated by the recent availability of extensive electronic news databases and the advent of new empirical methods, there has been renewed interest in investigating the impact of financial news on market outcomes for individual stocks. We develop the information processing hypothesis of return volatility to investigate the relation between firm-specific news and volatility. We propose a novel dynamic econometric specification and test it using time series regressions employing a machine learning model selection procedure. Our empirical results are based on a comprehensive dataset comprising more than 3 million news items for a sample of 28 large U.S. companies. Our proposed econometric specification for firm-specific return volatility is a simple mixture model with two components: public information and private processing of public information. The public information component is defined by the contemporaneous relation between public information and volatility, while the private processing component is specified as a general autoregressive process corresponding to the sequential price discovery mechanism of investors, as additional information, previously not publicly available, is generated and incorporated into prices. Our results show that changes in return volatility are related to public information arrival and that including indicators of public information arrival explains on average 26% (range 9–65%) of changes in firm-specific return volatility.
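
A stylized version of the two-component specification can be written as a regression of volatility on contemporaneous news arrival (the public information component) plus its own lag (the private processing component). The sketch below simulates and refits such a process with NumPy; the coefficients and the Poisson news process are invented stand-ins, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
news = rng.poisson(5, size=T).astype(float)            # daily news-item counts
vol = np.zeros(T)
for t in range(1, T):                                  # simulate the two components
    vol[t] = 0.1 + 0.05 * news[t] + 0.6 * vol[t - 1] + rng.normal(0, 0.1)

# Regress volatility on contemporaneous news (public information) and its own
# lag (private processing of public information).
Xmat = np.column_stack([np.ones(T - 1), news[1:], vol[:-1]])
beta, *_ = np.linalg.lstsq(Xmat, vol[1:], rcond=None)
print(dict(zip(["const", "news", "ar1"], beta.round(3))))
```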


Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 692
Author(s):  
Clara Calvo ◽  
Carlos Ivorra ◽  
Vicente Liern ◽  
Blanca Pérez-Gladish

Modern portfolio theory deals with the problem of selecting a portfolio of financial assets such that the expected return is maximized for a given level of risk. The forecast of the expected individual assets' returns and risk is usually based on their historical returns. In this work, we consider a situation in which the investor has non-historical additional information that is used for the forecast of the expected returns. This implies that there is no longer an obvious statistical risk measure, and it poses the problem of selecting an adequate set of diversification constraints to mitigate the risk of the selected portfolio without losing the value of the non-statistical information owned by the investor. To address this problem, we introduce an indicator, the historical reduction index, measuring the expected reduction of the expected return due to a given set of diversification constraints. We show that it can be used to grade the impact of each possible set of diversification constraints. Hence, the investor can choose, from this gradation, the set that best fits their subjective risk-aversion level.
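
One plausible reading of such an index can be sketched as the relative drop in achievable expected return when a diversification cap is imposed; the paper's exact definition may differ. The forecast returns and the 40% cap below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.12, 0.10, 0.07, 0.05])   # investor's (non-historical) return forecasts

def max_return(cap):
    """Max expected return with weights summing to 1 and a per-asset cap."""
    res = linprog(-mu, A_eq=np.ones((1, len(mu))), b_eq=[1.0],
                  bounds=[(0.0, cap)] * len(mu))
    return -res.fun

r_free = max_return(cap=1.0)       # no diversification constraint: all-in best asset
r_div = max_return(cap=0.4)        # at most 40% in any single asset
hri = (r_free - r_div) / r_free    # reduction-index-style measure of the cap's cost
print(round(r_free, 4), round(r_div, 4), round(hri, 4))
```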


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Binu Melit Devassy ◽  
Sony George

Documentation and analysis of crime scene evidence are of great importance in any forensic investigation. In this paper, we present the potential of hyperspectral imaging (HSI) to detect and analyze beverage stains on paper towels. To detect the presence, and predict the age, of drinks commonly encountered at a crime scene, we leveraged the additional spectral information present in the HSI data. We used 12 different beverages and four types of paper hand towel to create the sample stains in this study. A support vector machine (SVM) performs the classification, and a convolutional auto-encoder reduces the dimensionality of the HSI data, which eases perception, processing, and visualization of the data. The SVM classification model was then re-trained on the reduced dimensions to obtain a lighter and quicker classifier. We employed volume-gradient-based band selection to identify the relevant spectral bands in the HSI data. Spectral data recorded at different time intervals up to 72 h were analyzed to trace the spectral changes. The results show the efficacy of HSI techniques for rapid, non-contact, and non-invasive analysis of beverage stains.
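
The two-stage pipeline, dimensionality reduction followed by an SVM on the compressed codes, can be sketched with scikit-learn, using PCA as a lightweight stand-in for the convolutional auto-encoder bottleneck. The band count, class count, and spectra below are synthetic.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for HSI pixel spectra: 186 bands, 12 beverage classes.
rng = np.random.default_rng(3)
spectra = np.vstack([rng.normal(loc=k / 12.0, scale=0.3, size=(60, 186))
                     for k in range(12)])
labels = np.repeat(np.arange(12), 60)

X_tr, X_te, y_tr, y_te = train_test_split(spectra, labels, random_state=0)

# PCA stands in for the auto-encoder: both map the full spectrum to a
# low-dimensional code before the (lighter, quicker) SVM is trained on it.
clf = make_pipeline(StandardScaler(), PCA(n_components=10), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 3))
```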


Water ◽  
2021 ◽  
Vol 13 (13) ◽  
pp. 1830
Author(s):  
Gullnaz Shahzadi ◽  
Azzeddine Soulaïmani

Computational modeling plays a significant role in the design of rockfill dams. Various constitutive soil parameters are used to design such models, which often involve high uncertainties due to the complex structure of rockfill dams comprising various zones of different soil parameters. This study performs an uncertainty analysis and a global sensitivity analysis to assess the effect of constitutive soil parameters on the behavior of a rockfill dam. A finite element code (Plaxis) is utilized for the structural analysis. A database of the computed displacements at inclinometers installed in the dam is generated and compared to in situ measurements. Surrogate models are significant tools for approximating the relationship between input soil parameters and displacements, thereby reducing the computational cost of parametric studies. Polynomial chaos expansion and deep neural networks are used to build surrogate models to compute the Sobol indices required to identify the impact of soil parameters on dam behavior.
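
Once a cheap surrogate is available, first-order Sobol indices follow from a standard Saltelli-type Monte Carlo estimator, as sketched below. The linear surrogate is an invented placeholder for the trained polynomial chaos or neural network model.

```python
import numpy as np

def first_order_sobol(model, n_params, n_samples=10000, seed=0):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(size=(n_samples, n_params))
    B = rng.uniform(size=(n_samples, n_params))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(n_params)
    for i in range(n_params):
        ABi = A.copy(); ABi[:, i] = B[:, i]   # A with column i taken from B
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

# Toy surrogate: displacement dominated by the first soil parameter.
surrogate = lambda X: 4.0 * X[:, 0] + 1.0 * X[:, 1] + 0.2 * X[:, 2]
print(first_order_sobol(surrogate, 3).round(3))   # roughly [0.94, 0.06, 0.00]
```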


Author(s):  
Irsalan Arif ◽  
Hassan Iftikhar ◽  
Ali Javed

In this article, a design and optimization scheme for a three-dimensional bump surface of a supersonic aircraft is presented. A baseline bump and inlet duct with a forward cowl lip is initially modeled in accordance with an existing bump configuration on a supersonic jet aircraft. Various design parameters for the bump surface of diverterless supersonic inlet systems are identified, and the design space is established using a one-factor-at-a-time sensitivity analysis to quantify the uncertainty associated with each design parameter. Subsequently, candidate configurations are selected by performing a three-level design of experiments using the Box–Behnken method together with numerical simulations. Surrogate modeling is carried out by the least-squares regression method to identify the fitness function, and optimization is performed using a genetic algorithm with pressure recovery as the objective function. The resulting optimized bump configuration demonstrates significant improvement in pressure recovery and flow characteristics compared to the baseline configuration at both supersonic and subsonic flow conditions, and at design and off-design conditions. The proposed design and optimization methodology can be applied to the bump surface design of any diverterless supersonic inlet system to maximize intake performance.
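
The final stage can be sketched as a simple real-coded genetic algorithm run against the fitted response surface. The quadratic pressure-recovery surrogate and all GA settings below are invented placeholders, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(4)

def pressure_recovery(x):
    """Stand-in quadratic surrogate of pressure recovery over 3 bump parameters."""
    opt = np.array([0.4, 0.6, 0.5])
    return 0.95 - np.sum((x - opt) ** 2, axis=-1)

def genetic_maximize(fitness, n_dim=3, pop=40, gens=60, mut=0.1):
    P = rng.uniform(size=(pop, n_dim))                 # initial population in [0,1]^d
    for _ in range(gens):
        f = fitness(P)
        parents = P[np.argsort(f)[::-1][: pop // 2]]   # truncation selection
        alpha = rng.uniform(size=(pop // 2, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]  # blend crossover
        children += rng.normal(0, mut, children.shape)            # Gaussian mutation
        P = np.clip(np.vstack([parents, children]), 0.0, 1.0)
    best = P[np.argmax(fitness(P))]
    return best, fitness(best)

best_x, best_f = genetic_maximize(pressure_recovery)
print(best_x.round(3), round(float(best_f), 4))        # converges near [0.4, 0.6, 0.5]
```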


2021 ◽  
pp. 089270572199320
Author(s):  
Prakhar Kumar Kharwar ◽  
Rajesh Kumar Verma

The new era of engineering society focuses on utilizing the potential advantages of carbon nanomaterials, and the study of the machinability of such materials is still at an early stage. This article emphasizes the machinability evaluation and optimization of milling performances, namely surface roughness (Ra), cutting force (Fc), and material removal rate (MRR), using the recently developed grey wolf optimization algorithm (GWOA). A Taguchi L27 orthogonal array (OA) was employed for the machining (milling) of polymer nanocomposites reinforced with multiwall carbon nanotubes (MWCNTs). Second-order polynomial equations were fitted for the analysis of the model, and these mathematical models were used as fitness functions in the GWOA to predict machining performance. The ANOVA outcomes explore the impact of the machining parameters on the milling characteristics. The optimal combination for lower surface roughness is 1.5 wt.% MWCNT, 1500 rpm spindle speed, 50 mm/min feed rate, and 3 mm depth of cut. The lowest cutting force was obtained at 1.0 wt.%, 1500 rpm, 90 mm/min feed rate, and 1 mm depth of cut, and the maximum MRR was acquired at 0.5 wt.%, 500 rpm, 150 mm/min feed rate, and 3 mm depth of cut. The deviations of the predicted values from the experimental values of Ra, Fc, and MRR are 2.5, 6.5, and 5.9%, respectively. The convergence plots of all milling characteristics suggest the application potential of the GWO algorithm for quality improvement in a manufacturing environment.
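
The GWOA search itself can be sketched in a few lines: wolves move toward the three current best solutions (alpha, beta, delta) with a coefficient a that decays from 2 to 0, shifting the swarm from exploration to exploitation. The normalized fitness below is an invented stand-in for the paper's second-order regression models.

```python
import numpy as np

rng = np.random.default_rng(5)

def surface_roughness(x):
    """Invented stand-in for the fitted Ra model, on inputs scaled to [0, 1]."""
    opt = np.array([0.75, 1.0, 0.0, 1.0])   # qualitative echo of the reported optimum
    return 1.2 + np.sum((x - opt) ** 2)

def grey_wolf_minimize(f, n_dim=4, n_wolves=20, iters=100):
    X = rng.uniform(size=(n_wolves, n_dim))
    for t in range(iters):
        fit = np.apply_along_axis(f, 1, X)
        order = np.argsort(fit)
        alpha, beta, delta = X[order[0]], X[order[1]], X[order[2]]
        a = 2.0 * (1 - t / iters)                 # a decays linearly from 2 to 0
        new = np.zeros_like(X)
        for leader in (alpha, beta, delta):       # encircling update per leader
            A = a * (2 * rng.uniform(size=X.shape) - 1)
            C = 2 * rng.uniform(size=X.shape)
            new += leader - A * np.abs(C * leader - X)
        X = np.clip(new / 3.0, 0.0, 1.0)          # average of the three moves
    fit = np.apply_along_axis(f, 1, X)
    return X[np.argmin(fit)], fit.min()

best_x, best_f = grey_wolf_minimize(surface_roughness)
print(best_x.round(3), round(float(best_f), 4))
```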


2021 ◽  
Vol 7 (3) ◽  
pp. 59
Author(s):  
Yohanna Rodriguez-Ortega ◽  
Dora M. Ballesteros ◽  
Diego Renza

With the exponential growth of high-quality fake images on social networks and media, it is necessary to develop recognition algorithms for this type of content. One of the most common types of image and video editing consists of duplicating areas of the image, known as the copy-move technique. Traditional image processing approaches manually look for patterns related to the duplicated content, limiting their use in mass data classification. In contrast, approaches based on deep learning have shown better performance and promising results, but they present generalization problems, with a high dependence on training data and the need for appropriate selection of hyperparameters. To overcome this, we propose two deep learning approaches: a model with a custom architecture and a model based on transfer learning. In each case, the impact of the depth of the network is analyzed in terms of precision (P), recall (R), and F1 score. Additionally, the problem of generalization is addressed with images from eight different open-access datasets. Finally, the models are compared in terms of evaluation metrics and training and inference times. The transfer learning model based on VGG-16 achieves metrics about 10% higher than the custom-architecture model; however, it requires approximately twice the inference time.
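
The transfer learning variant can be sketched with torchvision: load an ImageNet-pretrained VGG-16, freeze the convolutional trunk, and retrain a two-class head (pristine vs. copy-move forged). The batch size, learning rate, and random batch below are placeholders, not the paper's training setup.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load ImageNet-pretrained VGG-16 and freeze the convolutional trunk.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in model.features.parameters():
    p.requires_grad = False

# Replace the final classifier layer: 2 classes (pristine vs. forged).
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 2)

optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One hypothetical training step on a batch of 224x224 RGB crops.
images, labels = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
loss = criterion(model(images), labels)
loss.backward(); optimizer.step()
print(float(loss))
```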

