Developing a Flexible Methodology for Modeling and Solving Multiple Response Optimization Problems

2019 ◽  
Vol 34 (2) ◽  
pp. 103-113
Author(s):  
Taha-Hossein Hejazi ◽  
Maryam Moradpour

Abstract: Simultaneous optimization of multiple quality characteristics (responses) of a product or process is required in many real-world problems. Multiresponse optimization (MRO) techniques try to solve such problems; their ultimate objective is to adjust the control factors so that the responses attain their most desired values. Regression techniques are the methods most commonly used to identify and estimate the relationships between control variables and responses. With industrial advances, and hence the growing complexity of processes and systems, many relationships between input variables and quality characteristics have become much more complex. In such circumstances, classic regression techniques struggle to produce a well-fitted model that can be easily optimized. The alternative approach proposed in this study is CART, a regression-tree data mining method. Since the output of CART consists of several if-then rules, the NSGA-II algorithm was adopted to solve the model and obtain the optimal solutions. Finally, we evaluate the performance of the proposed method on a real data set concerning the modeling and improvement of automotive engines.
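The non-dominated sorting step at the heart of NSGA-II can be sketched in a few lines; the two-objective minimization setup and the sample points below are illustrative assumptions, not taken from the paper:

```python
def dominates(a, b):
    """a dominates b if a is no worse in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(points):
    """Partition objective vectors into Pareto fronts (front 0 is best)."""
    fronts = []
    remaining = list(range(len(points)))
    while remaining:
        # Keep the indices that no other remaining point dominates.
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

# Two hypothetical minimization objectives, e.g. (cost, roughness)
pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
fronts = non_dominated_sort(pts)
# The first three points are mutually non-dominated; (3,3) and (4,4) fall
# into successively worse fronts.
```

NSGA-II then combines this ranking with a crowding-distance measure to keep the retained front well spread.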

Author(s):  
Sasadhar Bera ◽  
Indrajit Mukherjee

A common problem encountered during manufacturing process improvement involves the simultaneous optimization of multiple 'quality characteristics', or so-called 'responses', and determining the best process operating conditions. Such a problem is also referred to as a 'multiple response optimization (MRO) problem'. The presence of interaction between the responses calls for a trade-off solution. The term 'trade-off' denotes an explicit compromise solution considering the bias and variability of the responses around the specified targets. The global exact solution of such nonlinear optimization problems is usually unknown, and various trade-off solution approaches (based on process response surface (RS) models or without them) have been proposed by researchers over the years. Considering the prevalent and preferred solution approaches, the scope of this paper is limited to RS-based solution approaches and closely related solution frameworks for MRO problems. This paper contributes a detailed step-by-step RS-based MRO solution framework. The applicability and steps of the framework are illustrated using a real-life in-house pin-on-disc design-of-experiments study. A critical review of solution approaches is also provided, detailing the inherent characteristics, assumptions, limitations, application potential in manufacturing, and selection norms (indicative of the application potential) of the suggested techniques/methods for implementing the framework. To instigate research in this field, scope for future work is highlighted at the end.
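One widely used RS-based trade-off device is the Derringer-Suich desirability function, which maps each response onto [0, 1] and aggregates by a geometric mean; a minimal sketch follows, where the targets, limits, and response values are hypothetical:

```python
def desirability_target(y, low, target, high, s=1.0, t=1.0):
    """Two-sided desirability: 1 at the target, falling to 0 outside [low, high]."""
    if y < low or y > high:
        return 0.0
    if y <= target:
        return ((y - low) / (target - low)) ** s
    return ((high - y) / (high - target)) ** t

def overall_desirability(ds):
    """Geometric mean: a single unacceptable response zeroes the whole score."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# Hypothetical responses at one candidate operating condition
d1 = desirability_target(48.0, low=40.0, target=50.0, high=60.0)  # 0.8
d2 = desirability_target(55.0, low=40.0, target=50.0, high=60.0)  # 0.5
D = overall_desirability([d1, d2])  # sqrt(0.4)
```

Maximizing D over the control factors then yields the compromise operating condition.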


Author(s):  
Yunfeng Jin ◽  
Chao Liu ◽  
Xin Tian ◽  
Haizhou Huang ◽  
Gaofeng Deng ◽  
...  

Due to complex and harsh environmental conditions, the useful life of the filter in a gas turbine air intake system is usually shorter than its design life. When the filter is seriously degraded, the power and thermal efficiency of the gas turbine decrease noticeably owing to the increased inlet pressure loss. To evaluate the health condition of filters in the air intake system, this work develops a filter pressure loss model based on a defined health index for the filter and five external environmental and control factors. Combined with the gas path component model, the integrated model is applied to a real data set, and the results show that (i) the proposed health index is effective in representing the degradation state of the filter, (ii) the factors influencing the pressure loss are successfully decoupled and their contributions to the pressure loss are quantitatively estimated, and (iii) the integrated model of filter pressure loss and gas path components can better estimate the deterioration state of the filter as well as gas turbine performance.


Mathematics ◽  
2021 ◽  
Vol 9 (18) ◽  
pp. 2181
Author(s):  
Alberto Garces-Jimenez ◽  
Jose-Manuel Gomez-Pulido ◽  
Nuria Gallego-Salvador ◽  
Alvaro-Jose Garcia-Tejedor

Buildings consume a considerable amount of electrical energy, with the Heating, Ventilation, and Air Conditioning (HVAC) system being the most demanding. Saving energy while maintaining comfort remains a challenge, since the two objectives conflict. The control of HVAC systems can be improved by modeling their behavior, which is nonlinear, complex, dynamic, and subject to uncertain contexts. The scientific literature shows that soft computing techniques require fewer computing resources, at the expense of some loss of control accuracy. Metaheuristic search-based algorithms show positive results, although further research is needed on new, challenging multi-objective optimization problems. This article compares the performance of selected genetic and swarm-intelligence-based algorithms with the aim of discerning their capabilities in the field of smart buildings. MOGA, NSGA-II/III, OMOPSO, and SMPSO, with Random Search as a benchmark, are compared in terms of hypervolume, generational distance, ε-indicator, and execution time. Real data from the Building Management System of the Teatro Real de Madrid have been used to train the data model used for the multi-objective calculations. The analysis of the proposed dynamic optimization algorithms covers the transient time of an HVAC system and adds, to the conventional optimization objectives of comfort and energy efficiency, the coefficient of performance and the rate of change in ambient temperature, aiming to extend the equipment lifecycle and minimize the overshooting effect when passing to the steady state. The optimization works impressively well for energy savings, although the results must be balanced against other practical considerations, such as realistic constraints on the chillers' operational capacity. The intuitive visualization of the performance of the two families of algorithms on a real multi-HVAC system adds to the novelty of this proposal.
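Of the quality indicators compared, the hypervolume is the easiest to illustrate: for two minimization objectives it is the area dominated by the front and bounded by a reference point. This sketch assumes a mutually non-dominated front; the points and reference are made up:

```python
def hypervolume_2d(front, ref):
    """Hypervolume of a 2-D minimization front w.r.t. reference point `ref`.

    Sweeps the front in ascending order of the first objective; because the
    points are mutually non-dominated, the second objective then descends,
    so the dominated region decomposes into disjoint rectangles.
    """
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = [(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)]
hv = hypervolume_2d(front, ref=(4.0, 4.0))  # 3 + 2 + 1 = 6.0
```

A larger hypervolume indicates a front that is both closer to the true Pareto front and better spread, which is why it is a common single-number benchmark for multi-objective optimizers.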


Crystals ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 830
Author(s):  
Farouq Mohammad A. Alam ◽  
Mazen Nassar

Compressive strength is a well-known measurement for evaluating the endurance of a given concrete mixture to stress factors, such as compressive loads. One suggested approach to assessing the compressive strength of concrete is to assume that it follows a probability model from which its reliability is calculated. In reliability analysis, a probability distribution's reliability function is used to calculate the probability of a specimen surviving up to a certain threshold without damage. To approximate the reliability of a subject of interest, one must estimate the corresponding parameters of the probability model. Researchers typically formulate an optimization problem, often nonlinear, based on maximum likelihood theory to obtain estimates of the targeted parameters and then estimate the reliability. Nevertheless, other nonlinear optimization problems arise in practice, from which different estimators of the model parameters are obtained once they are solved numerically. Under normal circumstances, these estimators may perform similarly; however, some may prove more robust in irregular situations, such as data contamination. In this paper, nine frequentist estimators are derived for the parameters of the Laplace Birnbaum-Saunders distribution and then applied to a simulated data set and a real data set. Afterwards, they are compared numerically via a comparative Monte Carlo simulation study, in which the resulting reliability estimates based on these estimators are also assessed.
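The general pattern of obtaining an estimator by numerically maximizing a log-likelihood, and then plugging it into the reliability function, can be sketched with a much simpler one-parameter model; the exponential distribution stands in here purely as an illustration, since the Laplace Birnbaum-Saunders likelihood is considerably more involved:

```python
import math

def exp_loglik(lam, data):
    """Log-likelihood of an Exponential(rate=lam) sample."""
    n = len(data)
    return n * math.log(lam) - lam * sum(data)

def golden_max(f, lo, hi, tol=1e-8):
    """Golden-section search for the maximizer of a unimodal f on [lo, hi]."""
    g = (math.sqrt(5) - 1) / 2
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) >= f(d):
            b = d
        else:
            a = c
    return (a + b) / 2

data = [0.5, 1.2, 0.8, 2.0, 1.5]                       # made-up sample
lam_hat = golden_max(lambda l: exp_loglik(l, data), 1e-6, 10.0)
# For the exponential the MLE has the closed form 1/mean, so the numerical
# optimum should agree with 5/6 here.
reliability_at_1 = math.exp(-lam_hat * 1.0)            # plug-in R(1) = P(X > 1)
```

Replacing the likelihood (or using least-squares or other criteria instead) gives the different numerically obtained estimators the abstract alludes to.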


2011 ◽  
Vol 110-116 ◽  
pp. 250-257 ◽  
Author(s):  
S. K. S. Yadav ◽  
Vinod Yadava

This paper presents the simultaneous optimization of multiple quality characteristics (material removal rate and average surface roughness) using the Taguchi method for electrical discharge diamond cut-off grinding (EDDCG) during the machining of cemented carbide. The experiments were carried out on a self-developed electrical discharge diamond grinding setup in cut-off mode. In the present work, four input parameters (current, pulse on-time, duty factor, and wheel RPM) were selected, each at three levels. The total degrees of freedom (dof) were calculated without considering interaction effects among the different control factors. Based on design of experiments (DOE), a standard L9 orthogonal array (OA) was used for experimentation. The results show a considerable improvement of 13.999 dB in the multiple S/N ratio compared with the initial condition. A comparison of the results from single- and multi-objective optimization is also presented.
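The signal-to-noise ratios used in the Taguchi method take the standard larger-the-better form for a response to maximize (material removal rate) and smaller-the-better form for one to minimize (surface roughness); a minimal sketch with made-up replicate values:

```python
import math

def sn_larger_the_better(ys):
    """S/N ratio (dB) for a response to maximize, e.g. material removal rate."""
    return -10 * math.log10(sum(1 / y ** 2 for y in ys) / len(ys))

def sn_smaller_the_better(ys):
    """S/N ratio (dB) for a response to minimize, e.g. average surface roughness."""
    return -10 * math.log10(sum(y ** 2 for y in ys) / len(ys))

mrr_reps = [10.0, 10.0]   # hypothetical replicate MRR readings
ra_reps = [1.0, 1.0]      # hypothetical replicate roughness readings
sn_mrr = sn_larger_the_better(mrr_reps)   # 20.0 dB
sn_ra = sn_smaller_the_better(ra_reps)    # 0.0 dB
```

For each cell of the L9 orthogonal array, such ratios are computed from the replicates, and the level with the highest mean S/N ratio is selected for each factor.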


Author(s):  
Mareike van Heel ◽  
Gerhard Dikta ◽  
Roel Braekers

Abstract: We consider a binary multivariate regression model in which the conditional expectation of a binary variable given a higher-dimensional input variable belongs to a parametric family. Based on this, we introduce a model-based bootstrap (MBB) for higher-dimensional input variables. The resulting test can be used to check whether a sequence of independent and identically distributed observations belongs to such a parametric family. The approach is based on the empirical residual process introduced by Stute (Ann Statist 25:613-641, 1997). In contrast to the approach of Stute & Zhu (Scandinavian J Statist 29:535-545, 2002), no transformation is required; thus, the problems associated with non-parametric regression estimation are avoided. As a result, the MBB method is much easier for users to implement. To illustrate the power of the MBB-based tests, a small simulation study is performed. Compared with the approach of Stute & Zhu, the simulations indicate a slightly improved power of the MBB-based method. Finally, both methods are applied to a real data set.
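The model-based bootstrap idea, redrawing the binary responses from the fitted parametric model to approximate the null distribution of a residual-process statistic, can be sketched as follows. This is a simplified illustration: the logistic link, the one-dimensional input, and holding the parameters fixed instead of re-estimating them in each bootstrap sample are all assumptions made for brevity.

```python
import math
import random

def logistic(x, beta0, beta1):
    return 1 / (1 + math.exp(-(beta0 + beta1 * x)))

def cusum_stat(xs, ys, beta0, beta1):
    """Sup of the cumulative residual process over x-ordered observations."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    s, sup = 0.0, 0.0
    for i in order:
        s += ys[i] - logistic(xs[i], beta0, beta1)
        sup = max(sup, abs(s))
    return sup / math.sqrt(len(xs))

def mbb_pvalue(xs, ys, beta0, beta1, n_boot=200, seed=0):
    """Model-based bootstrap p-value: redraw y_i ~ Bernoulli(p_i) under the
    fitted model and compare the observed statistic with its bootstrap law."""
    rng = random.Random(seed)
    obs = cusum_stat(xs, ys, beta0, beta1)
    hits = 0
    for _ in range(n_boot):
        yb = [1 if rng.random() < logistic(x, beta0, beta1) else 0 for x in xs]
        if cusum_stat(xs, yb, beta0, beta1) >= obs:
            hits += 1
    return hits / n_boot

# Synthetic data generated under the hypothesized model
rng = random.Random(3)
xs = [-2 + 4 * i / 49 for i in range(50)]
ys = [1 if rng.random() < logistic(x, 0.0, 1.0) else 0 for x in xs]
p_value = mbb_pvalue(xs, ys, 0.0, 1.0)
```

Because only Bernoulli draws from the fitted model are needed, no smoothing or non-parametric regression step enters the procedure, which is the practical advantage the paper emphasizes.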


2019 ◽  
Vol XVI (2) ◽  
pp. 1-11
Author(s):  
Farrukh Jamal ◽  
Hesham Mohammed Reyad ◽  
Soha Othman Ahmed ◽  
Muhammad Akbar Ali Shah ◽  
Emrah Altun

A new three-parameter continuous model, the exponentiated half-logistic Lomax distribution, is introduced in this paper. Basic mathematical properties of the proposed model are investigated, including raw and incomplete moments, skewness, kurtosis, generating functions, Rényi entropy, Lorenz, Bonferroni and Zenga curves, probability weighted moments, the stress-strength model, order statistics, and record statistics. The model parameters were estimated by the maximum likelihood criterion, and the behaviour of these estimates was examined through a simulation study. The applicability of the new model is illustrated by applying it to a real data set.


Author(s):  
Parisa Torkaman

The generalized inverted exponential distribution is introduced as a lifetime model with good statistical properties. In this paper, estimation of its probability density function and cumulative distribution function is considered using five different estimation methods: uniformly minimum variance unbiased (UMVU), maximum likelihood (ML), least squares (LS), weighted least squares (WLS), and percentile (PC) estimators. The performance of these estimation procedures is compared through numerical simulations based on the mean squared error (MSE). The simulation studies show that the UMVU estimator performs better than the others and that, when the sample size is large enough, the ML and UMVU estimators are almost equivalent and more efficient than the LS, WLS, and PC estimators. Finally, the results are illustrated on a real data set.
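The kind of Monte Carlo MSE comparison described can be sketched with a simpler model; here ML and percentile-type estimators of an exponential rate stand in as an illustration (the generalized inverted exponential case would follow the same pattern with its own estimators):

```python
import math
import random
import statistics

def simulate_mse(true_lam=1.0, n=50, reps=2000, seed=42):
    """Monte Carlo MSE of two estimators of an exponential rate:
    ML (1 / sample mean) vs. a percentile estimator (ln 2 / sample median)."""
    rng = random.Random(seed)
    se_ml = se_pc = 0.0
    for _ in range(reps):
        sample = [rng.expovariate(true_lam) for _ in range(n)]
        lam_ml = 1 / statistics.mean(sample)
        lam_pc = math.log(2) / statistics.median(sample)
        se_ml += (lam_ml - true_lam) ** 2
        se_pc += (lam_pc - true_lam) ** 2
    return se_ml / reps, se_pc / reps

mse_ml, mse_pc = simulate_mse()
# The ML estimator is asymptotically efficient for the exponential, so its
# MSE is expected to be the smaller of the two.
```

Repeating this over several sample sizes is exactly how the relative efficiency claims in such studies are established.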


2019 ◽  
Vol 21 (9) ◽  
pp. 662-669 ◽  
Author(s):  
Junnan Zhao ◽  
Lu Zhu ◽  
Weineng Zhou ◽  
Lingfeng Yin ◽  
Yuchen Wang ◽  
...  

Background: Thrombin is the central protease of the vertebrate blood coagulation cascade and is closely related to cardiovascular diseases. The inhibitory constant Ki is the most significant property of thrombin inhibitors. Method: This study was carried out to predict the Ki values of thrombin inhibitors from a large data set using machine learning methods. Owing to its ability to find non-intuitive regularities in high-dimensional data sets, machine learning can be used to build effective predictive models. A total of 6554 descriptors were collected for each compound, and an efficient descriptor selection method was chosen to find the appropriate descriptors. Four different methods, multiple linear regression (MLR), K Nearest Neighbors (KNN), Gradient Boosting Regression Tree (GBRT), and Support Vector Machine (SVM), were implemented to build prediction models with the selected descriptors. Results: The SVM model was the best among these methods, with R2=0.84, MSE=0.55 for the training set and R2=0.83, MSE=0.56 for the test set. Several validation methods, such as the y-randomization test and applicability domain evaluation, were adopted to assess the robustness and generalization ability of the model. The final model shows excellent stability and predictive ability and can be employed for rapid estimation of the inhibitory constant, which is helpful for designing novel thrombin inhibitors.
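The y-randomization test mentioned can be sketched with a toy one-descriptor model: shuffle the response vector, refit, and check that the real fit beats essentially every permuted fit. A real QSAR workflow would refit the full multi-descriptor model each time; the data below are synthetic.

```python
import random

def r_squared(xs, ys):
    """R^2 of a simple least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return (sxy * sxy) / (sxx * syy) if sxx and syy else 0.0

def y_randomization(xs, ys, n_perm=200, seed=1):
    """Refit after shuffling the responses; a genuine structure-activity
    relationship should exceed almost every permuted fit."""
    rng = random.Random(seed)
    r2_true = r_squared(xs, ys)
    yp = list(ys)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(yp)
        if r_squared(xs, yp) >= r2_true:
            exceed += 1
    return r2_true, exceed / n_perm

xs = [float(x) for x in range(20)]
ys = [2.0 * x + 1.0 + ((-1) ** int(x)) * 0.5 for x in xs]  # strong linear signal
r2, frac = y_randomization(xs, ys)
# r2 is close to 1; the fraction of permutations matching it should be ~0.
```

A model whose permuted fits come close to the real fit is likely exploiting chance correlation, which is exactly what this validation step guards against.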


2019 ◽  
Vol 14 (2) ◽  
pp. 148-156
Author(s):  
Nighat Noureen ◽  
Sahar Fazal ◽  
Muhammad Abdul Qadir ◽  
Muhammad Tanvir Afzal

Background: Specific combinations of histone modifications (HMs), contributing to the histone code hypothesis, lead to various biological functions. HM combinations have been utilized by various studies to divide the genome into different regions, which have been classified as chromatin states. Mostly Hidden Markov Model (HMM) based techniques have been utilized for this purpose. In chromatin studies, data from Next Generation Sequencing (NGS) platforms are used. Chromatin states based on histone modification combinatorics are annotated by mapping them to functional regions of the genome. To date, the numbers of states predicted by HMM tools have been justified only biologically. Objective: The present study aimed at providing a computational scheme to identify the underlying hidden states in the data under consideration. Methods: We propose a computational scheme, HCVS, based on a hierarchical clustering and visualization strategy to achieve this objective. Results: We tested the proposed scheme on a real data set of nine cell types comprising nine chromatin marks. The approach successfully identified the state numbers for various possibilities, and the results showed quite good correlation with one of the existing models. Conclusion: The HCVS model not only helps in deciding the optimal number of states for a particular data set but also justifies the results biologically, thereby correlating the computational and biological aspects.
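The hierarchical clustering step underlying a scheme like HCVS can be sketched with a minimal single-linkage agglomerative procedure; this is a generic illustration, not the paper's actual pipeline, and the points are synthetic:

```python
def agglomerative(points, k):
    """Single-linkage agglomerative clustering down to k clusters
    (Euclidean distance), merging the closest pair of clusters each step."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # Single linkage: distance between closest members.
                d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

# Two well-separated synthetic groups of HM-signal vectors
pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
clusters = agglomerative(pts, 2)
```

In a scheme of this kind, cutting the merge tree at different heights yields candidate state numbers, which are then assessed against the biology.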

