On the Selection of Tuning Parameters in Predictive Controllers Based on NSGA-II

Author(s):  
R. C. Gutiérrez-Urquídez ◽  
G. Valencia-Palomo ◽  
O. M. Rodríguez-Elías ◽  
F. R. López-Estrada ◽  
J. A. Orrante-Sakanassi
2017 ◽  
Vol 12 (3) ◽  
pp. 753-778 ◽  
Author(s):  
Vivekananda Roy ◽  
Sounak Chakraborty

2013 ◽  
Vol 2013 ◽  
pp. 1-11 ◽  
Author(s):  
Jia-Rou Liu ◽  
Po-Hsiu Kuo ◽  
Hung Hung

Large-p-small-n datasets are commonly encountered in modern biomedical studies. To detect the difference between two groups, conventional methods fail to apply: variance estimates in the t-test are unstable, and AUC (area under the receiver operating characteristic curve) estimates suffer from a high proportion of tied values. The significance analysis of microarrays (SAM) may also be unsatisfactory, since its performance is sensitive to a tuning parameter whose selection is not straightforward. In this work, we propose a robust rerank approach to overcome the above-mentioned difficulties. In particular, we obtain a rank-based statistic for each feature based on the concept of “rank-over-variable.” Techniques of “random subset” and “rerank” are then iteratively applied to rank features, and the leading features are selected for further study. The proposed rerank approach is especially applicable to large-p-small-n datasets. Moreover, it is insensitive to the selection of tuning parameters, which is an appealing property for practical implementation. Simulation studies and real data analysis of pooling-based genome-wide association (GWA) studies demonstrate the usefulness of our method.
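The random-subset/rerank iteration described above can be sketched as follows. This is an illustrative stand-in, not the paper's exact procedure: the per-feature statistic used here (absolute difference of mean ranks between the two groups) and all function names are assumptions.

```python
import numpy as np

def rank_over_variable(x, y):
    """Rank-based statistic for one feature: rank all samples by the
    feature value, then compare mean ranks between the two groups
    (illustrative stand-in for the paper's statistic)."""
    ranks = np.argsort(np.argsort(x)) + 1  # ranks over the variable
    return abs(ranks[y == 1].mean() - ranks[y == 0].mean())

def rerank_features(X, y, n_iter=50, subset_frac=0.5, rng=None):
    """Iteratively score random feature subsets and accumulate
    within-subset ranks: a minimal sketch of the 'random subset'
    plus 'rerank' idea for large-p-small-n data."""
    rng = np.random.default_rng(rng)
    p = X.shape[1]
    scores, counts = np.zeros(p), np.zeros(p)
    for _ in range(n_iter):
        subset = rng.choice(p, size=max(1, int(subset_frac * p)), replace=False)
        stats = np.array([rank_over_variable(X[:, j], y) for j in subset])
        # within-subset rank: a larger statistic earns a higher rank
        scores[subset] += stats.argsort().argsort()
        counts[subset] += 1
    return scores / np.maximum(counts, 1)  # average rank per feature
```

Leading features are then the ones with the highest average rank, which is robust to how any single subset happened to be drawn.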


Author(s):  
M. ISABEL REY ◽  
MARTA GALENDE ◽  
M. J. FUENTE ◽  
GREGORIO I. SAINZ-PALMERO

Fuzzy modeling is one of the best-known and most widely used techniques for modeling the behavior of systems and processes in many areas. In most cases, as in data-driven fuzzy modeling, these fuzzy models reach a high performance from the point of view of accuracy, but from other points of view, such as complexity or interpretability, they can perform poorly. Several approaches are found in the literature to reduce the complexity and improve the interpretability of fuzzy models. In this paper, a post-processing approach is carried out via rule selection, whose aim is to choose the most relevant rules to work together, addressing the well-known accuracy-interpretability trade-off. Rule relevancy is based on orthogonal transformations, such as the SVD-QR rank-revealing approach and the P-QR and OLS transformations. Rule selection is carried out using a genetic algorithm that takes into account the information obtained by the orthogonal transformations. The main objective is to check the true significance, drawbacks and advantages of rule selection based on orthogonal transformations of the rule firing-strength matrix. To this end, a neuro-fuzzy system, FasArt (Fuzzy Adaptive System ART based), and several case studies, with data sets from the KEEL Project Repository, are used to tune and check this selection of rules based on orthogonal transformations, genetic selection and the accuracy-interpretability trade-off. This neuro-fuzzy system generates Mamdani fuzzy rule-based systems (FRBSs) in an approximate way. NSGA-II is the multi-objective evolutionary algorithm (MOEA) used to tune the proposed rule selection.
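The rank-revealing idea behind ranking rules from the firing-strength matrix can be sketched with a greedy column-pivoted selection, which mimics what a pivoted QR factorization does. This is an illustrative sketch under assumed conventions (rows = data points, columns = rules), not the FasArt or P-QR implementation:

```python
import numpy as np

def rank_rules(F, n_keep):
    """Greedily pick the n_keep most 'informative' columns (rules) of
    the firing-strength matrix F: repeatedly choose the column with the
    largest residual norm, then deflate it out, as a rank-revealing
    QR with column pivoting would. Illustrative sketch only."""
    R = F.astype(float).copy()
    chosen = []
    for _ in range(n_keep):
        norms = np.linalg.norm(R, axis=0)
        j = int(norms.argmax())
        chosen.append(j)
        q = R[:, j] / (norms[j] + 1e-12)
        R -= np.outer(q, q @ R)  # remove the component explained by rule j
    return chosen
```

A rule whose firing pattern merely duplicates an already-selected rule contributes nothing new after deflation, so redundant rules are naturally passed over.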


1994 ◽  
pp. 27-42
Author(s):  
Rubiyah Yusof ◽  
Marzuki Khalid ◽  
Sigeru Omatu

One of the most recent developments in adaptive methods, in the form of self-tuning algorithms, is the area of self-tuning PID controllers (STPID). These controllers are a class of adaptive controllers but are essentially PID controllers with the capability of tuning their parameters automatically online. To this end, the theory of these types of controllers is still in its infancy. In this paper, we provide some interpretations of an STPID through analytical and simulation results, thereby paving the way for a better understanding of the algorithm and some insight into its usefulness. The interpretations also serve as an aid in the selection of the tuning parameters of this algorithm, which can be a time-consuming activity if done diligently.
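The idea of a PID controller that adjusts its own gains online can be sketched as below. This is a crude gradient-style adaptation heuristic chosen for illustration only; the STPID algorithms discussed in the paper are based on recursive parameter estimation, and all gains and the adaptation rate here are assumed values.

```python
class SelfTuningPID:
    """PID controller with a simple online gain-adaptation heuristic
    (small gradient-style nudges that grow gains while the error is
    large). Illustrative sketch, not the paper's STPID algorithm."""

    def __init__(self, kp=1.0, ki=0.1, kd=0.01, gamma=1e-4, dt=0.01):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.gamma, self.dt = gamma, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, setpoint, measurement):
        err = setpoint - measurement
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        u = self.kp * err + self.ki * self.integral + self.kd * deriv
        # self-tuning step: nudge each gain in proportion to the error
        # and the signal that gain multiplies
        self.kp += self.gamma * err * err
        self.ki += self.gamma * err * self.integral
        self.kd += self.gamma * err * deriv
        self.prev_err = err
        return u
```

Even this toy version hints at why tuning-parameter selection matters: the adaptation rate `gamma` trades convergence speed against the risk of destabilizing the loop.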


2020 ◽  
Vol 19 (01) ◽  
pp. 167-188
Author(s):  
Oulfa Labbi ◽  
Abdeslam Ahmadi ◽  
Latifa Ouzizi ◽  
Mohammed Douimi

The aim of this paper is to address the problem of supplier selection in the context of integrated product design. Indeed, the product specificities and the suppliers’ constraints are both integrated into the product design phase. We consider the case of improving the design of an existing product and study the selection of its suppliers, adopting a bi-objective optimization approach. Considering multiple products, suppliers and periods, the proposed mathematical model aims to minimize the supplying, transport and holding costs of product components as well as quality-rejected items. To solve the bi-objective problem, an evolutionary algorithm, namely the non-dominated sorting genetic algorithm (NSGA-II), is employed. The algorithm provides a set of Pareto-front solutions optimizing the two objective functions at once. Since the parameter values of genetic algorithms have a significant impact on their efficiency, we study the impact of each parameter on the fitness functions in order to determine the optimal combination of these parameters. Thus, a number of simulations evaluating the effects of crossover rate, mutation rate and number of generations on the Pareto fronts are presented. To evaluate the performance of the algorithm, results are compared to those obtained by the weighted-sum method through a numerical experiment. According to the computational results, the non-dominated sorting genetic algorithm outperforms the CPLEX MIP solver in both solution quality and computational time.
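The core step that lets NSGA-II return a whole Pareto front rather than a single solution is non-dominated sorting of the population. A minimal sketch of that step for minimization objectives (a simplified O(n²m) version, not the bookkeeping-optimized one in the original NSGA-II paper):

```python
def non_dominated_sort(points):
    """Split a list of objective vectors (all objectives minimized)
    into successive Pareto fronts, the core ranking step of NSGA-II.
    Returns a list of fronts, each a list of indices into `points`."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, better in at least one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    fronts, remaining = [], list(range(len(points)))
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

For the bi-objective supplier problem, each point would be a (total cost, rejected items) pair for one candidate supplier assignment, and front 0 is the set of trade-offs reported to the decision maker.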


Software quality is a very important aspect of application development, and testing is the critical activity that ensures it. Software testing is the activity that evaluates an attribute or capability of a system or program and decides whether user requirements are met; a proper strategy for it has to be adopted in software design. Test case selection techniques attempt to reduce the number of test cases that need to be executed while still satisfying the testing needs denoted by the test criteria. Time and resources are the primary constraints during testing, which has often been a neglected phase of the Software Development Life Cycle (SDLC). Optimizing a test suite is therefore critical for shortening the testing phase and for selecting test cases that eliminate unwanted or redundant data. Existing work in the literature uses single-objective optimization techniques, which need not be efficient, since code coverage plays an important role in test case selection. As test case choice is non-deterministic, this work proposes a novel multi-objective algorithm combining the Non-Dominated Sorting Genetic Algorithm II (NSGA-II) and the Stochastic Diffusion Search (SDS) algorithm, using execution cost and code coverage as its objective functions. The results show faster convergence of the proposed algorithm with better code coverage in comparison to NSGA-II alone.
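The two objective functions named above, coverage and execution cost, might be evaluated for a candidate test suite as follows. The data layout (a mapping from each test to the set of branches it covers) and all names are illustrative assumptions, not the paper's formulation:

```python
def suite_objectives(selected, coverage, cost):
    """Objectives for a candidate test suite, both to be minimized:
    (fraction of code branches left uncovered, total execution cost).
    `coverage` maps each test id to the set of branches it covers;
    `cost` maps each test id to its execution cost."""
    covered = set().union(*(coverage[t] for t in selected)) if selected else set()
    all_branches = set().union(*coverage.values())
    uncovered = 1 - len(covered) / len(all_branches)
    total_cost = sum(cost[t] for t in selected)
    return uncovered, total_cost
```

A single-objective approach would have to collapse these two numbers into one weighted score up front; keeping them separate is what lets a multi-objective search expose the coverage-versus-cost trade-off.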


2018 ◽  
Vol 18 (3) ◽  
pp. 23-36 ◽  
Author(s):  
Ketaki Bhalchandra Naik ◽  
G. Meera Gandhi ◽  
S. H. Patil

Abstract Cloud data centers have adopted virtualization techniques for effective and efficient execution of applications. An application's execution requirements are fulfilled by scaling the Virtual Machines (VMs) up and down. The appropriate selection of VMs to handle unpredictable peak workloads without load imbalance is a critical challenge for a cloud data center. In this article, we propose a Pareto-based Greedy Non-dominated Sorting Genetic Algorithm II (G-NSGA2) for agile selection of a virtual machine. Our strategy generates Pareto-optimal solutions for fair distribution of cloud workloads among the set of virtual machines. True Pareto fronts yield approximate optimal trade-off solutions for multiple conflicting objectives, rather than aggregating all objectives into a single trade-off solution. The objectives of our study are to minimize the response time, operational cost and energy consumption of the virtual machine. The simulation results show that our hybrid G-NSGA2 outperforms the standard NSGA-II on the multi-objective optimization problem.
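The contrast drawn above between a true Pareto front and an aggregated single score can be made concrete with a dominance filter over VM candidates. Objective order and values are illustrative assumptions; the article's G-NSGA2 additionally uses a greedy seeding step not shown here:

```python
def pareto_vms(vms):
    """Keep only the non-dominated VM candidates, where each candidate
    is a (response_time, operational_cost, energy) tuple and all three
    objectives are minimized. Every survivor is a distinct trade-off
    that no other candidate beats on all objectives at once."""
    def dominated_by(a, b):
        # b dominates a: no worse everywhere and not identical
        return all(y <= x for x, y in zip(a, b)) and a != b
    return [v for v in vms if not any(dominated_by(v, u) for u in vms)]
```

A weighted sum would pick exactly one of the survivors depending on the chosen weights; the Pareto filter keeps every defensible trade-off so the scheduler can decide later.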

