Cluster-Based Input Selection for Transparent Fuzzy Modeling

Author(s):  
Can Yang ◽  
Jun Meng ◽  
Shanan Zhu

Input selection is an important step in nonlinear regression modeling. Through input selection, an interpretable model can be built at lower computational cost, and the topic has therefore drawn great attention in recent years. However, most available input selection methods are model-based, in which case the input selection is insensitive to changes in the data patterns. In this paper, an effective model-free method for input selection is proposed, based on sensitivity analysis using the Minimum Cluster Volume (MCV) algorithm. The advantage of the proposed method is that no specific model needs to be built in advance to check possible input combinations, so the computational cost is reduced and changes in the data patterns can be captured automatically. The effectiveness of the proposed method is evaluated on three well-known benchmark problems, which show that it works effectively with small- and medium-sized data collections. With the input selection procedure, a concise fuzzy model is constructed with high prediction accuracy and better interpretation of the data, which serves the purpose of pattern discovery in data mining well.
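The paper's MCV criterion is not specified in the abstract, but the idea of cluster-based, model-free input selection can be illustrated with a minimal sketch: for each candidate input subset, cluster the joint input-output data and score the subset by the total cluster volume, since a subset that truly determines the output yields tighter joint clusters. The k-means routine and the square-root-of-covariance-determinant volume proxy below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def cluster_volume_score(X, y, k=3, seed=0):
    """Score an input subset by total cluster volume in joint (X, y) space.
    Smaller total volume suggests a tighter input-output mapping.
    Illustrative stand-in for a cluster-volume criterion (details assumed)."""
    rng = np.random.default_rng(seed)
    Z = np.column_stack([X, y])
    # plain k-means on the joint data
    centers = Z[rng.choice(len(Z), k, replace=False)]
    for _ in range(50):
        labels = np.argmin(((Z[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = Z[labels == j].mean(axis=0)
    # volume proxy: sqrt of the covariance determinant, summed over clusters
    vol = 0.0
    for j in range(k):
        pts = Z[labels == j]
        if len(pts) > 2:
            vol += np.sqrt(max(np.linalg.det(np.cov(pts.T)), 0.0))
    return vol

# rank candidate inputs: a relevant input yields a smaller joint-cluster volume
rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 200)          # relevant input
x2 = rng.uniform(-1, 1, 200)          # irrelevant input
y = np.sin(3 * x1)
s_rel = cluster_volume_score(x1[:, None], y)
s_irr = cluster_volume_score(x2[:, None], y)
```

Here the relevant input produces clusters that hug the curve (x1, sin 3x1), so its score is lower than that of the irrelevant input, whose joint clusters fill a 2-D region.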

2008 ◽  
pp. 1138-1156


Author(s):  
Can Yang ◽  
Jun Meng ◽  
Shanan Zhu ◽  
Mingwei Dai

Input selection is a crucial step in nonlinear regression modeling, helping to build an interpretable model with less computation. Most of the available methods are model-based, and few are model-free. Model-based methods often make use of prediction error or sensitivity analysis for input selection, while model-free methods exploit consistency. In this paper, we show the underlying relationship between sensitivity analysis and consistency analysis for input selection, derive an efficient model-free method from common sense, and then formulate this common sense with fuzzy logic; the resulting method is therefore called Fuzzy Consistency Analysis (FCA). In contrast to available methods, FCA has the following desirable properties: 1) it is model-free, so it is not biased toward a specific model, exploiting "what the data say" rather than "what the model says", which is the essential point of data mining: input selection should not be biased toward a specific model; 2) it is implemented as efficiently as classical model-free methods, but is more flexible; 3) it can be directly applied to a data set with mixed continuous and discrete inputs without doing rotation. A study of four benchmark problems indicates that the proposed method works effectively for nonlinear problems. With the input selection procedure, the underlying factors that affect the prediction are worked out, which helps to gain insight into a specific problem and serves the purpose of data mining very well.
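The fuzzy formulation of FCA is not given in the abstract, but the underlying consistency idea can be sketched: if an input subset determines the output, then samples that are close in input space must have close output values. A classical way to quantify this is a nearest-neighbour quotient |Δy|/|Δx|; the code below uses that crisp criterion as an illustrative stand-in for the paper's fuzzy version.

```python
import numpy as np

def consistency_score(X, y):
    """Model-free consistency check: for each sample, find its nearest
    neighbour in input space and take the quotient |dy| / |dx|.
    A subset that determines y yields uniformly small quotients.
    (Crisp stand-in for the paper's fuzzy consistency analysis.)"""
    n = len(X)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)           # exclude self-matches
    nn = D.argmin(axis=1)                 # nearest neighbour of each sample
    return np.mean(np.abs(y - y[nn]) / (D[np.arange(n), nn] + 1e-12))

rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 200)              # relevant input
x2 = rng.uniform(-1, 1, 200)              # irrelevant input
y = np.sin(3 * x1)
s_rel = consistency_score(x1[:, None], y)
s_irr = consistency_score(x2[:, None], y)
```

For the relevant input the quotient is bounded by the Lipschitz constant of sin(3x), whereas for the irrelevant input nearby points carry unrelated outputs, inflating the score.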


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Gaoyang Li ◽  
Haoran Wang ◽  
Mingzi Zhang ◽  
Simon Tupin ◽  
Aike Qiao ◽  
...  

Abstract The clinical treatment planning of coronary heart disease requires hemodynamic parameters to provide proper guidance. Computational fluid dynamics (CFD) is increasingly used in the simulation of cardiovascular hemodynamics. However, for patient-specific models, the complex operation and high computational cost of CFD hinder its clinical application. To deal with these problems, we develop cardiovascular hemodynamic point datasets and a dual-sampling-channel deep learning network, which can analyze and reproduce the relationship between the cardiovascular geometry and internal hemodynamics. The statistical analysis shows that the hemodynamic prediction results of deep learning are in agreement with the conventional CFD method, but the calculation time is reduced 600-fold. With over 2 million nodes, a prediction accuracy of around 90%, the computational efficiency to predict cardiovascular hemodynamics within 1 second, and the universality to evaluate complex arterial systems, our deep learning method can meet the needs of most situations.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Sansit Patnaik ◽  
Fabio Semperlotti

Abstract This study presents the formulation, the numerical solution, and the validation of a theoretical framework based on the concept of variable-order mechanics and capable of modeling dynamic fracture in brittle and quasi-brittle solids. More specifically, the reformulation of the elastodynamic problem via variable and fractional-order operators enables a unique and extremely powerful approach to model nucleation and propagation of cracks in solids under dynamic loading. The resulting dynamic fracture formulation is fully evolutionary, hence enabling the analysis of complex crack patterns without requiring any a priori assumption on the damage location and the growth path, and without using any algorithm to numerically track the evolving crack surface. The evolutionary nature of the variable-order formalism also prevents the need for additional partial differential equations to predict the evolution of the damage field, hence suggesting a conspicuous reduction in complexity and computational cost. Remarkably, the variable-order formulation is naturally capable of capturing extremely detailed features characteristic of dynamic crack propagation such as crack surface roughening as well as single and multiple branching. The accuracy and robustness of the proposed variable-order formulation are validated by comparing the results of direct numerical simulations with experimental data of typical benchmark problems available in the literature.
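The abstract does not reproduce the paper's operators, but for context, a commonly used Caputo-type variable-order derivative, where the order $\alpha$ is itself a function of the independent variable, takes the form (this specific definition is an assumption; the paper may use a different variable-order operator):

```latex
{}^{C}\!D_{t}^{\alpha(t)} f(t)
  = \frac{1}{\Gamma\bigl(1-\alpha(t)\bigr)}
    \int_{0}^{t} (t-\tau)^{-\alpha(t)}\, f'(\tau)\, d\tau,
  \qquad 0 < \alpha(t) < 1 .
```

Letting the order field vary in space and time is what allows a single operator to transition between intact and damaged material behavior without a separate damage evolution equation.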


2021 ◽  
Vol 190 (3) ◽  
pp. 779-810
Author(s):  
Michael Garstka ◽  
Mark Cannon ◽  
Paul Goulart

Abstract This paper describes the conic operator splitting method (COSMO) solver, an operator splitting algorithm and associated software package for convex optimisation problems with quadratic objective function and conic constraints. At each step, the algorithm alternates between solving a quasi-definite linear system with a constant coefficient matrix and a projection onto convex sets. The low per-iteration computational cost makes the method particularly efficient for large problems, e.g. semidefinite programs that arise in portfolio optimisation, graph theory, and robust control. Moreover, the solver uses chordal decomposition techniques and a new clique merging algorithm to effectively exploit sparsity in large, structured semidefinite programs. Numerical comparisons with other state-of-the-art solvers for a variety of benchmark problems show the effectiveness of our approach. Our Julia implementation is open source, designed to be extended and customised by the user, and is integrated into the Julia optimisation ecosystem.
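The alternation described above, a linear solve with a constant coefficient matrix followed by a projection onto a convex set, is the core of ADMM-style operator splitting. A minimal sketch for a box-constrained QP (not COSMO's actual implementation, which handles general conic constraints and caches a quasi-definite factorization):

```python
import numpy as np

def admm_box_qp(P, q, l, u, rho=1.0, iters=300):
    """Operator-splitting sketch in the spirit of ADMM-based conic solvers:
    minimize 0.5 x'Px + q'x subject to l <= x <= u.
    Alternates (1) a linear solve with a constant coefficient matrix and
    (2) a projection onto the box constraint set."""
    n = len(q)
    K = P + rho * np.eye(n)       # constant coefficient matrix
    Kinv = np.linalg.inv(K)       # a real solver would cache a factorization
    x = np.zeros(n)
    z = np.zeros(n)
    lam = np.zeros(n)
    for _ in range(iters):
        x = Kinv @ (rho * z - lam - q)     # linear-system step
        z = np.clip(x + lam / rho, l, u)   # projection onto the convex set
        lam = lam + rho * (x - z)          # dual update
    return z

# tiny example: min 0.5*(x1^2 + x2^2) - x1 - 2*x2  s.t.  0 <= x <= 1
P = np.eye(2)
q = np.array([-1.0, -2.0])
x_opt = admm_box_qp(P, q, l=np.zeros(2), u=np.ones(2))
```

The unconstrained minimizer is (1, 2); the projection step enforces the box, so the iterates converge to (1, 1). Because the coefficient matrix never changes, each iteration costs only a back-substitution plus a cheap projection, which is what makes the per-iteration cost low.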


Author(s):  
T. O. Ting ◽  
H. C. Ting ◽  
T. S. Lee

In this work, a hybrid Taguchi-Particle Swarm Optimization (TPSO) is proposed to solve global numerical optimization problems with continuous and discrete variables. This hybrid algorithm combines the well-known Particle Swarm Optimization algorithm with the established Taguchi method, which has been an important tool for robust design. This paper presents the improvements obtained despite the simplicity of the hybridization process. The Taguchi method is run only once in every PSO iteration and therefore does not significantly increase the computational cost. The method creates a more diversified population, which also contributes to avoiding premature convergence. The proposed method is effectively applied to solve 13 benchmark problems. The results of this study show drastic improvements over the standard PSO algorithm on high-dimensional benchmark functions involving continuous and discrete variables.
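To make the hybridization concrete, the sketch below shows the standard PSO baseline into which a Taguchi step would be inserted once per iteration; the Taguchi orthogonal-array refinement itself is omitted (its placement is marked), since the paper's exact design is not given here. Coefficient values are conventional choices, not the paper's settings.

```python
import numpy as np

def pso(f, dim, n=30, iters=200, seed=0):
    """Standard PSO baseline. In TPSO, a Taguchi orthogonal-array step would
    run once per iteration (see marker below) to recombine good particles."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n, dim))      # particle positions
    v = np.zeros((n, dim))                # particle velocities
    pbest = x.copy()                      # personal best positions
    pval = np.array([f(p) for p in x])    # personal best values
    g = pbest[pval.argmin()].copy()       # global best position
    w, c1, c2 = 0.7, 1.5, 1.5             # inertia and acceleration weights
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.array([f(p) for p in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
        # <- TPSO would apply the Taguchi method here, once per iteration
    return g, pval.min()

# minimize the 5-D sphere function
g_best, best_val = pso(lambda p: np.sum(p ** 2), dim=5)
```

Running the Taguchi step only once per iteration keeps its cost a small constant addition on top of the n function evaluations PSO already performs each iteration.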


2021 ◽  
Author(s):  
Sellam Veerappan ◽  
Kannan Natarajan ◽  
Arunkumar Gopu ◽  
Ramesh Sekaran ◽  
Manikandan Ramachandran ◽  
...  

Abstract Modern computer sciences and information technologies are anticipated to bring transformative influence to the part that mobile communication technologies play in society. To take full advantage of the services bestowed by modern computer sciences and information technologies, evidence of the economic and business case is an essential prerequisite. Existing research encompasses several sensor-based transformative computing methods obtainable as services, including optimized resource management, data processing/storage, and security provisioning. With transformative computing moving to the edge, real-time data are required for healthcare data analytics. The conventional cloud server cannot address the latency requirements of healthcare IoT sensors. To handle these services, we introduce a hybrid method integrating Sugeno Fuzzy Inference (SFI) and model-free reinforcement learning to improve healthcare IoT and cloud latency. The objective is to lessen the high latency between healthcare IoT devices. The proposed Sugeno Fuzzy Model-free Reinforcement Learning Data Computing (SF-MRLDC) method uses a Sugeno Fuzzy Inference model integrated with a model-free reinforcement learning model for data computing in a healthcare IoT data analytics environment. The simulation results show that the SF-MRLDC method is computationally efficient in terms of latency, ensuring better response time.


Author(s):  
Michael Lang

While the importance of continuous monitoring of electrocardiographic (ECG) or photoplethysmographic (PPG) signals to detect cardiac anomalies is generally accepted in preventative medicine, there remain major barriers to its actual widespread adoption. Most notably, current approaches tend to lack real-time capability, exhibit high computational cost, and be based on restrictive modeling assumptions or require large amounts of training data. We propose a lightweight and model-free approach for the online detection of cardiac anomalies such as ectopic beats in ECG or PPG signals based on the change detection capabilities of Singular Spectrum Analysis (SSA) and nonparametric rank-based cumulative sum (CUSUM) control charts. The procedure is able to quickly detect anomalies without requiring the identification of fiducial points such as R-peaks and is computationally significantly less demanding than previously proposed SSA-based approaches. Therefore, the proposed procedure is equally well suited for standalone use and as an add-on to complement existing (e.g. heart rate (HR) estimation) procedures.
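The rank-based CUSUM variant and the SSA change-detection stage are not detailed in the abstract, but the CUSUM principle itself is simple: accumulate the excess of the signal over a reference level and raise an alarm when the cumulative sum crosses a threshold. A minimal one-sided sketch (parameter values are illustrative):

```python
import numpy as np

def cusum(signal, ref, threshold, drift=0.0):
    """One-sided CUSUM change detector: flags when the cumulative excess of
    the signal over a reference level (minus a drift allowance) crosses a
    threshold. The paper's nonparametric rank-based variant and the SSA
    preprocessing stage are not reproduced here."""
    s = 0.0
    for t, x in enumerate(signal):
        s = max(0.0, s + (x - ref - drift))   # reset at zero, accumulate excess
        if s > threshold:
            return t                          # alarm index
    return -1                                 # no change detected

# baseline near 0; a sustained upward shift begins at index 50
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.1, 50), rng.normal(1.0, 0.1, 50)])
alarm = cusum(x, ref=0.0, threshold=2.0, drift=0.2)
```

Because the statistic only accumulates sustained excess, isolated noise spikes are absorbed while a persistent shift, such as the onset of an anomalous beat morphology in a monitored feature, triggers an alarm within a few samples. This is what makes the approach suitable for online, real-time use.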


2020 ◽  
Vol 12 (6) ◽  
pp. 97
Author(s):  
Francesco Curreri ◽  
Giacomo Fiumara ◽  
Maria Gabriella Xibilia

Soft Sensors (SSs) are inferential models used in many industrial fields. They allow for real-time estimation of hard-to-measure variables as a function of available data obtained from online sensors. SSs are generally built from industries' historical databases through data-driven approaches. A critical issue in SS design concerns the selection of input variables among those available in a candidate dataset. In the case of industrial processes, the number of candidate inputs can be very large, making the design computationally demanding and leading to poorly performing models. An input selection procedure is then necessary. The most widely used input selection approaches for SS design are addressed in this work and classified, with their benefits and drawbacks, to guide the designer through this step.
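Among the approaches such a survey typically covers, the simplest family is filter-style selection, ranking candidate inputs by a univariate relevance measure before any model is built. A minimal correlation-based sketch (one illustrative technique, not the survey's recommendation):

```python
import numpy as np

def rank_inputs_by_correlation(X, y):
    """Filter-style input selection for soft sensor design: rank candidate
    inputs by absolute linear correlation with the target variable.
    Cheap and model-free, but blind to nonlinear and joint effects, which is
    why wrapper and embedded methods are also used in practice."""
    scores = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    return np.argsort(scores)[::-1]           # best-scoring input first

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))                 # three candidate inputs
y = 2.0 * X[:, 1] + 0.1 * rng.normal(size=300)  # input 1 drives the target
order = rank_inputs_by_correlation(X, y)
```

With hundreds of candidate inputs, such a filter gives a cheap first cut; the surviving inputs can then be refined with more expensive wrapper or embedded methods.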


2020 ◽  
Vol 7 (3) ◽  
pp. 134
Author(s):  
Zakaria Chekakta ◽  
Mokhtar Zerikat ◽  
Yasser Bouzid ◽  
Anis Koubaa
