Automated classification of stages of anaesthesia by populations of evolutionary optimized fuzzy rules

2015 ◽  
Vol 1 (1) ◽  
pp. 77-79
Author(s):  
C. Walther ◽  
A. Wenzel ◽  
M. Schneider ◽  
M. Trommer ◽  
K.-P. Sturm ◽  
...  

Abstract: The detection of stages of anaesthesia is mainly performed by evaluating the vital signs of the patient. In addition, the frontal one-channel electroencephalogram can be evaluated to improve the detection of stages of anaesthesia. Fuzzy rules are used as the classification model. These rules classify the stages of anaesthesia automatically and were optimized by multiobjective evolutionary algorithms. The performance of the generated population of fuzzy rule sets is presented, and a concept for the construction of an autonomous embedded system is introduced. This system is intended to classify the stages of anaesthesia using only the frontal one-channel electroencephalogram.
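The kind of fuzzy-rule classifier the abstract describes can be sketched as follows. The EEG feature (spectral edge frequency), the membership breakpoints and the stage labels below are illustrative assumptions, not values from the paper; its rules were evolved from data, not hand-written.

```python
# Minimal sketch of fuzzy-rule classification of an anaesthesia stage from a
# single EEG-derived feature. All breakpoints here are invented for illustration.

def triangular(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Hypothetical rules: spectral edge frequency (Hz) -> anaesthesia stage.
RULES = {
    "deep":     lambda f: triangular(f, 0.0, 5.0, 10.0),
    "adequate": lambda f: triangular(f, 8.0, 14.0, 20.0),
    "light":    lambda f: triangular(f, 18.0, 25.0, 32.0),
}

def classify_stage(sef_hz):
    """Return the stage whose rule fires with the highest membership degree."""
    degrees = {stage: rule(sef_hz) for stage, rule in RULES.items()}
    return max(degrees, key=degrees.get)

print(classify_stage(13.0))  # -> "adequate"
```

An evolutionary algorithm would then tune the breakpoints of each triangle against labelled recordings.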

Fuzzy rules have been used extensively in data mining. This paper presents a fast and flexible method based on a genetic algorithm for constructing fuzzy decision rules while considering accuracy criteria. First, the algorithm determines the width that divides each attribute into n intervals according to the number of fuzzy sets, and then calculates the fuzzy-set parameters from that width. A rough-sets model based on database systems is used to reduce the number of attributes where possible. The algorithm then extracts initial fuzzy rules from the fuzzy table using SQL statements, producing fewer rules than other models without needing a genetic-algorithm-based rule-selection approach to select a small number of significant rules, and calculates their accuracy and confidence. Multiobjective evolutionary algorithms that use nondominated sorting and sharing have been criticized mainly for their computational complexity and the need to specify a sharing parameter; in our genetic model, each fuzzy set is represented by a real number from 0 to 9, forming a gene on a chromosome (individual). The genetic model is used to improve the accuracy of the initial rules, and the accuracy of the new rules, calculated again, is higher than that of the old rules. The proposed approach is applied to the Iris dataset and the results are compared with other models (preselection with niches, ENORA and NSGA) to show its validity.


2021 ◽  
Vol 263 (4) ◽  
pp. 2687-2698
Author(s):  
Nils Poschadel ◽  
Christian Gill ◽  
Stephan Preihs ◽  
Jürgen Peissig

Within the scope of the interdisciplinary project WEA-Akzeptanz, measurements of the sound emission of wind turbines were carried out at the Leibniz University Hannover. Due to the environment, there are interfering components (e.g. traffic, birdsong, wind, rain, ...) in the recorded signals. Depending on the subsequent signal processing and analysis, it may be necessary to identify sections with the raw sound of a wind turbine, recordings with the purest possible background noise, or a specific combination of interfering noises. Due to the amount of data, a manual classification of the audio signals is usually not feasible, and an automated classification becomes necessary. In this paper, we extend our previously proposed multi-class single-label classification model to a multi-class multi-label model, which reflects the real-world acoustic conditions around wind turbines more accurately and allows for finer-grained evaluations. We first provide a short overview of the data acquisition and the dataset. We then briefly summarize our previous approach, extend it to a multi-class multi-label formulation, and analyze the trained convolutional neural network with regard to different metrics. All in all, the model delivers very reliable classification results, with an overall example-based F1-score of about 80 % for a multi-label classification of 12 classes.
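The example-based F1-score reported above is the per-example F1 between predicted and true label sets, averaged over all examples. A minimal sketch, with invented label sets (the paper's 12 classes are not listed in the abstract):

```python
# Example-based (sample-wise) F1 for multi-label classification.

def example_f1(true_labels, pred_labels):
    """F1 for one example's label sets (defined as 1.0 when both are empty)."""
    if not true_labels and not pred_labels:
        return 1.0
    overlap = len(true_labels & pred_labels)
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_labels)
    recall = overlap / len(true_labels)
    return 2 * precision * recall / (precision + recall)

def example_based_f1(pairs):
    """Average the per-example F1 over all (true, predicted) set pairs."""
    return sum(example_f1(t, p) for t, p in pairs) / len(pairs)

pairs = [
    ({"turbine", "wind"}, {"turbine", "wind"}),      # perfect match: F1 = 1.0
    ({"turbine", "rain"}, {"turbine", "birdsong"}),  # partial match: F1 = 0.5
]
print(example_based_f1(pairs))  # -> 0.75
```

Unlike micro- or macro-averaged F1, this metric rewards getting each individual recording's full label set right, which matches the paper's per-section use case.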


2015 ◽  
Vol 21 (4) ◽  
pp. 456-477 ◽  
Author(s):  
S. P. Sarmah ◽  
U. C. Moharana

Purpose – The purpose of this paper is to present a fuzzy-rule-based model that classifies spare parts inventories considering multiple criteria, for better management of maintenance activities and to avoid production-down situations.
Design/methodology/approach – A fuzzy-rule-based approach for multi-criteria decision making is used to classify the spare parts inventories. The total cost is computed for each group under suitable inventory policies and compared with other existing models.
Findings – The fuzzy-rule-based multi-criteria classification model provides better results than aggregate scoring and traditional ABC classification. The model also offers inventory management experts the flexibility to provide their subjective inputs.
Practical implications – The web-based model developed in this paper can be implemented in various industries, such as manufacturing, chemical plants and mining, that deal with a large number of spares. The method classifies the spares into three categories, A, B and C, considering multiple criteria and the relationships among those criteria. The framework is flexible enough for decision makers to add criteria and to modify the fuzzy rule base at any time, and the model can easily be integrated into any customized Enterprise Resource Planning application.
Originality/value – The value of this paper lies in applying a fuzzy-rule-based approach to multi-criteria inventory classification of spare parts; such a rule-based approach considering multiple criteria is not common in the classification of spare parts inventories. A total cost comparison shows that the proposed fuzzy-rule-based classification approach performs better than traditional ABC classification and yields almost the same cost as the aggregate scoring model. Hence, the method is valid and adds new value to spare parts classification for better management decisions.
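The general shape of such a multi-criteria A/B/C classification can be sketched as follows. The criteria, membership function and thresholds below are illustrative assumptions; the paper's actual fuzzy rule base, built from expert input, is not reproduced here.

```python
# Illustrative sketch of rule-style multi-criteria classification of a spare
# part into classes A, B or C from three normalized criteria in [0, 1].

def high(x):
    """Degree to which a normalized criterion value is 'high'."""
    return max(0.0, min(1.0, (x - 0.3) / 0.4))

def classify_spare(criticality, cost, lead_time):
    """Fire simple fuzzy rules and map the resulting degree to A/B/C."""
    # Hypothetical rules: a part is 'A-like' if it is highly critical,
    # OR both expensive AND slow to procure (max = OR, min = AND).
    a_degree = max(high(criticality), min(high(cost), high(lead_time)))
    if a_degree >= 0.8:
        return "A"
    if a_degree >= 0.4:
        return "B"
    return "C"

print(classify_spare(0.9, 0.2, 0.5))  # highly critical part -> "A"
print(classify_spare(0.1, 0.2, 0.3))  # all criteria low     -> "C"
```

Capturing the AND/OR relationships among criteria in rules like these is what distinguishes the approach from a single aggregate score.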


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Matjaž Kragelj ◽  
Mirjana Kljajić Borštnar

Purpose – The purpose of this study is to develop a model for automated classification of old digitised texts to the Universal Decimal Classification (UDC), using machine-learning methods.
Design/methodology/approach – The general research approach is inherent to design science research, in which the problem of UDC assignment of the old, digitised texts is addressed by developing a machine-learning classification model. A corpus of 70,000 scholarly texts, fully bibliographically processed by librarians, was used to train and test the model, which was then used for classification of old texts on a corpus of 200,000 items. Human experts evaluated the performance of the model.
Findings – Results suggest that machine-learning models can correctly assign the UDC at some level for almost any scholarly text. Furthermore, the model can be recommended for the UDC assignment of older texts. Ten librarians corroborated this on 150 randomly selected texts.
Research limitations/implications – The main limitations of this study were the unavailability of labelled older texts and the limited availability of librarians.
Practical implications – The classification model can provide a recommendation to the librarians during their classification work; furthermore, it can be implemented as an add-on to full-text search in the library databases.
Social implications – The proposed methodology supports librarians by recommending UDC classifiers, thus saving time in their daily work. By automatically classifying older texts, digital libraries can provide a better user experience by enabling structured searches. These contribute to making knowledge more widely available and usable.
Originality/value – These findings contribute to the field of automated classification of bibliographical information with the usage of full texts, especially in cases in which the texts are old, unstructured and in which archaic language and vocabulary are used.
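The supervised setup the study describes, texts with librarian-assigned UDC classes training a model that recommends a class for unseen texts, can be sketched with the standard library alone. The tiny corpus below is invented, and the study's actual learner on its 70,000 processed texts was far stronger than this word-frequency profile; only the pipeline shape is illustrated.

```python
# Stdlib-only sketch of a UDC recommender trained on labelled texts.
from collections import Counter

def tokens(text):
    return text.lower().split()

def train(corpus):
    """Build per-class word-frequency profiles from (text, udc_class) pairs."""
    profiles = {}
    for text, udc in corpus:
        profiles.setdefault(udc, Counter()).update(tokens(text))
    return profiles

def recommend_udc(profiles, text):
    """Recommend the class whose profile shares the most word mass."""
    words = tokens(text)
    return max(profiles, key=lambda c: sum(profiles[c][w] for w in words))

corpus = [
    ("differential equations and integrals", "51"),  # UDC 51 = mathematics
    ("grammar of the slovene language", "81"),       # UDC 81 = linguistics
]
model = train(corpus)
print(recommend_udc(model, "solving differential equations"))  # -> "51"
```

In the recommendation setting of the paper, the librarian keeps the final decision; the model only ranks candidate UDC classes.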


2015 ◽  
Vol 727-728 ◽  
pp. 876-879
Author(s):  
Min Chao Huang ◽  
Bao Yu Xing

Based on a fuzzy rule set matching method implemented as a series of fuzzy neural networks, a system framework for fault diagnosis is proposed. The fault diagnosis system consists of five parts: the extraction of fuzzy rules, the fuzzy reference rule sets, the fuzzy rule set scheduled for detection, the fuzzy match module and the diagnosis logic module. The extraction of fuzzy rules involves two steps: step one adaptively divides the whole space of the training data into hypersphere-shaped subspaces, which is expected to handle recognition problems in high-dimensional spaces efficiently; step two generates a fuzzy rule in each sample subspace and calculates the membership degree of each fuzzy rule. Many fuzzy reference rule sets are produced by the fuzzy rule extraction module during offline learning, and a fuzzy rule set to be detected is formed online while the monitoring process is running. From the beliefs estimated by the fuzzy match process of the fuzzy rule sets, which indicate the presence of the working classes in the plant, the diagnosis logic module can report the fault detection time, fault isolation time, fault type and fault degree. Simulation studies of fault diagnosis in a space propulsion system demonstrate the qualities of this fault diagnosis method based on fuzzy matching of fuzzy rule sets.
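The core idea of step one, a rule covering a hypersphere in feature space with a membership degree that decays with distance from its centre, can be sketched as follows. The centres, radii, linear decay shape and class names are illustrative assumptions, not values from the paper.

```python
# Sketch: each fuzzy rule is a hypersphere (centre, radius); a sample's
# membership degree falls from 1.0 at the centre to 0.0 at the boundary.
import math

def membership(sample, centre, radius):
    """Linear decay of membership with distance from the hypersphere centre."""
    return max(0.0, 1.0 - math.dist(sample, centre) / radius)

# Two hypothetical rules for a two-dimensional monitoring signal.
rules = {
    "nominal": ((0.0, 0.0), 2.0),
    "fault":   ((5.0, 5.0), 2.0),
}

def match(sample):
    """Fuzzy match: the rule with the highest membership degree wins."""
    return max(rules, key=lambda r: membership(sample, *rules[r]))

print(match((0.5, 0.5)))  # -> "nominal"
print(match((4.8, 5.1)))  # -> "fault"
```

In the full system, whole rule sets (not single rules) are matched, and the resulting beliefs feed the diagnosis logic that times and grades the fault.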


Author(s):  
Takeshi Furuhashi ◽  

Rule extraction from data is one of the key technologies for solving the bottlenecks in artificial intelligence. Artificial neural networks are well suited for representing any knowledge in given data. Extraction of logical/fuzzy rules from a trained artificial neural network is of great importance to researchers in the fields of artificial intelligence and soft computing. Fuzzy rule sets are capable of approximating any nonlinear mapping relationship. Extraction of rules from data has been discussed in terms of fuzzy modeling, fuzzy clustering, and classification with fuzzy rule sets. This special issue, entitled "Rule Extraction from Data", is aimed at providing the readers with good insights into the advanced studies in the field of rule extraction from data using neural networks and fuzzy rule sets. I invited seven research papers best suited to the theme of this special issue. All the papers were reviewed rigorously by two reviewers each. The first paper proposes an interesting rule extraction method from data using neural networks. Ishikawa presents a combination of learning with an immediate critic and structural learning with forgetting. This method is capable of generating skeletal networks for logical rule extraction from data with correct and wrong answers. The proposed method is applied to rule extraction from lens data. The second paper presents a new methodology for logical rule extraction based on the transformation of an MLP (multilayered perceptron) into a logical network. Duch et al. applied their C-MLP2LN to the Iris benchmark classification problem as well as to real-world medical data with very good results. In the third paper, Geczy and Usui propose fuzzy rule extraction from trained artificial neural networks. The proposed algorithm is implied by their theoretical study, not by heuristics. Their study makes it possible to first consider the derivation of crisp rules from a trained artificial neural network and, in case of conflict, the application of fuzzy rules.
The proposed algorithm is experimentally demonstrated on the Iris benchmark classification problem. The fourth paper presents a new framework for fuzzy modeling using a genetic algorithm. The authors have broken new ground in fuzzy rule extraction from neural networks. For the fuzzy modeling, they have proposed a particular type of neural network containing nodes that represent membership functions. In this fourth paper, the authors discuss input variable selection for fuzzy modeling under multiple criteria of different importance. A target system with a strong nonlinearity is used to demonstrate the proposed method. In the fifth paper, Kasabov et al. present a method for the extraction of fuzzy rules that have different levels of abstraction depending on several modifiable thresholds; explanation quality becomes better with higher threshold values. They apply the proposed method to the Iris benchmark classification problem and to a real-world problem. In the sixth paper, J. Yen and W. Gillespie address the interpretability issue of the Takagi-Sugeno-Kang model, one of the most popular fuzzy models. They propose a new approach to fuzzy modeling that ensures not only a high approximation of the input-output relationship in the data, but also good insight into the local behavior of the model. The proposed method is applied to fuzzy modeling of the sinc function and of Mackey-Glass chaotic time series data. The last paper discusses fuzzy rule extraction from numerical data for high-dimensional classification problems. H. Ishibuchi et al. have been pioneering methods for the classification of data using fuzzy rules and genetic algorithms. In this last paper, they introduce a new criterion, the simplicity of each rule, alongside the conventional criteria of rule-base compactness and classification ability, for high-dimensional problems. The Iris data is used to demonstrate their new classification method, and they also apply it to wine data and credit data.
I hope that the readers will be encouraged to explore the frontier to establish a new paradigm in the field of knowledge representation and rule extraction.


Author(s):  
Michael R. Berthold ◽  
Klaus–Peter Huber

In this paper, a technique based on a system of fuzzy rules for classification is proposed that tolerates missing values. The presented method is mathematically sound yet easy and efficient to implement. Three possible applications of this methodology are outlined: the classification of patterns with an incomplete feature vector, the completion of the input vector when a certain class is desired, and the training or automatic construction of a fuzzy rule set based on incomplete training data. In contrast to a static replacement of the missing values, here the evolving model is used to predict the most plausible values for the missing attributes. Benchmark datasets are used to demonstrate the capability of the presented approach in a fuzzy learning environment.
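A minimal sketch of the underlying idea: instead of statically imputing a missing attribute, its membership term contributes the best value the rule could attain over that attribute's whole domain (1.0 for the normal trapezoids below). The rule shapes, attribute names and values are illustrative assumptions, not the paper's rule system.

```python
# Sketch of missing-value tolerance in a conjunctive fuzzy rule: a missing
# attribute (None) contributes the supremum of its membership function.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership; None means the attribute value is missing."""
    if x is None:
        return 1.0  # best attainable degree over the attribute's domain
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def rule_degree(sample, constraints):
    """Conjunctive rule: the minimum membership over all its constraints."""
    return min(trapezoid(sample.get(attr), *abcd)
               for attr, abcd in constraints.items())

rule = {"temp": (10, 20, 30, 40), "pressure": (1, 2, 3, 4)}
print(rule_degree({"temp": 25, "pressure": 2.5}, rule))  # -> 1.0
print(rule_degree({"temp": 25}, rule))  # -> 1.0, missing pressure tolerated
```

Because the missing attribute never forces the rule's degree to zero, the known attributes alone decide between competing rules, which is what permits classification from incomplete feature vectors.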

