Local Covering: Adaptive Rule Generation Method Using Existing Rules for XCS

Author(s):  
Masakazu Tadokoro ◽  
Satoshi Hasegawa ◽  
Takato Tatsumi ◽  
Hiroyuki Sato ◽  
Keiki Takadama

2009 ◽
Vol 20 (10) ◽  
pp. 2655-2666 ◽  
Author(s):  
Dong LIU ◽  
Xiang-Wu MENG ◽  
Jun-Liang CHEN ◽  
Ya-Mei XIA

2021 ◽  
Vol 11 (8) ◽  
pp. 3347
Author(s):  
Siqi Ma ◽  
Xin Wang ◽  
Xiaochen Wang ◽  
Hanyu Liu ◽  
Runtong Zhang

Although urban rail transit serves passengers reliably every day, operational risk remains, and turn-back faults are a common cause of traffic accidents. Addressing them requires two complementary capabilities: machines can learn the complicated, detailed rules encoded in a train's internal communication codes, while engineers need simple external features for quick judgment. Focusing on turn-back faults in urban rail, this study uses accumulated operational data to improve both algorithmic and human diagnosis of such faults. Specifically, we first designed a novel framework combining rules and algorithms that helps humans and machines understand fault characteristics and collaborate in fault diagnosis, including determining the category to which a turn-back fault belongs and identifying the simple and complicated judgment rules involved. We then established a dataset of tabular and text data from real application scenarios and carried out corresponding analyses of fault rule generation, diagnostic classification, and topic modeling. Finally, we present the fault characteristics under the proposed framework. Qualitative and quantitative experiments were performed to evaluate the proposed method, and the results show that (1) the framework helps in understanding faults occurring in three types of turn-back: automatic turn-back (ATB), automatic end change (AEC), and point mode end change (PEC); and (2) the proposed framework can assist in diagnosing turn-back faults.
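A minimal sketch of the rules-plus-model idea described above, not the paper's implementation: cheap, human-readable rules are tried first, and records they cannot decide fall through to a classifier trained on the richer internal communication codes. The feature names, the rule condition, and the choice of classifier are all illustrative assumptions.

```python
# Hedged sketch: rule-first, model-fallback fault diagnosis.
# Feature names and the rule below are hypothetical, not from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

FAULT_TYPES = ["ATB", "AEC", "PEC"]  # the three turn-back types above

def simple_rules(record: dict) -> str | None:
    """Hypothetical 'simple external feature' check for quick human judgment."""
    if record.get("turn_back_command_sent") and not record.get("route_locked"):
        return "PEC"   # assumed rule: point mode end change suspected
    return None        # undecided: defer to the learned model

class HybridDiagnoser:
    """Rules decide when they can; otherwise a classifier does."""
    def __init__(self):
        self.model = RandomForestClassifier(n_estimators=100, random_state=0)

    def fit(self, X: np.ndarray, y: list[str]) -> None:
        self.model.fit(X, y)

    def diagnose(self, record: dict, x: np.ndarray) -> str:
        verdict = simple_rules(record)
        if verdict is not None:
            return verdict                               # explainable rule hit
        return self.model.predict(x.reshape(1, -1))[0]   # model fallback

if __name__ == "__main__":
    X = np.random.default_rng(0).random((30, 4))         # dummy code features
    y = [FAULT_TYPES[i % 3] for i in range(30)]
    d = HybridDiagnoser()
    d.fit(X, y)
    print(d.diagnose({"turn_back_command_sent": True, "route_locked": False}, X[0]))
```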


1996 ◽  
Vol 05 (01n02) ◽  
pp. 99-112 ◽  
Author(s):  
NING SHAN ◽  
HOWARD J. HAMILTON ◽  
NICK CERCONE

We present the three-step GRG approach for learning decision rules from large relational databases. In the first step, an attribute-oriented concept tree ascension technique is applied to generalize an information system; this step loses some information but substantially improves the efficiency of the following steps. In the second step, a reduction technique is applied to generate a minimized information system called a reduct, which contains a minimal subset of the generalized attributes and the smallest number of distinct tuples for those attributes. Finally, a set of maximally general rules is derived directly from the reduct. These rules can be used to interpret and understand the active mechanisms underlying the database.
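The three steps can be illustrated with a toy sketch. The concept hierarchy, the sample table, and the consistency-based reduct search below are illustrative assumptions standing in for the paper's techniques, not a reproduction of them.

```python
# Toy sketch of the GRG pipeline: generalize -> reduce -> derive rules.
from itertools import combinations

# Step 1: attribute-oriented generalization -- climb each attribute's
# concept tree (here a flat lookup) to replace raw values with concepts.
CONCEPT_TREE = {"age": {23: "young", 27: "young", 54: "senior", 61: "senior"}}

def generalize(rows, attr):
    return [dict(r, **{attr: CONCEPT_TREE[attr][r[attr]]}) for r in rows]

# Step 2: reduct -- a smallest attribute subset that still distinguishes
# every pair of tuples carrying different decision values.
def find_reduct(rows, attrs, decision):
    for k in range(1, len(attrs) + 1):
        for subset in combinations(attrs, k):
            seen, consistent = {}, True
            for r in rows:
                key = tuple(r[a] for a in subset)
                if seen.setdefault(key, r[decision]) != r[decision]:
                    consistent = False
                    break
            if consistent:
                return subset
    return tuple(attrs)

# Step 3: maximally general rules read directly off the reduct.
def derive_rules(rows, reduct, decision):
    emitted = set()
    for r in rows:
        cond = tuple((a, r[a]) for a in reduct)
        if cond not in emitted:
            emitted.add(cond)
            yield cond, r[decision]

rows = [{"age": 23, "income": "low", "buys": "no"},
        {"age": 27, "income": "high", "buys": "yes"},
        {"age": 54, "income": "high", "buys": "yes"},
        {"age": 61, "income": "low", "buys": "no"}]
rows = generalize(rows, "age")
reduct = find_reduct(rows, ["age", "income"], "buys")
for cond, dec in derive_rules(rows, reduct, "buys"):
    print(" AND ".join(f"{a}={v}" for a, v in cond), "=>", dec)
# income=low => no
# income=high => yes
```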


2014 ◽  
Vol 1 (2) ◽  
pp. 62-74 ◽  
Author(s):  
Payel Roy ◽  
Srijan Goswami ◽  
Sayan Chakraborty ◽  
Ahmad Taher Azar ◽  
Nilanjan Dey

In the domain of image processing, image segmentation has become one of the key applications involved in most image-based operations. Image segmentation refers to the process of partitioning an image into meaningful regions. Like several other image processing operations, however, segmentation faces problems when the scene to be segmented becomes more complicated. A substantial body of previous work has shown that rough-set theory can help overcome such complications: it yields very fast convergence and helps avoid the local-minima problem, thereby enhancing the performance of EM-based segmentation. During rough-set-theoretic rule generation, each band is individualized using fuzzy-correlation-based gray-level thresholding. The use of rough sets in image segmentation can therefore be very useful. In this paper, previous rough-set-based image segmentation methods are surveyed in detail and categorized accordingly. Rough-set-based image segmentation provides a stable framework for the task.
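As a concrete illustration of how rough sets can drive segmentation, the sketch below thresholds a gray-level image by granulating it into small windows: windows wholly above the threshold form the object's lower approximation, windows partly above it form the upper approximation, and the threshold minimizing the combined object/background roughness is kept. This is a simplified stand-in for the granular, rough-entropy style of method surveyed here; the granule size and the synthetic image are assumptions.

```python
# Rough-set-style thresholding sketch: pick the gray-level T whose
# object/background approximations are least rough (crispest boundary).
import numpy as np

def granules(img, g=4):
    """Yield non-overlapping g-by-g windows (granules) of the image."""
    h, w = img.shape
    for i in range(0, h - g + 1, g):
        for j in range(0, w - g + 1, g):
            yield img[i:i + g, j:j + g]

def total_roughness(img, T, g=4):
    """Roughness of the object region (gray > T) plus the background's."""
    lo_o = up_o = lo_b = up_b = 0
    for win in granules(img, g):
        lo_o += win.min() > T    # granule wholly inside the object
        up_o += win.max() > T    # granule touching the object
        lo_b += win.max() <= T   # granule wholly inside the background
        up_b += win.min() <= T   # granule touching the background
    r_o = 1 - lo_o / up_o if up_o else 1.0
    r_b = 1 - lo_b / up_b if up_b else 1.0
    return r_o + r_b

# Synthetic test image: a bright square on a dark background, plus noise.
rng = np.random.default_rng(0)
img = np.full((64, 64), 60, dtype=np.int64)
img[20:44, 20:44] = 200
img = img + rng.integers(-15, 16, img.shape)

best_T = min(range(0, 256, 8), key=lambda T: total_roughness(img, T))
mask = img > best_T
print(best_T, round(mask.mean(), 3))  # a T between the two gray-level modes
```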


2018 ◽  
Vol 27 (4) ◽  
pp. 555-563
Author(s):  
M. Priya ◽  
R. Kalpana

Challenging search mechanisms are required to cater to the needs of search-engine users probing voluminous web databases. Searching for query-matching keywords with a probabilistic approach is attractive in many application areas, viz. spell checking and data cleaning, because it allows approximate search. A probabilistic approach with maximum-likelihood estimation is commonly used to handle such real-world problems; however, it suffers from overfitting the data. In this paper, a rule-based approach to keyword searching is presented. The process consists of two phases, a rule generation phase and a learning phase. The rule generation phase uses a new technique called N-Gram based Edit distance (NGE) to generate the rule dictionary, and a Turing machine model is implemented to describe rule generation with the NGE technique. In the learning phase, a log model with maximum a posteriori estimation is used to select the best rule. When evaluated in real time, our system produces the best results in terms of efficiency and accuracy.
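A rough sketch of the two phases, under loose assumptions (the paper's exact NGE formulation, Turing-machine description, and log model are not reproduced here): bigram-level edit operations mined from (typo, correction) pairs form the rule dictionary, and candidate corrections are ranked by a log posterior combining a count-based word prior with a rule likelihood.

```python
# Hedged sketch: n-gram rewrite rules + log-posterior rule selection.
# The naive positional alignment and count-based priors are assumptions.
import math
from collections import Counter

def char_ngrams(s, n=2):
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def mine_rules(pairs):
    """Rule dictionary: bigrams of the typo mapped to bigrams of the fix."""
    rules = Counter()
    for typo, fix in pairs:
        for a, b in zip(char_ngrams(typo), char_ngrams(fix)):
            if a != b:
                rules[(a, b)] += 1   # crude alignment, fine for a sketch
    return rules

def correct(word, rules, lexicon):
    """Apply each rule, keep in-lexicon candidates, rank by log posterior."""
    total = sum(rules.values())
    best, best_score = word, float("-inf")
    for (src, dst), cnt in rules.items():
        cand = word.replace(src, dst)
        if cand in lexicon and cand != word:
            # log P(word) from lexicon counts + log P(rule) from rule counts
            score = math.log(lexicon[cand]) + math.log(cnt / total)
            if score > best_score:
                best, best_score = cand, score
    return best

pairs = [("teh", "the"), ("adress", "address"), ("acess", "access")]
lexicon = Counter({"the": 500, "address": 40, "access": 35})
rules = mine_rules(pairs)
print(correct("teh", rules, lexicon))  # -> "the"
```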

